Is it possible to run out of bigint values?
Yes, you can run out of bigints, but in practice you never will.
December 7, 2011
Question: I’ve been told that it’s possible to run out of values when using the bigint data type and that I should use GUIDs instead, regardless of the problems they can cause. Is this true?
Answer: This is similar to the question I answered last year on whether it’s possible to run out of virtual log file sequence numbers – where the answer is yes, but it would take 240 million years to do so.
Related: Is It Possible to Run Out of Log Sequence Numbers?
Similarly, yes, you can run out of bigints, but in practice you never will.
Bigint allows values from -2^63 to 2^63 - 1 (i.e. -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807). Say for argument’s sake that you’re able to process 100 thousand pieces of data per second, and you assign an ever-increasing bigint value to each one. You’d have to be processing continuously for 2^63 / 100,000 / 3,600 hours – roughly 25.6 billion hours, which works out to be 2.925 million years.
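If you want to check that arithmetic yourself, here’s a quick back-of-the-envelope calculation (a minimal Python sketch; the 100,000-values-per-second rate is just the assumption from above):

```python
# Back-of-the-envelope: how long to exhaust the positive bigint range
# at an assumed rate of 100,000 new values per second?
BIGINT_MAX = 2 ** 63          # positive half of the bigint range
RATE_PER_SECOND = 100_000     # assumed allocation rate

seconds = BIGINT_MAX / RATE_PER_SECOND
hours = seconds / 3_600
years = hours / 24 / 365.25

print(f"{hours:,.0f} hours")  # ~25.6 billion hours
print(f"{years:,.0f} years")  # ~2.9 million years
```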
Now that’s just using the bigints – if you wanted to store them too, you’d run out of storage space before running out of numbers.
Doing a quick test of a heap with a single bigint identity column shows me that I can get 453 rows per 8KB data file page (don’t forget the record overhead, slot array overhead, and that the heap pages won’t be filled completely because of the way the free space caching and searching works). A terabyte of data would store roughly 61 billion rows.
At that rate, actually running out of bigints AND storing them would require roughly 150 thousand petabytes of storage. This is clearly impractical – especially when you consider that simply storing a bigint is pretty pointless – you’d be storing a bigint and some other data too – probably doubling the storage necessary, at least.
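And here’s a similar sanity check on the storage math, plugging in the 453 rows-per-page figure from my quick test above (again a rough Python sketch; actual rows per page will vary with record and page overheads):

```python
# Rough storage math: how much space would 2^63 bigint rows take,
# using the ~453 rows per 8KB heap page observed in the test above?
BIGINT_MAX = 2 ** 63
ROWS_PER_PAGE = 453             # observed in the heap test
PAGE_BYTES = 8 * 1024           # 8KB data file page

pages_per_tb = (1024 ** 4) // PAGE_BYTES
rows_per_tb = pages_per_tb * ROWS_PER_PAGE
print(f"{rows_per_tb:,} rows per TB")        # ~61 billion

terabytes_needed = BIGINT_MAX / rows_per_tb
petabytes_needed = terabytes_needed / 1024
print(f"{petabytes_needed:,.0f} PB needed")  # ~148 thousand petabytes
```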
So yes, while it is theoretically possible to run out of bigint values, in reality it’s not going to happen.
Learn more: Performance Effects of Using GUIDs as Primary Keys