I am writing a new program and it will require a database (SQL Server 2008). Everything I am running now for the system is 64-bit, which brings me to this question. For all
You should use the smallest data type that makes sense for the table in question. That includes using `smallint`, or even `tinyint` if there are few enough rows. You'll save space on both data and indexes and get better index performance. Using a `bigint` when all you need is a `smallint` is similar to using a `varchar(4000)` when all you need is a `varchar(50)`.
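As a sketch of what that looks like in practice (the table and column names here are hypothetical, not from the question):

```sql
-- Hypothetical lookup table: with well under 32,768 rows,
-- smallint (2 bytes) is plenty; bigint (8 bytes) would waste
-- 6 bytes per row in the data pages and again in every index
-- that includes the key.
CREATE TABLE dbo.OrderStatus (
    StatusId   smallint    NOT NULL PRIMARY KEY, -- 2 bytes vs. 8 for bigint
    StatusName varchar(50) NOT NULL              -- declared size matches the real data
);
```

Note that `varchar` stores only the actual string length plus overhead, so `varchar(4000)` and `varchar(50)` cost the same on disk for short values; the analogy is about declaring a size wildly beyond what the data needs, which also skews things like memory grant estimates.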
Even if the machine's native word size is 64 bits, that only means 64-bit CPU operations won't be any slower than 32-bit operations; most of the time they won't be any faster either. But most databases are not CPU-bound anyway: they are I/O-bound and, to a lesser extent, memory-bound, so a 50%-90% smaller data size is a Very Good Thing when you need to perform an index scan over 200 million rows.
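As a back-of-the-envelope check on that claim (the 200-million-row figure is from the answer above, the rest is just arithmetic):

```sql
-- Savings from shrinking one bigint key column to smallint
-- across 200 million rows: 6 bytes per row.
SELECT CAST(200000000 AS bigint) * (8 - 2) / 1024 / 1024 AS SavedMB;
-- Roughly 1.1 GB less data to read for a full index scan,
-- before even counting the same savings in every index copy of the key.
```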