We have a SQL Server 2005 database for which we want to improve the performance of bulk deletes, inserts, and selects, and I notice it uses decimal(18,0) for its primary key.
DATALENGTH is casting to varchar before counting bytes, so your max value is < 100,000.
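As a quick illustration of the difference (a throwaway query, not from the original post; the literal 99999 is just an example value): DATALENGTH reports one byte per digit once a number has been converted to varchar, but always 9 bytes for a decimal(18, 0) value.

SELECT
    DATALENGTH(CAST(99999 AS varchar(20)))    AS as_varchar_bytes, -- 5
    DATALENGTH(CAST(99999 AS decimal(18, 0))) AS as_decimal_bytes  -- 9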
The 9 bytes can be proved with the script below: sys.columns has a max_length column, and because decimal storage is fixed for a given precision, a decimal(18, 0) column is always 9 bytes regardless of the value stored.
CREATE TABLE dbo.foo (bar decimal(18,0))
GO
SELECT * FROM sys.columns WHERE object_id = OBJECT_ID('foo')
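-- max_length reports 9 for the decimal(18,0) column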
GO
DROP TABLE dbo.foo
GO
For legacy reasons, decimal(18, 0) was often used as a surrogate for a "64-bit integer" before bigint was added with SQL Server 2000.
decimal(18, 0) and bigint cover roughly the same range: decimal is one byte more at 9 bytes (versus 8 for bigint), as per the documentation.
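If you want to see the two ceilings and storage sizes side by side, a quick check (again just an illustrative query, not from the original answer):

SELECT
    CAST(9223372036854775807 AS bigint)        AS bigint_max,        -- 8 bytes
    CAST(999999999999999999 AS decimal(18, 0)) AS decimal_18_0_max,  -- 9 bytes
    DATALENGTH(CAST(0 AS bigint))              AS bigint_bytes,
    DATALENGTH(CAST(0 AS decimal(18, 0)))      AS decimal_18_0_bytes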
On top of that, a plain integer will be fractionally (maybe not measurably) faster than decimal. That said, if you expect to have more than 4 billion rows in the next year or five, then the choice matters for performance; if you don't, just use int.
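If you do end up reworking the key, a minimal sketch of the int route (the table and column names here are made up for illustration, not taken from the question):

CREATE TABLE dbo.Orders
(
    -- int is 4 bytes and holds about 2.1 billion values starting from 1
    -- (about 4.3 billion if you seed at the negative end);
    -- switch to bigint if you expect to exceed that
    OrderID  int IDENTITY(1, 1) NOT NULL PRIMARY KEY,
    PlacedOn datetime NOT NULL
)
GO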