What's the performance difference between int and varchar for primary keys?

六眼飞鱼酱① submitted on 2019-12-04 18:34:19

For MySQL, according to Alexey here, the answer is surprisingly "not much". He concludes:

So, if you have an application and you need to have some table field with a small set of possible values, I'd still suggest you use ENUM, but now we can see that the performance hit may not be as large as you expect. Though, again, a lot depends on your data and queries.
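
For context, here is a minimal sketch of the two layouts that benchmark compares (the table and status values are made up for illustration): MySQL stores an ENUM value as a 1- or 2-byte index into the column definition, while a VARCHAR stores the string itself plus a length prefix.

```sql
-- Hypothetical tables contrasting ENUM vs. VARCHAR for a small value set.
-- ENUM is stored as a 1- or 2-byte index into the declared list;
-- VARCHAR stores the full string plus a length prefix.
CREATE TABLE orders_enum (
    id     INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    status ENUM('new', 'paid', 'shipped', 'cancelled') NOT NULL
);

CREATE TABLE orders_varchar (
    id     INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    status VARCHAR(16) NOT NULL
);
```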

You will likely not run out of integers.

For example, in MySQL the maximum value of an unsigned BIGINT is 18,446,744,073,709,551,615. So even if you insert 100 million rows per second, it will take about 5,849 years before you run out of numbers.
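
A quick sanity check of that arithmetic, runnable in any MySQL client (the figure quoted above is the unsigned BIGINT maximum):

```sql
-- 18,446,744,073,709,551,615 rows at 100 million inserts per second:
SELECT
    18446744073709551615 / 100000000 AS seconds_to_exhaust,           -- ~1.84e11 seconds
    18446744073709551615 / 100000000 / (60 * 60 * 24 * 365) AS years; -- ~5849 years
```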

  • varchar requires extra storage for a length prefix (1 or 2 bytes per value)
  • comparison and sorting require collation processing
  • varchar keys may not match across systems because of collation differences (see the sketch after this list)
  • int (4 bytes) gives about 4 billion values; bigint (8 bytes) gives about 18 quintillion
  • pre-bigint, I've seen decimal(19, 0) used, which covers a similarly vast range
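
To make the collation point concrete, here's a hedged sketch (the table names and the UUID-style key width are made up) contrasting an integer key with a string key. The int key is a fixed-width number compared directly; the varchar key is compared through a collation, so whether two strings are "equal" depends on which collation is in effect.

```sql
CREATE TABLE users_int (
    id   INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    name VARCHAR(100) NOT NULL
);

CREATE TABLE users_varchar (
    id   VARCHAR(36) NOT NULL PRIMARY KEY,  -- e.g. a UUID stored as text
    name VARCHAR(100) NOT NULL
);

-- Under MySQL's default case-insensitive collations, these two
-- "different" keys would collide:
-- INSERT INTO users_varchar VALUES ('abc', 'first');
-- INSERT INTO users_varchar VALUES ('ABC', 'second');  -- duplicate-key error
```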

Using varchar will end in tears...

To be clear: you're developing a system that may have more than 4 billion rows (you don't know), that has replication, whose RDBMS you haven't chosen yet, and you don't understand how varchar differs from an integer?
