Very Large Mnesia Tables in Production


The hint to use a power-of-two number of fragments is simply related to the fact that the default fragmentation module, mnesia_frag, uses linear hashing, so using 2^n fragments ensures that records are distributed evenly (more or less, obviously) across fragments.
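
For illustration, a minimal sketch of creating such a fragmented table; the table name, record fields, and the 16-fragment count are assumptions, not taken from the question:

    -module(frag_setup).
    -export([create_fragmented_table/1]).

    %% Hypothetical record; the name and fields are placeholders.
    -record(user, {id, name}).

    %% Create a fragmented disc_only_copies table with 2^4 = 16
    %% fragments, spread over the nodes in Nodes.
    create_fragmented_table(Nodes) ->
        mnesia:create_table(user,
            [{attributes, record_info(fields, user)},
             {frag_properties,
              [{n_fragments, 16},              %% 2^4, plays well with linear hashing
               {node_pool, Nodes},
               {n_disc_only_copies, 1}]}]).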

Regarding the hardware at your disposal, it's more a matter of performance testing. Many factors can reduce performance, and configuring a database like Mnesia is just one part of the overall problem. I simply advise you to stress test one server and then test the algorithm on both servers to see whether it scales correctly.
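
As a rough starting point for such a test, a timing loop along these lines could be used against one server and then against both; the table name and the integer key space are assumptions:

    -module(frag_bench).
    -export([bench_reads/3]).

    %% Time NumReads random dirty reads against fragmented table Tab,
    %% whose keys are assumed to be integers in 1..NumKeys.
    bench_reads(Tab, NumKeys, NumReads) ->
        Read = fun() ->
                   mnesia:activity(async_dirty,
                                   fun() -> mnesia:read(Tab, rand:uniform(NumKeys)) end,
                                   [], mnesia_frag)
               end,
        {Micros, _} = timer:tc(fun() -> [Read() || _ <- lists:seq(1, NumReads)] end),
        io:format("~p reads in ~p ms (~.1f reads/s)~n",
                  [NumReads, Micros div 1000, NumReads / (Micros / 1.0e6)]).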

As for how performance scales with the number of Mnesia fragments, remember that with disc_only_copies most of the time is spent in two operations:

  • decide which fragment holds which record

  • retrieve the record from the corresponding dets table (the Mnesia backend)

The first one does not really depend on the number of fragments, given that Mnesia uses linear hashing by default. The second one is related more to hard disk latency than to anything else.
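
To make that access path concrete, here is a minimal sketch reusing the hypothetical user table from above; operations on a fragmented table must go through the mnesia_frag access module, which hashes the key to select the fragment before touching that fragment's dets file:

    -module(frag_access).
    -export([read_user/1, write_user/2]).

    -record(user, {id, name}).

    %% Read: mnesia_frag hashes Id to pick the fragment, then the read
    %% hits that fragment's dets file (for disc_only_copies).
    read_user(Id) ->
        mnesia:activity(transaction,
                        fun() -> mnesia:read(user, Id) end,
                        [], mnesia_frag).

    %% Write through the same access module so the record lands in the
    %% fragment the hash function assigns it to.
    write_user(Id, Name) ->
        mnesia:activity(transaction,
                        fun() -> mnesia:write(#user{id = Id, name = Name}) end,
                        [], mnesia_frag).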

In the end, a good solution could be to have more fragments with fewer records per fragment, while at the same time trying to find a middle ground so as not to lose the advantages of hard disk performance boosts like buffers and caches.
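
As a back-of-the-envelope aid for finding that middle ground, something like the following sketch could help; all numbers are assumptions, and it only exploits the fact that each disc_only_copies fragment is a dets file, which is limited to 2 GB:

    -module(frag_sizing).
    -export([n_fragments/2]).

    %% Smallest power-of-two fragment count that keeps the estimated size
    %% per fragment under MaxBytesPerFrag (stay well below the 2 GB dets
    %% file limit to leave room for growth).
    n_fragments(TotalBytes, MaxBytesPerFrag) ->
        next_pow2(ceil_div(TotalBytes, MaxBytesPerFrag)).

    ceil_div(A, B) -> (A + B - 1) div B.

    next_pow2(N) when N =< 1 -> 1;
    next_pow2(N) -> 2 * next_pow2(ceil_div(N, 2)).

    %% Example: ~100 GB of data with a ~500 MB per-fragment target:
    %% 1> frag_sizing:n_fragments(100 * 1024 * 1024 * 1024, 500 * 1024 * 1024).
    %% 256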
