Neo4j Scalability

Submitted by 倾然丶 夕夏残阳落幕 on 2020-01-13 18:38:27

Question


I have read this article. It states that Neo4j can scale horizontally, but only to increase read performance and fault tolerance, so the stored graph is copied to each server in a cluster. But what if I have a dataset that is larger than one server can store? Does Neo4j fail in this situation? Do I have to scale vertically instead and buy a larger HDD?

Thank you


Answer 1:


Yes. Every node in the cluster needs enough hard drive space to hold the full graph; there is no way around that.

If you're referring to RAM rather than hard drive space, then it isn't necessary to hold the whole database in memory (the amount cached is defined by the pagecache setting in neo4j.conf), but that means you'll hit the disk on every pagecache miss.

Here's the memory configuration section in the docs for details.
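For illustration only, here is a minimal memory sketch, assuming Neo4j 3.x/4.x setting names (Neo4j 5 renamed these under server.memory.*); the sizes are placeholders and should be tuned to your dataset and available RAM:

    # neo4j.conf — illustrative values, not a recommendation
    # Heap: transaction state and query execution
    dbms.memory.heap.initial_size=8g
    dbms.memory.heap.max_size=8g
    # Page cache: caches the store files in RAM; if it is smaller than
    # the store on disk, reads that miss the cache fall back to disk
    dbms.memory.pagecache.size=16g

Setting the page cache smaller than the store does not break anything; as the answer notes, it only means cache misses are served from disk.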



来源:https://stackoverflow.com/questions/50200734/neo4j-scalability
