Does Node.js scalability suffer because of garbage collection when under high load?


Question


Though Node.js is a pretty hot topic, I happened to find reports that Node.js might not be appropriate for real-time applications due to its garbage collection model (http://amix.dk/blog/post/19577). Also, some benchmarks show that Node.js responds slowly compared to RingoJS (http://hns.github.com/2010/09/29/benchmark2.html).

For the time being, Node.js is bound to the V8 JavaScript engine, which uses a generational, stop-the-world GC.

So, would Node.js fall over when incoming requests are massive? If there are real production statistics, that would be even better.

Thanks


Answer 1:


The cost of garbage collection depends on the number of objects in the heap, particularly the number of long-lived objects. The more you have, the more time will be spent in GC.
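To make that concrete, here is a minimal sketch (not from the original answer) contrasting long-lived allocations, which every major GC must trace, with short-lived ones that the young-generation scavenger reclaims cheaply. The object counts and shapes are arbitrary:

```javascript
// gc-cost-sketch.js -- hypothetical demo; run with: node --trace-gc gc-cost-sketch.js

// Long-lived objects: retained for the life of the program, so they get
// promoted to the old generation and must be traced on every major GC.
const retained = [];
for (let i = 0; i < 1e6; i++) {
  retained.push({ id: i, payload: 'x'.repeat(32) });
}

// Short-lived objects: garbage as soon as each iteration ends, so most are
// reclaimed by cheap young-generation scavenges instead of full collections.
for (let i = 0; i < 1e6; i++) {
  const temp = { id: i, payload: 'x'.repeat(32) };
  if (temp.id < 0) console.log(temp); // keep the allocation from being optimized away
}

console.log('retained objects:', retained.length);
```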

Yes, V8 can currently take sizable GC pauses if the heap is large. It sounds like the V8 team is working on minimizing the cost of each GC pause by spreading the work out. You can see the cost of GC in your own Node programs by starting node with --trace-gc.
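For reference, --trace-gc goes straight on the node command line (the exact log format varies across V8 versions). On later Node releases than this answer assumes, GC pauses can also be observed in-process via perf_hooks; a minimal sketch, with the file name as a placeholder:

```javascript
// Print a line for every GC cycle while the program runs:
//   node --trace-gc server.js
//
// Alternative (Node 8.5+): observe GC pauses programmatically.
const { PerformanceObserver } = require('perf_hooks');

const obs = new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    console.log(`GC pause: ${entry.duration.toFixed(2)} ms`);
  }
});
obs.observe({ entryTypes: ['gc'] });
```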

For many applications, the cost of GC is offset by the increasingly excellent optimizing compiler. I'd suggest trying a simple program and measuring both the cost of GC as reported by V8 and the end-to-end client latency. I've found the GC costs to be almost completely negligible when the clients are connecting over the open Internet.
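A minimal sketch of that measurement, assuming a throwaway HTTP server on localhost port 8000 (the port, file name, and request count are all placeholders, not part of the original answer):

```javascript
// latency-probe.js -- a measurement sketch, not production code.
const http = require('http');

const server = http.createServer((req, res) => {
  res.end('ok');
});

// Fire sequential requests and record round-trip times in milliseconds.
function probe(n, samples, done) {
  if (n === 0) return done(samples);
  const start = process.hrtime.bigint();
  http.get('http://127.0.0.1:8000/', (res) => {
    res.resume(); // drain the response body
    res.on('end', () => {
      samples.push(Number(process.hrtime.bigint() - start) / 1e6);
      probe(n - 1, samples, done);
    });
  });
}

server.listen(8000, () => {
  probe(1000, [], (samples) => {
    samples.sort((a, b) => a - b);
    console.log('p50 ms:', samples[Math.floor(samples.length * 0.50)]);
    console.log('p99 ms:', samples[Math.floor(samples.length * 0.99)]);
    server.close();
  });
});
```

Running the same script with node --trace-gc lets you line up GC log entries against any latency spikes in the percentiles.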



Source: https://stackoverflow.com/questions/6176055/does-node-js-scalability-suffer-because-of-garbage-collection-when-under-high-lo
