Measuring and benchmarking processing power of a javascript engine in a browser

Submitted by 有些话、适合烂在心里 on 2019-12-01 09:14:19

JS performance optimization is a huge area in general, and it's rather ambitious to start from scratch.

If I were you, I'd take a look at some existing projects around this space:

  • Benchmark.js handles the timing and stats analysis (averaging, computing variance) bits.
  • JSPerf lets anyone create and run tests and then look at results for any browser. There's a large repository of tests there that you can peruse.
  • BrowserScope is the results storage for JSPerf tests, and tracks results per-UA.
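
At its core, what a library like Benchmark.js automates is running a function many times and doing statistics on the per-run timings. As a rough sketch of that idea (not Benchmark.js's actual API — `bench` here is a hypothetical helper):

```javascript
// Minimal sketch of a timing-and-stats harness: run a function repeatedly,
// then report the mean and standard deviation of the per-run times.
// Uses performance.now(), which is available in browsers and modern Node.
function bench(fn, runs = 50) {
  const times = [];
  for (let i = 0; i < runs; i++) {
    const start = performance.now();
    fn();
    times.push(performance.now() - start);
  }
  const mean = times.reduce((a, b) => a + b, 0) / times.length;
  const variance =
    times.reduce((acc, t) => acc + (t - mean) ** 2, 0) / times.length;
  return { mean, stddev: Math.sqrt(variance) };
}

// Example: time a simple numeric loop
const result = bench(() => {
  let sum = 0;
  for (let i = 0; i < 100000; i++) sum += i;
});
console.log(`mean ${result.mean.toFixed(3)} ms ± ${result.stddev.toFixed(3)} ms`);
```

A real harness does quite a bit more (adaptive iteration counts, outlier rejection, guarding against dead-code elimination), which is a good reason to lean on Benchmark.js rather than rolling your own.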

The Chrome console has a "weird" execution environment that's not quite the web page itself and incurs some performance costs due to that, I would think. That's certainly true for the console in Firefox.

To answer your original question... it really depends on what you want to measure. Different JS engines are good at different things, so depending on the test program you could have Chrome being 5x faster than Firefox, say, or vice versa.

Also, the optimizations browser JITs do can be very heavily dependent on the overall code flow, so the time it takes to do operation A followed by operation B is in general not the same as the sum of the times needed to do A and B separately (it can be much larger, or it can be smaller). As a result, benchmarking anything other than the code you actually want to run is of very limited utility. Running any single piece of code is nearly useless for "ranking web browsers according to performance".
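
If you do measure anyway, the JIT effects above mean you should benchmark the actual workload and let the engine warm up first. A hedged sketch of that approach (`measureHot` is a hypothetical helper, not a standard API):

```javascript
// Run the workload a few times before timing it, so the JIT has a chance
// to compile and optimize the hot path; then report the median run time,
// which is more robust to GC pauses than the mean.
function measureHot(fn, { warmup = 20, runs = 30 } = {}) {
  for (let i = 0; i < warmup; i++) fn(); // warm-up: results discarded

  const times = [];
  for (let i = 0; i < runs; i++) {
    const start = performance.now();
    fn();
    times.push(performance.now() - start);
  }
  times.sort((a, b) => a - b);
  return times[Math.floor(times.length / 2)]; // median of timed runs, in ms
}

// Example: measure the code you actually care about, not a synthetic kernel
const medianMs = measureHot(() => JSON.parse(JSON.stringify({ a: [1, 2, 3] })));
console.log(`median ${medianMs.toFixed(4)} ms`);
```

Even this only tells you how *this* code performs in *this* engine; it still says nothing general about ranking browsers.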
