I've been looking at several WebGL examples. Consider MDN's tutorial. Their vertex shader multiplies the vertex by a perspective matrix and a world-position matrix:
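That shader looks roughly like this (attribute and uniform names recalled from MDN's tutorial, so treat them as approximate):

```js
const vertexShaderSource = `
  attribute vec4 aVertexPosition;
  uniform mat4 uModelViewMatrix;   // world/model-view matrix
  uniform mat4 uProjectionMatrix;  // perspective matrix

  void main() {
    // one mat4*mat4 and one mat4*vec4 multiply run for every vertex, every frame
    gl_Position = uProjectionMatrix * uModelViewMatrix * aVertexPosition;
  }
`;
```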
It depends ...
If you do the math in the shader, it's done for every vertex (vertex shader) or every pixel (fragment shader). Even a GPU does not have infinite speed, so say you are drawing 1 million vertices: comparing 1 set of matrix calculations in JavaScript against 1 million matrix calculations on the GPU, the JavaScript will likely win.
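For example, here's a minimal sketch of doing the math once in JavaScript instead (using glMatrix-style `mat4` helpers; names like `projectionMatrix` and `mvpLocation` are placeholders assumed to exist):

```js
// CPU side: combine the matrices once per draw call...
const mvp = mat4.create();
mat4.multiply(mvp, projectionMatrix, modelViewMatrix); // 1 multiply in JavaScript
gl.uniformMatrix4fv(mvpLocation, false, mvp);          // upload a single matrix

// ...so the vertex shader only does one multiply per vertex:
//   gl_Position = u_matrix * a_position;
```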
Of course your mileage may vary. Every GPU is different. Some GPUs are faster than others. Some drivers do vertex calculations on the CPU. Some CPUs are faster than others.
You can test, but unfortunately, since you are writing for the web, you have no idea what browser the user is running, nor what CPU, GPU, or driver they have. So, it really depends.
On top of that, passing matrices to the shader is also a non-free operation. In other words, it's faster to call `gl.uniformMatrix4fv` once than the 4 times you show in your example. If you were drawing 3000 objects, whether 12000 calls to `gl.uniformMatrix4fv` (4 matrices each) is significantly slower than 3000 calls (1 matrix each) is something you'd have to test.
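A sketch of the two call patterns (uniform locations and matrices are placeholders, and the 4-matrix split only loosely mirrors the example in the question):

```js
// Option A: 4 uploads per object => 12000 gl.uniformMatrix4fv calls for 3000 objects
gl.uniformMatrix4fv(projectionLoc, false, projection);
gl.uniformMatrix4fv(viewLoc, false, view);
gl.uniformMatrix4fv(modelLoc, false, model);
gl.uniformMatrix4fv(normalLoc, false, normalMatrix);

// Option B: combine on the CPU => 1 upload per object, 3000 calls total
const mvp = mat4.create();
mat4.multiply(mvp, projection, view);
mat4.multiply(mvp, mvp, model);
gl.uniformMatrix4fv(mvpLoc, false, mvp);
```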
Further, the browser teams are working on making matrix math in JavaScript faster, trying to get it closer to C/C++ speeds.
I guess that means there is no right answer except to test, and those results will differ for every platform/browser/GPU/driver/CPU combination.
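If you do want to measure the JavaScript side, a micro-benchmark sketch like this (glMatrix-style `mat4` assumed; numbers will vary with JIT warmup and hardware) is a starting point:

```js
const a = mat4.create();
const b = mat4.create();
const out = mat4.create();

const start = performance.now();
for (let i = 0; i < 100000; ++i) {
  mat4.multiply(out, a, b); // the per-frame work you'd otherwise push to the shader
}
console.log(`100k mat4 multiplies: ${(performance.now() - start).toFixed(1)} ms`);
```

Timing the GPU side is harder since WebGL calls are pipelined and asynchronous; in practice you compare whole-frame times instead.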