`timeit np.multiply(np.multiply(x,x),x)` times the same as `x*x*x`. My guess is that `np.multiply` is using a fast Fortran linear algebra package like BLAS. I know from another issue that `numpy.dot` uses BLAS in certain cases.
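
A sketch of the kind of comparison I mean, using the standard `timeit` module (the array size and repeat count here are arbitrary choices, and absolute times will vary with your machine and NumPy build):

```python
import timeit

# Build a large float array in the setup so allocation isn't part of the timing.
setup = "import numpy as np; x = np.arange(1_000_000, dtype=np.float64)"

t_mul = timeit.timeit("np.multiply(np.multiply(x, x), x)", setup=setup, number=100)
t_ops = timeit.timeit("x * x * x", setup=setup, number=100)

print(f"np.multiply chain: {t_mul:.4f} s   x*x*x: {t_ops:.4f} s")
```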
I have to take that back. `np.dot(x,x)` is 3x faster than `np.sum(x*x)`, so the speed advantage of `np.multiply` is not consistent with it using BLAS.
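
The same style of timing shows the `np.dot` vs. `np.sum(x*x)` gap; the ratio you see depends on which BLAS your NumPy is linked against, so treat the numbers as illustrative:

```python
import timeit

setup = "import numpy as np; x = np.arange(1_000_000, dtype=np.float64)"

# np.dot on 1-D arrays computes the same sum of squares as np.sum(x*x),
# but can dispatch to a BLAS dot product.
t_dot = timeit.timeit("np.dot(x, x)", setup=setup, number=100)
t_sum = timeit.timeit("np.sum(x * x)", setup=setup, number=100)

print(f"np.dot(x, x): {t_dot:.4f} s   np.sum(x*x): {t_sum:.4f} s")
```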
With my numpy (times will vary with machine and available libraries), `np.power(x,3.1)` and `np.exp(3.1*np.log(x))` take about the same time, but `np.power(x,3)` is 2x as fast. Not as fast as `x*x*x`, but still faster than the general power. So it is taking some advantage of the integer power.
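
For completeness, a sketch of the power comparisons (the array starts at 1 so `np.log(x)` is defined; again the ratios, not the absolute times, are the point):

```python
import timeit

setup = "import numpy as np; x = np.arange(1, 1_000_001, dtype=np.float64)"

# Compare the general float power, the exp/log equivalent,
# the small integer power, and repeated multiplication.
for stmt in ("np.power(x, 3.1)",
             "np.exp(3.1 * np.log(x))",
             "np.power(x, 3)",
             "x * x * x"):
    t = timeit.timeit(stmt, setup=setup, number=20)
    print(f"{stmt:<25} {t:.4f} s")
```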