Sorry for so many questions. I am running Mac OS X 10.6 on an Intel Core 2 Duo. I am running some benchmarks for my research, and I have run into another thing that baffles me.
Very interesting. I was curious to see how it was implemented, so I did:
>>> import inspect
>>> import numpy as np
>>> inspect.getmodule(np.dot)
<module 'numpy.core._dotblas' from '...'>
So it looks like it's using the BLAS library.
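Another way to check is np.show_config() (if your numpy build provides it), which prints the BLAS/LAPACK libraries numpy was compiled against:
>>> np.show_config()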
Then, to dig further:
>>> help(np.core._dotblas)
from which I found this:
When Numpy is built with an accelerated BLAS like ATLAS, these functions are replaced to make use of the faster implementations. The faster implementations only affect float32, float64, complex64, and complex128 arrays. Furthermore, the BLAS API only includes matrix-matrix, matrix-vector, and vector-vector products. Products of arrays with larger dimensionalities use the built in functions and are not accelerated.
So it looks like ATLAS provides fine-tuned versions of certain functions, but they're only applicable to certain data types. Very interesting.
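For example, a quick timing comparison along these lines (a rough sketch; the matrix size and repeat count are arbitrary) should show the float64 product beating the int64 one, since the integer version falls back to the built-in loop:
>>> import timeit
>>> a_f = np.random.rand(500, 500)            # float64, goes through BLAS
>>> a_i = (a_f * 100).astype(np.int64)        # int64, uses the built-in fallback
>>> print(timeit.timeit(lambda: np.dot(a_f, a_f), number=10))
>>> print(timeit.timeit(lambda: np.dot(a_i, a_i), number=10))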
So yeah, it looks like I'll be using floats more often...
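If the data starts out as integers, casting once before the multiply should be enough to get the fast path (data here is just a hypothetical integer array):
>>> data = data.astype(np.float64)   # cast so np.dot takes the BLAS route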