I am computing the backpropagation algorithm for a sparse autoencoder. I have implemented it in Python using NumPy and in MATLAB. The NumPy implementation is noticeably slower than the MATLAB one.
Differences in performance between NumPy and MATLAB have always frustrated me. In the end they often boil down to the underlying LAPACK libraries. As far as I know, MATLAB uses the full ATLAS LAPACK by default, while NumPy ships with a lightweight LAPACK. MATLAB reckons people don't care about space and bulk, while NumPy reckons they do. There is a similar question with a good answer.
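One way to see what your own NumPy build is linked against is `numpy.show_config()`. A minimal sketch (matrix size and timing approach are arbitrary choices, not from the original post) that prints the build configuration and times a large matrix multiply, which you can compare against `tic`/`toc` on the same operation in MATLAB:

    import time
    import numpy as np

    # Print the BLAS/LAPACK libraries this NumPy build is linked to.
    np.show_config()

    # Rough timing of a large dense matrix multiply as a comparison point.
    n = 2000
    a = np.random.rand(n, n)
    b = np.random.rand(n, n)

    start = time.perf_counter()
    c = np.dot(a, b)
    print("%dx%d matmul took %.3f s" % (n, n, time.perf_counter() - start))

If `show_config()` reports only the bundled lapack_lite rather than ATLAS, MKL, or OpenBLAS, that alone can account for a large gap on linear-algebra-heavy code like backpropagation.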