I need a fast in-memory matrix transpose for my Gaussian convolution function in C/C++. What I do now is:
convolute_1D
transpose
convolute_1D
transpose
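In other words, roughly this call pattern. The following is only a minimal, self-contained toy sketch of what I mean; the names (gaussian_blur_2D, convolve_rows_1D, transpose) and the naive, clamped-border helpers are placeholders for illustration, not my actual routines:

#include <stdlib.h>
#include <string.h>

/* Convolve every row of a w x h image in place with a centered 1D kernel of
 * odd size ksize; borders are clamped. Purely illustrative, not optimized. */
static void convolve_rows_1D(float *img, int w, int h,
                             const float *kernel, int ksize) {
    const int r = ksize / 2;
    float *row = (float *)malloc((size_t)w * sizeof(float));
    for (int y = 0; y < h; y++) {
        memcpy(row, img + (size_t)y * w, (size_t)w * sizeof(float));
        for (int x = 0; x < w; x++) {
            float acc = 0.0f;
            for (int k = -r; k <= r; k++) {
                int xx = x + k;
                if (xx < 0) xx = 0; else if (xx >= w) xx = w - 1;
                acc += kernel[k + r] * row[xx];
            }
            img[(size_t)y * w + x] = acc;
        }
    }
    free(row);
}

/* Simple out-of-place transpose: src is w x h, dst becomes h x w. */
static void transpose(const float *src, float *dst, int w, int h) {
    for (int y = 0; y < h; y++)
        for (int x = 0; x < w; x++)
            dst[(size_t)x * h + y] = src[(size_t)y * w + x];
}

/* Two 1D row passes plus two transposes = full separable 2D Gaussian blur,
 * so both convolution passes stream over contiguous rows. */
void gaussian_blur_2D(float *img, float *tmp, int w, int h,
                      const float *kernel, int ksize) {
    convolve_rows_1D(img, w, h, kernel, ksize);  /* horizontal pass           */
    transpose(img, tmp, w, h);                   /* tmp is now h x w          */
    convolve_rows_1D(tmp, h, w, kernel, ksize);  /* vertical pass, row-wise   */
    transpose(tmp, img, h, w);                   /* back to original layout   */
}

The transposes are what dominate once the kernel is short, hence the question.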
FWIW, on a 3-year-old Core i7 M laptop CPU, this naive 4x4 transpose was barely slower than your SSE version, while being almost 40% faster on a newer Intel Xeon E5-2630 v2 @ 2.60GHz desktop CPU (for reference, a typical SSE kernel of that kind is sketched right after the naive code below).
// Transpose the 4x4 block at A (row stride lda) into B (row stride ldb)
// using plain scalar loads and stores.
inline void transpose4x4_naive(float *A, float *B, const int lda, const int ldb) {
    // Read the four source rows into local arrays.
    const float r0[] = { A[0], A[1], A[2], A[3] }; // memcpy instead?
    A += lda;
    const float r1[] = { A[0], A[1], A[2], A[3] };
    A += lda;
    const float r2[] = { A[0], A[1], A[2], A[3] };
    A += lda;
    const float r3[] = { A[0], A[1], A[2], A[3] };

    // Write each column of A out as a row of B.
    B[0] = r0[0];
    B[1] = r1[0];
    B[2] = r2[0];
    B[3] = r3[0];
    B += ldb;
    B[0] = r0[1];
    B[1] = r1[1];
    B[2] = r2[1];
    B[3] = r3[1];
    B += ldb;
    B[0] = r0[2];
    B[1] = r1[2];
    B[2] = r2[2];
    B[3] = r3[2];
    B += ldb;
    B[0] = r0[3];
    B[1] = r1[3];
    B[2] = r2[3];
    B[3] = r3[3];
}
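For reference, an SSE 4x4 kernel of the kind it was compared against usually boils down to _MM_TRANSPOSE4_PS, something like the sketch below; the exact version benchmarked above may differ (e.g. it may use aligned loads):

#include <xmmintrin.h>

// Sketch of a typical SSE 4x4 transpose: load four rows, shuffle in registers,
// store four columns. Unaligned loads/stores are used here so that A and B
// need not be 16-byte aligned.
inline void transpose4x4_SSE(float *A, float *B, const int lda, const int ldb) {
    __m128 row0 = _mm_loadu_ps(&A[0 * lda]);
    __m128 row1 = _mm_loadu_ps(&A[1 * lda]);
    __m128 row2 = _mm_loadu_ps(&A[2 * lda]);
    __m128 row3 = _mm_loadu_ps(&A[3 * lda]);
    _MM_TRANSPOSE4_PS(row0, row1, row2, row3); // in-register 4x4 transpose
    _mm_storeu_ps(&B[0 * ldb], row0);
    _mm_storeu_ps(&B[1 * ldb], row1);
    _mm_storeu_ps(&B[2 * ldb], row2);
    _mm_storeu_ps(&B[3 * ldb], row3);
}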
Strangely enough, the older laptop CPU is faster here than the dual E5-2630 v2 desktop with twice as many cores, but that's a different story :)
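Either kernel is then applied over the whole matrix in 4x4 tiles, roughly like this (a sketch assuming both dimensions are multiples of 4; transpose_block is just a name I made up, and leftover edges would need scalar handling):

// Transpose an n x m row-major matrix A (row stride lda) into the
// m x n matrix B (row stride ldb), one 4x4 tile at a time.
inline void transpose_block(float *A, float *B, const int n, const int m,
                            const int lda, const int ldb) {
    for (int i = 0; i < n; i += 4)
        for (int j = 0; j < m; j += 4)
            transpose4x4_naive(&A[i * lda + j], &B[j * ldb + i], lda, ldb);
}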
Otherwise, you might also be interested in http://research.colfaxinternational.com/file.axd?file=2013%2F8%2FColfax_Transposition-7110P.pdf and http://colfaxresearch.com/multithreaded-transposition-of-square-matrices-with-common-code-for-intel-xeon-processors-and-intel-xeon-phi-coprocessors/ (requires a login now...).