I am migrating vectorized code written using SSE2 intrinsics to AVX2 intrinsics.
Much to my disappointment, I discovered that the byte-shift intrinsics _mm256_slli_si256 and _mm256_srli_si256 only shift within the two 128-bit lanes; they do not move bytes across the lane boundary, so they are not drop-in replacements for _mm_slli_si128 and _mm_srli_si128 on a full 256-bit value.
Gathered from several sources, here are the working solutions. The key to crossing the 128-bit lane boundary is the align intrinsic _mm256_alignr_epi8, used together with _mm256_permute2x128_si256 to move the lanes into position first.
Shift the whole 256-bit value left by N bytes:

0 < N < 16
_mm256_alignr_epi8(A, _mm256_permute2x128_si256(A, A, _MM_SHUFFLE(0, 0, 2, 0)), 16 - N)
N = 16
_mm256_permute2x128_si256(A, A, _MM_SHUFFLE(0, 0, 2, 0))
16 < N < 32
_mm256_slli_si256(_mm256_permute2x128_si256(A, A, _MM_SHUFFLE(0, 0, 2, 0)), N - 16)
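For convenience, the three left-shift cases can be folded into one helper whose shift count is a template parameter, since every intrinsic involved requires a compile-time immediate. This is a minimal sketch, not a standard intrinsic: the name mm256_byte_shift_left is my own, and it assumes C++17 (for if constexpr) and compilation with AVX2 enabled (e.g. -mavx2).

#include <immintrin.h>

// Hypothetical helper (name is mine): shift the whole 256-bit register left
// by N bytes, shifting in zeros, using the three cases listed above.
// N must be a template parameter because these intrinsics take immediates.
template <int N>
__m256i mm256_byte_shift_left(__m256i A) {
    static_assert(N >= 0 && N < 32, "byte shift count must be in [0, 32)");
    // B = {low lane: zero, high lane: low lane of A}
    __m256i B = _mm256_permute2x128_si256(A, A, _MM_SHUFFLE(0, 0, 2, 0));
    if constexpr (N == 0)
        return A;
    else if constexpr (N < 16)
        // Per lane: take bytes (16-N)..(31-N) of the 32-byte value
        // (A_lane : B_lane), so A's low-lane bytes spill into the high lane.
        return _mm256_alignr_epi8(A, B, 16 - N);
    else if constexpr (N == 16)
        return B;
    else
        return _mm256_slli_si256(B, N - 16);
}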
Shift the whole 256-bit value right by N bytes:

0 < N < 16
_mm256_alignr_epi8(_mm256_permute2x128_si256(A, A, _MM_SHUFFLE(2, 0, 0, 1)), A, N)
N = 16
_mm256_permute2x128_si256(A, A, _MM_SHUFFLE(2, 0, 0, 1))
16 < N < 32
_mm256_srli_si256(_mm256_permute2x128_si256(A, A, _MM_SHUFFLE(2, 0, 0, 1)), N - 16)
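The right-shift cases fold into an analogous helper. The sketch below (again with a made-up name, mm256_byte_shift_right) also compares one shift count against a plain scalar byte shift, which is a handy sanity check when wiring these expressions into real code; note that a right shift by N bytes means output byte i equals input byte i + N.

#include <immintrin.h>
#include <cstdio>
#include <cstring>

// Hypothetical helper (name is mine): shift the whole 256-bit register right
// by N bytes, shifting in zeros, using the three cases listed above.
template <int N>
__m256i mm256_byte_shift_right(__m256i A) {
    static_assert(N >= 0 && N < 32, "byte shift count must be in [0, 32)");
    // C = {low lane: high lane of A, high lane: zero}
    __m256i C = _mm256_permute2x128_si256(A, A, _MM_SHUFFLE(2, 0, 0, 1));
    if constexpr (N == 0)
        return A;
    else if constexpr (N < 16)
        // Per lane: take bytes N..N+15 of the 32-byte value (C_lane : A_lane),
        // so A's high-lane bytes spill down into the low lane.
        return _mm256_alignr_epi8(C, A, N);
    else if constexpr (N == 16)
        return C;
    else
        return _mm256_srli_si256(C, N - 16);
}

int main() {
    alignas(32) unsigned char in[32], out[32], ref[32];
    for (int i = 0; i < 32; ++i) in[i] = (unsigned char)(i + 1);

    constexpr int N = 5;                        // try any 0 <= N < 32
    __m256i v = _mm256_load_si256((const __m256i *)in);
    _mm256_store_si256((__m256i *)out, mm256_byte_shift_right<N>(v));

    // Scalar reference: a right shift by N bytes means out[i] = in[i + N].
    memset(ref, 0, sizeof ref);
    for (int i = 0; i + N < 32; ++i) ref[i] = in[i + N];

    printf("%s\n", memcmp(out, ref, sizeof ref) == 0 ? "match" : "MISMATCH");
    return 0;
}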