Are these functions column-major or row-major?


To my eyes, these two are using the same format: in both, the first index is treated as the row and the second index as the column.

Looks may be deceiving, but in fact the first index in linmath.h is the column. C and C++ specify that in a multidimensional array defined like this

sometype a[n][m];

there are n times m elements of sometype in succession. Whether that is row- or column-major order depends solely on how you interpret the indices. Now OpenGL defines 4×4 matrices to be indexed in the following linear scheme

0 4 8 c
1 5 9 d
2 6 a e
3 7 b f
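
To connect this with the C layout rule above, here is a minimal sketch (the array and program are my own, not taken from linmath.h) that prints the linear position of each element of a float[4][4]; it shows that the second index is the one that varies fastest in memory:

    #include <stdio.h>

    int main(void)
    {
        float m[4][4];   /* 4*4 = 16 floats, stored in one contiguous block */

        /* m[i][j] sits at linear offset i*4 + j: the second index varies
           fastest, the first index is the "slow" one. */
        for (int i = 0; i < 4; ++i)
            for (int j = 0; j < 4; ++j)
                printf("m[%d][%d] -> linear %d\n", i, j,
                       (int)(((char *)&m[i][j] - (char *)m) / sizeof(float)));
        return 0;
    }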

If you apply the rules of C and C++ multidimensional arrays, you'd add the following column/row designation

   ----> n

|  0 4 8 c
|  1 5 9 d
V  2 6 a e
m  3 7 b f

This remaps the linear indices to the following 2-tuples:

0 -> 0,0
1 -> 0,1
2 -> 0,2
3 -> 0,3
4 -> 1,0
5 -> 1,1
6 -> 1,2
7 -> 1,3
8 -> 2,0
9 -> 2,1
a -> 2,2
b -> 2,3
c -> 3,0
d -> 3,1
e -> 3,2
f -> 3,3
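
With that mapping in hand, the direct answer to the question falls out: in a plain C float[4][4] used with OpenGL's scheme, the first index picks a column. A minimal sketch (the accessor name is mine, not linmath.h's):

    #include <stdio.h>

    /* Hypothetical accessor: with the layout above, the first C index is
       the column and the second is the row. */
    static float mat4_get(float m[4][4], int row, int col)
    {
        return m[col][row];
    }

    int main(void)
    {
        float m[4][4];
        for (int i = 0; i < 4; ++i)          /* fill with the linear indices */
            for (int j = 0; j < 4; ++j)
                m[i][j] = (float)(i * 4 + j);

        /* In the scheme above, row 1 / column 2 holds linear index 9. */
        printf("%g\n", mat4_get(m, 1, 2));   /* prints 9 */
        return 0;
    }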

Okay, OpenGL and some math libraries use column-major ordering, fine. But why do it this way and break with the usual mathematical convention that in M_i,j the index i designates the row and j the column? Because it makes things look nicer. You see, a matrix is just a bunch of vectors: vectors that can, and usually do, form a coordinate base system.

Have a look at this picture:

The axes X, Y and Z are essentially vectors. They are defined as

X = (1,0,0)
Y = (0,1,0)
Z = (0,0,1)

Wait a moment, doesn't that up there look like an identity matrix? Indeed it does, and in fact it is!

However, written as it is, the matrix has been formed by stacking row vectors. And the rules for matrix multiplication essentially tell us that a matrix formed by row vectors transforms row vectors into row vectors by left-associative multiplication (vector on the left). Column-major matrices transform column vectors into column vectors by right-associative multiplication (vector on the right).

Now this is not really a problem, because left-associative can do the same things right-associative can; you just have to swap rows for columns (i.e. transpose everything) and reverse the order of the operands. But left vs. right and row vs. column are just notational conventions in which we write things.
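
As a tiny numeric sketch of that equivalence (my own example, arbitrary values): multiplying a row vector on the left gives the same numbers as multiplying the transposed matrix with a column vector on the right.

    #include <stdio.h>

    int main(void)
    {
        float M[2][2] = { {1, 2},
                          {3, 4} };
        float v[2]    = { 5, 6 };

        /* Left multiplication of a row vector: v' = v * M */
        float left[2]  = { v[0]*M[0][0] + v[1]*M[1][0],
                           v[0]*M[0][1] + v[1]*M[1][1] };

        /* Right multiplication of a column vector by the transpose: v' = M^T * v */
        float right[2] = { M[0][0]*v[0] + M[1][0]*v[1],
                           M[0][1]*v[0] + M[1][1]*v[1] };

        /* Both print 23 34 */
        printf("%g %g\n%g %g\n", left[0], left[1], right[0], right[1]);
        return 0;
    }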

And the typical mathematical notation is (for example)

v_clip = P · V · M · v_local

This notation makes it intuitively clear what's going on. Furthermore, in programming the character = usually designates assignment from right to left. Some programming languages are more mathematically influenced, like Pascal or Delphi, and write it :=. Anyway, with row-major ordering we'd have to write it

v_clip = v_local · M · V · P

and to the majority of mathematical folks this looks unnatural, because technically M, V and P are linear operators (yes, they're also matrices and linear transforms), and operators always go between the equality/assignment and the variable.

So that's why we use the column-major format: it looks nicer. Technically it could be done using a row-major format as well. And what does this have to do with the memory layout of matrices? Well, when you use column-major notation, you want direct access to the base vectors of the transformation matrices, without having to extract them element by element. By storing the numbers in column-major order, all it takes to access a certain base vector of a matrix is a simple offset into linear memory.
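
For example (a sketch of the idea only; the typedef mirrors a plain float[4][4] and the helper name is mine, not part of linmath.h), with column-major storage the base vectors and the translation each occupy one contiguous run of 4 floats:

    typedef float mat4x4[4][4];

    /* With column-major storage, each m[i] is one contiguous column:
       m[0] = X axis, m[1] = Y axis, m[2] = Z axis, m[3] = translation. */
    static const float *mat4x4_column(const mat4x4 m, int i)
    {
        return m[i];   /* handing out a base vector is just an offset */
    }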

I can't speak for the code example of the other library, but I'd strongly assume that it treats the first index as the slower-incrementing index as well, which makes it column-major when subjected to the notation of OpenGL. Remember: column major & right associativity == row major & left associativity.

The fragments posted are not enough to answer the question. They could be row-major matrices stored in row order, or column-major matrices stored in column order.

It may be more obvious if you look at how a vector is treated when multiplied with an appropriate matrix. In a row-major system, you would expect the vector to be treated as a single row matrix, whereas in a column-major system it would similarly be a single column matrix. That then dictates how a vector and a matrix may be multiplied. You can only multiply a vector with a matrix as either a single column on the right, or a single row on the left.
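
As a hedged sketch of the column-on-the-right case in C (the function name is illustrative, not taken from any particular library), with the matrix stored column-major as M[col][row]:

    /* out = M * v, multiplying a column vector on the right. */
    static void mat4_mul_vec4(float out[4], const float M[4][4], const float v[4])
    {
        for (int row = 0; row < 4; ++row) {
            out[row] = 0.0f;
            for (int col = 0; col < 4; ++col)
                out[row] += M[col][row] * v[col];   /* walk along one row */
        }
    }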

The GL convention is column-major, so a vector is multiplied to the right. D3D is row-major, so vectors are rows and are multiplied to the left.

This needs to be taken into account when concatenating transforms, so that they are applied in the correct order.

i.e:

GL:
    V' = CAMERA * WORLD * LOCAL * V
D3D:
    V' = V * LOCAL * WORLD * CAMERA

However, both choose to store their matrices such that the in-memory representations are actually the same (until we get into shaders and some representations need to be transposed...).
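
To make that last point concrete, here is a small sketch of my own (illustrative values only): a GL-style translation matrix stored column-major and a D3D-style one stored row-major occupy memory identically, because the transpose in convention and the transpose in storage cancel out.

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        float tx = 1.0f, ty = 2.0f, tz = 3.0f;

        /* GL style: column vectors, column-major storage,
           translation lives in the 4th column. */
        float gl[4][4]  = { {1,0,0,0}, {0,1,0,0}, {0,0,1,0}, {tx,ty,tz,1} };

        /* D3D style: row vectors, row-major storage,
           translation lives in the 4th row. */
        float d3d[4][4] = { {1,0,0,0}, {0,1,0,0}, {0,0,1,0}, {tx,ty,tz,1} };

        /* Same bytes in linear memory. */
        printf("same bytes: %s\n",
               memcmp(gl, d3d, sizeof gl) == 0 ? "yes" : "no");
        return 0;
    }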
