Confusion between C++ and OpenGL matrix order (row-major vs column-major)

2021-12-17 · opengl matrix math c++


I'm getting thoroughly confused over matrix definitions. I have a matrix class, which holds a float[16] which I assumed is row-major, based on the following observations:

float matrixA[16] = { 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15 };
float matrixB[4][4] = { { 0, 1, 2, 3 }, { 4, 5, 6, 7 }, { 8, 9, 10, 11 }, { 12, 13, 14, 15 } };

matrixA and matrixB both have the same linear layout in memory (i.e. all numbers are in order). According to http://en.wikipedia.org/wiki/Row-major_order this indicates a row-major layout.

matrixA[0] == matrixB[0][0];
matrixA[3] == matrixB[0][3];
matrixA[4] == matrixB[1][0];
matrixA[7] == matrixB[1][3];

Therefore, matrixB[0] = row 0, matrixB[1] = row 1, etc. Again, this indicates row-major layout.

My problem / confusion comes when I create a translation matrix which looks like:

1, 0, 0, transX
0, 1, 0, transY
0, 0, 1, transZ
0, 0, 0, 1

Which is laid out in memory as, { 1, 0, 0, transX, 0, 1, 0, transY, 0, 0, 1, transZ, 0, 0, 0, 1 }.

Then when I call glUniformMatrix4fv, I need to set the transpose flag to GL_FALSE, indicating that the matrix is column-major; otherwise, transforms such as translation and scaling are not applied correctly:

If transpose is GL_FALSE, each matrix is assumed to be supplied in column major order. If transpose is GL_TRUE, each matrix is assumed to be supplied in row major order.

Why does my matrix, which appears to be row-major, need to be passed to OpenGL as column-major?

Solution

The matrix notation used in the OpenGL documentation does not describe the in-memory layout of OpenGL matrices.

I think it'll be easier if you drop/forget about the entire "row/column-major" thing. That's because, in addition to the notation, the programmer can also decide how to lay out the matrix in memory (whether adjacent elements form rows or columns), and having these two independent choices adds to the confusion.

OpenGL matrices have the same memory layout as DirectX matrices.

x.x x.y x.z 0
y.x y.y y.z 0
z.x z.y z.z 0
p.x p.y p.z 1

or

{ x.x x.y x.z 0 y.x y.y y.z 0 z.x z.y z.z 0 p.x p.y p.z 1 }

  • x, y, z are 3-component vectors describing the matrix's coordinate system (the local coordinate system relative to the global coordinate system).

  • p is a 3-component vector describing the origin of the matrix's coordinate system.

Which means that the translation matrix should be laid out in memory like this:

{ 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, transX, transY, transZ, 1 }.

Leave it at that, and the rest should be easy.

--- citation from the old OpenGL FAQ ---


9.005 Are OpenGL matrices column-major or row-major?

For programming purposes, OpenGL matrices are 16-value arrays with base vectors laid out contiguously in memory. The translation components occupy the 13th, 14th, and 15th elements of the 16-element matrix, where indices are numbered from 1 to 16 as described in section 2.11.2 of the OpenGL 2.1 Specification.

Column-major versus row-major is purely a notational convention. Note that post-multiplying with column-major matrices produces the same result as pre-multiplying with row-major matrices. The OpenGL Specification and the OpenGL Reference Manual both use column-major notation. You can use any notation, as long as it's clearly stated.

Sadly, the use of column-major format in the spec and blue book has resulted in endless confusion in the OpenGL programming community. Column-major notation suggests that matrices are not laid out in memory as a programmer would expect.

