# Rotation Matrices

4 replies to this topic

### #1 hit172

hit172

GMC Member

• New Member
• 189 posts
• Version:GM8

Posted 05 February 2012 - 03:07 AM

In a rotation matrix are the rows of the matrix the 3 axes of the specified rotation such that row 1 is the right/left vector, row 2 is the up vector, and row 3 is the out/look vector?
Can you do it this way or is there a better way to construct a rotation matrix?

When I use my method, the resulting matrix distorts the cube, and as the camera approaches a position directly above the cube, the cube collapses into a line.

Screenshot captions (images not shown):

• cube at (0,0,0), viewed from (2,0,5), looking at (0,0,0)
• cube at (0,0,0), viewed from (1,0,5), looking at (0,0,0)
• cube at (0,0,0), viewed from (1,0,5), looking at (1,0,0)

*note* I am using the method described here http://www.fastgraph...mes/3drotation/

Edited by hit172, 05 February 2012 - 03:07 AM.


### #2 xshortguy

xshortguy

GMC Member

• Global Moderators
• 4185 posts
• Version:GM:Studio

Posted 05 February 2012 - 03:15 AM

No, matrices do not work like that.

A matrix tells us how to transform an input vector into a corresponding output vector. The nth row of the matrix tells us how to build the nth entry of the output vector from the entries of the input vector.

A rotation matrix is a matrix that takes a vector and transforms it in such a way that the length of the output vector is unchanged by the transformation, and it preserves orientation (its determinant is +1), so no vector gets "flipped" into its reflection.
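To make that concrete, here is a minimal Python sketch (not from the original thread) of how each row of a matrix builds one entry of the output vector, using a 90-degree rotation about the z axis, which leaves the vector's length unchanged:

```python
import math

def mat_vec(m, v):
    # Row i of the matrix supplies the weights that combine the
    # entries of the input vector into entry i of the output vector.
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

# Rotation by 90 degrees about the z axis.
t = math.pi / 2
rz = [[math.cos(t), -math.sin(t), 0.0],
      [math.sin(t),  math.cos(t), 0.0],
      [0.0,          0.0,         1.0]]

v = [1.0, 0.0, 0.0]
w = mat_vec(rz, v)          # rotates (1,0,0) onto (0,1,0)

def length(u):
    return math.sqrt(sum(x * x for x in u))
```

Note that `length(w)` equals `length(v)`, which is exactly the length-preserving property described above.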

### #3 hit172

hit172

GMC Member

• New Member
• 189 posts
• Version:GM8

Posted 05 February 2012 - 03:36 AM

> A matrix tells us how to transform an input vector into a corresponding output vector. The nth row of the matrix tells us how to build the nth entry of the output vector from the entries of the input vector.

Yes, so you then multiply the input vector by the matrix to get the corresponding output vector.

Can you then create a rotation matrix if you know the input and output vectors, or more specifically, from a look vector and the zero vector (origin)?

Edited by hit172, 05 February 2012 - 03:39 AM.


### #4 xshortguy

xshortguy

GMC Member

• Global Moderators
• 4185 posts
• Version:GM:Studio

Posted 05 February 2012 - 04:22 PM

> Can you then create a rotation matrix if you know the input and output vectors, or more specifically, from a look vector and the zero vector (origin)?

Let $\vec{a}_1, \vec{a}_2, \vec{a}_3$ be three linearly independent input vectors with corresponding output vectors $\vec{b}_1, \vec{b}_2, \vec{b}_3$, and let C be a matrix so that $C\vec{a}_i = \vec{b}_i$. We wish to determine C from its effect on those three vectors. The solution is simple:
• Form the matrices $A = [ \vec{a}_1 \quad \vec{a}_2 \quad \vec{a}_3 ]$ and $B = [ \vec{b}_1 \quad \vec{b}_2 \quad \vec{b}_3 ]$, whose columns are the given vectors.
• The equation we need to solve for C is CA = B. Since the columns of A are linearly independent, A is invertible, so $C = BA^{-1}$.
• Computing the inverse explicitly can be more expensive than necessary. Instead, transpose the equation to get $A^T C^T = B^T$, form the augmented matrix $[\, A^T \mid B^T \,]$, and row reduce until the left block becomes the identity matrix. The result has the form $[\, I \mid C^T \,]$, from which we read off C.
• However, this C most likely will not be a rotation matrix. One can normalize the columns of C (orthogonalizing them first, e.g. by Gram-Schmidt, if they are not already orthogonal) and reform the matrix from these normalized vectors to get a rotation matrix.
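The recovery of C can be sketched numerically in Python with NumPy. The rotation R, the inputs A, and the outputs B below are made-up test data, and `np.linalg.solve` stands in for the row reduction:

```python
import numpy as np

# Fabricate a test case: a known rotation R about the z axis,
# three linearly independent inputs (columns of A), and their
# images under R (columns of B).
t = np.pi / 3
R = np.array([[np.cos(t), -np.sin(t), 0.0],
              [np.sin(t),  np.cos(t), 0.0],
              [0.0,        0.0,       1.0]])

A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 3.0]])   # invertible: columns are independent
B = R @ A

# C A = B  is equivalent to  A^T C^T = B^T, which a linear solver
# handles directly without forming A's inverse explicitly.
C = np.linalg.solve(A.T, B.T).T
```

Here `C` recovers `R` because the fabricated outputs were exact; with noisy data the final normalization/orthogonalization step above becomes necessary.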


### #5 Gamer3D

Gamer3D

Human* me = this;

• GMC Member
• 1587 posts
• Version:GM8.1

Posted 11 February 2012 - 05:32 PM

> Can you then create a rotation matrix if you know the input and output vectors, or more specifically, from a look vector and the zero vector (origin)?

> Let $\vec{a_1}, \vec{a_2}, \vec{a_3}$ be three linearly independent vectors, let C be a matrix so that $C\vec{a}_i = \vec{b}_i$. We wish to determine what C is based on the effects of C on those three vectors. [method for recovering C snipped]

The zero vector will not help you in any way, because every matrix maps the zero vector to itself.

What you can do is begin with a look vector and an up vector. Take the cross product of the look and up vectors to get a third vector, which is nonzero whenever the first two are linearly independent (we'll call it the left vector, whether or not it actually faces left). Normalize the left and look vectors, and set the up vector to their cross product (this cross product is of unit length because left and look are perpendicular unit vectors). These 3 vectors can be used as the columns of an orthonormal matrix.

Check the signs of the cross products (I didn't bother above). If the determinant is +1, then it's a rotation matrix; otherwise, multiply the left vector by -1.
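The construction above might look like this in Python with NumPy (a sketch; the function name `look_rotation` and the `up_hint` parameter are this example's own inventions):

```python
import numpy as np

def look_rotation(look, up_hint):
    # Build a rotation matrix from a look vector and an approximate
    # up vector, following the cross-product construction above.
    look = np.asarray(look, dtype=float)
    look = look / np.linalg.norm(look)
    left = np.cross(look, np.asarray(up_hint, dtype=float))
    left = left / np.linalg.norm(left)  # fails if look and up_hint are parallel
    up = np.cross(left, look)           # unit length: left and look are
                                        # perpendicular unit vectors
    m = np.column_stack([left, up, look])
    if np.linalg.det(m) < 0:            # a reflection snuck in: flip left
        m[:, 0] = -m[:, 0]
    return m

R = look_rotation([0.0, 0.0, 1.0], [0.0, 1.0, 0.0])
```

The determinant check at the end implements the sign fix from the paragraph above: negating one column turns a reflection (determinant -1) into a proper rotation (determinant +1).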
