Up until now, all my rendering for UoN has been handled automatically by the Xenko renderer, but those automatic systems (which take game objects in world space and render them) don't allow for what I want, such as rendering a 3D spatial radar to the screen without having to build it from game objects in world space first. To render directly I have to "tell" the rendering engine what I want to render by sending it vertices, UV coordinates, a shader, and texture information.
The main problem is that the vertices sent to the renderer have to be relative to the camera, where a vertex (0, 0, 1) is one unit (metre) in front of the camera, but my vertices are stored relative to their parent. The parent can be the world space, but it could also be another set of vertices, which themselves could be stored relative to yet another set (this is the case for spaceships, where some parts of the model are stored relative to other parts).
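To illustrate the idea, here's a minimal Python sketch (not UoN or Xenko code; the `Node` class, its `parent` field and `local_matrix` are made up for the example). Each node stores a transform relative to its parent, and walking up the parent chain and multiplying the matrices together gives the transform from that node's space into world space:

```python
import numpy as np

class Node:
    """Hypothetical scene node: a 4x4 transform relative to its parent
    (or to world space if parent is None)."""
    def __init__(self, local_matrix, parent=None):
        self.local_matrix = np.asarray(local_matrix, dtype=float)
        self.parent = parent

    def world_matrix(self):
        # Walk up the parent chain, composing local transforms
        # into a single local-to-world matrix.
        if self.parent is None:
            return self.local_matrix
        return self.parent.world_matrix() @ self.local_matrix

def translation(x, y, z):
    """Build a 4x4 translation matrix (column-vector convention)."""
    m = np.eye(4)
    m[:3, 3] = [x, y, z]
    return m

# A ship 10 m in front of the world origin, with a turret mounted 2 m above the ship's origin.
ship = Node(translation(0, 0, 10))
turret = Node(translation(0, 2, 0), parent=ship)

vertex_local = np.array([0, 0, 0, 1])            # a turret vertex in its own space
vertex_world = turret.world_matrix() @ vertex_local
print(vertex_world[:3])                          # [ 0.  2. 10.]
```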
So in order to send these vertices to the renderer they need to be transformed from their current space (where they are relative to the world space or another set of vertices) to the camera space (where they are relative to the camera). The most efficient method of doing these transforms is using transformation matrices.
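As a rough sketch of that last step (again Python/numpy for illustration, not the actual Xenko API; treating the view matrix as the inverse of the camera's world transform is one common convention), the object-to-world and world-to-camera matrices can be composed once, and then every vertex only needs a single matrix multiplication:

```python
import numpy as np

def translation(x, y, z):
    """Build a 4x4 translation matrix (column-vector convention)."""
    m = np.eye(4)
    m[:3, 3] = [x, y, z]
    return m

# Object sitting 10 m in front of the world origin.
model = translation(0, 0, 10)

# Camera sitting 5 m behind the world origin; its view matrix
# (world space -> camera space) is the inverse of its world transform.
camera_world = translation(0, 0, -5)
view = np.linalg.inv(camera_world)

# Compose once, then transform every vertex with the single model-view matrix.
model_view = view @ model

vertex_local = np.array([0, 1, 0, 1])        # a vertex 1 m above the object's origin
vertex_camera = model_view @ vertex_local
print(vertex_camera[:3])                     # [ 0.  1. 15.] -> 15 m in front of the camera
```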
While I'm in study mode I'm also covering some other computer-graphics-related maths, including quaternions for storing and manipulating rotations. I've worked with quaternions before, but without a complete understanding of them.
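As a small example of what storing and manipulating rotations with quaternions looks like (another illustrative Python sketch, not UoN code), two rotations can be combined by multiplying their quaternions, and the result can then be applied to a vector:

```python
import numpy as np

def quat_from_axis_angle(axis, angle_rad):
    """Unit quaternion (w, x, y, z) for a rotation of angle_rad around an axis."""
    axis = np.asarray(axis, dtype=float)
    axis /= np.linalg.norm(axis)
    half = angle_rad / 2.0
    return np.concatenate(([np.cos(half)], np.sin(half) * axis))

def quat_multiply(q1, q2):
    """Compose rotations: applying q2 first, then q1 (Hamilton product)."""
    w1, x1, y1, z1 = q1
    w2, x2, y2, z2 = q2
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def quat_rotate(q, v):
    """Rotate vector v by unit quaternion q using q * (0, v) * q^-1."""
    qv = np.concatenate(([0.0], v))
    q_conj = q * np.array([1.0, -1.0, -1.0, -1.0])
    return quat_multiply(quat_multiply(q, qv), q_conj)[1:]

# Two 45-degree yaws around the Y axis compose into a single 90-degree yaw.
yaw_45 = quat_from_axis_angle([0, 1, 0], np.pi / 4)
combined = quat_multiply(yaw_45, yaw_45)

print(quat_rotate(combined, np.array([0.0, 0.0, 1.0])))   # ~[1. 0. 0.]
```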
I'm making good progress with the study, and I have started building prototypes to put the theory into practice.