Hello again! Last week I had to deliver the second lab of the graphics subject I mentioned in the last post. As I already said, this lab was about creating a two-dimensional image from the three-dimensional coordinates of an object (the graphics pipeline).
We had to program this from scratch in C++ and these are the stages we had to follow:
1. Parse the 3d coordinates file and save them to our designed structures.
As an example I attach the file of the coordinates belonging to a 3D pyramid:
5 5
0 0 0
0 1 0
1 1 0
1 0 0
0.5 0.5 1
4 1 2 3 4
3 1 4 5
3 4 3 5
3 3 2 5
3 2 1 5
And this is the corresponding structure of the file:
Line 1: <V (number of vertices)> <P (number of polygons)>
Line 2: <coordinate x1> <coordinate y1> <coordinate z1>
Line 3: <coordinate x2> <coordinate y2> <coordinate z2>
Line 4: <coordinate x3> <coordinate y3> <coordinate z3>
...
Line V+1: <coordinate xV> <coordinate yV> <coordinate zV>
Line V+2: <number of sides polygon 1> <1st vertex> <2nd vertex> ... <last vertex>
(vertices ordered so the normal points outward)
Line V+3: <number of sides polygon 2> <1st vertex> <2nd vertex> ... <last vertex>
...
Line V+P+1: <number of sides polygon P> <1st vertex> <2nd vertex> ... <last vertex>
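A minimal parser for this format might look like the following sketch (the structure and function names are my own, not necessarily the ones we used in the lab):

```cpp
#include <istream>
#include <sstream>
#include <vector>

// Hypothetical structures for the vertex/polygon file described above.
struct Vertex { double x, y, z; };
struct Polygon { std::vector<int> indices; };  // 1-based vertex indices

struct Mesh {
    std::vector<Vertex> vertices;
    std::vector<Polygon> polygons;
};

// Parse the format: first line "V P", then V vertex lines, then P polygon lines.
Mesh parseMesh(std::istream& in) {
    Mesh mesh;
    int v = 0, p = 0;
    in >> v >> p;
    for (int i = 0; i < v; ++i) {
        Vertex vert{};
        in >> vert.x >> vert.y >> vert.z;
        mesh.vertices.push_back(vert);
    }
    for (int i = 0; i < p; ++i) {
        int sides = 0;
        in >> sides;
        Polygon poly;
        for (int j = 0; j < sides; ++j) {
            int idx = 0;
            in >> idx;
            poly.indices.push_back(idx);
        }
        mesh.polygons.push_back(poly);
    }
    return mesh;
}
```

Feeding it the pyramid file above should yield 5 vertices and 5 polygons, the first polygon being the 4-sided base.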
2. Transform object coordinates to world coordinates through matrix transformations. Using rotation, scaling and translation matrices we can position the object anywhere in world space. This step matters when there is more than one object in the scene.
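These transformations are all 4x4 matrices applied to homogeneous coordinates. A minimal sketch (assumed names, not our lab code) of building and applying them:

```cpp
#include <array>
#include <cmath>

// Minimal 4x4 homogeneous-coordinate matrices for the world transform.
using Mat4 = std::array<std::array<double, 4>, 4>;
using Vec4 = std::array<double, 4>;  // (x, y, z, w) with w = 1 for points

Mat4 identity() {
    Mat4 m{};
    for (int i = 0; i < 4; ++i) m[i][i] = 1.0;
    return m;
}

Mat4 translation(double tx, double ty, double tz) {
    Mat4 m = identity();
    m[0][3] = tx; m[1][3] = ty; m[2][3] = tz;
    return m;
}

Mat4 scaling(double sx, double sy, double sz) {
    Mat4 m = identity();
    m[0][0] = sx; m[1][1] = sy; m[2][2] = sz;
    return m;
}

Mat4 rotationZ(double angle) {  // angle in radians
    Mat4 m = identity();
    m[0][0] = std::cos(angle); m[0][1] = -std::sin(angle);
    m[1][0] = std::sin(angle); m[1][1] =  std::cos(angle);
    return m;
}

// Matrix-vector product: transform a point by the matrix.
Vec4 apply(const Mat4& m, const Vec4& v) {
    Vec4 r{};
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            r[i] += m[i][j] * v[j];
    return r;
}
```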
3. Transform world coordinates to view coordinates (camera view) through matrix transformations. This change of coordinate system is split into two components: a translational one T and a rotational one R:
The translational matrix is built from the spatial coordinates C of the camera in world space. The rotational matrix is built from the viewing direction N (the normal of the view plane) and the U and V vectors, which together establish the view coordinate system. This is shown more clearly in the following picture, taken from Alan Watt's book '3D Computer Graphics':
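Assuming U, V and N are already an orthonormal basis, composing R with T collapses into one matrix whose rows are the basis vectors and whose last column carries the translation. A sketch (names are assumptions, not the lab's actual identifiers):

```cpp
#include <array>

struct Vec3 { double x, y, z; };

double dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// View matrix = R * T: translate the world by -C, then rotate so that
// U, V, N become the x, y, z axes of the camera frame.
std::array<std::array<double, 4>, 4>
viewMatrix(const Vec3& C, const Vec3& U, const Vec3& V, const Vec3& N) {
    return {{
        { U.x, U.y, U.z, -dot(U, C) },
        { V.x, V.y, V.z, -dot(V, C) },
        { N.x, N.y, N.z, -dot(N, C) },
        { 0.0, 0.0, 0.0, 1.0       }
    }};
}
```

For a camera at (0, 0, 5) looking along the world axes, the only non-trivial entry is the -5 depth translation.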
4. Culling or backface elimination. This consists of eliminating all the polygons that cannot be seen, by comparing their orientation with the viewpoint. We obtain the orientation of each polygon by calculating its normal (a cross product of two of its edges) and examining the sign of the dot product between this normal and the vector from the polygon to the centre of projection. The polygon is visible if the dot product is greater than 0.
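The visibility test can be sketched like this (assuming vertices are ordered so the normal points outward, as in the file format above; names are mine):

```cpp
struct Vec3 { double x, y, z; };

Vec3 sub(const Vec3& a, const Vec3& b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }

Vec3 cross(const Vec3& a, const Vec3& b) {
    return { a.y * b.z - a.z * b.y,
             a.z * b.x - a.x * b.z,
             a.x * b.y - a.y * b.x };
}

double dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// A polygon faces the camera if its outward normal and the vector from
// the polygon towards the eye point in roughly the same direction.
bool isVisible(const Vec3& v1, const Vec3& v2, const Vec3& v3, const Vec3& eye) {
    Vec3 normal = cross(sub(v2, v1), sub(v3, v1));  // outward for CCW winding
    Vec3 toEye  = sub(eye, v1);                     // polygon -> viewpoint
    return dot(normal, toEye) > 0.0;
}
```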
5. Perspective projection. We collapse the depth information with another matrix multiplication, specified by the near and far clip planes (you can find more information on that in Alan Watt's book). Essentially, we divide the X and Y coordinates of each vertex by its Z coordinate (the distance from the camera), so distant objects appear smaller.
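Stripped of the matrix machinery, the core of the projection is just the divide by Z. A sketch, with d an assumed view-plane distance parameter:

```cpp
struct Vec3 { double x, y, z; };
struct Vec2 { double x, y; };

// Project a view-space point onto the view plane at distance d from the
// camera: x' = d * x / z, y' = d * y / z (z is depth along the view direction).
Vec2 project(const Vec3& p, double d) {
    return { d * p.x / p.z, d * p.y / p.z };
}
```

A point at (2, 4) that is 2 units away lands at (1, 2) on a view plane at d = 1.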
6. Viewport transformation. The vertex coordinates are transformed once more, into window space, scaling by the window size (the image size) to map them to pixels.
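Assuming the projected coordinates are normalized to [-1, 1], the mapping to pixels might look like this (a sketch, with the y axis flipped so the origin ends up at the top-left corner, as is usual for images):

```cpp
struct Pixel { int x, y; };

// Map projected coordinates in [-1, 1] onto a width x height pixel grid.
Pixel toViewport(double x, double y, int width, int height) {
    int px = static_cast<int>((x + 1.0) * 0.5 * (width - 1));
    int py = static_cast<int>((1.0 - y) * 0.5 * (height - 1));  // flip y
    return { px, py };
}
```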
7. Edge rasterization. Since all we have so far are the vertex positions on the screen, the next thing we have to do is paint all the pixels that make up the edges of the object. We use the DDA algorithm ( http://en.wikipedia.org/wiki/Digital_differential_analyzer_(graphics_algorithm) ).
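DDA steps one unit along the longer axis and increments the other coordinate by the slope, rounding at each step. A self-contained sketch:

```cpp
#include <algorithm>
#include <cmath>
#include <cstdlib>
#include <utility>
#include <vector>

// DDA line rasterization: return every pixel on the segment (x0,y0)-(x1,y1).
std::vector<std::pair<int, int>> ddaLine(int x0, int y0, int x1, int y1) {
    int dx = x1 - x0, dy = y1 - y0;
    int steps = std::max(std::abs(dx), std::abs(dy));  // one step per pixel
    std::vector<std::pair<int, int>> pixels;
    if (steps == 0) {                                  // degenerate: a point
        pixels.push_back({x0, y0});
        return pixels;
    }
    double xInc = dx / static_cast<double>(steps);
    double yInc = dy / static_cast<double>(steps);
    double x = x0, y = y0;
    for (int i = 0; i <= steps; ++i) {
        pixels.push_back({ static_cast<int>(std::lround(x)),
                           static_cast<int>(std::lround(y)) });
        x += xInc;
        y += yInc;
    }
    return pixels;
}
```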
8. Shading and lighting. In order to restore the sensation of depth we have to light the scene. We were told to use either Flat, Gouraud or Phong shading. We decided to try to implement all three, since this was extra credit.
Shading is defined as a function that yields the intensity of each point on the surface of an object from the characteristics of the light source, the object and the position of the observer. We consider the received light to come partly from diffuse reflection and partly from specular reflection.
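The diffuse-plus-specular combination can be sketched as the classic Phong reflection model. The ambient, kd, ks and shininess parameters below are assumptions for illustration, not necessarily the values we used; all vectors are expected to be normalized:

```cpp
#include <algorithm>
#include <cmath>

struct Vec3 { double x, y, z; };

double dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Intensity at a surface point: ambient + diffuse + specular terms.
double intensity(const Vec3& normal, const Vec3& toLight, const Vec3& toViewer,
                 double ambient, double kd, double ks, double shininess) {
    double nDotL = std::max(0.0, dot(normal, toLight));
    // Reflect the light direction about the normal: R = 2(N.L)N - L
    Vec3 r{ 2.0 * nDotL * normal.x - toLight.x,
            2.0 * nDotL * normal.y - toLight.y,
            2.0 * nDotL * normal.z - toLight.z };
    double specular = std::pow(std::max(0.0, dot(r, toViewer)), shininess);
    return ambient + kd * nDotL + ks * specular;
}
```

With the light and the viewer both head-on to the surface, the diffuse and specular terms reach their maximum, so the result is simply ambient + kd + ks.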
Prior to that, we need to rasterize all the polygons so we know which points lie inside and which outside the object. This is one of the hardest parts of the pipeline, since there is a lot to take into account. This image from the book summarizes the process:
After that we are ready to compute the lighting at every single point of the object. As we said, we had three different methods to accomplish that:
- Flat Shading. We compute the lighting at one point of the polygon using its normal and assign that same value to all the other points of the polygon.
- Phong Shading. We compute the lighting at every single point of the polygon. For that, we need the normal direction at every point. First, we compute the vertex normals by averaging the normals of the polygons that share each vertex. Then, we compute the normal at each point on an edge by linearly interpolating between the normals at the edge's two vertices. Finally, we obtain the normal at every point inside the polygon by interpolating between the two edge normals at the ends of its scanline. We are then ready to compute the lighting and assign that intensity to the individual point/pixel.
- Gouraud Shading. Instead of computing the light at every single point, we only calculate it at the vertices and interpolate the intensity value across all the other points. As before, we first interpolate along the edges and then across all the interior points.
Due to that different strategy, Gouraud shading gives a worse approximation on the smooth surfaces of the object. On the other hand, it requires much less computation, so it is much more efficient. If a specular highlight falls in the middle of a large polygon, Gouraud will miss it because we only interpolate from the vertices, but in general there are enough polygons to prevent this from happening.
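The core operation Gouraud shading repeats, first along the edges and then along each scanline, is plain linear interpolation of the vertex intensities. A sketch (names are mine):

```cpp
#include <vector>

// Linearly interpolate the intensity across one scanline of a polygon,
// given the intensities already interpolated at its left and right edges.
std::vector<double> interpolateScanline(double iLeft, double iRight, int width) {
    std::vector<double> row(width);
    for (int x = 0; x < width; ++x) {
        double t = (width > 1) ? x / static_cast<double>(width - 1) : 0.0;
        row[x] = iLeft + t * (iRight - iLeft);
    }
    return row;
}
```

Repeating this per scanline is what makes Gouraud so much cheaper than Phong: one lighting evaluation per vertex instead of one per pixel.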
Without further explanation, I attach some of the pictures we got throughout the process:
3D pyramid after edge rasterization
3D pyramid after culling or backface elimination
3D sphere wireframe after backface culling
3D sphere using Flat Shading
3D sphere using Flat Shading and incrementing the specular coefficient
3D sphere using Flat Shading
3D sphere using Flat Shading
3D face with all the printed edges (yellow) and vertices (pink) and using Flat Shading
3D face with Flat Shading with other lighting parameters
3D pyramid using Gouraud Shading
3D pyramid using Phong Shading
3D sphere with Gouraud Shading
3D sphere with Phong Shading
We also tried to render the face with Gouraud and Phong, but the results were too smooth to make out the shapes of the face (nose, eyes and lips). Even the results obtained for the pyramid and the sphere don't look quite right, so we surely have something wrong somewhere. However, this was extra credit and we were already quite happy to get this far.