Assignment 1 Lecture Notes

The following will act as lecture notes to help you review the material from lecture for the assignment. You can click on the links below to get directed to the appropriate section.


Section 1: Geometric Transformations

Geometric transformations are ubiquitous in computer graphics. The three main geometric transformations that we use are translation, rotation, and scaling. We find it convenient to implement these transformations as matrix operations. This requires us to work with homogeneous coordinates instead of the usual Cartesian coordinates. For a given point p in three dimensional space with Cartesian coordinates (X,Y,Z), we express it in homogeneous coordinates (x,y,z,w) where w is known as the homogeneous component. The following shows the general mapping between Cartesian and homogeneous coordinates for three dimensions:

(X, Y, Z) ↔ (x, y, z, w),    X = x/w,    Y = y/w,    Z = z/w

Homogeneous coordinates allow us to represent geometric transformations in matrix form and their operation on points as simple matrix multiplications. Consider the translation operation. In Cartesian coordinates, the translation of a point p at position (x,y,z) by a vector v = (vx,vy,vz) shifts it to the new position (x + vx,y + vy,z + vz). We can implement this operation as a matrix multiplication by expressing translation in the following matrix form:

T = [ 1  0  0  vx ]
    [ 0  1  0  vy ]
    [ 0  0  1  vz ]
    [ 0  0  0  1  ]

and then multiplying the translation matrix with the homogeneous representation of p (letting the w component equal 1):

[ 1  0  0  vx ] [ x ]   [ x + vx ]
[ 0  1  0  vy ] [ y ] = [ y + vy ]
[ 0  0  1  vz ] [ z ]   [ z + vz ]
[ 0  0  0  1  ] [ 1 ]   [   1    ]

As you can see, our result is the homogeneous representation of the new position with the homogeneous w component equal to 1.
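To make the matrix mechanics concrete, here is a minimal Python sketch of translating a point in homogeneous coordinates. The nested-list representation and the helper names (mat_vec, translation) are our own for illustration, not part of the assignment code:

```python
# Translation of a point via a homogeneous translation matrix.
# Matrices are plain nested lists (rows of columns).

def mat_vec(M, v):
    """Multiply a 4x4 matrix M by a 4-component column vector v."""
    return [sum(M[i][j] * v[j] for j in range(4)) for i in range(4)]

def translation(vx, vy, vz):
    """The 4x4 translation matrix T for the vector (vx, vy, vz)."""
    return [[1, 0, 0, vx],
            [0, 1, 0, vy],
            [0, 0, 1, vz],
            [0, 0, 0, 1]]

p = [2, 3, 4, 1]  # the point (2,3,4) in homogeneous coordinates, w = 1
print(mat_vec(translation(1, -2, 5), p))  # -> [3, 1, 9, 1]
```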

Rotation and scaling are achieved similarly. The following matrix is the rotation matrix for rotating about an axis in the direction of the unit vector u = (ux,uy,uz) counterclockwise by an angle θ:

R = [ ux^2 + (1 - ux^2)cos θ         ux·uy(1 - cos θ) - uz·sin θ    ux·uz(1 - cos θ) + uy·sin θ    0 ]
    [ uy·ux(1 - cos θ) + uz·sin θ    uy^2 + (1 - uy^2)cos θ         uy·uz(1 - cos θ) - ux·sin θ    0 ]
    [ uz·ux(1 - cos θ) - uy·sin θ    uz·uy(1 - cos θ) + ux·sin θ    uz^2 + (1 - uz^2)cos θ         0 ]
    [ 0                              0                              0                              1 ]
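As a sketch of how this matrix can be built in code (Python with plain lists; the rotation helper name is our own), note that each diagonal entry ux^2 + (1 - ux^2)cos θ can be rewritten as ux^2(1 - cos θ) + cos θ:

```python
import math

def mat_vec(M, v):
    """Multiply a 4x4 matrix M by a 4-component column vector v."""
    return [sum(M[i][j] * v[j] for j in range(4)) for i in range(4)]

def rotation(ux, uy, uz, theta):
    """4x4 matrix for a counterclockwise rotation by theta (radians)
    about the unit axis u = (ux, uy, uz)."""
    c, s = math.cos(theta), math.sin(theta)
    t = 1 - c  # each diagonal entry u^2 + (1 - u^2)c equals u^2 * t + c
    return [[ux*ux*t + c,     ux*uy*t - uz*s,  ux*uz*t + uy*s,  0],
            [uy*ux*t + uz*s,  uy*uy*t + c,     uy*uz*t - ux*s,  0],
            [uz*ux*t - uy*s,  uz*uy*t + ux*s,  uz*uz*t + c,     0],
            [0,               0,               0,               1]]

# Rotating (1,0,0) counterclockwise by 90 degrees about the z-axis
# should land on (0,1,0).
p = mat_vec(rotation(0, 0, 1, math.pi / 2), [1, 0, 0, 1])
print([round(c, 6) for c in p])  # -> [0.0, 1.0, 0.0, 1.0]
```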

And the following is the scaling matrix for scaling the coordinates of a point by the vector v = (vx,vy,vz):

S = [ vx  0   0   0 ]
    [ 0   vy  0   0 ]
    [ 0   0   vz  0 ]
    [ 0   0   0   1 ]

And of course, consecutive transformations can be combined into a single matrix representation. For instance, a scaling by vector s = (sx,sy,sz) followed by a translation by vector t = (tx,ty,tz) of a point at (x,y,z) can be given as:

[ 1  0  0  tx ] [ sx  0   0   0 ] [ x ]   [ sx  0   0   tx ] [ x ]
[ 0  1  0  ty ] [ 0   sy  0   0 ] [ y ] = [ 0   sy  0   ty ] [ y ]
[ 0  0  1  tz ] [ 0   0   sz  0 ] [ z ]   [ 0   0   sz  tz ] [ z ]
[ 0  0  0  1  ] [ 0   0   0   1 ] [ 1 ]   [ 0   0   0   1  ] [ 1 ]
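The composition can be checked numerically. The following Python sketch (helper names are ours) multiplies the two matrices and applies the product to a point, so the scaling happens first and the translation second:

```python
def mat_mul(A, B):
    """4x4 matrix product A*B."""
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def mat_vec(M, v):
    return [sum(M[i][j] * v[j] for j in range(4)) for i in range(4)]

def translation(tx, ty, tz):
    return [[1, 0, 0, tx], [0, 1, 0, ty], [0, 0, 1, tz], [0, 0, 0, 1]]

def scaling(sx, sy, sz):
    return [[sx, 0, 0, 0], [0, sy, 0, 0], [0, 0, sz, 0], [0, 0, 0, 1]]

# Scaling by (2,2,2) followed by a translation by (1,0,0).  The matrix for
# the transformation applied first sits rightmost in the product.
M = mat_mul(translation(1, 0, 0), scaling(2, 2, 2))
print(mat_vec(M, [1, 1, 1, 1]))  # -> [3, 2, 2, 1]
```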


Section 2: World Space to Camera Space

Rendering in computer graphics is all about turning a mathematical model of a scene into an image that we can perceive. We refer to the coordinate space containing the points in our scene as the world space. Figure 1 shows the world space axes. Note that the coordinate axes here differ slightly from the traditional 3D axes in mathematics.

Figure 1: World space coordinate axes.

In order to obtain an image of our scene, we need to specify in world space a viewing location and angle from which we look. Consider Figure 2:

Figure 2: A wireframe of a face mesh shown from two different viewpoints.

Let the face mesh shown above be centered at the origin of our world space. Our default viewing location is at the origin and our default viewing angle causes us to look straight ahead in the direction of the negative z-axis. We can consider the image on the left as what we would see after moving or translating to some point along the positive z-axis. The image on the right would be what we see after we first rotate ourselves 90 degrees clockwise about the y-axis and then translate to some point along the negative x-axis.

We refer to our viewing location as the camera position and the viewing angle as the camera orientation - i.e. we consider ourselves to be looking at our scene through a camera. From the above example, we can see that it would be intuitive to specify the camera position with a translation vector and the camera orientation with a rotation axis and angle. Hence, whenever we set up a camera, we always rotate it by a specified angle about a specified axis and then translate it to a specified point.

Denote the translation matrix for the camera position as Tc and the rotation matrix for the camera orientation as Rc. Then let C = TcRc be the matrix describing the overall transformation for the camera. Suppose that after we set up the camera, we want to transform the entire world space into a new space with the camera at the origin and in its default orientation. This new space, which we call the camera space, provides some advantages that we will see in the next section. We can intuitively see that in order to go from world space to camera space, we apply the inverse camera transform, C^(-1), to every point in world space.
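Since Rc is a pure rotation and Tc a pure translation, the inverse factors as C^(-1) = Rc^(-1)Tc^(-1), where a rotation's inverse is its transpose and a translation's inverse is translation by the negated vector. A Python sketch under these assumptions (plain lists; the function names are our own):

```python
def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def mat_vec(M, v):
    return [sum(M[i][j] * v[j] for j in range(4)) for i in range(4)]

def transpose(M):
    return [[M[j][i] for j in range(4)] for i in range(4)]

def world_to_camera(Rc, pos):
    """C^(-1) = Rc^(-1) Tc^(-1): transpose the (pure) rotation and
    translate by the negated camera position."""
    Tc_inv = [[1, 0, 0, -pos[0]],
              [0, 1, 0, -pos[1]],
              [0, 0, 1, -pos[2]],
              [0, 0, 0, 1]]
    return mat_mul(transpose(Rc), Tc_inv)

# A camera at (0,0,5) in its default orientation: the world origin ends up
# 5 units straight ahead of the camera, at z = -5 in camera space.
identity = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
print(mat_vec(world_to_camera(identity, (0, 0, 5)), [0, 0, 0, 1]))  # -> [0, 0, -5, 1]
```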

Note that the terms eye position, eye orientation, and eye space are sometimes used instead of their respective camera counterparts. Their usage is just a terminology preference, and there are no differences in the meanings. For this class, we will use the camera terms, but we should also be aware that the eye terms exist.


Section 3: Perspective Projection

When we look at a scene in real life with our eyes, objects further in the distance appear smaller than objects close by. The technical term for this effect is perspective. We want to reproduce this effect when we render scenes in computer graphics to make them look realistic. We can do so with the following process. We first surround our scene in camera space with a sideways frustum, which, when upright, is the bottom half of a square pyramid that has been cut in two along a plane perpendicular to the axis connecting the tip of the pyramid and the center of the square base. We map every point in the sideways frustum to a cube with what is called a perspective projection. A visual of the mapping is provided in Figure 3.

Figure 3: The frustum in camera space is mapped to a cube by what is called a perspective projection. The smaller base of the frustum (outlined in red) is mapped directly to the front face of the cube (also outlined in red). The rest of the frustum has to be shrunk down to form the rest of the cube.

The key idea here is that all of the frustum beyond the smaller base (outlined in red in Figure 3) must shrink in order to form the cube. For the frustum to shrink, points in the frustum must be clustered closer together. The further away a set of points is from the smaller base of the frustum, the closer together the points are clustered. This clustering causes the objects that those points form to shrink and get rendered smaller than their actual size, creating the desired perspective effect.

The cube resides in its own coordinate system known as normalized device coordinates or NDC for short. Note that this coordinate system has an inverted z-axis. This is due to convention.

We specify a perspective projection with parameters for the dimensions of the frustum. l,r,t,b stand for left, right, top, bottom and provide the dimensions for the smaller base of the frustum, which we will now call the near plane or projection plane. n stands for near and specifies in camera space the magnitude of the negative z-coordinate of where the frustum begins. f stands for far and specifies in camera space the magnitude of the negative z-coordinate of where the frustum ends.

We will now do a quick derivation of the perspective projection matrix, which transforms points from camera space into NDC given l,r,t,b,n, and f. We first look at just the x and y coordinate mappings. Let (xc,yc,zc, 1) be the position of a point in our camera space and (xndc,yndc,zndc,wndc) be the same position in homogeneous NDC.

Recall that the projection plane of the frustum is mapped directly to the front of the cube. Since the xc → xndc and yc → yndc mappings do not need to take depth (i.e. the z-coordinate) into account, we can consider them as first mapping pairs of (xc, yc) onto the projection plane, and then mapping the projection plane onto a cube face. Let xp and yp be the corresponding x and y coordinates on the projection plane. Figure 4 shows a visual of xc → xp. A similar idea is used for yc → yp.

Figure 4: The perspective projection mapping for the x and y coordinates from camera space to NDC can be thought of as first mapping the coordinates onto the projection plane and then mapping the projection plane onto a cube face.

The values of xp and yp are determined using ratios of similar triangles:

xp / xc = -n / zc ,    yp / yc = -n / zc

This gives us xc → xp = (-n/zc)·xc and yc → yp = (-n/zc)·yc. We then get the direct mappings for xp → xndc and yp → yndc by solving the linear equations for [l, r] → [-1, 1] and [b, t] → [-1, 1].

xndc = ((1 - (-1)) / (r - l)) · xp + βx ,    where xp = r ⇒ xndc = 1

yndc = ((1 - (-1)) / (t - b)) · yp + βy ,    where yp = t ⇒ yndc = 1

where βx and βy are constants that we need to solve for. Solving the above equations and substituting in xp = (-n/zc)·xc and yp = (-n/zc)·yc yields:

xc → xndc = ( (2n/(r - l))·xc + ((r + l)/(r - l))·zc ) / (-zc) ,    yc → yndc = ( (2n/(t - b))·yc + ((t + b)/(t - b))·zc ) / (-zc)

Recall that we are working in homogeneous coordinates, so our perspective projection matrix needs to accommodate the w component. We can take advantage of the fact that both xndc and yndc involve a division by -zc: we set wndc = -zc and let the homogeneous xndc and yndc equal just the numerator portions from the above equations. We then have:

[ xndc ]   [ 2n/(r-l)   0          (r+l)/(r-l)   0 ] [ xc ]
[ yndc ]   [ 0          2n/(t-b)   (t+b)/(t-b)   0 ] [ yc ]
[ zndc ] = [ .          .          .             . ] [ zc ]
[ wndc ]   [ 0          0          -1            0 ] [ 1  ]

For the third row, we use the fact that z does not depend on x or y to write:

[ xndc ]   [ 2n/(r-l)   0          (r+l)/(r-l)   0 ] [ xc ]
[ yndc ]   [ 0          2n/(t-b)   (t+b)/(t-b)   0 ] [ yc ]
[ zndc ] = [ 0          0          A             B ] [ zc ]
[ wndc ]   [ 0          0          -1            0 ] [ 1  ]

for coefficients A and B. We then have:

zndc / wndc = (A·zc + B) / (-zc)

Using the mappings -n → -1 and -f → 1 (i.e. (-A·n + B)/n = -1 and (-A·f + B)/f = 1) and doing some algebra gives us A = -(f + n)/(f - n) and B = -2fn/(f - n), and hence:

P = [ 2n/(r-l)   0          (r+l)/(r-l)    0          ]
    [ 0          2n/(t-b)   (t+b)/(t-b)    0          ]
    [ 0          0          -(f+n)/(f-n)   -2fn/(f-n) ]
    [ 0          0          -1             0          ]

as the perspective projection matrix. This is the transformation matrix we use to transform a point in camera space to homogeneous NDC. To go from homogeneous NDC to Cartesian NDC, we would divide our xndc, yndc, and zndc terms by wndc = -zc.
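Putting the section together, here is a Python sketch (plain lists; the names perspective and project are ours) that builds P from l, r, b, t, n, f, applies it to a camera-space point, and performs the homogeneous divide:

```python
def perspective(l, r, b, t, n, f):
    """The perspective projection matrix P built from the frustum parameters."""
    return [[2*n/(r-l), 0,         (r+l)/(r-l),  0],
            [0,         2*n/(t-b), (t+b)/(t-b),  0],
            [0,         0,         -(f+n)/(f-n), -2*f*n/(f-n)],
            [0,         0,         -1,           0]]

def project(P, p):
    """Apply P to a homogeneous camera-space point, then divide by w
    to get Cartesian NDC."""
    x, y, z, w = [sum(P[i][j] * p[j] for j in range(4)) for i in range(4)]
    return [x / w, y / w, z / w]

P = perspective(-1, 1, -1, 1, 1, 10)
# A corner of the near plane, (l, b, -n), maps to the corner (-1,-1,-1) of
# the cube's front face; the center of the far plane maps to z = 1.
print(project(P, [-1, -1, -1, 1]))
print(project(P, [0, 0, -10, 1]))
```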

Note that sometimes the frustum we specify does not capture our entire scene, in which case, the points that were outside our frustum in camera space are mapped to the outside of the cube in NDC. These points are outside our field of view and should not be considered for rendering. Hence, when we later render what we see, we need to check for whether a given point is outside the cube. If it is, then we do not render the point.
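That visibility check can be as simple as the following Python sketch (the function name is our own):

```python
def in_ndc_cube(x, y, z):
    """True if a Cartesian NDC point lies inside the [-1, 1]^3 cube and
    should therefore be considered for rendering."""
    return all(-1 <= c <= 1 for c in (x, y, z))

print(in_ndc_cube(0.5, -0.5, 0.0))  # -> True
print(in_ndc_cube(0.0, 0.0, 1.5))   # -> False (beyond the far plane)
```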


Section 4: Line Rasterization

We render our 3D scene by mapping all of its points in NDC onto a 2D grid of square pixels that we then display on the screen. This process is known as rasterization. For each point in NDC, we map its x and y coordinates to a row (r) and column (c) ordered pair with x → c and y → r. We then fill the pixel at (r,c) to show the point in the display. The z coordinate of our point is used for depth buffering and backface culling, which we will cover in the next assignment. Sometimes, the coordinates for the grid are referred to as screen coordinates.
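A minimal Python sketch of the x → c, y → r mapping. The resolution parameters and the row-flip convention here are our assumptions (grid rows growing downward while NDC y grows upward); the assignment may define the grid differently:

```python
def ndc_to_screen(x, y, xres, yres):
    """Map NDC x in [-1,1] to a column c and y in [-1,1] to a row r on an
    xres-by-yres pixel grid.  The y-axis is flipped because we assume grid
    rows grow downward while NDC y grows upward."""
    c = int((x + 1) / 2 * (xres - 1))
    r = int((1 - y) / 2 * (yres - 1))
    return r, c

print(ndc_to_screen(-1.0, 1.0, 800, 600))   # top-left     -> (0, 0)
print(ndc_to_screen(1.0, -1.0, 800, 600))   # bottom-right -> (599, 799)
```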

In this section, we will talk specifically about line rasterization. In order to rasterize a line with a grid of pixels, we must decide which squares around the line form the best approximation of the line when filled. Figure 5 shows a visual of this in action:

Figure 5: Rasterizing a line by approximating it using a grid of pixels.

One of the fastest and simplest algorithms that does this is Bresenham’s line algorithm.

The basic Bresenham’s algorithm only works in one octant of the 2D coordinate plane, but it can be generalized to account for the other octants. The generalization will be left as an exercise in the assignment. For now, we will restrict ourselves to discussing the algorithm in just the first octant - i.e. line slopes in the range 0 ≤ m ≤ 1. Since it is more natural to talk about lines in terms of x and y, we are going to use x to refer to the column values and y to refer to the row values of our grid.

So we wish to connect two points (x0, y0) and (x1, y1) in the first octant of our grid with an approximation of a line. Let us decide to iterate over the x-coordinate. Then during our routine, if we had just filled in the pixel for some integral point (x, y) with x0 ≤ x ≤ x1 and y0 ≤ y ≤ y1, then the next pixel we fill in will have to be either at (x + 1, y) or (x + 1, y + 1). We must decide which one of these points results in a better approximation of the line.

Let ε denote the error between y of the integral point (x,y) and the true y in the actual line. The true y is then equal to y + ε. When moving from x to x + 1, we increase the true y by the slope m. Consider the case where y + ε + m is less than the midpoint of y and y + 1 - i.e. y + ε + m < y + 0.5. If that is the case, then the next point in our line is closer to the pixel at (x + 1, y) than the pixel at (x + 1, y + 1). Hence, the next pixel that we would want to fill is at (x + 1, y). We would then update ε as ε ← (y + ε + m) - y. However, if y + ε + m ≥ y + 0.5, then the next pixel should be at (x + 1, y + 1) and we would update ε as ε ← (y + ε + m) - (y + 1). This leads us to the following algorithm:


Algorithm 1:
1:  function Naive_First_Octant_Bresenham(x0,y0,x1,y1,grid)
2:   ε ← 0
3:   y ← y0
4:   m ← Compute_Slope(x0,y0,x1,y1)
5:   for x ← x0 to x1 do
6:   Fill(x,y,grid)
7:   if ε + m < 0.5 then
8:   ε ← ε + m
9:   else
10:   ε ← ε + m - 1
11:   y ← y + 1
12:   end if
13:   end for
14:  end function
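Algorithm 1 can be transcribed directly into Python, assuming grid is a nested list indexed [row][column] and Fill sets a cell to 1 (both assumptions of this sketch):

```python
def naive_first_octant_bresenham(x0, y0, x1, y1, grid):
    """Algorithm 1: line rasterization with a floating point error term,
    valid for slopes 0 <= m <= 1."""
    eps = 0.0
    y = y0
    m = (y1 - y0) / (x1 - x0)  # Compute_Slope
    for x in range(x0, x1 + 1):
        grid[y][x] = 1         # Fill
        if eps + m < 0.5:
            eps = eps + m
        else:
            eps = eps + m - 1
            y = y + 1

grid = [[0] * 5 for _ in range(3)]
naive_first_octant_bresenham(0, 0, 4, 2, grid)
for row in grid:
    print(row)
# -> [1, 0, 0, 0, 0]
#    [0, 1, 1, 0, 0]
#    [0, 0, 0, 1, 1]
```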

We can optimize this algorithm by eliminating any dependence on floating point values. Consider the following manipulations on the inequality, where dx = x1 - x0 > 0 and dy = y1 - y0, and the last step multiplies both sides by 2dx:

ε + m < 0.5

ε + (y1 - y0)/(x1 - x0) < 0.5

ε + dy/dx < 0.5

2εdx + 2dy < dx

Let ε ← ε·dx. Our inequality becomes:

2(ε + dy) < dx

And we can express ε ← ε + m and ε ← ε + m - 1 as ε ← ε + dy and ε ← ε + dy - dx respectively. This leads us to an algorithm with operations that involve only integers:


Algorithm 2:
1:  function First_Octant_Bresenham(x0,y0,x1,y1,grid)
2:   ε ← 0
3:   y ← y0
4:   dx ← x1 - x0
5:   dy ← y1 - y0
6:   for x ← x0 to x1 do
7:   Fill(x,y,grid)
8:   if 2(ε + dy) < dx then
9:   ε ← ε + dy
10:   else
11:   ε ← ε + dy - dx
12:   y ← y + 1
13:   end if
14:   end for
15:  end function
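Algorithm 2 transcribes into Python the same way (grid is again assumed to be a nested list indexed [row][column]), and on this example it fills the same pixels as the floating point version:

```python
def first_octant_bresenham(x0, y0, x1, y1, grid):
    """Algorithm 2: integer-only Bresenham for slopes 0 <= m <= 1."""
    eps = 0
    y = y0
    dx = x1 - x0
    dy = y1 - y0
    for x in range(x0, x1 + 1):
        grid[y][x] = 1               # Fill
        if 2 * (eps + dy) < dx:      # equivalently ((eps + dy) << 1) < dx
            eps = eps + dy
        else:
            eps = eps + dy - dx
            y = y + 1

grid = [[0] * 5 for _ in range(3)]
first_octant_bresenham(0, 0, 4, 2, grid)
for row in grid:
    print(row)
# -> [1, 0, 0, 0, 0]
#    [0, 1, 1, 0, 0]
#    [0, 0, 0, 1, 1]
```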

Note how the only operations in this algorithm are simple (and fast) arithmetic operations between integers. In addition, the multiplication by 2 can be implemented using a bit-shift. Hence, we end up with a fast and efficient line drawing algorithm for the first octant. The generalization to all octants will also maintain the simplicity and efficiency. As we mentioned before, the generalization will be left as an exercise in the assignment.


Written by Kevin (Kevli) Li (Class of 2016).