Assignment 4 Lecture Notes

Part 1: Per-Pixel Lighting

By default, OpenGL only calculates lighting at the vertices and then interpolates the resulting colors across each triangle. With this technique, artifacts of the mesh representation are quite obvious, especially in specular highlights. Gouraud shading is a form of per-vertex lighting and is very similar to the default lighting model used by OpenGL. Phong shading, on the other hand, calculates the lighting at each pixel of a polygon. This is a form of per-pixel lighting and can be implemented using OpenGL shaders.
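
For reference, one common formulation of the reflection model that per-pixel lighting evaluates at each pixel is the Phong reflection model (for a single light; N is the surface normal, L the direction to the light, R the reflection of L about N, V the direction to the viewer, and the k terms are material coefficients):

\[ I = k_a I_a + k_d \,(N \cdot L)\, I_d + k_s \,(R \cdot V)^{\alpha}\, I_s \]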

GLSL is a high-level language for writing GPU programs. There are two types of shaders in GLSL: vertex shaders and fragment shaders. These shaders override parts of the default OpenGL pipeline, so you must sometimes reimplement functionality that the default pipeline would otherwise provide.

Vertex shaders are responsible for all per-vertex calculations. The default pipeline converts vertices from object space to NDC and performs per-vertex lighting and texture coordinate calculations.

Fragment shaders are responsible for all per-pixel calculations. The OpenGL rasterizer automatically interpolates all per-vertex data for each pixel. The default pipeline simply sets the output color of the pixel to the interpolated vertex color.

GLSL has very similar syntax to C. The main function is the entry point for both vertex and fragment shaders. The language includes builtin support for vector and matrix operations.

The GLSL reference documentation describes all of the built-in functions and variables.
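
For reference, a minimal pair of shaders that roughly reproduces this default behavior for untextured, unlit geometry might look like the following; this is only a sketch, not an exact reproduction of the fixed-function pipeline:

//minimal vertex shader
void main()
{
    gl_FrontColor = gl_Color;     // pass the per-vertex color through
    gl_Position = ftransform();   // fixed-function transform of the position
}

//minimal fragment shader
void main()
{
    gl_FragColor = gl_Color;      // write the interpolated vertex color
}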

GLSL types:

float, vec2, vec3, vec4  
int, ivec2, ivec3, ivec4  
bool, bvec2, bvec3, bvec4  
mat2, mat3, mat4

GLSL functions:

dot  
sin, cos, pow, abs, etc. (trigonometric functions take angles in radians)  
ftransform - fixed-function transformation of the vertex position to NDC  
reflect
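
As a small example of how these fit together, a Phong specular term could be computed with a helper function like the one below, intended for use inside a fragment shader. The names lightDir, viewDir, and shininess are assumptions supplied by the surrounding shader, not GLSL built-ins:

// Phong specular term using dot, reflect, and pow.
// lightDir and viewDir are unit vectors pointing from the surface
// point toward the light and the camera, respectively.
float specular(vec3 normal, vec3 lightDir, vec3 viewDir, float shininess)
{
    vec3 r = reflect(-lightDir, normal);              // mirror the light direction about the normal
    return pow(max(dot(r, viewDir), 0.0), shininess); // clamp, then raise to the shininess exponent
}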

GLSL includes built-in variables that are used to pass data into and out of the shader.

Vertex Shader Built-in Variables:

gl_Vertex: Object space position  
gl_Normal: Object space normal

Vertex Shader Outputs:

gl_FrontColor: Write color here  
gl_Position: Write output NDC position

Fragment Shader Built-in Variables:

gl_Color: Interpolated per-vertex color  
gl_FragColor: Write output color here

Variables Available in Both Shaders:

gl_LightModel.ambient  
gl_LightSource[0].diffuse, etc.  
gl_FrontMaterial.diffuse, etc.  
gl_ModelViewMatrix: Transforms points to camera space.  
gl_ProjectionMatrix  
gl_NormalMatrix: Transforms normals to camera space.

GLSL also has special keywords to declare additional variables that pass data into or out of a shader. A uniform lets the program specify a value that is readable by both shader stages and is constant for every vertex and fragment in a pass. An attribute provides a value to the vertex shader for each vertex. A varying declares per-vertex data that is automatically interpolated across each polygon and passed to the fragment shader. Attributes cannot be declared in the fragment shader.

keyword     Source           Destination                  Interpolation
uniform     CPU              vertex + fragment shaders    per pass
attribute   CPU              vertex shader                per vertex
varying     vertex shader    fragment shader              per pixel
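
For example, the three kinds of user-defined variables might be declared and used like this (the names time, tangent, and texCoord are purely illustrative):

//vertex shader
uniform float time;          // set once per pass from the CPU
attribute vec3 tangent;      // supplied per vertex from the CPU (declared here only to show the syntax)
varying vec2 texCoord;       // written per vertex, interpolated per pixel
void main()
{
    texCoord = gl_MultiTexCoord0.st;
    gl_Position = ftransform();
}

//fragment shader
uniform float time;          // uniforms are visible in both shaders
varying vec2 texCoord;       // receives the interpolated value
void main()
{
    gl_FragColor = vec4(texCoord, 0.5 + 0.5 * sin(time), 1.0);
}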

Sample vertex and fragment shaders:

//vertex shader
varying vec3 normal, lightDir;
void main()
{
    // Transform the normal and vertex position into camera space.
    normal = normalize(gl_NormalMatrix * gl_Normal);
    vec3 viewPos = vec3(gl_ModelViewMatrix * gl_Vertex);
    // Direction from the vertex toward light 0 (light positions are stored in camera space).
    lightDir = normalize(vec3(gl_LightSource[0].position) - viewPos);
    gl_Position = ftransform();
}

//fragment shader
varying vec3 normal, lightDir;
void main()
{
    // Start with the ambient contribution.
    vec4 color = gl_LightModel.ambient * gl_FrontMaterial.ambient;
    // Diffuse term, clamped to zero for surfaces facing away from the light.
    float NdotL = max(dot(normalize(normal), normalize(lightDir)), 0.0);
    if (NdotL > 0.0)
        color += (gl_FrontMaterial.diffuse * gl_LightSource[0].diffuse * NdotL);
    gl_FragColor = color;
}

CPU code:

// simpleProgram must be the currently active program (glUseProgram) when glUniform1i is called
GLint uniformTestLoc = glGetUniformLocation(simpleProgram, "uniformTestValue");
glUniform1i(uniformTestLoc, 5);
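
On the GLSL side, the corresponding uniform just needs to be declared at global scope with a matching name and type; for the glUniform1i call above that would be:

uniform int uniformTestValue;  // readable (but not writable) in both shaders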

For many shading tasks, it is easiest to perform all of the calculations in camera space, with the eye at the origin. OpenGL provides the built-in gl_ModelViewMatrix and gl_NormalMatrix variables for this purpose. As in Assignment 2, the normal vectors must be transformed with a different matrix in order to remain perpendicular to the transformed surfaces.
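
Concretely, gl_NormalMatrix is the transpose of the inverse of the upper-left 3x3 block of the modelview matrix:

\[ N = \left( M_{3 \times 3}^{-1} \right)^{T} \]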

Part 2: Texturing and Normal Mapping

Texturing is used to get much more detail into a rendering than the vertex data alone provides. A very common application of (multi)texturing is combining a normal map texture with a color texture to render apparent 3D detail on a completely flat surface.

Texture mapping is a way to apply surface detail from a raster image to a 3D model. A texture map is applied (mapped) to the surface of a shape or polygon. This process is akin to applying patterned paper to a plain white box. Every vertex in a polygon is assigned a texture coordinate. Image sampling locations are then interpolated across the face of a polygon to produce a visual result.

Figure 1: Each vertex is assigned a coordinate for a point on the texture. The texture is then "wrapped" onto the object, with the corresponding points in the texture matching up with the vertices.

If texture coordinates are linearly interpolated in screen space, the texture is distorted for triangles at an angle to the screen, which can be seen as a discontinuity between adjacent triangles. This is not particularly apparent when interpolating colors, but is often very obvious when interpolating texture coordinates, depending on the texture. Perspective-correct texturing accounts for the 3D position of the triangle and produces the correct visual result. This is accomplished by linearly interpolating the value divided by the depth at each vertex (along with 1/depth itself), then dividing by the interpolated 1/depth at the corresponding pixel. OpenGL performs perspective-correct interpolation automatically.

Figure 2: Linearly interpolating texture coordinates results in noticeable artifacts for polygons at an angle to the screen. Perspective-correct interpolation corrects for this effect.
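
Concretely, interpolating a value u between two vertices with depths w0 and w1 along a screen-space parameter t gives:

\[ u(t) = \frac{(1 - t)\,\dfrac{u_0}{w_0} + t\,\dfrac{u_1}{w_1}}{(1 - t)\,\dfrac{1}{w_0} + t\,\dfrac{1}{w_1}} \]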

GLSL has built-in capabilities for mapping textures onto the surfaces of objects and sampling those textures. In the vertex shader we can write gl_TexCoord[0] = gl_MultiTexCoord0; which takes the per-vertex texture coordinates and stores them in a built-in varying array that is interpolated for the fragment shader. In the fragment shader we declare uniform sampler2D texture; which will be set by the C++ program. To access the corresponding texture color for a pixel, we simply call color = texture2D(texture, gl_TexCoord[0].st); in GLSL. OpenGL assigns coordinates from 0 to 1 to points in a rectangular texture, so if a texture should cover an entire face, the texture coordinates of its vertices should be 0 and 1 in u and v.
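
Putting those pieces together, a minimal textured shader pair might look like the following sketch; the sampler name colorMap is illustrative, and the C++ program is assumed to bind the texture to unit 0 and set the sampler uniform accordingly:

//vertex shader
void main()
{
    gl_TexCoord[0] = gl_MultiTexCoord0;  // pass the per-vertex texture coordinates along
    gl_Position = ftransform();
}

//fragment shader
uniform sampler2D colorMap;              // set from the C++ program to the texture unit index
void main()
{
    gl_FragColor = texture2D(colorMap, gl_TexCoord[0].st);
}

On the C++ side, the sampler uniform is set to the index of the texture unit the texture is bound to, e.g. glUniform1i(loc, 0) for GL_TEXTURE0.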

For normal mapping, we use two textures: one for the color of the material and another as the normal map. The normal map is generally used to hold fine 3D detail that would be very computationally expensive to include in the polygon mesh.

A normal map takes the x,y,z components of the surface normals and directly stores them in the RGB values of the texture. Since the RGB values of the normal map are in the range [0, 1], you will need to map them to [-1, 1] and re-normalize the normal. Instead of using the mesh normals for lighting calculations, we use the normal vector from the normal map. This creates the impression of fine 3D texture on the surface.

// Load the normal from the normal map texture's RGB value
normal = texture2D(heightmap, gl_TexCoord[0].st).rgb;

// Map components from [0, 1] to [-1, 1]
normal = normal * 2.0 - 1.0;

// Re-normalize the normal vector
normal = normalize(normal);

The normal vector is in the coordinate system of the flat surface the texture is mapped onto. In order to calculate lighting, we must either transform the normal into camera space or transform the lights and camera into surface coordinates. We can easily transform between these two coordinate systems with a tangent-binormal-normal (TBN) matrix. The tangent is a vector along the surface in the u direction of the texture, and the binormal is in the v direction.

Figure 3: The tangent vector is a vector in the u direction of the texture, tangent to the surface. The binormal is also tangent to the surface, but in the v direction. Together, the tangent, binormal, and normal form a coordinate system on the surface of the object. The binormal is also called the bitangent, since it is also tangent to the surface plane.

The following is the formula for our TBN matrix:

\[ s = \begin{pmatrix} t_x & t_y & t_z \\ b_x & b_y & b_z \\ n_x & n_y & n_z \end{pmatrix} c \]

where c is the vector in the camera frame and s is the vector in the surface frame; t, b, n are the tangent, binormal, and normal vectors in camera space.

Multiplying by this matrix is equivalent to taking the dot product of the vector with each of t, b, n, projecting the original vector onto each axis of the new coordinate system:

\[ s_x = t \cdot c, \qquad s_y = b \cdot c, \qquad s_z = n \cdot c \]

Figure 4: To transform a vector into a new coordinate system, each component is the dot product of the vector with the unit vector along the corresponding axis of the new system. This is illustrated with a 2D coordinate transform.
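
In GLSL, this componentwise projection is just three dot products; a small helper function (the name toSurfaceSpace is illustrative) might look like:

// Project a camera-space vector c onto the surface (TBN) frame.
// t, b, n are the camera-space tangent, binormal, and normal (unit vectors).
vec3 toSurfaceSpace(vec3 t, vec3 b, vec3 n, vec3 c)
{
    return vec3(dot(t, c), dot(b, c), dot(n, c));
}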

OpenGL gives the normal vector for each vertex as a built-in. For the tangent, an OpenGL attribute must be defined in the CPU code and set for each vertex. In general, the tangent can be computed from the normal and the texture coordinates, but it is often already generated when a texture is first created for an object. Many proprietary formats for 3D models include texture tangent vectors. The binormal is a third orthogonal vector, so it can simply be calculated as the cross product of the normal and tangent.

The camera and light vectors should be calculated and transformed into surface space by the vertex shader. These values can be stored as varying variables, and the interpolated values can be read in the fragment shader. The surface normal for each pixel should be sampled from the normal map, and the final color value can then be calculated.
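
The following is one possible sketch of that structure, with only ambient, diffuse, and a simple specular term; the attribute, sampler, and varying names (tangent, colorMap, normalMap, lightDirSurf, viewDirSurf) are illustrative, not required by the assignment:

//vertex shader (sketch)
attribute vec3 tangent;                 // per-vertex tangent supplied by the CPU code
varying vec3 lightDirSurf, viewDirSurf; // light and view directions in surface space
void main()
{
    // Build the TBN frame in camera space.
    vec3 n = normalize(gl_NormalMatrix * gl_Normal);
    vec3 t = normalize(gl_NormalMatrix * tangent);
    vec3 b = cross(n, t);               // binormal as the third orthogonal axis

    // Camera-space light and view directions at this vertex.
    vec3 viewPos = vec3(gl_ModelViewMatrix * gl_Vertex);
    vec3 lightDir = normalize(vec3(gl_LightSource[0].position) - viewPos);
    vec3 viewDir = normalize(-viewPos); // the camera sits at the origin of camera space

    // Project both into the surface (TBN) frame.
    lightDirSurf = vec3(dot(t, lightDir), dot(b, lightDir), dot(n, lightDir));
    viewDirSurf  = vec3(dot(t, viewDir),  dot(b, viewDir),  dot(n, viewDir));

    gl_TexCoord[0] = gl_MultiTexCoord0;
    gl_Position = ftransform();
}

//fragment shader (sketch)
uniform sampler2D colorMap, normalMap;
varying vec3 lightDirSurf, viewDirSurf;
void main()
{
    // Surface-space normal from the normal map, remapped from [0, 1] to [-1, 1].
    vec3 normal = normalize(texture2D(normalMap, gl_TexCoord[0].st).rgb * 2.0 - 1.0);

    vec4 base = texture2D(colorMap, gl_TexCoord[0].st);
    vec4 color = gl_LightModel.ambient * base;

    vec3 l = normalize(lightDirSurf);
    float NdotL = max(dot(normal, l), 0.0);
    if (NdotL > 0.0)
    {
        color += gl_LightSource[0].diffuse * base * NdotL;

        // Simple specular term using the interpolated view direction.
        vec3 r = reflect(-l, normal);
        float spec = pow(max(dot(r, normalize(viewDirSurf)), 0.0), gl_FrontMaterial.shininess);
        color += gl_LightSource[0].specular * gl_FrontMaterial.specular * spec;
    }

    gl_FragColor = color;
}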


Original assignment by Andrew Zhao (Class of 2016).
Updated by Brian Lee (Class of 2018).