
Game Developers Conference: Advanced OpenGL Game Development

# Avoiding 19 Common OpenGL Pitfalls

Mark J. Kilgard *

NVIDIA Corporation

Last updated: July 10, 2000

Every software engineer who has programmed long enough has a war story about some insidious bug that induced head scratching, late-night debugging, and probably even schedule delays. More often than we programmers care to admit, the bug turns out to be self-inflicted. The difference between an experienced programmer and a novice is knowing the good practices to use and the bad practices to avoid so that those self-inflicted bugs are kept to a minimum.

A programming interface pitfall is a self-inflicted bug that is the result of a misunderstanding about how a particular programming interface behaves. The pitfall may be the fault of the programming interface itself or its documentation, but it is often simply a failure on the programmer’s part to fully appreciate the interface’s specified behavior. Often the same set of basic pitfalls plagues novice programmers because they simply have not yet learned the intricacies of a new programming interface.

You can learn about programming interface pitfalls in two ways: the hard way and the easy way. The hard way is to experience them one by one, late at night, and with a deadline hanging over your head. As a wise man once explained, "Experience is a good teacher, but her fees are very high." The easy way is to benefit from the experience of others.

This is your opportunity to learn how to avoid 19 software pitfalls common to beginning and intermediate OpenGL programmers. This is your chance to spend a bit of time reading now to avoid much grief and frustration down the line. I will be honest; many of these pitfalls I learned the hard way instead of the easy way. If you program OpenGL seriously, I am confident that the advice below will make you a better OpenGL programmer.

If you are a beginning OpenGL programmer, some of the discussion below might be about topics that you have not yet encountered. This is not the place for a complete introduction to some of the more complex OpenGL topics covered such as mipmapped texture mapping or OpenGL's pixel transfer modes. Feel free to simply skim over sections that may be too advanced. As you develop as an OpenGL programmer, the advice will become more worthwhile.

**1. Improperly Scaling Normals for Lighting**

Enabling lighting in OpenGL is a way to make your surfaces appear more realistic. Proper use of OpenGL's lighting model provides subtle clues to the viewer about the curvature and orientation of surfaces in your scene.

When you render geometry with lighting enabled, you supply normal vectors that indicate the orientation of the surface at each vertex. Surface normals are used when calculating diffuse and specular lighting effects. For example, here is a single rectangular patch that includes surface normals:

* Mark graduated with a B.A. in Computer Science from Rice University and is a graphics software engineer at NVIDIA. Mark is the author of OpenGL Programming for the X Window System (Addison-Wesley, ISBN 0-201-48359-9) and can be reached by electronic mail addressed to mjk@nvidia.com

```c
glBegin(GL_QUADS);
  glNormal3f(0.181636, -0.25, 0.951057);
  glVertex3f(0.549, -0.756, 0.261);
  glNormal3f(0.095492, -0.29389, 0.95106);
  glVertex3f(0.288, -0.889, 0.261);
  glNormal3f(0.18164, -0.55902, 0.80902);
  glVertex3f(0.312, -0.962, 0.222);
  glNormal3f(0.34549, -0.47553, 0.80902);
  glVertex3f(0.594, -0.818, 0.222);
glEnd();
```

The x, y, and z parameters for each glNormal3f call specify a direction vector. If you do the math, you will find that the length of each normal vector above is essentially 1.0. Using the first glNormal3f call as an example, observe that:

sqrt(0.181636² + (-0.25)² + 0.951057²) ≈ 1.0

For OpenGL's lighting equations to operate properly, the assumption OpenGL makes by default is that the normals passed to it are vectors of length 1.0.

However, consider what happens if, before executing the above OpenGL primitive, glScalef is used to shrink or enlarge subsequent OpenGL geometric primitives. For example:

```c
glMatrixMode(GL_MODELVIEW);
glScalef(3.0, 3.0, 3.0);
```

The above call causes subsequent vertices to be enlarged by a factor of three in each of the x, y, and z directions by scaling OpenGL’s modelview matrix. glScalef can be useful for enlarging or shrinking geometric objects, but you must be careful because OpenGL transforms normals using a version of the modelview matrix called the inverse transpose modelview matrix. Any enlarging or shrinking of vertices due to the modelview transformation also changes the length of normals.

Here is the pitfall: any modelview scaling that occurs is likely to mess up OpenGL's lighting equations. Remember, the lighting equations assume that normals have a length of 1.0. The symptom of incorrectly scaled normals is that the lit surfaces appear too dim or too bright depending on whether the normals were enlarged or shrunk.

The simplest way to avoid this pitfall is by calling:

```c
glEnable(GL_NORMALIZE);
```

This mode is not enabled by default because it involves several additional calculations. Enabling the mode forces OpenGL to normalize transformed normals to be of unit length before using the normals in OpenGL's lighting equations. While this corrects potential lighting problems introduced by scaling, it also slows OpenGL's vertex processing speed since normalization requires extra operations, including several multiplies and an expensive reciprocal square root operation. While you may argue whether this mode should be enabled by default or not, OpenGL's designers thought it better to make the default case be the fast one. Once you are aware of the need for this mode, it is easy to enable when you know you need it.

There are two other ways to avoid problems from scaled normals that may let you avoid the performance penalty of enabling GL_NORMALIZE. One is simply not to use glScalef to scale vertices. If you need to scale vertices, try scaling them before sending them to OpenGL. Referring to the above example, if the application simply multiplied each glVertex3f coordinate by 3, you could eliminate the need for the glScalef call without having to enable the GL_NORMALIZE mode.

Note that while glScalef is problematic, you can safely use glTranslatef and glRotatef because these routines change the modelview matrix transformation without introducing any scaling effects. Also, be aware that glMultMatrixf can be a source of normal scaling problems if the matrix you multiply by introduces scaling effects.

The other option is to adjust the normal vectors passed to OpenGL so that after the inverse transpose modelview transformation, the resulting normal becomes a unit vector. For example, if the earlier glScalef call tripled the vertex coordinates, we could correct for the corresponding one-third scaling of the transformed normals by pre-multiplying each normal component by 3.

OpenGL 1.2 adds a new glEnable mode called GL_RESCALE_NORMAL that is potentially more efficient than the GL_NORMALIZE mode. Instead of performing a true normalization of the transformed normal vector, the transformed normal vector is scaled based on a scale factor computed from the inverse modelview matrix's diagonal terms. GL_RESCALE_NORMAL can be used when the modelview matrix has a uniform scaling factor.

**2. Poor Tessellation Hurts Lighting**

OpenGL's lighting calculations are done per-vertex. This means that the shading calculations due to light sources interacting with the surface material of a 3D object are only calculated at the object's vertices. Typically, OpenGL just interpolates or smooth shades between vertex colors.

OpenGL's per-vertex lighting works pretty well except when a lighting effect such as a specular highlight or a spotlight is lost or blurred because the effect is not sufficiently sampled by an object's vertices. Such under-sampling of lighting effects occurs when objects are coarsely modeled to use a minimal number of vertices.

Figure 1 shows an example of this problem. The top left and top right cubes each have an identically configured OpenGL spotlight light source shining directly on each cube. The left cube has a nicely defined spotlight pattern; the right cube lacks any clearly defined spotlight pattern. The key difference between the two models is the number of vertices used to model each cube. The left cube models each surface with over 120 distinct vertices; the right cube has only 4 vertices per surface.

At the extreme, if you tessellate the cube to the point that each polygon making up the cube is no larger than a pixel, the lighting effect will essentially become per-pixel. The problem is that the rendering will probably no longer be interactive. One good thing about per-vertex lighting is that you decide how to trade off rendering speed for lighting fidelity.
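One way to control this trade-off is to tessellate each face into an n-by-n grid, which gives (n+1)² lighting samples per face. The sketch below (illustrative names, no OpenGL calls) just generates the vertex positions; with n = 10 you get 121 vertices per face, comparable to the well-lit cube described above:

```c
/* Fill xy with 2D positions for a unit quad tessellated into an
   n-by-n grid; the caller supplies the shared face normal and
   draws the cells as quads or triangle strips.
   Returns the number of vertices written: (n + 1) * (n + 1). */
static int tessellate_quad(int n, float *xy)
{
    int count = 0;
    for (int j = 0; j <= n; j++) {
        for (int i = 0; i <= n; i++) {
            xy[2 * count]     = (float)i / (float)n;
            xy[2 * count + 1] = (float)j / (float)n;
            count++;
        }
    }
    return count;
}
```

Raising n sharpens spotlight and specular patterns at the cost of more per-vertex transform and lighting work.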

Smooth shading between lit vertices helps when the color changes are gradual and fairly linear.

The problem is that effects such as spotlights, specular highlights, and non-linear light source attenuation are often not gradual. OpenGL's lighting model only does a good job capturing these effects if the objects involved are reasonably tessellated.

Figure 1: Two cubes rendered with an identical OpenGL spotlight enabled.

Novice OpenGL programmers are often tempted to enable OpenGL's spotlight functionality and shine a spotlight on a wall modeled as a single huge polygon. Unfortunately, no sharp spotlight pattern will appear as the novice intended; you probably will not see any spotlight effect at all. The problem is the spotlight's cutoff: the extreme corners of the wall, where the vertices are specified, get no contribution from the spotlight, and since those are the only vertices the wall has, there will be no spotlight pattern on the wall.

If you use spotlights, make sure that you have sufficiently tessellated the lit objects in your scene with enough vertices to capture the spotlight effect. There is a speed/quality tradeoff here: more vertices mean better lighting effects but also increase the amount of vertex transformation required to render the scene.

Specular highlights (such as the bright spot you often see on a pool ball) also require sufficiently tessellated objects to capture the specular highlight well.

Keep in mind that with more linear lighting effects, such as ambient and diffuse lighting, where there are typically no sharp lighting changes, you can get good results with even fairly coarse tessellation.

If you do want both high quality and high-speed lighting effects, one option is to try using multipass texturing techniques to texture specular highlights and spotlight patterns onto objects in your scene. Texturing is a per-fragment operation so you can correctly capture per-fragment lighting effects. This can be involved, but such techniques can deliver fast, high-quality lighting effects when used effectively.

**3. Remember Your Matrix Mode**

OpenGL has a number of 4 by 4 matrices that control the transformation of vertices, normals, and texture coordinates. The core OpenGL standard specifies the modelview matrix, the projection matrix, and the texture matrix.

Most OpenGL programmers quickly become familiar with the modelview and projection matrices. The modelview matrix controls the viewing and modeling transformations for your scene. The projection matrix defines the view frustum and controls how the 3D scene is projected into a 2D image. The texture matrix may be unfamiliar to some; it allows you to transform texture coordinates to accomplish effects such as projected textures or sliding a texture image across a geometric surface.

A single set of matrix manipulation commands controls all types of OpenGL matrices: glScalef, glTranslatef, glRotatef, glLoadIdentity, glMultMatrixf, and several other commands. For efficient saving and restoring of matrix state, OpenGL provides the glPushMatrix and glPopMatrix commands; each matrix type has its own stack of matrices.

None of the matrix manipulation commands have an explicit parameter to control which matrix they affect. Instead, OpenGL maintains a current matrix mode that determines which matrix type the previously mentioned matrix manipulation commands actually affect. To change the matrix mode, use the glMatrixMode command. For example:

```c
glMatrixMode(GL_PROJECTION);
/* Now update the projection matrix. */
glLoadIdentity();
glFrustum(-1, 1, -1, 1, 1.0, 40.0);
```

A common pitfall is forgetting the current setting of the matrix mode and performing operations on the wrong matrix stack. If later code assumes the matrix mode is set to a particular state, you both fail to update the matrix you intended and screw up whatever the actual current matrix is.

If this can trip up the unwary programmer, why would OpenGL have a matrix mode? Would it not make sense for each matrix manipulation routine to also pass in the matrix that it should manipulate? The answer is simple: lower overhead. OpenGL’s design optimizes for the common case. In real programs, matrix manipulations occur more often than matrix mode changes. The common case is a sequence of matrix operations all updating the same matrix type. Therefore, typical OpenGL usage is optimized by controlling which matrix is manipulated based on the current matrix mode. When you call glMatrixMode, OpenGL configures the matrix manipulation commands to efficiently update the current matrix type. This saves time compared to deciding which matrix to update every time a matrix manipulation is performed.
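The design can be mimicked in a few lines of plain C: switching the mode just swaps a pointer, after which every matrix command writes to the selected matrix with no per-call decision. All names here are illustrative, not OpenGL internals:

```c
#include <string.h>

typedef struct { float m[16]; } Mat4;

static Mat4 modelview, projection;
static Mat4 *current = &modelview;           /* the "matrix mode" state */

/* Like glMatrixMode: select which matrix later commands affect. */
static void matrix_mode(Mat4 *which) { current = which; }

/* Like glLoadIdentity: reset the currently selected matrix. */
static void load_identity(void)
{
    memset(current->m, 0, sizeof current->m);
    current->m[0] = current->m[5] = current->m[10] = current->m[15] = 1.0f;
}

/* Uniform scale, like glScalef(s, s, s): scales the first three
   columns of the column-major matrix. */
static void scale_uniform(float s)
{
    for (int i = 0; i < 12; i++)
        current->m[i] *= s;
}
```

Each matrix operation touches `current` directly; only matrix_mode pays the cost of deciding which matrix is active, which matches the common pattern of many operations per mode change. It also shows the pitfall: forget to switch `current` back and every later command silently edits the wrong matrix.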

OpenGL’s color matrix extension also provides a way for colors to be transformed via a 4 by 4 matrix. The color matrix helps convert between color spaces and accelerate image processing tasks. The color matrix functionality is a part of the OpenGL 1.2 ARB_imaging subset.