CAP 202 - Computer Animation II

Lesson 2 - Chapter 1


This lesson describes optimizing assets for games. Objectives important to this lesson:

  1. Optimizing
  2. What the artist can do
  3. Particle system recommendations
  4. Level of detail
  5. Collision-based optimizations
  6. Culling occluded polygons
  7. What can be seen

We continue with Chapter 1 of the text (page 1) to get a general idea of what the author means by game assets and what we can do to reduce the load they place on a computer/game console. By asset, he means any graphic art that an artist creates to be used in the game. The more detailed the art, the more processing power it takes to use it in the game.

The author observes that a game must fulfill two goals that are in conflict with each other. To look good, a game should have detailed art (characters, scenery, etc.) that looks impressive. To run well on a system, the game must load and run quickly as the player goes from scene to scene, encountering new characters, new scenes, special effects, etc. To balance these goals, the game developers must optimize the assets to produce the best result for the least overhead.

The author observes that a game's performance can be measured by how many frames per second it can display on a given system. Most of us have played games on systems that have bogged down under the load of a game that put so much detail on the screen that the processor/video card could not maintain the desired frame rate. Waiting for a scene to load ruins the pacing of a game. Locking up the computer in the middle of a battle, a race, or a chase ruins the game experience in general.

As an artist working on a game, you have control over some of the things that affect frame rate, and our author tells us that it is your responsibility to manage those things you can control from the start of your involvement in the game project.

  • MIP (multum in parvo) mapping - The artist should consider at what distance a texture will be seen by the player. For many objects, you will want to have several textures, and will want to use the one that is appropriate for the current distance. In this way, you can use a low resolution texture for long shots, and a detailed texture for close-ups (if needed) and others in between, switching from one to another as the distance from the camera to the object changes.
  • Texture atlas ( texture page, T-page, texture packing) - The author tells us that this technique may not be used much longer. It is similar to the WWII aircraft model we worked on last term, which applied several textures to the same model, saving them all in one layout. The general idea is to put textures for several objects on one large asset, which can then be loaded with information about how to use it with the several objects (or several parts of a single object). This is to save some of the load time you would get with several separate assets.
    Consider the example on page 10 of the text. The illustrated texture atlas seems to have eleven textures on it. (No, I can't see the borders, I counted the references in the XML document that tells how to use the textures.) Ten textures are artificially masked in the image on the page. The author has done this only to emphasize the one he wants to show us. The one that is not masked is referenced in the box he has drawn on the XML document shown below the atlas. (By the way, does the texture for the tiger look odd to you? The tiger has been unwrapped for UV mapping.)
  • Unlit texture - An unlit texture is not affected by light in the scene. It will look the same regardless of any lights added to the scene. It has its own light level included in the image. This is less realistic than a texture that is illuminated by light in the scene, but it saves on processing cycles. The most famous examples of this kind of texture are found in the art of Thomas Kinkade.
  • Multi-texturing - As you have learned in 3DS Max, you can apply multiple textures to a single object, and you can combine textures on objects as well. This idea lets you create new effects with new combinations of textures that you have already loaded into memory for the scene.
  • Lightmaps - The text explains that these are pre-rendered images (not rendered during game play) of light and shadow effects in a scene. This saves on render time during the game, but adds to the asset load. Note the illustration on page 14 of two lightmaps for the shadow of a set of letters: the lightmap that has a darker and higher resolution is stored in a larger asset file, the lightmap that is less defined, but gives a general impression of the same shadow is stored in a smaller asset file. Consider using a low resolution version where the required shadow is only in the background of a scene.
  • Masking and transparency - It is less clear from the illustrations this time what the author means. Consider the pedate leaf texture shown on page 16. This is a leaf that has several branches from a central stem. The author has chosen a shade of green as the background of this texture that he has not used in the actual texture itself. (It might have been clearer if he had used pink, or any other color farther from the palette he used in the texture.)
    His point is that he wants you to be able to see through the gaps between the branches of the leaf. He discusses two ways to do this.

    By setting a mask, he is telling the program that the background color is "clear" and should not be drawn on the screen. This means that pixels in the background color will not be drawn, and will not occlude whatever they would have overlapped. The drawback to this method is that the borders of the leaf can be harsh, as shown in the second image on page 16.

    The second method (transparency) is to use an alpha channel in the image to specify the opacity level of each pixel in the texture. (PNG files, for example, have alpha channels.) This provides smoother images on screen, but it requires more processing power at run time, since each image pixel is actually blended with the pixels it might occlude, even if the pixel is tagged as transparent.

    A third method that might occur to you would be to use a UVW unwrap of a high-poly model, cutting the gaps into the geometry itself, but this would defeat the purpose of trying to stay within a low polygon budget.
  • Texture size and compression - A texture that is tiled over a large area may have to be more detailed, and perhaps larger, than one that tiles only a few times, especially if it is seen at close range. This concept of the range at which a texture is seen comes up several times in the text. It is less useful in games where the player is free to travel to almost any location; in the driving game example, by contrast, there is scenery that will never be seen in close-up, so the advice holds there.
    A texture can also be compressed to save file size. The text observes that you should try out several compression settings before selecting one. Compression is like a box of chocolates: you never know what you're going to get.
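The MIP mapping idea at the top of this list can be made concrete with a minimal Python sketch of switching texture resolution by distance. The function name, the reference distance, and the one-level-per-doubling scheme are illustrative assumptions, not values from the text:

```python
import math

def select_mip_level(distance, base_resolution=1024, reference_distance=10.0):
    """Pick a mip level for a texture seen at `distance` world units.

    Level 0 is the full-resolution texture; each successive level
    halves the resolution. Thresholds here are illustrative only.
    """
    if distance <= reference_distance:
        return 0
    # One coarser mip level each time the distance doubles past the reference.
    level = int(math.log2(distance / reference_distance))
    # Never shrink past a 1x1 texture.
    max_level = int(math.log2(base_resolution))
    return min(level, max_level)
```

With these assumed numbers, an object ten units away or closer gets the full texture, one eighty units away gets a texture one-eighth the resolution on each side, and nothing ever drops below level 10 (a 1x1 pixel) for a 1024-pixel base texture.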

The text moves on to describe particle systems, which can be used for fire, fog, rain, and other effects that require many small polygons to be placed or to move around on the screen. The text explains that the polygons used typically have a texture and a transparency value. They may also have several other properties that the artist using the particle system can change:

  • rate of production
  • size
  • speed
  • position
  • life span
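The per-particle properties listed above can be sketched as a minimal Python particle system; all class names, the straight-up motion, and the numeric ranges are illustrative assumptions, not from the text:

```python
import random

class Particle:
    def __init__(self, position, speed, size, life_span):
        self.position = position        # (x, y, z)
        self.speed = speed              # units per second, straight up here
        self.size = size
        self.life_span = life_span      # seconds remaining

    def update(self, dt):
        x, y, z = self.position
        self.position = (x, y + self.speed * dt, z)
        self.life_span -= dt

    @property
    def alive(self):
        return self.life_span > 0

class Emitter:
    def __init__(self, position, rate):
        self.position = position
        self.rate = rate                # particles spawned per second
        self.particles = []

    def update(self, dt):
        # Spawn new particles at the configured rate.
        for _ in range(int(self.rate * dt)):
            self.particles.append(
                Particle(self.position,
                         speed=random.uniform(1.0, 2.0),
                         size=random.uniform(0.1, 0.3),
                         life_span=random.uniform(0.5, 1.5)))
        # Age existing particles, then cull the dead ones. Short life
        # spans keep this list, and the render load, small.
        for p in self.particles:
            p.update(dt)
        self.particles = [p for p in self.particles if p.alive]
```

Note how culling dead particles every frame is what makes the "short life spans" advice below pay off: the list of live particles, and the polygons drawn each frame, stays bounded.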

The particle system itself may have one or more emitters (where the particles are spawned) and one or more textures (applied to the particles), and the particles themselves may be simple polygons (like raindrops) or complex meshes (like birds in a flock). Unlike the plain textures discussed above, a particle system may also require a sound effect, depending on what it represents. Some advice about particle systems is provided:

  • Use unlit textures for most particle systems
  • Optimize for resolution to get the smallest file size
  • Determine the best masking or transparency method to use
  • Turn off particle collision when possible (more on collisions below)
  • Set the particles for short life spans
  • Use no more particles than are necessary
  • Make each particle look the best you can, to reduce the number needed
  • Tune the settings to display the particles to best effect

On page 24, the author suggests that several versions of objects (meshes, models) should be created, each having fewer polygons than the last, if those objects are going to be seen at various distances in the game. His point is that a high polygon model will require as much rendering when seen from far away as when seen up close, but the low polygon model will look just as good when seen far away, and will take less rendering, reducing the load on the system. He refers to this concept as setting the level of detail for these objects.
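The level-of-detail idea can be sketched as a simple distance lookup. The mesh names and distance thresholds below are invented for illustration; a real game would tune them per object:

```python
def select_lod(distance, lod_meshes):
    """Choose which level-of-detail mesh to render.

    `lod_meshes` is a list of (max_distance, mesh_name) pairs sorted by
    increasing distance; the last entry is used beyond its threshold.
    """
    for max_distance, mesh in lod_meshes:
        if distance <= max_distance:
            return mesh
    return lod_meshes[-1][1]

# Hypothetical three-version model, highest polygon count first.
humvee_lods = [
    (15.0, "humvee_high"),    # full-detail model for close-ups
    (60.0, "humvee_medium"),
    (200.0, "humvee_low"),    # a handful of polygons for distant shots
]
```

The high-polygon version is only ever rendered when the player is close enough to appreciate it; everywhere else, the cheaper versions look just as good for far less rendering work.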

The text continues with collision-based optimizations. From the artist's perspective, each object that might touch another object may need a definition of where that touch happens. Consider the illustrations on page 25. The model of the vehicle is shown with two different collision hulls (also called collision meshes). Think of the collision hull as a transparent shell around an object. It sets a boundary around an object, defining how close other objects can come, and at what point they are considered to be touching. The word collision is used to mean making contact, and does not imply a particular amount of force.

In the first example, the collision hull was constructed as a simple box, enclosing most of the vehicle. This is described in the text as constructing a simple primitive. The drawback to this simple approach is that the figure in the scene, meant to be standing on the Humvee's hood, is actually standing two feet above it. Not good for close work, is it?

In the second example, the collision hull has been customized to fit the top and end surfaces of the vehicle much more closely. This allows the character to touch any of those surfaces much more realistically. Note that the author did not make the collision hull hug the lower surface of the Humvee. He is not anticipating the need for anything to touch the lower surface of the object.

Taking the collision hull to the extreme, the author could have created it to match the surface of the object exactly, which would allow collisions on every polygon of the object. This may be necessary when you are working on a game where small differences make big differences. (How about simulated microsurgery?)
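The simple-primitive hull from the first example amounts to an axis-aligned box test, which can be sketched as follows. The class and method names are assumptions for illustration; the tighter hull of the second example would be built from several such boxes, or a convex mesh, rather than one:

```python
class BoxHull:
    """Axis-aligned box collision hull: the 'simple primitive' case."""

    def __init__(self, min_corner, max_corner):
        self.min_corner = min_corner    # (x, y, z) of the lowest corner
        self.max_corner = max_corner    # (x, y, z) of the highest corner

    def contains(self, point):
        # The point touches the hull if it is inside on every axis.
        return all(lo <= p <= hi
                   for lo, p, hi in zip(self.min_corner, point, self.max_corner))

    def intersects(self, other):
        # Two boxes touch unless they are separated on some axis.
        return all(a_lo <= b_hi and b_lo <= a_hi
                   for a_lo, a_hi, b_lo, b_hi in zip(
                       self.min_corner, self.max_corner,
                       other.min_corner, other.max_corner))
```

A character standing "on" a single box hull stands on the top of the box, which is exactly why the figure in the first illustration floats two feet above the hood.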

Having described where a collision may take place, the text considers what type of collision may take place and what happens as a collision occurs. Page 26 explains that the result of a collision may be different for a player than for a non-player character (NPC). We might create collision hulls to prevent NPCs from going places where we would allow the player to go.

The result of a collision can also vary depending on what collided. Should the Humvee being shot by a pistol generate the same result as it being shot by an anti-tank weapon? No, those events should trigger different results, which may require very different textures being applied to the Humvee in the next moment. The text also points out that it will not be enough to have one texture for each such event. The player will be bored if the targets in the game always look the same when they are hit. Some variation will be needed to maintain interest.
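One way to picture this in code is a lookup table mapping each weapon type to several damage textures, with one chosen at random so repeated hits do not look identical. The table contents and names below are invented for illustration:

```python
import random

# Hypothetical damage table: which decal texture to apply to the Humvee
# when it is hit, with several variants per weapon to avoid repetition.
DAMAGE_DECALS = {
    "pistol":    ["bullet_dent_a", "bullet_dent_b", "bullet_dent_c"],
    "anti_tank": ["blast_hole_a", "blast_hole_b"],
}

def pick_damage_decal(weapon):
    """Return one of several decal textures for this weapon type."""
    variants = DAMAGE_DECALS.get(weapon)
    if variants is None:
        return None                     # unknown weapon: no decal applied
    return random.choice(variants)
```

The pistol and the anti-tank weapon trigger entirely different texture sets, and the random choice within each set supplies the variation the text says is needed to keep the player interested.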

We have already considered the concept of occlusion: one object blocking the player's view of another object. In general, the text recommends that any occluded polygons not be drawn in a scene. The process of removing such polygons is called culling on page 28. The text also recommends that we learn to use whatever tool the game engine provides to optimize culling.

The idea of determining what can be seen leads to several new terms:

  • frustum - the player's view of a scene, diagrammed as a pyramid whose apex is at the player or camera. As shown on pages 30 and 31, the angle formed by the sides of the frustum (the field of view) may simulate either narrow or wide-angle camera lenses
  • near and far clipping - the near clip value tells the camera where to start seeing, and the far clip value tells the camera where to stop seeing. The planes defined by these distances from the player are called the near clipping plane and the far clipping plane on page 29.
  • six planes - the frustum illustrated on page 32 has six planes: near, far, top, bottom, left, and right. If an object is outside these planes, it should be culled from the scene, but this leads to problems that are addressed by the next two ideas.
  • distance fog - If an object crosses the far plane, coming toward the player, it should be visible to the player, but it will seem very unnatural if it suddenly appears. Distance fog is applied to an object at the far plane to gray it out, and it is removed (made clearer) as it comes nearer. This allows an object to appear gradually to the player.
  • cull distance - This is the distance beyond which the game engine will not render an object. This value can be set separately for each object. Larger objects, for example, should have higher cull distances than smaller objects.
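The clipping, cull distance, and distance fog ideas above can be sketched together in Python. This one-dimensional sketch ignores the four side planes of the frustum, and all function names and numbers are illustrative assumptions:

```python
def should_render(distance, near_clip, far_clip, cull_distance):
    """Decide whether an object at `distance` from the camera is drawn.

    The object is rendered only if it lies between the near and far
    clipping planes AND within its own per-object cull distance.
    """
    return near_clip <= distance <= min(far_clip, cull_distance)

def fog_factor(distance, fog_start, far_clip):
    """Distance fog amount: 0.0 = fully clear, 1.0 = fully fogged.

    Objects fade in gradually as they cross from the far plane toward
    the fog_start distance, instead of popping into view.
    """
    if distance <= fog_start:
        return 0.0
    if distance >= far_clip:
        return 1.0
    return (distance - fog_start) / (far_clip - fog_start)
```

An object crossing the far plane toward the player starts at a fog factor of 1.0 (fully grayed out) and clears linearly as it approaches, which is the gradual appearance the text describes.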

The text also suggests that rendering problems can be reduced by using rooms in a game. If a player cannot see beyond the walls of a room, the rendering tasks should be reduced. If the player can see through a door or window, this presents a new, limited extension to the frustum.

Exercise 2: Consider a game you have played that has used several of the techniques described above. Write a one page paper about the techniques you recall from that game that are covered in this chapter. Discuss whether any of the techniques above that you did not see used would have helped improve the game, and explain why. This can be a group assignment, but everyone must participate in it.