CAP 161 - Digital Imaging for Animation

Lesson 2 - Graphic Technology, Shaders and Materials


This lesson concludes chapter 2, and discusses elements from chapter 3. Objectives important to this lesson:

  1. Optimizing
  2. What the artist can do
  3. Mapping textures
  4. Mapping types
  5. UV Editing
  6. Shaders
  7. Shader effects

The author observes that a game must fulfill two goals that are in conflict with each other. To look good, a game should have detailed art (characters, scenery, etc.) that looks impressive. To run well on a system, the game must load and run quickly as the player goes from scene to scene, encountering new characters, new scenes, special effects, etc. To balance these goals, the game developers must optimize the assets to produce the best result for the least overhead.

The author observes that a game's performance can be measured by how many frames per second it can display on a given system. Most of us have played games on systems that have bogged down under the load of a game that put so much detail on the screen that the processor/video card could not maintain the desired frame rate. Waiting for a scene to load ruins the pacing of a game. Locking up the computer in the middle of a battle, a race, or a chase ruins the game experience in general.

As an artist working on a game, you have control over some of the things that affect frame rate, and our author tells us that it is your responsibility to manage those things you can control from the start of your involvement in the game project.

  • MIP (multum in parvo) mapping - The artist should consider at what distance a texture will be seen by the player. For many objects, you will want to have several textures, and will want to use the one that is appropriate for the current distance. In this way, you can use a low resolution texture for long shots, and a detailed texture for close-ups (if needed) and others in between, switching from one to another as the distance from the camera to the object changes.
  • Texture atlas (texture page, T-page, texture packing) - The author tells us that this technique may not be used for much longer. I found a 2009 blog page about using it, so it is not obsolete yet. The general idea is to put textures for several objects on one large asset, which can then be loaded with information about how to use it with the several objects. This saves some of the load time you would incur with several separate assets. So, you load once for several textures, instead of loading several separate times for the same number of textures.
    See an example on page 84 of the text. The illustrated texture atlas seems to have eleven textures on it. (No, I can't see the borders; I counted the references in the XML document.) Ten are artificially masked in the image on the page. The author has done this only to emphasize the one he wants to show us. The one that is not masked is referenced in the box he has drawn on the XML document shown below the atlas. (By the way, does the texture for the tiger look odd to you? The tiger has been unwrapped for UV mapping.)
  • Unlit texture - An unlit texture is not affected by light in the scene, and will look the same regardless of the light level. It has its own light level included in the image. This is less realistic than a texture that is illuminated by light in the scene, but it saves on processing cycles. Follow this link to the web site for Thomas Kinkade, who is known for painting scenes that give the illusion of containing their own light. Think about what he does when he paints a scene, and how you could use that idea with textures that do not need to be dynamically lit.
  • Multi-texturing - As you will see in 3DS Max, you can apply multiple textures to a single object, and you can combine textures on objects as well. This idea lets you create new effects with different combinations of textures that you have already loaded into memory for the scene. The combination of the textures may be handled by a shader program that is part of your modeling program.
  • Lightmaps - The text explains that these are pre-rendered images (not rendered during game play) of light and shadow effects in a scene. This saves on render time during the game, but adds to the asset load. Note the illustration on page 88 of two lightmaps for the shadow of a cannon: the large lightmap is darker and higher resolution, the smaller lightmap is less defined, but gives the general impression of the same shadow. They are both lightmaps, but the dark one contains much more detail. Consider using a low resolution version where the required shadow is only in the background of a scene, or where there is meant to be a lower light level in the scene.
  • Masking and transparency - It is less clear from the illustrations this time what the author means. Consider the pedate leaf texture shown on page 89. This is a leaf that has several branches from a central stem. The author has chosen a shade of green as the background of this texture that he has not used in the actual texture itself. (It might have been clearer if he had used pink, or any other color farther from the palette he used in the texture.)

    His point is that he wants you to be able to see through the gaps between the branches of the leaf. He can do this in either of two ways. In the first method, he sets a mask. He is telling the program that the background color (the mask) is "clear" and should not be drawn on the screen. This means that pixels in the background color will not be drawn, and will not occlude (hide) whatever they would have overlapped. The drawback to this method is that the borders of the leaf can be harsh, as shown in the second image on page 89. Note the jagged edges this method produces when viewed in detail.

    The second method (transparency) is to use an alpha channel in the image to specify the opacity level of each pixel in the texture. (PNG files, for example, have alpha channels.) This provides smoother images on screen, but it requires more processing power at run time, since each image pixel is actually blended with the pixels it might occlude, even if the pixel is tagged as transparent. Of the two, only this method allows you to create a texture for an object that is meant to be translucent or transparent to some degree. Masking is not useful for this kind of effect.
  • Texture size and compression - A texture that is tiled over a large area may have to be more detailed, and perhaps larger, than one that tiles only a few times, especially if it is seen at close range. This concept of the range at which a texture is seen comes up several times in the text. It is less valid in games where the player has the freedom to travel to almost any location in the game. In the driving game example, there will be scenery that will not be seen in close-up, so it is more valid there.
    A texture can also be compressed to save file size. The text observes that you should try out several compression settings before selecting one. Compression is like a box of chocolates: you never know what you're going to get.
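The MIP mapping idea in the list above can be sketched as a simple level-selection rule: assume the artist supplies textures at halving resolutions (level 0 is full detail), and pick a level by camera distance. The `base_distance` threshold and the distances used below are invented for illustration, not from the text.

```python
# Sketch of MIP level selection by camera distance (illustrative only).
# Assumes level 0 is the full-resolution texture and each level halves it.
import math

def mip_level(distance, base_distance=10.0, max_level=4):
    """Pick a MIP level: each doubling of distance drops one level of detail."""
    if distance <= base_distance:
        return 0
    level = int(math.log2(distance / base_distance))
    return min(level, max_level)

# Close-ups use the detailed texture; long shots use coarser ones.
print(mip_level(5.0))    # -> 0 (close-up, full detail)
print(mip_level(45.0))   # -> 2 (mid-range)
print(mip_level(500.0))  # -> 4 (clamped to the coarsest level)
```

A real engine computes this from how many screen pixels a texel covers rather than raw distance, but the switching behavior is the same.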
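The masking-versus-transparency distinction above can also be sketched per pixel. This is a hedged illustration, not the book's code: the reserved mask color and the RGB values are made up, and real blending happens on the GPU rather than in Python.

```python
# Per-pixel sketch of masking vs. alpha transparency (values invented).
# Colors are (r, g, b) tuples in 0..255.

MASK_COLOR = (0, 255, 0)  # the reserved "background green" from the leaf example

def masked(texel, background):
    """Masking: a texel is either drawn fully or skipped entirely (hard edges)."""
    return background if texel == MASK_COLOR else texel

def alpha_blend(texel, alpha, background):
    """Transparency: every texel is blended by its alpha (0.0-1.0), so edges smooth out."""
    return tuple(round(alpha * t + (1 - alpha) * b)
                 for t, b in zip(texel, background))

bg = (100, 100, 100)
print(masked(MASK_COLOR, bg))              # background shows through: (100, 100, 100)
print(masked((30, 90, 20), bg))            # leaf pixel drawn as-is: (30, 90, 20)
print(alpha_blend((30, 90, 20), 0.5, bg))  # 50/50 edge blend: (65, 95, 60)
```

The all-or-nothing `masked` function is what produces the jagged leaf edges on page 89; `alpha_blend` is why an alpha channel costs more but looks smoother, and why only it can do partial translucency.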

Chapter 3 is about shaders and materials. A shader is defined on page 94 as "a mini-program that processes graphic effects in real time". This means that the effect can change with the light in a scene, and with the movement of a character. This makes it hard to display the effect of a shader in a still frame, but the author provides some stills that show impressive skin tones on page 94. In the author's terms, a material is created from a collection of textures that work together to create something more than the sum of their parts. Shaders can use materials to create impressive results. When textures are used in this way, they are often called maps by the modeling program and by the shader program.

The author pauses to discuss two lighting techniques that may be used in games that calculate lighting in real time.

  • vertex lighting - assigns a brightness value to each vertex of a polygon, and creates a gradient of light between those values
  • per-pixel lighting - assigns a brightness value to every pixel individually
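The difference between the two techniques is where the brightness calculation happens. As a sketch (not from the text), here is one edge of a polygon shaded both ways, using a simple Lambert term (the dot product of the surface normal and the light direction); the normals and light direction are invented.

```python
# Sketch: vertex lighting vs. per-pixel lighting along one polygon edge.
# Brightness is a Lambert term, dot(normal, light); all values invented.
import math

def lambert(normal, light):
    """Clamped dot product of unit vectors: 0 = unlit, 1 = fully lit."""
    return max(0.0, sum(n * l for n, l in zip(normal, light)))

light = (0.0, 1.0, 0.0)  # light shining straight down

# Vertex lighting: compute brightness only at the two endpoints,
# then stretch a straight-line gradient between them.
b0 = lambert((0.0, 1.0, 0.0), light)   # vertex facing up -> 1.0
b1 = lambert((1.0, 0.0, 0.0), light)   # vertex facing sideways -> 0.0
vertex_lit = [b0 + (b1 - b0) * t / 4 for t in range(5)]

# Per-pixel lighting: re-evaluate the (interpolated) normal at every pixel,
# so the curved surface produces a cosine falloff instead of a straight ramp.
per_pixel = [lambert((math.sin(a), math.cos(a), 0.0), light)
             for a in [t * math.pi / 8 for t in range(5)]]

print(vertex_lit)  # straight gradient: [1.0, 0.75, 0.5, 0.25, 0.0]
print(per_pixel)   # cosine falloff: stays brighter near the lit vertex
```

Per-pixel lighting does far more work (one calculation per pixel instead of per vertex), which is why it looks better and costs more.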

The concept of using a shader may look more familiar in the illustrations on page 96. This page illustrates three separate maps being combined by a shader to get an interesting result. In this case, we see a diffuse map (for the main colors), a specular map (for the parts of the image that reflect the most light), and a normal bump map (for the parts of the image that show 3D relief).
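The combination on page 96 can be sketched at the level of a single pixel. This is a simplified, hedged model (Lambert diffuse plus a Blinn-style specular highlight); the color values, specular strength, and shininess exponent are all invented for illustration.

```python
# Sketch of a shader combining diffuse, specular, and normal map data at one pixel.
# Simplified Lambert diffuse + specular highlight; all inputs are invented.

def shade(diffuse_rgb, spec_strength, normal, light, half_vector, shininess=16):
    """Combine the three map samples into one output color."""
    ndotl = max(0.0, sum(n * l for n, l in zip(normal, light)))        # diffuse term
    ndoth = max(0.0, sum(n * h for n, h in zip(normal, half_vector)))  # highlight term
    spec = spec_strength * (ndoth ** shininess)
    return tuple(min(255, round(c * ndotl + 255 * spec)) for c in diffuse_rgb)

# One pixel: diffuse map gives the base color, specular map gives spec_strength,
# and the normal map supplies the normal used in both lighting terms.
pixel = shade((200, 150, 120), 0.2, (0.0, 0.0, 1.0),
              (0.0, 0.0, 1.0), (0.0, 0.0, 1.0))
print(pixel)  # -> (251, 201, 171): base color brightened by the highlight
```

The point is that each map feeds a different term of the same equation, which is why the result looks like more than any one texture could provide alone.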

There is some new information about shaders in various parts of the chapter. On page 103, the author begins a discussion of the blend effect. You are informed that a blend shader can create a look in three basic ways:

  • additive - On page 104, we see an image of a brick wall as the base image, and an image of a beige skull and cross-bones (on a black background) as the image that will be added. The result is that the black areas in the added image are ignored, and the lighter tones are added on top of the first image like a new layer that preserves the feel of the original layer. The skull and cross-bones look like they have been painted on the original wall. The brick nature of the original wall is undisturbed. The original image is mostly there, with some material from the additive image blended into it.
  • subtractive - This time we start with the wall again as the base, and the subtractive image looks like a black explosion (on a white background). This time, the subtractive process ignores the white background, and the black image replaces the pixels in the base image. It's as though there are holes cut in the base image through which we see the subtractive image. You could also say that the new layer blocks the features of the original layer.
  • average - On the upper right corner of page 104, we see an image of what might be a pattern of mold or dirt that will be applied to the base image. The result of the average process is an even blend of each pixel from each of the images. The details of both images are averaged into the result. Neither image is completely the same in the result.
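The three blend effects above can be sketched per color channel. These are common formulations of additive, subtractive, and average blending, consistent with the behavior the text describes (black adds nothing, white subtracts nothing, average mixes evenly); the brick and overlay colors are invented.

```python
# Sketch of the three blend-shader effects, per channel (0..255 values invented).

def additive(base, blend):
    """Black in the blend image adds nothing; lighter tones brighten the base."""
    return tuple(min(255, b + a) for b, a in zip(base, blend))

def subtractive(base, blend):
    """White in the blend image removes nothing; dark tones punch into the base."""
    return tuple(max(0, b - (255 - a)) for b, a in zip(base, blend))

def average(base, blend):
    """An even mix: details from both images survive at half strength."""
    return tuple((b + a) // 2 for b, a in zip(base, blend))

brick = (150, 80, 60)
print(additive(brick, (0, 0, 0)))           # black background: brick unchanged
print(additive(brick, (200, 180, 150)))     # skull pixel: painted on, clamped at 255
print(subtractive(brick, (255, 255, 255)))  # white background: brick unchanged
print(subtractive(brick, (0, 0, 0)))        # black explosion pixel: hole to black
print(average(brick, (100, 100, 100)))      # dirt pixel: even blend of both
```

Notice how each mode matches the page 104 examples: additive ignores black, subtractive ignores white, and average changes every pixel of both images.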

The chapter mentions normal maps, but it does not really define them. A normal map is a map that defines the three-dimensional shape of a surface by telling the shader how light reflects from it and how shadows fall on it. A normal map baked from a complex surface can simulate that complexity on an object whose geometry is actually much simpler.

A normal map makes a low polygon model look and act like a high polygon model. Without that effect, it would just be another map; its whole purpose is to make the low polygon model reflect light as though it were the more sculpted, high polygon version of the same model.

Page 117 should have explained that a normal is the direction a surface faces at a given point: the direction perpendicular to the surface at that spot. Think of every point on a surface as having a tiny geometric face. That face must be "looking" in some direction. A normal map stores the direction that each point is facing (looking).

Imagine a rolling landscape laid out in a grid. At each intersection in the grid, there is an arrow pointing not up, but perpendicular to the ground at that point. Some of the arrows will point up, and some will point at angles based on the curving landscape. These arrows are the normals for all of those points. (Look at the similar illustrations on page 120.) This becomes useful when we think about light. How light reflects from an object, how light is absorbed by an object, and how light is emitted by some objects is greatly affected by the normal map for that object.
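The landscape-of-arrows picture above can be sketched numerically: derive a normal at each grid point from a heightfield, then light it with the same Lambert term. The tiny heightfield below is invented, and the normals come from simple finite differences of the heights.

```python
# Sketch: normals from a tiny invented heightfield, then Lambert lighting.
import math

heights = [
    [0.0, 0.1, 0.3],
    [0.0, 0.2, 0.5],
    [0.1, 0.3, 0.6],
]

def normal_at(x, y):
    """The perpendicular-to-the-ground arrow at grid point (x, y)."""
    dzdx = heights[y][min(x + 1, 2)] - heights[y][max(x - 1, 0)]
    dzdy = heights[min(y + 1, 2)][x] - heights[max(y - 1, 0)][x]
    nx, ny, nz = -dzdx, -dzdy, 1.0          # steeper slope tilts the arrow more
    length = math.sqrt(nx * nx + ny * ny + nz * nz)
    return (nx / length, ny / length, nz / length)

def brightness(normal, light=(0.0, 0.0, 1.0)):
    """Lambert term: ground facing straight at the light is brightest."""
    return max(0.0, sum(n * l for n, l in zip(normal, light)))

gentle = brightness(normal_at(0, 0))  # nearly flat corner: close to fully lit
steep = brightness(normal_at(1, 1))   # steeper middle: arrow tilts away, dimmer
print(round(gentle, 3), round(steep, 3))
```

This is exactly what a normal map lets a shader skip: instead of computing normals from dense real geometry, it looks up a pre-stored arrow per point and lights a flat surface as though it were curved.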

On the top of page 121, consider the series of maps that are being combined to create the finished model on the right. If you look closely, you will notice that the grayscale map does not overlay the color or normal maps well, nor do any of those three match the orientation of the final version on the right. This may be an error in the illustration, but we can make it work if we consider that all three maps are being applied to a blank gray sphere. The maps need to be UV unwrapped maps, created with a common starting and ending point with reference to the sphere and containing detail for the entire sphere. When they are combined by the shader, the maps must match properly, or the illusion will fall apart. Assume that the maps were properly constructed, properly applied, and that the model would look like a sphere with color, shadow, and relief when rotated.

On the bottom of page 121, the text gets much more complex for a few pages. It discusses a node-based system that reminds me of Autodesk's Maya program. You will want to know that a node is a piece (or several pieces) of information about a model. As the book indicates, nodes can be combined, changed, added, and deleted to reconfigure how a model looks. The image on page 123 is an example of several kinds of nodes that might contribute to a model. In this example, not all nodes in the list have been used, but they could be if they were needed. The complexity of this concept is something you will build up to in your modeling classes.

Assignment 4: Redraw two of your drawings from last week as realizations. Ignore the original art source. Start with the geometric version you made as if it were a "notes" version of the original. Add detail and shading to each new version, so that it could be used in a game or animated scene. Think about what would need to be in your art version for a normal map to be developed for the final model, and hand in some remarks about this along with your finished work.

Assignment 5: Answer questions 1, 4, and 6 on page 129. Ignore the part about making a sketch in question 6.