CAP 201a - Computer Animation I

Lab 5 - Shots, Lighting, and Rendering

Objectives:

This lesson considers the discussions of Shot Design, Lighting, and Rendering in the Wyatt text. Objectives important to this lesson:

  1. Shot Design, pages 114 and 115
  2. Lighting, pages 112 and 113
  3. Rendering, pages 110 and 111
Concepts:

The concepts of designing shots, lighting them, and rendering them are related and interdependent. All three should be kept in mind whenever you plan any one of them. If you do not light a scene well, you may not see anything, or you may not see what you hoped to see. If you do not compose your shots well, you lose the meaning of some actions, you lose the combined effect of multiple actors, and you bore the audience. If you do not render a scene well, everything else may as well not have been done.

Before he starts discussing Shot Design on pages 114 and 115, Mr. Wyatt cautions us that rules and theories are useful, but they can also be broken with good results. Let's start from the idea that we don't want to stifle creativity, but we should choose to be different after we have learned to use a rule or theory effectively.

One of the theories he discusses is the well-worn Rule of Thirds. He writes about it only in the picture captions, but it is what all the pictures in this article are about. The idea is that a frame can be divided into thirds, both horizontally and vertically, giving us nine defined areas and four major intersections. Placing an object of interest at any of those four intersections is said to be far more effective than placing it at the center of the frame. (A small arithmetic sketch after the grid figure below works out where those intersections fall for an HD frame.)

Frame divided into thirds vertically and horizontally
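
To see where those intersections actually land, here is a minimal Python sketch that computes the four rule-of-thirds points for a given frame size. It is only an illustration, not something from the text, and the 1920 x 1080 frame is just an example resolution.

```python
def rule_of_thirds_points(width, height):
    """Return the four rule-of-thirds intersections as (x, y) pixel coordinates."""
    xs = (width / 3, 2 * width / 3)    # the two vertical grid lines
    ys = (height / 3, 2 * height / 3)  # the two horizontal grid lines
    return [(x, y) for x in xs for y in ys]

# Example: a 1080 HD frame.
print(rule_of_thirds_points(1920, 1080))
# [(640.0, 360.0), (640.0, 720.0), (1280.0, 360.0), (1280.0, 720.0)]
```

Any of those four points is a candidate spot for a character's eyes, a bird on the sand, or whatever else you want the viewer to notice first.
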
This theory is not carved in stone. Consider the examples in the text. In each case, the focus points are only near the intersections drawn on the frames. The second frame on page 114 could have been better if the camera had been orbited so the face of the character on screen was closer to the upper right intersection point. It may be that putting points of interest at these locations makes a more interesting shot simply because amateurs typically put their points of interest in the center of a shot. That is natural, isn't it? When you look at something, a computer screen for example, you look so that it is in the center of your field of vision, not off to one side. However, in a movie, an ad, or a game, if something looks different, maybe it is more interesting because of that unusual aspect.

Take a trip across the internet to look at some pictures, then come back for a discussion of them.

Would the picture of the bird on the sand be as interesting if the frame were centered on the bird? Maybe. Is the bird's position stronger offset from the center, as it is? Perhaps, and perhaps it is also strengthened by the line of tracks in the sand leading to the bird, the shadow that points to the bird like an arrowhead, and the color transition from beige to blue in that part of the scene. Do you wonder if the bird was asked to pose there? Maybe the bird is trying out for Terns and Tiaras?

Consider the images of people walking and running that follow on the same web page. The frame composition is stronger for the people being placed as they are in each frame. Again, however, there are additional reasons for those placements. I think we see more of their surroundings, telling us more about where the people are when they are photographed this way. This leads to another concept.

Mr. Wyatt offers a tip about not leaving a great deal of dead space in the frame. We might debate what is and isn't "dead" space. Let's consider, however, the two images of a runner on a beach at the last link. The author of that site observes that we are probably more comfortable with the way the runner looks in the shot where we see open space in front of her instead of behind her. Why? Maybe we are more comfortable with thinking about what is ahead, instead of behind. Maybe we expect to be able to see her in the shot longer if she is moving toward the center of the frame instead of away from it. Or maybe you like the other frame better, because the color balance is more interesting.

Let's review some of these concepts with illustrations in a video.

In that video, there is a reinforcement about giving your actor space to move into, and more information about composing shots:

  • Pick your shot distance: Extreme Long Shot (ELS), Long Shot (LS), Medium Shot (MS), Close-Up (CU), Extreme Close-Up (ECU). How intimate is the shot? Is it close enough, or far enough, to support the action?
  • Shot angle: a three-quarter profile is more interesting than a profile or portrait shot, a flat angle looks two-dimensional, and a Dutch angle looks weird. This is true for people, landscapes, or objects.
  • Shot height: a high angle looks down (the subject looks smaller, less powerful); a low angle looks up (the subject looks taller, more powerful)
  • Rule of thirds also leads you to place subjects on a line in the grid, providing lead room/nose room for the subject to move in the frame
  • Place action in the scene where lines in the set lead the viewer to look. Remember the stage dressing that led to the bird in the internet photos.
  • Depth of field also directs the viewer's attention: focus on the part where they should look.
  • A long lens tends to flatten the shot and lose depth; a short lens tends to exaggerate depth and can become a wide-angle shot (see the focal-length sketch after this list).
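
The last bullet about lens length can be made concrete with the standard field-of-view formula. Here is a small Python sketch of it, offered only as an illustration (it is not from the video); the 36 mm sensor width is an assumption corresponding to a full-frame camera.

```python
import math

def horizontal_fov(focal_length_mm, sensor_width_mm=36.0):
    """Horizontal field of view, in degrees, for a given lens and sensor width."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

# A short (wide-angle) lens takes in much more of the scene than a long lens,
# which is part of why the long lens appears to compress, or flatten, depth.
for focal in (24, 50, 135):
    print(f"{focal} mm lens: about {horizontal_fov(focal):.0f} degree field of view")
```

The wide-angle end sees far more of the scene, which is why it exaggerates depth, while the long lens crops to a narrow view and appears to flatten it.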

On pages 112 and 113, Mr. Wyatt discusses Lighting. He begins a discussion of a classic lighting technique, three-point lighting, by saying that it simulates natural lighting. That is an odd statement, since one of the standard arguments against three-point lighting is that it is very unnatural.

Three-point lighting is a classic, tried-and-true technique. It is often used in portraits, such as senior pictures. The method uses three kinds of lights, differentiated by their purpose (you can have more than three lights in a scene, but each would be one of these types):

  • key light - the main light for your scene; you should place a key light in front of the subject, and to one side, making the main light in the scene come from that direction, and cast shadows accordingly
    This is essentially a diffuse light source.
  • fill light - a light that fills in shadows that would otherwise make the scene too dark; the fill light is often to one side of the subject, but it may also be placed in front of the subject. It will shine at a different angle, and will be less intense than the key light.
    This is like an ambient light source, but it can be brighter than ambient light would naturally be.
  • back light - also called a rim light; a light placed behind the subject that separates it from the background and gives the shot depth

The back light could be the most objectionable type. People rarely walk around with a light behind them that either gives them a halo, or makes them pop out from the background. (Or perhaps, only rare people do. I remember one...) This makes the technique less realistic for most subjects, and more stylistic. As I wrote, this is a classic approach, but some people prefer other techniques that more closely simulate nature. This article on light offers an opposing point of view and other suggestions.
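
As a rough sketch only (this is mine, not from the text, and it is not specific to 3DS Max), the classic rig can be described as data: three lights around the subject, each with a direction and a relative intensity. The angles and intensity values below are conventional starting points, not rules.

```python
import math

# A hypothetical three-point rig around a subject at the origin. Angles are
# measured on the ground plane, with 0 degrees straight in front of the
# subject (roughly where the camera sits); intensities are relative to the key.
RIG = {
    "key":  {"angle_deg": 35,  "elevation_deg": 30, "intensity": 1.0},
    "fill": {"angle_deg": -40, "elevation_deg": 15, "intensity": 0.5},
    "back": {"angle_deg": 170, "elevation_deg": 45, "intensity": 0.75},
}

def light_position(angle_deg, elevation_deg, distance=5.0):
    """Convert a light's angles into an (x, y, z) position around the subject."""
    a, e = math.radians(angle_deg), math.radians(elevation_deg)
    return (distance * math.cos(e) * math.sin(a),   # x: left/right of the subject
            distance * math.cos(e) * math.cos(a),   # y: toward (+) or behind (-) the camera side
            distance * math.sin(e))                 # z: height above the subject

for name, light in RIG.items():
    x, y, z = light_position(light["angle_deg"], light["elevation_deg"])
    print(f"{name}: position ({x:.1f}, {y:.1f}, {z:.1f}), relative intensity {light['intensity']}")
```

In 3DS Max you would build the same arrangement with the program's own light objects, which is the subject of the Derakhshani chapter mentioned below.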

We will address the specific use of light-generating objects in 3DS Max in the Derakhshani chapter on that subject.

Rendering is discussed on pages 110 and 111. This ordering of the subjects in the text puzzled me. Rendering is typically the last thing you do with a scene. It is critical to get it right, but you would be foolish to do it before blocking the action for a camera and setting up the lighting in a scene.

Mr. Wyatt begins by observing that highly detailed renders can take a long time to process, which is why animation companies use clusters of computers (render farms) to render feature-length productions. One of his tips for this section is to buy another computer to do the rendering, so you do not tie up your work computer while long renders are taking place. He does not mention that a second computer might cost nearly as much as another license for 3DS Max, or whatever you are using for your rendering software.
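
To put rough numbers on why render farms exist, here is a back-of-the-envelope calculation in Python. The ten-minutes-per-frame figure and the 90-minute running time are invented for illustration, not taken from the text.

```python
# Hypothetical numbers: a 90-minute feature at 24 frames per second, with each
# frame taking 10 minutes to render on a single machine.
frames = 90 * 60 * 24            # 129,600 frames in the finished piece
minutes_per_frame = 10
single_machine_days = frames * minutes_per_frame / 60 / 24

print(f"{frames:,} frames, roughly {single_machine_days:,.0f} days on one computer")
print(f"roughly {single_machine_days / 100:.0f} days spread across 100 render nodes")
```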

On page 112, he lists several pixel resolutions that have been used for output to different display technologies.

Output Resolutions

Format                         Resolution (pixels)
NTSC SD 4:3                    720 x 480
NTSC SD 16:9 (widescreen)      853 x 480
PAL SD 4:3                     720 x 576
PAL SD 16:9 (widescreen)       1024 x 576
1080 HD                        1920 x 1080
iPod                           320 x 240

These are not the only resolutions ever used, of course. In his table, SD is Standard Definition, and HD is High Definition. You should be aware that NTSC and PAL are two of the major television technology standards. SECAM is a third.
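
If you want to compare these formats, a few lines of Python will compute the pixel count and the stored frame ratio for each; the numbers below are the ones from the table. Keep in mind that the SD formats use non-square pixels, so the stored frame ratio is not always the same as the displayed 4:3 or 16:9.

```python
from math import gcd

# Frame sizes from the table above (width x height, in pixels).
FORMATS = {
    "NTSC SD 4:3": (720, 480),
    "NTSC SD 16:9": (853, 480),
    "PAL SD 4:3": (720, 576),
    "PAL SD 16:9": (1024, 576),
    "1080 HD": (1920, 1080),
    "iPod": (320, 240),
}

for name, (w, h) in FORMATS.items():
    d = gcd(w, h)
    print(f"{name}: {w} x {h} = {w * h:,} pixels, stored frame ratio {w // d}:{h // d}")
```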

When you render, you typically have a choice of file formats to save your output in, and the text discusses several of them. The choice can be dictated by your audience. Most Windows computers will have a player installed that can read AVI files. Most Macintosh computers (and many Windows computers) will have a QuickTime (MOV file) player installed. Most iPads, at this time, will not play Flash files, a departure from a format that has been a standard on the Internet. If you are rendering for another sort of player, consider what happens in the next step.

Mr. Wyatt makes an odd turn in the last paragraph of this section. He advises that you should render movie files only for tests, not for final renders. This presumes that your "final render" is not really final at all, and that you are rendering output to be used in a compositing program. That would be true, for example, if your intention is to take output from 3DS Max into a program that will burn a DVD or a Blu-ray disc.

If you knew that your output was going nowhere but an editing room, you could comfortably render to whatever output type the next stage requires as its input. In this environment, however, rendering to MOV or AVI files is a good choice. Those files can be played quickly and easily, and they can also be brought into After Effects in our lab to be composited into a larger work.
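
If you do render a frame sequence for compositing rather than a single movie file, a quick sanity check for missing frames can save a trip back to the render queue. The sketch below is only an illustration; the file naming pattern (frame_0001.png and so on) is an assumption, not something the text or our lab specifies.

```python
import re
from pathlib import Path

def missing_frames(folder, pattern=r"frame_(\d{4})\.png"):
    """Report any gaps in a numbered frame sequence before it goes to compositing."""
    have = sorted(int(m.group(1))
                  for f in Path(folder).iterdir()
                  if (m := re.fullmatch(pattern, f.name)))
    if not have:
        return []
    present = set(have)
    return [n for n in range(have[0], have[-1] + 1) if n not in present]

# Example: missing_frames("renders/shot_05") returns [] when the sequence is complete.
```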