CAP 201a - Computer Animation I

Lesson 10 - Chapter 15, 3DS Max Rendering

Objectives:

 

Chapter 15 discusses rendering 3DS Max scenes. Objectives important to this lesson:

  1. Rendering setup
  2. Cameras
  3. Safe frame
  4. Raytraced reflections and refractions
  5. Rendering the rocket
Concepts:

The chapter begins with a discussion of the importance of rendering. Rendering was listed back in chapter 1 as the first step in post-production. It could be argued that rendering is production, as far as the final product is concerned.

The text continues with parameters for rendering. If you do not already know how to open the Render Setup dialog box, you are given three ways: Click the Render Setup icon in the toolbar, select Rendering, Render Setup from the menus, or press F10. As usual, I recommend the toolbar button, since hot keys can mean other things depending on what you are currently doing.

This dialog box is broken into five tabs, each of which has sections that affect any render, including a quick render. On the Common tab, Common Parameters rollout, several features are defined:

  • Time output - default setting is a Single frame, but you can select the Active Time Segment (all frames of the scene), a Range of frames, or a list of specific frame numbers. The last two are more useful for rendering a portion of a scene. The Every Nth Frame option lets you make a test render of a series of frames, to check whether you need to change any settings. It can alert you to lighting, rigging, and animation mistakes before you spend the time rendering the entire scene.
  • Output size - Settings for the resolution of the output, including preset resolutions on buttons that can be redefined by the user
  • Image Aspect - the ratio of image width to image height; if you change this, the image height will be recalculated to fit the new ratio (see the sketch after this list)
  • Options - This is a series of toggles that tell the render engine to consider or ignore specific features of the scene. Some of the options:
    • Atmospherics - Render or ignore atmospheric effects
    • Effects - Render or ignore other kinds of effects
    • Displacement - Render or ignore surface displacement from maps
    • Video Color Check - check for colors that are not safe for NTSC or PAL formats
    • Render Hidden Geometry - Render or ignore objects hidden in the selected view
    • Force 2-Sided - Render or ignore the inside of objects; necessary for an accurate render if the inside of an object is visible
  • Render Output - You use this to save your render as a file. You specify the type of file, the location to store the file, the name of the file, and the codec to use (if you are rendering an animation). You should be aware of the various output formats that are available.
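
Two of these settings are just arithmetic, and it can help to see it spelled out. Here is a minimal Python sketch (not something 3DS Max runs; the numbers are only for illustration) of how Image Aspect determines the recalculated height, and how Every Nth Frame thins a range of frames down to a quick test pass:

    # Rough sketch of the arithmetic behind two Common Parameters settings.
    def height_for_aspect(width, image_aspect):
        """Image Aspect is width / height, so changing it recalculates the height."""
        return round(width / image_aspect)

    def frames_to_render(first, last, every_nth=1):
        """Time Output with Every Nth Frame: render only every nth frame of the range."""
        return list(range(first, last + 1, every_nth))

    print(height_for_aspect(640, 1.333))   # -> 480
    print(frames_to_render(0, 100, 10))    # a quick test pass over an active segment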

The E-mail Notifications rollout is not discussed in the text, but its existence hints at a problem we have not had yet. The problem is that a render might take a long time. This rollout gives you a way to have 3DS Max send an e-mail message on completion of a render, provided you have an SMTP server to send the message through.
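
If you are curious what "send an e-mail through an SMTP server" amounts to, the sketch below shows the same idea in plain Python, outside of 3DS Max. The server name and addresses are placeholders, not anything the rollout configures for you:

    import smtplib
    from email.message import EmailMessage

    # Hypothetical values - substitute your own SMTP server and addresses.
    msg = EmailMessage()
    msg["Subject"] = "Render complete"
    msg["From"] = "renderbox@example.com"
    msg["To"] = "artist@example.com"
    msg.set_content("The overnight render finished. Go check the output file.")

    with smtplib.SMTP("smtp.example.com", 25) as server:
        server.send_message(msg)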

On page 314, the text discusses the Rendered Frame Window, which has a lot of buttons to render in specific ways. We have not discussed the fact that you can leave this window open while you continue to work on a scene, then click the Render button on it to see how your changes work out. You can also use the Viewport to Render drop-down list on this screen to change which viewport will be rendered when you initiate a render again. This is handy when you click the Render Production button and realize you rendered the wrong view.

Canceling a render is also a nice feature, especially if it will take a while and you hate it right away. The text describes doing this on page 315.

The Assign Renderer rollout gives you a place to change the renderer used by various parts of 3DS Max. You should do so when you need to, but remember what you have done. This setting is retained until you change it again.

3DS Max 2012 comes with several render engines built in: the default scanline renderer and the mental ray renderer have been around for a while. You have used the scanline renderer in all the quick renders done in this class. The text mentions that it renders the image as a series of horizontal lines. If you watch the render window while this is going on, you can easily see the lines being drawn.

The mental ray render engine does a better job with bouncing light rays in a scene, but it takes longer to render each frame. The mental ray renderer produces more careful rendering of light that is reflected or refracted in your scene. The author describes mental ray as the more powerful renderer. You should be aware that you can buy other rendering engines to install as plug-ins for 3DS Max, which have more capabilities as well.

The VUE renderer is not really a render engine, as such. It produces a text file as the render output, so the render can be used in other programs.

The iray renderer is a pretty new feature. The link in the last sentence goes to an Autodesk video that compares the mental ray and iray renderers. In the video, the narrator describes various settings she chooses when using the mental ray renderer to obtain good results, without crashing the computer generating the render. Thanks to the miracle of time lapse photography, she shows us that her render actually took almost two and three quarter hours to produce one frame of output. Now, imagine her situation: this was a test render. She needs to change settings and render again. Time to send out for pizza?
She turns to the iray renderer, which has a simpler interface: it only wants to know how long you will let it render. Its render is superior to the one her mental ray settings produced, even before it is finished. If it were obviously a bad choice early on, the operator would be able to save time by stopping sooner, and adjusting problems (like lighting) in the scene before the entire render time had been used.

The Quicksilver Hardware renderer is described in the video this line links to. The features it provides are beyond the scope of this class. Watch the video to get an idea of what it can do, but don't worry about using it for now.

Chapter Exercise 1: Rendering the bouncing ball 

This exercise starts on page 316.

  1. Set the project folder and open the file specified in the text.
  2. Open the Render Setup dialog by one of the methods mentioned above. In the Time Output section, click the radio button for Active Time Segment. (This means all frames on the timeline.)
  3. Find the Output Size section and click the button for 320x240.
  4. Find the Render Output section (about three rollouts below Output Size) and click the Files button. (It may be the only thing you can do in that section at the moment.) This takes you to a window where you will do several things:
    1. Navigate to the folder where you will save your output (This is more important than it sounds. You need space to write a video file, and you need to know where you put it.)
    2. Enter a filename for your output file.
    3. The text says to click the Save As Type drop down, and choose MOV Quick Time File. This selection is not possible on a computer that is running a 64 bit version of Windows. Despite the blessings of QuickTime extolled in the text, Apple does not provide a way for 64 bit Windows to write an MOV file. If you are running such an operating system, just render to an AVI file instead.
  5. Click the Save button, and the Compression Settings dialog will appear. What you see here varies depending on what file type you chose in the last step. In this case, use the settings in the text, if possible. This dialog box appears by itself the first time, but you will probably have to click the Setup button if you need to access these settings again.
  6. Look at the bottom of the Render Setup dialog box, and check to see that Production is selected instead of ActiveShade. ActiveShade is an in-between kind of measure that shows semi-rendered views in a viewport.

    At this time, check that the viewport you want rendered is the currently selected viewport; if not, change it! The render engine will only render the currently selected viewport, except as noted above. (If you need multiple views in your movie, you make multiple renders and composite them together in another program like After Effects.)
  7. Click the large Render button in the bottom right corner of the Render Setup dialog box. Each frame in the timeline will be rendered separately and added to the output file.

Remember in step 4, when I told you to pick a folder to send the output to? Navigate to it (in Windows Explorer, not 3DS Max) and play the file.

The text turns away from its main topic to consider cameras. The text explains that you will use two types of cameras in 3DS Max:

  • target cameras are linked to a location in the scene to keep the camera looking at that location even when the camera moves. Have you ever heard of an actor being told to "hit a spot" on a stage or a set? This is like that, in that the camera is set up to look at that particular spot. Well, you have to set it up, but that's the point of it.
  • free cameras require manual aiming; they do not automatically look at any object or location. They point wherever they were aimed when they were created, and keep that orientation until you move or rotate them.
In both cases, a camera only sees what you put in front of it. This adds one more role to your list. You are the director, the set designer, the actor/puppeteer, the gaffer/key grip (head electrician/lighting guy), and the camera operator.
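
The practical difference between the two camera types is orientation: a target camera re-aims itself at its target whenever either end moves, while a free camera keeps whatever rotation you gave it. The sketch below shows the "aim at the target" calculation in plain Python, just to make the idea concrete; the positions are made up:

    import math

    def look_at_direction(cam_pos, target_pos):
        """Unit vector pointing from the camera toward its target.
        A target camera redoes this whenever either end moves; a free camera never does."""
        dx, dy, dz = (t - c for c, t in zip(cam_pos, target_pos))
        length = math.sqrt(dx*dx + dy*dy + dz*dz)
        return (dx / length, dy / length, dz / length)

    print(look_at_direction((0, -100, 50), (0, 0, 0)))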

Basic camera shots are not discussed here, but they should be, along with some guidelines to categorize the shots you make. These guidelines are based on having a human being as the focus of the shot. For shots that do not include people, use whatever the main object is as the measuring stick:

  • extreme long shot - people are often not visible in this shot that shows a cityscape or landscape to establish the location of the next shot (also called an establishing shot for the obvious reason)
  • long shot - includes the entire subject, head to foot. Fred Astaire had a clause in his contract with MGM that said he was to be photographed this way every time he danced on screen.
  • medium shot - shows a standing subject from the head to about the knees
  • close-up - shows mostly the head of a subject, may include shoulders, but always ends above the waist
  • extreme close-up - shows a portion of a face, or a portion of an object

Camera shots can also be categorized by the angle of the camera (height above or below the subject), and by the kind of motion that the camera must make in the shot. Follow the link in the last sentence to read a discussion of these kinds of shots, which should help you to think more three dimensionally. Consider this tribute to several Alfred Hitchcock films that shows his use of the camera, and how you might do similar things in an animation.

Camera lenses in the real world have characteristics that are related to their focal length. 3DS Max uses equivalent measurement to simulate the effects of longer and shorter lenses in its cameras. A related concept is Field of View (FOV). This is a measure of the width of the portion of the scene a camera can see. The longer a lens is, the narrower the FOV. The shorter a lens is, the wider the FOV. In the real world, we need to change the lens on a camera to change the focal length. In 3DS Max, we can change to one of several Stock Lenses with a click on the Modify panel.

A rule of thumb for categorizing lenses:

  • focal length 30mm or less - wide angle lens; makes the background look farther away from the foreground, includes more foreground than a regular lens
  • focal length between 30mm and 200mm - standard lens; lenses near the extremes (30 and 200) will have some characteristics of the kind of lens they are closer to
  • focal length 200mm or more - telephoto lens; makes the background look closer to the foreground

The text tells us that the default FOV for a 3DS Max camera is 45 degrees. The default focal length is the rather odd 43.456mm.
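
That focal length is less odd than it looks. 3DS Max's stock lenses are modeled on 35mm still photography, where the frame is 36mm wide, so the horizontal FOV works out to 2 × atan(36 / (2 × focal length)). A quick check in Python shows why 43.456mm is the default, and how some stock lenses compare:

    import math

    APERTURE_WIDTH_MM = 36.0  # width of a 35mm film frame, the basis for Max's lenses

    def fov_degrees(focal_length_mm):
        """Horizontal field of view for a given focal length."""
        return math.degrees(2 * math.atan(APERTURE_WIDTH_MM / (2 * focal_length_mm)))

    for lens in (20, 43.456, 50, 200):
        print(f"{lens}mm lens -> {fov_degrees(lens):.1f} degree FOV")
    # 43.456mm comes out to exactly 45.0 degrees - the default camera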

Chapter Exercise 2: Creating a camera 

This exercise starts on page 319.

  1. Set the project folder and open the file specified in the text. Select the Top viewport.
  2. Click Create, Cameras, Target camera. Use the illustration on page 320 as a guide to placing the camera in the scene. (Like creating a target light, drag from the camera to the target.)
  3. In this step you need to move the camera and the target. You can move them separately, but try the technique explained in the text: select the line connecting the camera to the target icon, and use the Move tool gizmo to move them to the desired height in the Front viewport.
  4. Pick a viewport to change to the camera's point of view. Note the explanation of what you should do and what can happen:
    1. Select a viewport and press the letter C.
    2. If a camera is currently selected, the viewport should change to the point of view of that camera.
    3. If no camera is currently selected, and if there are several cameras in the scene, you will get a dialog box to pick which camera to assign to that viewport.
  5. Quick Render the scene. Move the camera to a better placement if needed.

The text briefly mentions moving cameras. There are advantages to the cameras used in 3DS Max (and other virtual environments). We can move them as we like, without regard to the physical limitations a camera crew faces on a movie set. Some basic terms:

  • panning - changing where a camera is pointed by rotating it
  • trucking/tracking - changing where a camera is pointed by changing the camera's location; typically this term is used when a camera moves to follow a subject
  • dolly in/out - moving the camera toward (into) or away from (out of) the scene
  • push-pull - moving the camera toward a subject, then away from it
  • crane effect - in movies, a crane shot places a camera on a crane that moves it above the scene. In 3DS Max, we will create a path for the camera to follow. The path must go where the crane would have put the camera.
  • zolly - zoom in one direction while you dolly in the other, typically doing each at the same speed; keeps the focus where it is and the subject stays in place in the frame, but the background changes a lot; see this example from Goodfellas and the example at about 3:57 in this clip from Jaws. When done as a dolly out, zoom in, as in these examples, the effect illustrates the flattening effect of a telephoto lens. (The geometry is sketched just after this list.)
  • Follow this link to a Yale page about movie terms for more terms and a few example videos
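
The zolly in the list above is pure geometry: keeping the subject the same apparent size means keeping 2 × distance × tan(FOV/2) constant, so as the camera dollies away, the FOV has to narrow to compensate. A minimal Python sketch with made-up numbers:

    import math

    def zolly_fov(start_distance, start_fov_deg, new_distance):
        """FOV needed at new_distance so the subject stays the same size in frame.
        Holding 2*d*tan(fov/2) constant is the trick behind the dolly-zoom."""
        visible_width = 2 * start_distance * math.tan(math.radians(start_fov_deg) / 2)
        return math.degrees(2 * math.atan(visible_width / (2 * new_distance)))

    # Dolly out from 10 units to 40 while zooming in to compensate:
    for d in (10, 20, 30, 40):
        print(f"distance {d}: FOV {zolly_fov(10, 45, d):.1f} degrees")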

Follow the steps on page 321 to create a simple animation for the camera in this scene. Adjust it to your own design, and render to a video file. Show me the video.

Frustum of the viewer

The text introduces a pair of concepts that it does not have us use.

Clipping planes are features that we find in games as well as in modeling animations.

Think of looking at the scene through a cone (base is a circle) or a pyramid (base is a rectangle), like the field of view cone that is drawn for a camera. Now imagine that this viewing volume is bounded by six planes: top, bottom, right side, left side, near clip, and far clip. What are those last two? Well, let's consider some vocabulary from next term:

  • frustum - the player's view of a scene, diagrammed as a pyramid whose apex points at the player or camera. The angle formed by the legs of the frustum (the field of view) may simulate either telephoto or wide-angle camera lenses
  • near and far clipping planes - the near clip value tells the camera where to start seeing, and the far clip value tells the camera where to stop seeing. The planes defined by these distances from the player are called the near clipping plane and the far clipping plane.
  • six planes - the frustum illustrated here has only four planes: top, bottom, left, and right. You might think that the far plane is the base of the pyramid, but it is actually farther away, so you can see a background. Imagine the far plane and the near plane as well.
    If an object is outside these six planes, it should be culled from the scene. It will not be rendered, and it will not be seen by the audience.

The clipping planes define where an object must be in relation to the camera in order to be rendered.
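
Here is a minimal sketch of what the near and far clip values mean for a single point at some distance along the camera's view direction. The full six-plane test also checks the top, bottom, left, and right planes, but the idea is the same; this is plain Python, not anything Max exposes:

    def clipped(distance_along_view, near_clip, far_clip):
        """True if a point at this distance from the camera is culled
        by the near or far clipping plane (it will not be rendered)."""
        return distance_along_view < near_clip or distance_along_view > far_clip

    print(clipped(0.5, near_clip=1.0, far_clip=1000.0))     # True  - behind the near plane
    print(clipped(250.0, near_clip=1.0, far_clip=1000.0))   # False - inside the frustum
    print(clipped(5000.0, near_clip=1.0, far_clip=1000.0))  # True  - beyond the far plane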

Safe Frame view is diagrammed on page 322. The image of the camera view shown here illustrates part of the idea. The innermost yellow frame is the title safe frame. It marks the area where the software has determined that it is safe to put text in our view. The blue frame is the action safe frame. Things that happen inside this frame should be visible to most viewers. The outermost yellow frame is the live frame. This is the limit of what the render engine will process.
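
The usual broadcast convention (an assumption here; Max lets you change the percentages in Viewport Configuration) is that the action safe area is the middle 90 percent of the frame and the title safe area is the middle 80 percent. A small sketch of what that works out to in pixels:

    def safe_area(width, height, fraction):
        """(left, top, right, bottom) of a centered safe area covering
        the given fraction of the frame in each direction."""
        margin_x = width * (1 - fraction) / 2
        margin_y = height * (1 - fraction) / 2
        return (round(margin_x), round(margin_y),
                round(width - margin_x), round(height - margin_y))

    # Conventional broadcast values for a 640x480 render:
    print(safe_area(640, 480, 0.90))  # action safe: (32, 24, 608, 456)
    print(safe_area(640, 480, 0.80))  # title safe:  (64, 48, 576, 432)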

On page 323, the text discusses using raytracing to get "real" reflection and refraction in a scene. The text suggests that we can get realistic reflections in a scene with either a Raytrace material, or a Raytrace map. It recommends the map solution as taking less calculation (by the computer). It renders faster, but gives you less control and less detail than a Raytrace material. Unless we need the detail of the material, the text says to use the map.

The text proceeds to give us an example of each approach.

Chapter Exercise 3: Creating a Raytrace material 

This exercise starts on page 323.

  1. Open the file specified in the text. Make sure one of the viewports is set for Camera01.
  2. Pick a sample slot in the Compact Material Editor. Click the Get Material button. This time double-click the Raytrace material.
  3. Find the Raytrace Basic Parameters rollout. Change the color of the Reflect swatch from black to white, as instructed, to make the material as reflective as possible.
  4. Change the Diffuse color swatch to black.
  5. Apply the material to the column in the current scene.
  6. Change the render engine from the default scanline renderer to the mental ray renderer, and do a quick render of one frame. Warning: do not overwrite your video file from exercise 1. (Turn off the check box under Files, and you will render just to the screen again.)

The text breaks off the exercise to discuss the results. You should have a reflective material, but may notice jagged edges (jaggies) in the reflections. The text explains that the jaggies are caused by aliasing, an effect that rendering can cause unless antialiasing filters are turned on. In fact, the standard antialiasing filter is on, but the author wants to show you supersampling, which amounts to a second pass of antialiasing.
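
The general idea behind supersampling, in any renderer, is to take more samples than you have pixels and average them back down, which smooths the stair-stepped edges. The toy NumPy sketch below shows the principle on an image assumed to be rendered at double resolution; Max's supersamplers (Adaptive Halton, for example) use smarter sample patterns than a plain 2x2 average, but the goal is the same:

    import numpy as np

    def downsample_2x(img):
        """Average each 2x2 block of a supersampled image down to one pixel.
        img is a (height, width, channels) array rendered at double resolution."""
        h, w, c = img.shape
        h, w = h - h % 2, w - w % 2            # trim odd edges if any
        blocks = img[:h, :w].reshape(h // 2, 2, w // 2, 2, c)
        return blocks.mean(axis=(1, 3))

    oversized = np.random.rand(480, 640, 3)    # stand-in for a render at twice the size
    print(downsample_2x(oversized).shape)      # (240, 320, 3)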

An interesting part of this discussion is the introduction of the Clone Rendered Frame Window button, a feature of the Render window. The Clone Rendered Frame button opens a second window which is, at first, identical to the render you just made. However, you can make a change in the scene, and render to the new window, while keeping the old one open on screen. This is useful for trying out a change and seeing which version you should use. (You can undo the new change to return to the state of the previous render. Or can you? Can a render crash the computer? Yes, sometimes.)

The problem with this particular exercise is that the images supplied to show the effects of antialiasing and supersampling don't look any different from the images that show renders without these effects. Examine the images below, and you will have a better idea of what to look for on your screen.

  • Cherries with jaggies - jaggies at the reflection edges
  • Cherries with Adaptive Halton - Adaptive Halton removes most of the jaggies

 

On page 325, the chapter turns to using a Raytrace map.

Chapter Exercise 4: Creating a Raytrace map 

This exercise starts on page 325.

  1. In the same scene you used above, select a new slot in the Material Editor.
  2. Open the Maps rollout. Click the map button for Reflections. Select Raytrace, as you did in the last exercise.
  3. Click the Go to Parent button.
  4. Find the Blinn Basic parameters section, and change the Diffuse color swatch to black.
  5. Change Specular Level to 98 (a bright highlight), and change Glossiness to 90 (a small, tight highlight).
  6. Apply this material to the column object and do a render.
  7. If you see aliasing, follow the procedure at the bottom of page 324 to apply supersampling and render again. The results with the Raytrace map are very similar, and a bit faster.

On page 326, the chapter turns to refraction. Refracted light is light that has been changed by passing through a medium like glass or water. In the exercise that is provided, it passes through a curved wine glass. The text warns you that rendering refraction takes significantly longer than reflection.

Chapter Exercise 5: Creating a refraction with a Raytrace material 

This exercise starts on page 326.

  1. Continue in the scene from the last exercise. Switch your Camera01 viewport to show Camera02.
  2. Pick a new sample slot in the Material Editor. Click Get Material.
  3. Choose the Raytrace material.
  4. Find the Raytrace Basic Parameters rollout. Change the color swatch for Transparency to white (transparent).
  5. Uncheck Reflect, and change the value to 20.
  6. The Index of Refr parameter is for the simulated Index of Refraction for the material. (Follow the link for a discussion of why different materials refract differently.) In short, the higher the value, the more the material bends light. Leave the IOR at 1.55.
  7. Open the Extended Parameters rollout (illustrated on page 327). Set the Reflections Type to Additive. Set Gain to 0.7.
  8. Find the Supersampling rollout. Follow the instructions in the text.
  9. Go back to the Raytrace Basic Parameters, and find the Specular Highlights group. Make the changes noted in the text.
  10. Apply the material to the wine glass in the scene. On my computer, the glass seemed to disappear in the Camera02 view.
    Do a Render.

Read the discussion about changing the IOR value at the bottom of page 328. If you were going for realism, you could look up IOR values for various materials on the Internet.
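
Snell's law is the relationship underneath those IOR numbers: n1 · sin(θ1) = n2 · sin(θ2), so a higher IOR bends an incoming ray more sharply. A quick Python sketch comparing a few common values (the 1.55 used in the exercise is in the range of ordinary glass):

    import math

    def refracted_angle(incident_deg, n1, n2):
        """Snell's law: n1*sin(theta1) = n2*sin(theta2).
        Returns the bent angle inside the second material, in degrees."""
        return math.degrees(math.asin(n1 * math.sin(math.radians(incident_deg)) / n2))

    for name, ior in [("water", 1.33), ("glass", 1.55), ("diamond", 2.42)]:
        print(f"30 degree ray entering {name} (IOR {ior}): bent to "
              f"{refracted_angle(30, 1.0, ior):.1f} degrees")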

The text repeats the procedure, but uses Raytrace mapping on page 329, to produce reflection and refraction. The render on this will take a LONG time compared to the others. For illustration, the image below followed this procedure for the wine glass, but used the Raytrace material method for the column/table top.

On page 331, the text turns to a project that puts together several skills from this chapter.

Project Exercise 1: Camera movement 

This exercise starts on page 331. Before we begin, consider that some shots cannot be obtained with a stationary camera. The crane shot in High Noon, the tracking crane sequence in Touch of Evil, and the extremely long, single take, tracking shot in Goodfellas are all examples of shots that don't work unless the camera moves.

  1. Change the project folder and open the project file specified in step 1.
  2. Click the Auto Key button to turn on key frame auto capture. Move the Time Slider to frame 45.

    Illustrations: the standard viewport nav icons, and the viewport nav icons that appear for Camera views


  3. Click the Camera viewport. Note the change in the Navigation Tools area, as described in the text. If you missed it, click another viewport, then switch to the Camera viewport again, and watch the change.
  4. Use the Dolly Camera tool as instructed to move the camera closer to the model.
  5. Use the Truck Camera (pan) tool to adjust the scene as instructed. It will look like you are adjusting the scene, not the camera.
  6. Use the Orbit Tool as instructed. You can think of it as adjusting the scene or adjusting the angle of the camera.
  7. The text tells you to use the Truck Camera tool again to get a view like the illustration on page 333. You may find that you need to adjust with the Dolly Camera tool as well. Don't worry if you adjust several times, only the final position is saved in the key frame, as long as you have not moved the time slider without cause.
  8. Run the animation to check your work. Adjust as desired. Turn off Auto Key, and save incrementally.

The lesson continues by adding raytraced reflections to the scene. No more faking it, we want "real" reflections in our virtual studio.

Project Exercise 2: the rocket 

This exercise starts on page 334.

  1. Continue with the scene from the last exercise. Open the Material Editor. Lots of materials are in use in this scene already.
  2. Select the first sample slot. It should already be named Rocket Body Left. The author explains that we do not need to change the material for the other side of the body, since we will not see it. This seems sloppy, but it actually speeds up the renders you will do.
  3. Open the Maps rollout for this material. There should be no map assigned to the Reflection channel. (If one is assigned, drag a button that says None onto the map button for Reflection. Or try the method in the text: right click and select Clear. That's new.)
  4. Change the value for Reflection to 20. Click the Reflection map button, choose Raytrace, and click OK.
  5. Do a quick render. You should see a difference between how the rocket looked at the end of the last exercise and now. (Shiny, captain.)
  6. Repeat the steps to add a Raytrace reflection to the materials for each of the parts listed in the text. These should be:
    • Rocket Body Right? I thought they said to leave it alone?
    • FIN DECAL
    • Nose
    • Control Panel
    • Wheel Bolt
    • Wheel White
    • Seat
    • Fin
    • Wheel Black
  7. Render again to check your work, and save incrementally.

Project Exercise 3: the room 

This exercise starts on page 336.

  1. Continuing with the last scene, in the Material Editor, select the material called FLOORS.
  2. Change the value for Reflection to 50 (50%), and set the map for Reflection to Mask.
  3. Click the button for Map. Choose Bitmap, and click OK. Navigate to the bitmap file specified in the text. It isn't there? The text says it is a TIFF file. It isn't. It is a PNG file.
  4. You should see the Bitmap Parameters section. Find the Coordinates rollout, and change from Environ to Texture. This is the only change for this section.
  5. Click Go to Parent. You should see the panel with two buttons again. Click the Mask button and set it for Raytrace as you did for the materials in the last exercise.
  6. Render to see the change in the scene. Improve the reflection by changing the Reflection value as instructed in the text.

In the rest of the chapter, the authors fail to provide numbered steps. On page 337, they tell us how to turn Special Effects on and off for the current render. (They are turned on presently in your file.)

Another exercise

Starting at the bottom of page 338, they walk through creating a QuickTime movie of the 45 frames you have worked on. It is worth noting that 45 frames will be one and a half seconds at the default frame rate of 30 frames per second. Create a movie of this scene, in AVI format (because Apple still doesn't make a QuickTime writer for 64-bit Windows). Render it and play it for me.
Oh, by the way: I want something else in the scene as well. I will tell you what to add to the scene in class.
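
The arithmetic behind that 45-frame clip, in case you want to plan a longer one (the frame rates here are just the common video and film values):

    def clip_length_seconds(frame_count, frames_per_second=30):
        """How long a rendered clip runs at a given frame rate."""
        return frame_count / frames_per_second

    print(clip_length_seconds(45))      # 1.5 seconds at 30 fps
    print(clip_length_seconds(45, 24))  # 1.875 seconds at film's 24 fps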