Programming assignment #3 - Z-buffer renderer

CS 248 - Introduction to Computer Graphics
Spring Quarter, 1995
Marc Levoy
Handout #10


Demos on Wednesday, May 31

Writeups due on Thursday, June 1 by 5:00pm


Your assignment is to layer on top of your polygon scan converter a module that implements 3D viewing transformations and a module that implements Z-buffer hidden-surface removal. You will also modify your supersampling postprocess to work together with these two modules. With the addition of shading in assignment #4, you will have a complete rendering pipeline capable of generating realistic (well, sort of) antialiased pictures of 3D scenes.

Required functionality

  1. Matrix package. Write a function package that constructs matrices for translation, rotation, and perspective. (You won't need scaling for this assignment.) Use the 4x4 matrices described in section 5.6 of the textbook and Appendix A of Haeberli and Akeley (handout #9). Note that you must transpose the matrices in Haeberli and Akeley to conform with the textbook's column vector notation. Also provide functions to multiply two matrices and to multiply a column vector by a matrix.

  2. Specifying a view. Implement the view specification method described in class. Specifically, assume a perspective view with the picture plane perpendicular to the central ray. Provide one slider to control the distance from the observer to the center of the 3D viewing frustum (called d1 in lecture), one to control the distance between the near and far clipping planes (called d2 in lecture), and one to control angular field-of-view (called fov in lecture). Also provide slider control over X, Y, and Z rotation of your scene around the center of the frustum (not around the origin!) and slider control over X, Y, and Z translation of your rotated scene. Treat these sliders as modifying any camera specifications you read from .cam files (described below).

  3. Transformation and clipping. Using your matrix package, write a module that sends 3D triangles through the 5-step transformation, clipping, and projection pipeline outlined on page 279 of the textbook. In step 2, your transformation should consist of three rotations followed by three translations followed by the perspective matrix described in Haeberli and Akeley. You are responsible for converting your slider values into the appropriate calls to your matrix package. In step 3, if a triangle vertex falls on or outside the viewing frustum boundaries (for any sample position in the case of supersampling), you may simply discard the entire triangle rather than clipping it to the frustum boundaries. In step 4, your "device coordinates" refers to the coordinates of your canvas.

  4. Hidden-surface removal. Once you have computed 3D normalized canvas coordinates for a triangle, check to see if it is front facing. If it is not, then discard it for this view. If it is, then scan convert it, interpolating (normalized) Z values along triangle edges and across scan lines as described in section 15.4 of the textbook. At each pixel, perform a Z-comparison against a Z-buffer equal in size to your pixel array, conditionally overwrite the contents of your pixel array, and update the contents of the Z-buffer. For this assignment, you should assign a unique 24-bit color (random colors are fine) to each triangle. Alternatively, use the diffuse color you get from Composer for each triangle mesh and perturb it slightly (but enough to be visible) for each triangle within a mesh. If you do this, remember to apply the same perturbation to that triangle for all supersample locations, as in assignment #2. To help you debug your hidden-surface removal, display in a separate canvas a visualization of the depth values in your Z-buffer. This is a requirement.

  5. Supersampling. Allow slider control over the number of samples per pixel. Supersample and average down by shifting and scan converting your scene as in assignment #2. We suggest using the subpixwindow function from appendix A of Haeberli and Akeley to implement your shifts. Set xpixels and ypixels to the width and height of your canvas in pixels, respectively. For pixdx and pixdy, use the subpixel offsets in your sample distribution pattern (e.g. fractions between -0.5 and +0.5 for pixels spaced one unit apart and a box resampling filter of width equal to the pixel spacing). We do not require progressive refinement for this assignment (or assignment #4). To speed up debugging, we suggest either eliminating it or providing an option to disable it, i.e. to display only your final image.

  6. Rapid redrawing. Your interface should contain a button labeled "Redraw." When pushed, you should erase the canvas and render all previously loaded triangles as fast as possible, updating the canvas only once, after all triangles have been drawn. You will be graded on speed. As before, you should provide some means to control your active drawing area. Allow drawing areas ranging from 100 x 100 pixels to at least 300 x 300 pixels. We will measure your redraw time without antialiasing, so whether or not you support progressive refinement is unimportant. Also, you must perform the entire rendering operation when the redraw button is pressed, including transformation of vertex coordinates. Don't try to game the timings.
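The matrix package of item 1 can be sketched as follows. This is a minimal illustration in Python (the arithmetic carries over directly to C); it uses the textbook's column-vector convention, in which a point is a 4-element column multiplied on the right of the matrix. Function names are our own, not prescribed by the handout.

```python
import math

def identity():
    """4x4 identity matrix as nested row-major lists."""
    return [[1.0 if i == j else 0.0 for j in range(4)] for i in range(4)]

def translate(tx, ty, tz):
    """Translation matrix (column-vector convention: translations in column 3)."""
    m = identity()
    m[0][3], m[1][3], m[2][3] = tx, ty, tz
    return m

def rotate_z(theta):
    """Rotation about the Z axis by theta radians."""
    c, s = math.cos(theta), math.sin(theta)
    m = identity()
    m[0][0], m[0][1] = c, -s
    m[1][0], m[1][1] = s, c
    return m

def mat_mul(a, b):
    """Product a*b of two 4x4 matrices."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def mat_vec(m, v):
    """Apply matrix m to a homogeneous column vector v = [x, y, z, w]."""
    return [sum(m[i][k] * v[k] for k in range(4)) for i in range(4)]
```

X and Y rotations follow the same pattern with the trigonometric terms moved to the other rows; the perspective matrix should come from Haeberli and Akeley (transposed, as noted in item 1).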
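The per-pixel core of item 4 reduces to two small tests, sketched here under illustrative assumptions: screen-space backface culling by signed area (the correct sign depends on your vertex winding order and screen handedness), and a Z-buffer comparison in which smaller z means nearer.

```python
def front_facing(p0, p1, p2):
    """Twice the signed area of the projected triangle; positive means
    front facing under the counterclockwise-winding assumption used here."""
    return ((p1[0] - p0[0]) * (p2[1] - p0[1])
            - (p2[0] - p0[0]) * (p1[1] - p0[1])) > 0.0

def zbuf_write(pixels, zbuf, x, y, z, color):
    """Conditionally overwrite a pixel: keep the new sample only if it is
    nearer than what the Z-buffer holds (smaller z = nearer, by assumption)."""
    if z < zbuf[y][x]:
        zbuf[y][x] = z
        pixels[y][x] = color
```

The Z-buffer is initialized to the far-plane depth before each redraw; the interpolated z at each covered pixel comes from your edge and span interpolation as in section 15.4.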
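The averaging step of item 5 is just a box filter over the shifted renderings. A sketch, assuming each rendering is stored as a row-major grid of scalar intensities (in practice you would average the R, G, and B channels separately):

```python
def box_filter_average(sample_images):
    """Average n shifted renderings down to one image (box resampling filter).
    All images are assumed to be the same size."""
    n = len(sample_images)
    h, w = len(sample_images[0]), len(sample_images[0][0])
    return [[sum(img[y][x] for img in sample_images) / n for x in range(w)]
            for y in range(h)]
```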

Creating scenes with i3dm and Composer

To help you create interesting scenes to render, we are providing two programs: i3dm, an SGI demonstration application, and Composer, written by Brian Curless and Apostolos Lerios using tools from Inventor - SGI's 3D graphics toolkit. i3dm allows you to interactively define several types of curved surfaces, which it outputs as triangle meshes. Composer allows you to interactively assemble scenes composed of 3D boxes and triangle meshes from i3dm. Composer also allows you to specify viewing parameters, define directional or point light sources, and specify reflectance properties for each object in the scene. For scenes of modest complexity, Composer provides real-time shaded renderings as feedback.

To transfer a scene from Composer to your renderer, Composer provides an "Export scene" button, an "Export camera" button, and an "Export both" button. When pressed, the scene description and object properties (for Export scene) or viewing parameters (for Export camera) are written to binary files with extensions ".sc" and ".cam", respectively. A filter program is available to convert these binary files into text files in case you want to edit them. You should provide a "Load scene" button, a "Load camera" button, and a "Load both" button in your program. Documentation of routines for loading the binary or text formats of .sc or .cam files, and a description of the text file formats, are given in /usr/class/cs248/support/composer/README. Also contained in that file are some hints on using i3dm and Composer. Please read this file before asking us questions; it answers many frequently asked questions.

Using the "Export" buttons in Composer and the "Load" buttons in your program, you can model a scene in Composer, specify a viewpoint using Composer's real-time shaded renderings, then transfer the scene and camera specification to your program for rerendering. Although the way you specify a view (using sliders) and the way Composer does it (using a combination of sliders, thumbwheels, and direct manipulation) are different, the .cam file Composer constructs for you will allow you to match the view you see in Composer, assuming that you compose your transformations in the order that we suggest. The views will not match perfectly because of differences in clipping, antialiasing, shading, and scan conversion algorithms between your renderer and Composer. Nevertheless, the ability to compare your view with Composer's view is a powerful debugging tool - use it!

A hint regarding the interface between your program and Composer. When you load viewing parameters, you should reset your sliders to match the loaded values. This may give rise to the following difficulty. The viewing parameters Composer passes you, in particular those that specify parameters of the viewing frustum, are floating point numbers and are not constrained to lie within any particular interval. If you map a large interval of these parameters linearly to your sliders, the views that will typically interest you will occupy only a tiny portion of the slider range, making it difficult to specify a view using your sliders. We suggest one of two strategies to avoid this difficulty: (a) Employ a linear mapping that covers only a small interval surrounding the values given to you by Composer, changing the interval whenever you load a new set from Composer, or (b) Make your sliders cover a large interval that can accommodate any reasonable values given to you by Composer, but employ a nonlinear mapping from each slider to the associated viewing parameter. Experiment a bit to find a strategy that you like. If you choose (a), note that xsupport doesn't let you change the displayed slider endpoints dynamically; just display something generic like -10 to 10, indicating (linear) placement relative to a dynamic middle value (the Composer input).
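Strategy (b) can be realized with many different curves. One possibility, purely illustrative and not prescribed by the handout, maps a slider value in [-1, 1] through a hyperbolic sine so that motion near the center makes fine adjustments around the Composer-supplied value while the slider ends still reach large values; the `scale` and `k` knobs below are our own tuning parameters.

```python
import math

def slider_to_param(s, center, scale=1.0, k=3.0):
    """Nonlinear slider mapping: s in [-1, 1] -> viewing parameter.
    sinh is symmetric, roughly linear near 0, and grows exponentially
    toward the slider ends, concentrating resolution near `center`."""
    return center + scale * math.sinh(k * s)
```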

For off-campus students, the executable of Composer (but not i3dm) is available for copying. The source is not available, so recompilation for other platforms is not an option. If you wish to develop your renderer entirely on another platform, you can create scene descriptions that match Composer's format by hand-generating text files in the documented format. Several example files are available for debugging your scene description reader. These files are in /usr/class/cs248/data/models.

Submission requirements

We will follow the same demo and submission procedure as in the last assignment. The assignment will be graded on correctness (60 points), efficiency (20 points), programming style (10 points), and the elegance of your user interface (10 points).

Your program should be capable of reading the files output by Composer (although you don't need Composer in order to generate them, as discussed above). Be prepared to demonstrate your program on grader data files that we will provide on the spot during the demo. Please provide file loading buttons as already explained. Be prepared to handle up to 10000 triangles. Try to anticipate every nasty scene we will want to test. In particular, test for degenerate triangles twice - once on your 3D scene as you load it (both i3dm and Composer sometimes create degenerate triangles), and again on your 2D coordinates before rasterization (to discard triangles that, after transformation, are viewed edge-on).
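The 2D degeneracy test above amounts to checking for (near-)zero area. A sketch, with an illustrative tolerance that you would tune to your coordinate range:

```python
def degenerate_2d(p0, p1, p2, eps=1e-9):
    """True if the projected triangle has (near-)zero area: repeated or
    collinear vertices, or a triangle viewed edge-on after transformation.
    eps is an illustrative threshold, not a value from the handout."""
    twice_area = ((p1[0] - p0[0]) * (p2[1] - p0[1])
                  - (p2[0] - p0[0]) * (p1[1] - p0[1]))
    return abs(twice_area) < eps
```

The 3D test at load time is analogous, using the magnitude of the cross product of two edge vectors.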

Working in teams

You may work alone or in teams of two or three. For a team of two, you must implement two of the following bells and whistles. For a team of three, you must implement five. Alternatives are acceptable if approved in advance. If you have completed the assignment and wish to earn extra credit, you may add more bells and whistles.

  1. Implement motion blur and depth of field as described in Appendix B of Haeberli and Akeley. Note that calling window before translate in GL (SGI's graphics library) causes the translation to be applied before the perspective mapping. Be prepared to show example images. For motion blur, be prepared with an image that shows motion not aligned with the screen X,Y axes.

  2. Implement all of the following kinds of parallel projections: orthographic (both orthogonal and axonometric), and oblique projections (see section 6.1.2 of the textbook). Design a reasonable user interface to control these views. For example, for oblique projections, your interface should allow control over which of the three principal directions of the model is aligned with the picture plane.

  3. Implement a two-view interface (e.g. front and side orthographic views) for interactively moving any triangle vertex in the scene. Also allow the user to move an entire triangle by pointing to its interior. See exercise 15.25 for hints on 3D picking.

  4. Implement a Sutherland-Hodgman clipper to provide proper clipping of your triangles to the perspective viewing frustum. Pay careful attention to triangles that span the observer's position as depicted in figure 6.58 in the textbook.

  5. Implement a class of procedural primitives. Tessellated quadrics are easy; bicubic patches are hard; fractals are relatively easy to implement and fun to use. Look them up in the textbook. If you implement fractals, we suggest (but do not require) implementing some kind of directional shading. Randomly colored fractals are hard to look at (and debug).

  6. Allow the user to paint directly on the 3D scene. Since you don't yet know about texture mapping, use a fine triangle mesh from i3dm and apply color at the vertices. Interpolate the color across the triangle to get a smooth appearance if you like. Think carefully about the mapping from 2D mouse space to 3D. There are several options here. For some ideas, look at Hanrahan and Haeberli, "Direct WYSIWYG Painting and Texturing on 3D Shapes," Computer Graphics (Proc. Siggraph), Vol. 24, No. 4, August, 1990, pp. 215-223.

  7. Implement a keyframe animation system. Allow the user to specify positions for given triangles or vertices at selected frame numbers, and interpolate positions in between. An interactive interface is not required here. Provide both linear and "ease-in-ease-out" interpolation. Render your interpolated positions with motion blur. See us for tools to play back precomputed animations in real time.

  8. Implement transformation hierarchies. Provide an interface to select branches in the hierarchy in your viewing canvas and to modify the transformations at each node. Use 3D widgets and a direct-manipulation interface to modify transformations. Can you improve on the 3D widgets in Composer? Beware: a direct manipulation trackball (like Composer's) will require manipulating relative rotations (as opposed to absolute rotations). Worth two bells.
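For bell 4, the heart of Sutherland-Hodgman is clipping a polygon against one half-space at a time; the full clipper just chains this step over all six frustum planes. Here is a 2D sketch against the half-plane a*x + b*y + c >= 0 (the frustum version is the same logic with a 4D plane test on homogeneous coordinates):

```python
def clip_halfplane(poly, a, b, c):
    """One Sutherland-Hodgman pass: keep the part of polygon `poly`
    (a list of (x, y) vertices) satisfying a*x + b*y + c >= 0."""
    def d(p):
        return a * p[0] + b * p[1] + c

    def crossing(p, q):
        # Intersection of edge pq with the clip line, by linear interpolation.
        t = d(p) / (d(p) - d(q))
        return (p[0] + t * (q[0] - p[0]), p[1] + t * (q[1] - p[1]))

    out = []
    n = len(poly)
    for i in range(n):
        p, q = poly[i], poly[(i + 1) % n]
        if d(p) >= 0:
            out.append(p)               # p is inside: keep it
            if d(q) < 0:
                out.append(crossing(p, q))  # edge exits: add intersection
        elif d(q) >= 0:
            out.append(crossing(p, q))      # edge enters: add intersection
    return out
```

Note that clipping a triangle can yield a polygon with more than three vertices, which you must retriangulate (e.g. as a fan) before scan conversion.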
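For bell 7, "ease-in-ease-out" interpolation is commonly realized with a smoothstep curve, which has zero velocity at both keyframes; the handout does not prescribe a formula, so the cubic below is one conventional choice.

```python
def ease_in_out(t):
    """Smoothstep: maps [0, 1] -> [0, 1] with zero slope at both ends."""
    return t * t * (3.0 - 2.0 * t)

def keyframe_interp(p0, p1, t, ease=True):
    """Interpolate a vertex position between two keyframes at parameter t,
    either linearly or with ease-in-ease-out timing."""
    s = ease_in_out(t) if ease else t
    return tuple(a + s * (b - a) for a, b in zip(p0, p1))
```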


levoy@cs.stanford.edu
Friday, 20-Feb-1998 13:46:50 PST