Programming assignment #2 - Polygon scan converter

CS 248 - Introduction to Computer Graphics
Autumn Quarter, 2008
Marc Levoy
Handout #7


Demos on Wednesday, October 22

Writeups due on Thursday, October 23 by 2:00pm


Your assignment is to write a rasterizer for scan converting polygons drawn using a graphical user interface that we provide. Our interface also allows you to create simple animations from your polygons. Your rasterizer will be capable of spatial supersampling and motion blurring, to reduce aliasing artifacts in space and time.

What we give you

  1. A graphical user interface for drawing polygons. As with assignment 1, you may use the VM to do this assignment. The starter code for Assignment 2 is under Desktop/assignments/assignment2 in the VM. If you want to get these files off of a myth machine, you can find the same directory under /usr/class/cs248/assignments/assignment2. If you run assignment2/bin/animgui on the VM, it will pop up a user interface based on the GLUI toolkit, which is in turn built on OpenGL. This interface allows you to draw polygons having any number of sides and having vertices that lie at integer pixel coordinates. Each polygon will be assigned a unique random bright color. The interface also allows you to change the shape of each polygon, by dragging individual vertices or by translating the entire polygon, and to associate shapes with specific points in time, called keyframes. Finally, the interface allows you to load your polygons from files or store them to files. This last capability allows you to test your rasterizer repeatedly on the same polygons; it allows us to test your rasterizer on "grader data" at the demos; and, most importantly, it allows you to create and store cool animations!

  2. The stub for a rasterizer. Included in the interface is a "Render" button and a field for specifying which frames to render. When you press the button, the interface will loop through each frame to be rendered, calling a routine named "Rasterize" (for which a stub is included in our package) and passing it an empty canvas like the one you used in project #1. Your task is to expand this stub to include a polygon rasterizer. Specifically, Rasterize should loop through each polygon, scan converting it into the canvas array, then return control to the interface, which will display the updated canvas on the screen. If you specify a filename in the "Render To:" field, each frame will also be output to a .ppm file, and a list of frames will be output to a .list file. We explain later how to use this .list file to create movies. If animgui is invoked with certain command-line arguments (see animgui.README for details), it becomes a standalone (non-graphical) driver for your rasterizer. In this mode, it will load polygons from a file you specify, then call Rasterize, perhaps multiple times. Each time you return control to the driver, it will output your canvas to a .ppm file.

  3. A keyframe interpolator. When the interface calls your rasterizer, it will also pass it an integer frame number. As your rasterizer loops through each polygon, it should call our interpolator routine (GetVertices), passing it the polygon number and a frame number. GetVertices will compute for you the shape of that polygon at that moment in time, by interpolating vertex coordinates linearly between the keyframe positions you defined interactively in step 1. What it returns to you is a list of vertex X,Y coordinates for the polygon. It is this interpolated polygon shape that you should scan convert into the canvas array. By the way, the frame number you specify when you call GetVertices can be a floating-point number, allowing you to specify times that fall between frames. You'll need this flexibility for computing motion blur.
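
To make this control flow concrete, here is a minimal sketch of how the body of Rasterize might be organized. The exact signatures of Rasterize and GetVertices, and the canvas type, come from the starter code; the declarations below (numPolygons, PolygonColor, ScanConvert, Vertex, Canvas) are assumptions for illustration only.

    #include <vector>

    struct Vertex { float x, y; };        // assumed shape of one vertex
    struct Canvas;                        // the canvas type from the stub

    // Assumed starter-code hooks; the real names and signatures are in the package:
    extern int numPolygons;
    std::vector<Vertex> GetVertices(int polygon, float frame);
    unsigned PolygonColor(int polygon);
    void ScanConvert(Canvas* canvas, const std::vector<Vertex>& v, unsigned color);

    // Minimal sketch of the Rasterize body.
    void Rasterize(Canvas* canvas, float frame)
    {
        for (int p = 0; p < numPolygons; p++) {
            // GetVertices interpolates the polygon's shape at this (possibly
            // fractional) frame time and returns its vertex X,Y coordinates.
            std::vector<Vertex> verts = GetVertices(p, frame);
            ScanConvert(canvas, verts, PolygonColor(p));
        }
        // Returning control here lets the interface display or write out the canvas.
    }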

What you need to write

  1. Clipping and scan conversion. Once your rasterizer obtains the vertex X,Y coordinates for an interpolated polygon, scan convert it using the algorithm presented in class, which is also covered in section 3.6 of your textbook. In lieu of polygon clipping (as described in section 3.14), simply scissor the polygon to the boundaries of the canvas. In other words, scan convert the polygon even if some of its vertices lie outside the canvas, then test each pixel you generate to make sure it lies within the canvas. If the pixel lies outside the canvas, discard it. You need to be able to handle, in this way, polygons that extend up to 200 pixels beyond the canvas in all directions. This will require you to handle negative vertex coordinates. It will also probably require an edge table that is larger than the canvas. A sketch showing where the scissor tests fit into a scan converter appears after this list.

  2. Spatial supersampling and averaging down. The user interface contains a check box labeled "antialias". If checked (a flag will be passed into your rasterizer), you should step through a pattern of subpixel positions, shift the vertices of all polygons as appropriate for that subpixel position, scan convert them at the shifted position, and combine the results into the canvas using the accumulation buffer algorithm, as summarized in Appendix A of this handout. It is up to you to choose the pattern of subpixel positions and corresponding weights (summing to 1); a sketch of one legal pattern appears after this list. Some patterns work well, and some do not, as we have discussed in class. The user interface contains a field in which you can enter the number of supersamples (variable s in Appendix A). Use this value (which will be passed into Rasterize) to control the number of samples per pixel. Feel free to round this number to the nearest square (e.g. 4, 9, 16, 25, etc.), but make sure your code supports at least 50 samples per pixel.

  3. Motion blurring. The user interface also contains a check box labeled "motion blur". If checked, you should step through a sequence of fractional frame times, call GetVertices to obtain the vertex coordinates for each polygon at these times, scan convert the resulting shapes, and combine the results into the canvas using the same accumulation buffer algorithm. Once again, it is up to you to choose the pattern of samples and weights; a sketch of one way to generate these sample times appears after this list. The user interface contains a field in which you can enter the number of motion blur samples (variable t in Appendix A). Make sure your code supports at least 50 samples per frame. If the "antialias" and "motion blur" boxes are both checked, your implementation should perform both spatial supersampling and motion blurring. For both forms of antialiasing, be prepared at the demo to describe your sampling and weighting solutions and to characterize (in informal terms) the quality of antialiasing that they produce. Include these descriptions in your README file when you submit your code.
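
For part 1, the sketch below shows one way scan conversion and scissoring can fit together. It uses brute-force per-scanline edge intersection rather than the incremental edge-table algorithm presented in class, so treat it only as a statement of where the scissor tests belong, not as the recommended implementation; Vertex, Color, and SetPixel are assumed names.

    #include <algorithm>
    #include <cmath>
    #include <cstddef>
    #include <vector>

    struct Vertex { float x, y; };            // interpolated polygon vertex
    struct Color  { unsigned char r, g, b; };

    void SetPixel(int x, int y, const Color& c);   // assumed canvas-write helper

    // Fill one polygon, scissoring (not clipping) to a width x height canvas.
    void FillPolygon(const std::vector<Vertex>& v, int width, int height,
                     const Color& c)
    {
        if (v.size() < 3) return;
        float ymin = v[0].y, ymax = v[0].y;
        for (size_t i = 1; i < v.size(); i++) {
            ymin = std::min(ymin, v[i].y);
            ymax = std::max(ymax, v[i].y);
        }
        // Scissor the scanline range to the canvas; vertices may lie hundreds
        // of pixels outside it, including at negative coordinates.
        int y0 = std::max(0, (int)std::ceil(ymin - 0.5f));
        int y1 = std::min(height - 1, (int)std::floor(ymax - 0.5f));

        for (int y = y0; y <= y1; y++) {
            float yc = y + 0.5f;              // sample at the pixel center
            std::vector<float> xs;
            for (size_t i = 0; i < v.size(); i++) {
                const Vertex& a = v[i];
                const Vertex& b = v[(i + 1) % v.size()];
                // Half-open rule: each edge counted once at shared vertices.
                if ((a.y <= yc && b.y > yc) || (b.y <= yc && a.y > yc))
                    xs.push_back(a.x + (yc - a.y) / (b.y - a.y) * (b.x - a.x));
            }
            std::sort(xs.begin(), xs.end());
            for (size_t i = 0; i + 1 < xs.size(); i += 2) {
                // Scissor each span to the canvas before writing pixels.
                int x0 = std::max(0, (int)std::ceil(xs[i] - 0.5f));
                int x1 = std::min(width - 1, (int)std::floor(xs[i + 1] - 0.5f));
                for (int x = x0; x <= x1; x++)
                    SetPixel(x, y, c);
            }
        }
    }

The edge-table version from class replaces the per-scanline intersection loop with sorted, incrementally updated edge lists; that difference matters for the efficiency portion of your grade.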
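
For part 2, one legal starting point is a uniform n x n grid with equal weights. The sketch below (Sample is an assumed name) rounds the requested sample count to the nearest square, as the handout permits.

    #include <algorithm>
    #include <cmath>
    #include <vector>

    struct Sample { float dx, dy, weight; };

    // Round the requested count to an n x n grid of subpixel offsets,
    // centered within the pixel, with uniform weights summing to 1.
    std::vector<Sample> SubpixelPattern(int requested)
    {
        int n = std::max(1, (int)std::lround(std::sqrt((double)requested)));
        std::vector<Sample> pattern;
        float w = 1.0f / (n * n);
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                // e.g. n = 2 yields offsets of +/-0.25 in x and y.
                pattern.push_back({ (i + 0.5f) / n - 0.5f,
                                    (j + 0.5f) / n - 0.5f, w });
        return pattern;
    }

A plain grid can leave structured aliasing; jittered or rotated-grid variations are worth experimenting with, per the class discussion.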
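
For part 3, the fractional frame times can be generated analogously. This sketch spreads t equally weighted sample times across a one-frame shutter interval starting at the current frame; whether the shutter is centered on the frame or trails it is a design choice the handout leaves to you, so the interval here is an assumption.

    #include <vector>

    // t equally weighted sample times covering [frame, frame + 1); pass each
    // one to GetVertices, rasterize the result, and accumulate as in Appendix A.
    std::vector<float> BlurTimes(float frame, int t)
    {
        std::vector<float> times;
        for (int j = 0; j < t; j++)
            times.push_back(frame + (j + 0.5f) / t);
        return times;
    }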

Submission requirements

As with project #1, there are two parts to submitting your rasterizer: giving a demo on Wednesday, October 22, and submitting an online writeup by 2:00pm on Thursday, October 23.

For the demos, signups will be handled electronically, as before. Sign up for a one-hour slot; sometime during that hour you will be called upon to give a 10-minute demo. The evening slots will be reserved for SCPD students. Other students can sign up for these slots only if all other slots are taken. All demos must be given in the graphics teaching lab, which is Gates Hall B08, basement level, where the myth machines are. To keep the demos fair, you must freeze your executable by 7:30am on demo day and give your demo from that executable. To freeze your executable, place the submitAssignment2 script, available from the course website (right-click and save it in Firefox from the VM), in Desktop/assignments and run it just as you ran the submit script for Assignment 1. If your code is not in the Desktop/assignments/assignment2 directory on the VM, please copy it there before running the submit script. When asked for the directory where your assignment lives, type ‘.’. If you are not using the VM to do the assignment, run the /afs/ir/class/cs248/bin/submit script. You may make multiple submissions until 7:30am, after which time submission will be disabled for the remainder of the day.

Since you will be demoing from the executable you submitted, make sure it is a Linux executable that runs on an unmodified version of the VM. When you arrive for your demo, we will have the executable you submitted running under a copy of the VM, probably on our own laptop. That said, if you installed the VM to a laptop, you might want to bring that laptop with you, in case there's a problem with the files you submitted. Be sure to include your compiled binary named 'animgui' in the /bin directory of your submission (this is the same filename generated with the skeleton and default makefile)! If you are having trouble with the VM, you should be able to use the pod machines for development. However, we still require that your executable run on the VMs on demo day.

Writeups consist of a commented copy of your source code, a Linux executable that runs on an unmodified version of the VM, a Makefile, and a README describing the functionality implemented. The README should also describe your supersampling solutions, including the weights you assign to each supersample in space or time, as outlined earlier. Be brief but complete; one screenful is enough. If you did something especially clever, tell us about it. By the way, if we need to do anything more than 'make' to compile your code, please include these instructions in your README file. To make your submission, place all these files in the assignment2 directory (and optionally subdirectories) and once again run the submitAssignment2 script. You may make multiple submissions; we will consider only your latest submission while grading. If all goes well with the submission, pat yourself on the back and start gearing up for the next project. If not, and the submit script reports errors, please email the error messages with a gzipped tarball of your submission directory to cs248-aut0809-tas@lists.stanford.edu.

The assignment will be graded on correctness (60 points), efficiency (20 points), and programming style (including your writeup) (20 points). As before, the functionality of your program will be graded based only on what you show us during the demo. Functionality you don't show us or that doesn't work will not be considered. Only your source code and your README file will be graded from your submitted writeup. Efficiency will be judged mainly by looking at your code, although if your rasterizer runs horribly slowly on demo day, we'll notice it.

Extra credit

If you have completed the assignment, you can earn extra credit by adding bells and whistles. Here are some suggestions. Some are harder than others. Points will be awarded in proportion to difficulty. Feel free to make up your own. (If you aren't sure your idea is worth extra credit, run it by the staff and we'll tell you.)

  1. Modify the user interface to allow interactive scaling and rotation of polygons around an interactively selected point in the canvas as a way of changing their shape. This one is easy.

  2. Modify the user interface to allow insertion and deletion of vertices in an already-defined polygon. This is also easy.

  3. For a bigger challenge than bell/whistle #2, modify the user interface, the underlying polygon data structure, and the linear interpolation routine to allow the number of vertices in a polygon to change at any keyframe. If the added or deleted vertices are collinear with their two immediate neighbors in the polygon, then the polygon's shape will not change suddenly at the keyframe. Being able to add or delete vertices in this way greatly increases the expressiveness of keyframe interpolation. Show off this improved expressiveness with some animations you create before demo day.

  4. Modify the user interface to allow the input of polygons having "curved" boundaries, where these curves are approximated by a sequence of closely-spaced vertices. We suggest using button-down to initiate a new curve and button-up to terminate it. Line segments can be defined to connect every successive pair of mouse positions, or only those positions whose distance apart exceeds a specified threshold. This one is relatively easy.

  5. Back to a hard one. Combining the ideas in bell/whistles #3 and #4, allow the user to draw a different curved boundary for each keyframe. Since the numbers of vertices in your keyframes may now differ substantially, you will need to resample these curves so that they have the same number of vertices at the beginning and end of each linear interpolation. This is how professional keyframe animation systems work, and it greatly increases their expressiveness. The resampling procedure is straightforward; we'll explain it during Wednesday's help session.

  6. Linear interpolation tends to produce animations in which the rhythm of the keyframes is visually evident, and sometimes objectionable. To reduce this effect, replace linear interpolation between pairs of keyframes with splined interpolation between larger numbers of keyframes, as described in section 21.1.3 of the textbook. For example, cubic B-splines, described in section 11.2.3 of the textbook, would use the vertex positions of keyframes 1,2,3,4 to drive the interpolation between keyframes 2 and 3, the positions in keyframes 2,3,4,5 to interpolate between 3 and 4, and so on. (A sketch of the cubic B-spline blend appears after this list.)

  7. Another problem with linear interpolation is that, if two keyframes differ substantially in shape, the polygon tends to collapse and reform during the course of the interpolation. To reduce this effect, define a skeleton consisting of connected line segments, replace the X,Y coordinates of vertices with U,V offsets relative to one of these segments, then transform the skeleton (using rotation, splines, or other techniques), while interpolating vertex offsets, as shown in figure 21.3 (page 1062) of the textbook.

  8. Implement full polygon clipping, thereby allowing your polygons to translate offscreen during the interval between two keyframes. Use any (reasonably efficient) clipping algorithm you like, such as the Sutherland-Hodgman algorithm described in section 3.14.1 of the textbook. (A sketch of this algorithm appears after this list.)

  9. Implement unweighted area sampling as described in section 3.17.2 of the textbook as a user-selectable alternative to accumulation buffer antialiasing. For more fun, implement weighted area sampling as described in section 3.17.3. You may assume (for this bell/whistle only) that your polygons do not overlap.

  10. Last but not least, create a cool animation to show us at the demo. We'll give you extra credit for it!
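
As a starting point for bell/whistle #6, the sketch below evaluates a uniform cubic B-spline blend of one vertex across four consecutive keyframes. This is the standard uniform basis, not code from the starter package; Vertex is the same assumed struct as in the earlier sketches.

    struct Vertex { float x, y; };

    // Uniform cubic B-spline blend of one vertex across four keyframes;
    // u in [0,1] spans the segment between keyframes k2 and k3.
    Vertex BSpline(const Vertex& k1, const Vertex& k2,
                   const Vertex& k3, const Vertex& k4, float u)
    {
        float u2 = u * u, u3 = u2 * u;
        float b1 = (1 - u) * (1 - u) * (1 - u) / 6.0f;
        float b2 = (3 * u3 - 6 * u2 + 4) / 6.0f;
        float b3 = (-3 * u3 + 3 * u2 + 3 * u + 1) / 6.0f;
        float b4 = u3 / 6.0f;                 // the four weights sum to 1
        return { b1 * k1.x + b2 * k2.x + b3 * k3.x + b4 * k4.x,
                 b1 * k1.y + b2 * k2.y + b3 * k3.y + b4 * k4.y };
    }

Note that B-splines approximate rather than interpolate, so the polygon will not pass exactly through each keyframe shape; if exact interpolation matters to you, a Catmull-Rom spline is a drop-in alternative.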
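
And for bell/whistle #8, here is a compact sketch of the Sutherland-Hodgman step for a single boundary; full clipping runs it once per canvas edge. The Vertex type and the callback-style interface are assumptions, but the algorithm itself is the one from section 3.14.1. For the left edge, for example, 'inside' tests x >= 0 and 'intersect' solves for the crossing with x = 0 along the segment.

    #include <cstddef>
    #include <vector>

    struct Vertex { float x, y; };

    // Clip a polygon against one boundary. 'inside' tests a vertex against the
    // edge; 'intersect' returns the crossing point of a segment with that edge.
    std::vector<Vertex> ClipAgainstEdge(
        const std::vector<Vertex>& poly,
        bool (*inside)(const Vertex&),
        Vertex (*intersect)(const Vertex&, const Vertex&))
    {
        std::vector<Vertex> out;
        for (size_t i = 0; i < poly.size(); i++) {
            const Vertex& cur  = poly[i];
            const Vertex& prev = poly[(i + poly.size() - 1) % poly.size()];
            if (inside(cur)) {
                if (!inside(prev))
                    out.push_back(intersect(prev, cur));  // entering: add crossing
                out.push_back(cur);
            } else if (inside(prev)) {
                out.push_back(intersect(prev, cur));      // leaving: add crossing
            }
        }
        return out;
    }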

Appendix A: the accumulation buffer algorithm

Let "canvas" be a 16x3-bit pixel array, "temp" be an 8x3-bit pixel array, and "polygon color" be an 8x3-bit color that is unique to each polygon.

1   clear canvas to black;
2   n = 0  (number of samples taken so far)
3   for (i=1; i<=s; i++)  (for s subpixel positions)
4      for (j=1; j<=t; j++)  (for t fractional frame times)
5         clear temp to black
6         n = n + 1
7         for each polygon
8            translate vertices for this subpixel position and fractional frame time
9            for each pixel in polygon (using your scan converter)
10              temp color <-- polygon color
11        for each pixel in canvas
12           canvas color <-- canvas color + temp color
13  canvas color <-- canvas color / n
14  convert canvas to 8x3 bits and display on screen (by exiting from your rasterizer)

Line 7 causes later polygons to occlude earlier polygons in temp. This "painter's algorithm" occlusion ordering is in lieu of a hidden-surface algorithm, which you will learn about later in the course.

Line 12 implements a box filter, i.e. all subpixel positions are given equal weight. By modifying this accumulation formula to include a weight that varies with subpixel position relative to the center of the filter kernel, you can implement other reconstruction filters.

If your algorithm is working correctly, what you should see on the screen is a nicely antialiased image, like those linked into the October 9 lecture notes. Look also at figure 3.56 in the text. The method they use to produce that figure is different (that is prefiltering, not supersampling and averaging down), but the visual effects are similar.

Finally, for small numbers of samples (i.e. small values of s and t), it is possible to use an 8x3-bit canvas by replacing lines 12 and 13 with

             canvas color <-- ((n-1)/n)*canvas color + (1/n)*temp color
This approach reduces memory use; it avoids the need for normalization and 16-to-8-bit conversion; and it allows the canvas to be displayed after each sample location has been processed, providing continuous user feedback during rasterization. However, for large numbers of samples, the method yields significant quantization artifacts. We would like your algorithm to be capable of using large numbers of samples, so we require that you employ a 16x3-bit canvas.
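
For concreteness, here is roughly what lines 12-14 look like with the required 16x3-bit canvas; the array and parameter names are assumptions. One arithmetic caveat: the sums reach 255*n, which overflows 16 bits once n exceeds 257, so check your sample counts if you push both s and t toward 50.

    // Line 12, executed once per sample (count = width * height * 3).
    // Caution: sums reach 255 * n, which overflows 16 bits once n > 257.
    void Accumulate(unsigned short* canvas16, const unsigned char* temp8, int count)
    {
        for (int k = 0; k < count; k++)
            canvas16[k] += temp8[k];
    }

    // Lines 13-14, executed once after all n = s * t samples.
    void Resolve(const unsigned short* canvas16, unsigned char* display8,
                 int count, int n)
    {
        for (int k = 0; k < count; k++)
            display8[k] = (unsigned char)(canvas16[k] / n);
    }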

Reference:

Haeberli, P., Akeley, K., The Accumulation Buffer: Hardware Support for High-Quality Rendering,
Computer Graphics (Proc. SIGGRAPH), 24:4, pp. 309-318, 1990.


levoy@cs.stanford.edu
Copyright © 2008 Marc Levoy
Last update: October 20, 2008 11:21:33 PM