CS 348B - Computer Graphics: Image Synthesis Techniques

HW3 - Camera simulation

Assigned Thursday, April 25.   Due Tuesday, May 7.

Description

For this assignment, you will add a more sophisticated camera model to lrt. Instead of a pinhole, the improved camera will model a finite aperture and provide typical camera controls for F-stop, focal distance, and shutter speed. This will allow simulated depth of field and motion blur effects. Once the basic camera model is working, you will add automatic focus and exposure controls, similar to those of a handheld automatic camera. This last step is fairly open-ended - you'll need to exercise some ingenuity, and possibly explore more of the internals of lrt.

Step 0

First you need to retrieve the latest updated version of lrt from the class directory (/usr/class/cs348b/files/lrt.tar.gz). Of course, you will probably want to keep your improved heightfield implementation around. The revised lrt provides two new files, mbdofcamera.h and mbdofcamera.cc, as well as various patches to the RIB interface needed to set up this assignment. The file mbdofcamera.cc contains the function GenerateRay - currently empty - which is where you will add your code. You may wish to examine the GenerateRay routine for the perspective camera (in camera.cc) to help get you started.

There will be several new RIB files available for this project. To start with, dof.rib is a simple scene consisting of three cones at varying distances. Some more complex test scenes will probably be available after this weekend.

Step 1 - Simulating a finite aperture (depth of field)

First, you should add depth of field to lrt. GenerateRay is passed a 5-tuple of samples; the first four are interpreted as described below, and the fifth is reserved for motion blur in Step 2.

Using sample[0] and sample[1] for the film location, generate a sample on the aperture using the random numbers in sample[2] and sample[3]. Armed with these four numbers (x,y,u,v), you can generate the appropriate ray to trace. You should review your notes from the Cameras and Film lecture so that you understand the geometry of this ray generation.
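
As a concrete illustration of the geometry, here is a minimal, self-contained sketch of thin-lens ray generation in C++. It is not lrt's actual interface: the Vec3 type, the helper names, and the assumption that the film point is already expressed in camera space (on a plane at z = 1) are placeholders for whatever your GenerateRay actually works with.

// Minimal, self-contained sketch of thin-lens ray generation.
// The type and member names here (Vec3, uniformSampleDisk, lensRadius, ...)
// are illustrative only; the real lrt camera classes differ.
#include <cmath>

struct Vec3 { float x, y, z; };

// Map a uniform sample (u, v) in [0,1)^2 to a point on the unit disk
// using the standard polar mapping (a concentric mapping would also work).
static void uniformSampleDisk(float u, float v, float *dx, float *dy) {
    float r = std::sqrt(u);
    float theta = 2.0f * 3.14159265f * v;
    *dx = r * std::cos(theta);
    *dy = r * std::sin(theta);
}

// Generate a camera-space ray for the film point (px, py), assumed here to
// lie on an image plane at z = 1 in camera space (purely for illustration).
void generateThinLensRay(float px, float py,      // film sample (sample[0..1])
                         float u, float v,        // lens sample (sample[2..3])
                         float lensRadius,        // 0 for a pinhole
                         float focalDistance,     // distance to the plane in focus
                         Vec3 *origin, Vec3 *dir) {
    // Start with the pinhole ray through the film point.
    Vec3 o = {0.0f, 0.0f, 0.0f};
    Vec3 d = {px, py, 1.0f};

    if (lensRadius > 0.0f) {
        // Sample a point on the lens disk.
        float lx, ly;
        uniformSampleDisk(u, v, &lx, &ly);
        lx *= lensRadius; ly *= lensRadius;

        // Find where the pinhole ray pierces the plane of focus (z = focalDistance).
        float t = focalDistance / d.z;
        Vec3 pFocus = {d.x * t, d.y * t, focalDistance};

        // The actual ray starts on the lens and passes through that focus point.
        o = {lx, ly, 0.0f};
        d = {pFocus.x - o.x, pFocus.y - o.y, pFocus.z - o.z};
    }

    float len = std::sqrt(d.x*d.x + d.y*d.y + d.z*d.z);
    *origin = o;
    *dir = {d.x/len, d.y/len, d.z/len};
}

The key property of this construction is that every lens sample for a given film point passes through the same point on the plane of focus, so geometry at the focal distance stays sharp while nearer and farther geometry blurs.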

Setting Depth of Field parameters in RIB

The RIB command:

DepthOfField F-stop focallength focaldistance

sets the parameters necessary to compute depth of field. For more detail, check out the documentation under RenderMan Resources.

The three parameters specified by the DepthOfField command are available as members of the Camera class as FStop, FocalLength, and FocalDistance. You should probably not mess with the focal length parameter in the sample scene; just adjust the f-stop and focal distance.
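
The handout does not spell out how these parameters determine the size of the aperture. Assuming the standard photographic relation - the aperture diameter is the focal length divided by the f-number - the radius of the disk you sample in GenerateRay could be computed as:

float LensRadius(float focalLength, float fStop) {
    // aperture diameter = focal length / f-number; sample a disk of half that.
    return 0.5f * focalLength / fStop;   // e.g. a 35 mm lens at f/2.8 gives a ~6.25 mm radius
}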

Note that you can issue the command

DepthOfField -

which will set the F-stop to RI_INFINITY. In this case, the FocalDistance and FocalLength are undefined, and everything will be in focus (i.e., the aperture is a pinhole).

To test your implementation of depth of field, try some experiments with dof.rib, such as varying the F-stop and the focal distance and checking that the cone nearest the focal distance stays sharp while the other cones blur.

Step 2 - Simulating finite exposure time (Motion Blur)

The fifth sample parameter is another uniformly distributed random number between 0 and 1, which can be used for sampling a time interval. You will notice that the MbDOFPerspectiveCamera has two WorldToCamera transformations; these correspond to the camera transformations at times t=0 and t=1. There are also ShutterStart and ShutterEnd values, which will be described below.

For this assignment, you may assume that the camera is the only moving object in the scene, and that the camera movement is purely translational.
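
Under those assumptions, the time handling reduces to mapping sample[4] into the shutter interval and linearly interpolating the camera position between its two transforms. The sketch below is illustrative only - Vec3 and the function names are not lrt's, and the two positions stand in for the translational parts of the two WorldToCamera transforms.

// Sketch of time sampling for motion blur, under the handout's assumption
// that only the camera moves and the motion is a pure translation.
#include <cstdio>

struct Vec3 { float x, y, z; };

// Map the fifth sample to a time inside the shutter interval, then linearly
// interpolate the camera position between its t=0 and t=1 transforms.
Vec3 cameraOriginAtSample(float timeSample,          // sample[4] in [0,1)
                          float shutterStart, float shutterEnd,
                          const Vec3 &camPos0,       // camera position at t=0
                          const Vec3 &camPos1) {     // camera position at t=1
    float time = shutterStart + timeSample * (shutterEnd - shutterStart);
    // Because the two transforms are given at t=0 and t=1, 'time' itself is
    // the interpolation parameter for a pure translation.
    return { camPos0.x + time * (camPos1.x - camPos0.x),
             camPos0.y + time * (camPos1.y - camPos0.y),
             camPos0.z + time * (camPos1.z - camPos0.z) };
}

int main() {
    Vec3 p0 = {0, 10, 0}, p1 = {0, 20, 0};           // like the RIB example below
    Vec3 o = cameraOriginAtSample(0.5f, 0.0f, 0.01f, p0, p1);
    std::printf("camera origin: %f %f %f\n", o.x, o.y, o.z);  // y is about 10.05
    return 0;
}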

Setting Shutter parameters

To specify shutter settings (exposure time) in RIB, use the command:

Shutter opentime closetime

For this assignment, we will always use an opentime of 0, and assume that time represents seconds. Hence, a shutter speed of 1/100 of a second would be specified by:

Shutter 0 0.01

which simply states that the shutter opens at time t=0 and remains open for 1/100th of a second.

Setting Motion parameters in RIB

To set the two camera-to-world transformation matrices in RIB, just enclose them in a MotionBegin/MotionEnd pair:

MotionBegin [0 1]
  Translate 0 10 0
  Translate 0 20 0
MotionEnd
This needs to appear before the WorldBegin command.

You should also simulate the effect of finite apertures and shutter times on the exposure of your images. Long exposures and big apertures produce brighter pictures (because they allow more light through the lens), while short exposures and small apertures produce darker images.
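
One simple way to model this is to scale each ray's contribution by a factor proportional to the area of the aperture times the length of the exposure. The normalization constants in the sketch below are arbitrary choices for illustration, not values from lrt or the handout.

// A relative exposure factor, based on the observation that image brightness
// scales with how much light the camera gathers: the area of the aperture
// times the time the shutter stays open.
float exposureWeight(float lensRadius, float shutterOpen, float shutterClose) {
    const float pi = 3.14159265f;
    float apertureArea = pi * lensRadius * lensRadius;   // light admitted per unit time
    float exposureTime = shutterClose - shutterOpen;     // how long it is admitted

    // Normalize so that a "typical" setting maps to a weight of 1:
    // here a 1 cm lens radius and a 1/60 s exposure (purely illustrative).
    const float refArea = pi * 0.01f * 0.01f;
    const float refTime = 1.0f / 60.0f;
    return (apertureArea * exposureTime) / (refArea * refTime);
}

Each camera ray's contribution can then be multiplied by this weight before it is accumulated into the image, so that larger apertures and longer exposures come out brighter.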

Step 3 - Automatic control

For steps 1 and 2 of this assignment, you have simulated a manual camera - the parameters that govern the aperture and shutter are specified in the RIB file. Your next task is to develop automatic focus and exposure controls for your simulated camera.

Cameras with automatic focusing try to set their focal distance based on an estimate of the distance to important features in the scene (typically those near the center of the image). Within the ray tracing framework of lrt, you should be able to "probe" the scene by tracing some test rays from the camera, and then using the depth information obtained to guess at a good focal distance.
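
A sketch of that probing strategy is shown below. The probeDepth callback is a stand-in for whatever lrt call you use to intersect a camera ray with the scene and report the hit distance; it is an assumption, not an existing lrt function.

// Autofocus by probing: shoot a few rays near the image center, collect the
// hit distances, and use the median as the focal distance.
#include <algorithm>
#include <functional>
#include <vector>

float autofocusDistance(const std::function<float(float, float)> &probeDepth,
                        float fallback = 10.0f) {
    std::vector<float> depths;
    // Probe a small grid of film locations around the image center,
    // expressed here in normalized [0,1]^2 film coordinates.
    for (float dy = -0.1f; dy <= 0.1f; dy += 0.1f) {
        for (float dx = -0.1f; dx <= 0.1f; dx += 0.1f) {
            float t = probeDepth(0.5f + dx, 0.5f + dy);
            if (t > 0.0f) depths.push_back(t);   // ignore rays that miss
        }
    }
    if (depths.empty()) return fallback;         // nothing hit: keep a default
    // The median is less sensitive than the mean to a stray background hit.
    std::nth_element(depths.begin(), depths.begin() + depths.size() / 2, depths.end());
    return depths[depths.size() / 2];
}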

Similarly, automatic cameras attempt to adjust their F-stop and shutter speed based on estimates of the image brightness in order to obtain a level of exposure that looks good - not too dark, but not oversaturated. As discussed in class, three common settings for automatic exposure control are:

Shutter priority - the shutter speed is specified and the camera chooses an appropriate aperture.
Aperture priority - the aperture (F-stop) is specified and the camera chooses an appropriate shutter speed.
Programmed - the camera chooses both the aperture and the shutter speed.

Remember, larger apertures give less depth of field (objects nearer or further than the focal distance tend to be less focused). Longer exposure times lead to more motion blur. In experimenting with the programmed mode, you'll need to devise some heuristics to try to strike the right balance automatically.

In the RIB file, we will specify the automatic control settings with the command:

Options "camera" "mode" "cam-mode"

where cam-mode can be: manual, shutter_priority, aperture_priority, or programmed. The MbDOFPerspectiveCamera has a member called Mode which can be compared to the constants LRT_MANUAL, LRT_SHUTTER_PRIORITY, LRT_APERTURE_PRIORITY, and LRT_PROGRAMMED to determine which setting the RIB file specified.

When manual mode is specified, your code should simply behave as described in steps 1 and 2, using the given settings for depth of field and shutter speed. Under the automatic operation modes, you should always try to autofocus, and adjust for exposure according to the chosen mode. Your method should print the final camera settings it uses for a scene.
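
One possible structure for the exposure side, once you have estimated how much light the image needs (for instance, from the average radiance of a few probe rays), is sketched below. The target product, the mid-range f-stop, and the shutter cap are assumptions for illustration, not required values or part of lrt.

#include <algorithm>
#include <cmath>

// Stand-ins for the handout's LRT_* mode constants.
enum Mode { MANUAL, SHUTTER_PRIORITY, APERTURE_PRIORITY, PROGRAMMED };

struct Exposure { float fStop; float shutterTime; };

// targetProduct is the desired (lens area) * (shutter time) needed to reach a
// comfortable brightness; focalLength relates the f-stop to the lens radius
// via radius = focalLength / (2 * fStop).
Exposure chooseExposure(Mode mode, float targetProduct, float focalLength,
                        float givenFStop, float givenShutterTime) {
    const float pi = 3.14159265f;
    auto lensArea = [&](float fStop) {
        float r = 0.5f * focalLength / fStop;
        return pi * r * r;
    };

    Exposure e = {givenFStop, givenShutterTime};
    switch (mode) {
    case SHUTTER_PRIORITY: {       // keep the given shutter time, solve for the f-stop
        float radius = std::sqrt(targetProduct / (givenShutterTime * pi));
        e.fStop = 0.5f * focalLength / radius;
        break;
    }
    case APERTURE_PRIORITY:        // keep the given f-stop, solve for the shutter time
        e.shutterTime = targetProduct / lensArea(givenFStop);
        break;
    case PROGRAMMED:               // pick a mid-range f-stop, then cap the shutter
        e.fStop = 5.6f;            // time to limit motion blur (arbitrary choices)
        e.shutterTime = std::min(targetProduct / lensArea(e.fStop), 0.1f);
        break;
    case MANUAL:                   // use the RIB-specified settings unchanged
        break;
    }
    return e;
}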

Implementing step 3 is intended to be a creative exercise - you have a lot of freedom in how you approach the problem. We generally suggest that you start by sending out a few sample rays, and use them as a basis for automatically determining camera settings.

Submission

Turn in your finished homework using the cs348b submit script.

Make sure you hand in all your source code, a Makefile, and a README file. The README should contain instructions for compiling and running your code, and a description of anything non-intuitive you've done. Please do not submit any executables or object (.o) files. We'll be building your submissions ourselves, and those just take up space.

IMPORTANT: You must also create a web page describing your automatic exposure system from step 3. Discuss your implementation, and show some pictures to illustrate the capabilities of your system. Include the URL of your web page in the submitted README.

Grading scheme:


Copyright © 2002 Pat Hanrahan