Efficient Image-Based Methods for Rendering Soft Shadows


Maneesh Agrawala, Pixar Animation Studios
Ravi Ramamoorthi, Computer Graphics Laboratory, Stanford University
Alan Heirich, Tandem Laboratories, Compaq Computer Corporation
Laurent Moll, Systems Research Center, Compaq Computer Corporation

Abstract

We present two efficient image-based approaches for computation and display of high-quality soft shadows from area light sources. Our methods are related to shadow maps and provide the associated benefits. The computation time and memory requirements for adding soft shadows to an image depend on image size and the number of lights, not geometric scene complexity. We also show that because area light sources are localized in space, soft shadow computations are particularly well suited to image-based rendering techniques. Our first approach---layered attenuation maps---achieves interactive rendering rates but limits sampling flexibility, while our second method---coherence-based raytracing of depth images---is not interactive but removes the limitations on sampling and yields high-quality images at a fraction of the cost of conventional raytracers. Combining the two algorithms allows for rapid previewing followed by efficient high-quality rendering.


Summary

Soft shadows from area light sources can greatly enhance the visual realism of computer-generated images. However, accurately computing penumbrae can be very expensive because it requires determining visibility between every surface point and every light. The cost of many soft shadow algorithms grows with the geometric complexity of the scene. Algorithms such as ray tracing and shadow volumes perform visibility calculations in object space, against a complete representation of scene geometry. Similarly, some interactive techniques precompute and display soft shadow textures for each object in the scene. Such approaches do not scale well as scene complexity increases.

We demonstrate two efficient image-based techniques for rendering soft shadows that can be seen as logical extensions of shadow maps. In both methods, shadows are computed in image space, so the time and memory requirements for adding soft shadows to an image depend only on image complexity and the number of lights, not on geometric scene complexity. Neither algorithm computes per-object textures, so texture mapping does not become a bottleneck. This independence from geometric scene complexity allows us to efficiently compute soft shadows for large scenes, including those with complex patterns of self-shadowing.

We also show that soft shadows are a particularly good application for image-based rendering approaches. Since area light sources are localized in space, visibility changes relatively little across them. The depth complexity of the visible or partially visible scene as seen from a light (and stored in our shadow maps) is generally very low. Further, shadow maps rendered from the light sparsely sample surfaces that are oblique to it. However, these surfaces are less important to sample well because they are precisely the surfaces that are dimly lit.

Our contributions are the two algorithms summarized below, which represent two ends of a spectrum. These algorithms can be combined in an interactive lighting system: our fast layered attenuation map method is used to interactively set the viewing transformation and to position the light source and geometry, and our coherence-based raytracing method is then used to quickly generate final high-quality images.


Layered Attenuation Maps

Our first approach achieves interactive rendering rates but limits sampling flexibility, and can therefore generate undersampling and banding artifacts. We precompute a modified layered depth image (LDI) by warping and combining depth maps rendered from a set of locations on the light source. The LDI stores both depth information and layer-based attenuation maps which can be thought of as projective soft shadow textures. During display, the proper attenuation is selected from the LDI in real time in software, and is used to modulate normal rendering without shadows. The precomputation is performed in a few seconds, and soft shadows are then displayed at several frames a second.
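To make the precomputation concrete, below is a minimal Python sketch of building a layered attenuation map and querying it at display time. It is illustrative rather than the paper's implementation: warp_to_center is a hypothetical helper that reprojects a depth map rendered from one light sample into the central light view, and the dictionary-of-layers layout stands in for the actual LDI and hardware-accelerated warping.

def build_layered_attenuation_map(depth_maps, warp_to_center, depth_eps=1e-3):
    # One depth map per sample point on the area light; warp each into the
    # central light view and bin the warped samples into depth layers.
    layers = {}                       # (x, y) pixel -> list of [depth, hit_count]
    n_samples = len(depth_maps)
    for dm in depth_maps:
        for x, y, z in warp_to_center(dm):
            cell = layers.setdefault((x, y), [])
            for entry in cell:
                if abs(entry[0] - z) < depth_eps:   # same surface -> same layer
                    entry[1] += 1
                    break
            else:
                cell.append([z, 1])                 # start a new layer at this pixel
    # Attenuation of a layer = fraction of light samples that see that surface.
    return {pix: [(z, count / n_samples) for z, count in sorted(cell)]
            for pix, cell in layers.items()}

def attenuation_at(ldi, pixel, depth, depth_eps=1e-3):
    # Display-time lookup: find the layer matching the visible surface's depth.
    # A point that matches no stored layer is treated as fully shadowed here.
    for z, atten in ldi.get(pixel, []):
        if abs(z - depth) < depth_eps:
            return atten
    return 0.0

The returned attenuation modulates the ordinary unshadowed shading of the pixel, in the same way a projective soft shadow texture would.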

Coherence-Based Raytracing of Depth Images

Our second approach removes limitations on sampling and yields high-quality images, suitable for high-resolution prerendered animations, but is not interactive. To shade a surface point, we trace shadow rays through precomputed shadow maps rather than through the scene geometry. The visible portion of a light source tends to change very little for surface points close to one another. We describe a new image-based technique for exploiting this coherence when sampling visibility along shadow rays. Our image-based raytracing approach with coherence-based sampling produces soft shadows at a fraction of the cost of conventional raytracers. While we combine the image-based raytracing and sampling algorithms in a single renderer, they can be used independently (e.g., a standard geometric ray tracer might incorporate our coherence-based sampling method).
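The sampling pattern visualized in Figure 2 can be sketched in Python as follows. This is a hedged illustration, not the paper's code: trace_cell is a placeholder callback that traces a shadow ray toward one cell of a gridded light source and reports whether it is blocked, and predicted is a boolean occlusion mask carried over from a nearby, already-shaded surface point. Only cells on the exterior edge of the light and cells straddling predicted occluded/unoccluded boundaries are actually traced; the rest are filled in from the prediction. The refinement step, in which neighbors are re-traced wherever a traced cell contradicts the prediction, is omitted for brevity.

import numpy as np

def sample_light_visibility(trace_cell, predicted):
    rows, cols = predicted.shape
    result = predicted.copy()
    to_trace = set()

    # Always trace the exterior edge of the light-source grid (red X's in Figure 2).
    for i in range(rows):
        to_trace.update({(i, 0), (i, cols - 1)})
    for j in range(cols):
        to_trace.update({(0, j), (rows - 1, j)})

    # Trace cells straddling predicted occluded/unoccluded boundaries (blue X's).
    for i in range(rows):
        for j in range(cols):
            for ni, nj in ((i + 1, j), (i, j + 1)):
                if ni < rows and nj < cols and predicted[i, j] != predicted[ni, nj]:
                    to_trace.update({(i, j), (ni, nj)})

    for cell in to_trace:
        result[cell] = trace_cell(cell)      # actual shadow-ray trace for this cell

    # Fractional visibility of the light: 1.0 = fully lit, 0.0 = fully in shadow.
    return 1.0 - result.mean()

Because only boundary and exterior-edge cells are traced, the number of shadow rays in this sketch grows with the length of the occlusion boundary rather than with the total number of light samples.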

Results

The figures below illustrate some of our results. More information can be found in the paper. Clicking on a figure brings up a high-resolution version.

Figure 1 A plant rendered using our interactive layered attenuation map approach (top), rayshade (middle), and our efficient high-quality coherence-based raytracing approach (bottom). Note the soft shadows on the leaves.

Figure 2 Visualization of the prediction method for coherence-based raytracing. Gray boxes represent occluded cells, while white boxes are unoccluded cells. Each cell marked by an X is initially traced. The blue X's are cells at edges between occluded and unoccluded regions, while the red X's are on the exterior edges of the light source.

Figure 3 Closeup of the plant rendered using layered attenuation maps (left) and coherence-based raytracing (right). The closeup reveals some aliasing artifacts in the left image, as well as banding due to the fixed light source sampling. However, at normal scales such as those of Figure 1, these artifacts are barely visible and are usually tolerable for interactive applications.

Figure 4 Flower rendered using layered attenuation maps (left) and coherence-based raytracing (right). Since most of the geometry is thinner than a pixel, the base hardware renderer used for layered attenuation maps aliases severely, but key features, such as the bottom of the flower being darker than the top, are accurately captured. The shadow on the ground is very accurate in both methods.


Relevant Links

Siggraph 2000 paper
Postscript (27M)
Compressed Postscript (4M)
PDF (11M)
Lowres PDF (1.7M)
Talk Slides (PowerPoint 1.4M)
High-Resolution Images and associated README




Ravi Ramamoorthi
Last modified: Sun Apr 15 04:21:17 PDT 2001