Relativistic Ray Tracing

Frederick Akalin

Original Proposal
Demo page

The image above right was rendered with the camera moving forward at 0.8 times the speed of light. The red/blue checkerboard is actually behind the camera, but relativistic effects make it visible anyway. The image above left is the same scene rendered at rest.

Implementation

My project can be split up into three parts:

Moving camera

First, I implemented relativistic_camera, a camera which sits between another camera and the world. Basically, it has a beta vector (which is just a velocity vector divided by the speed of light), and it Lorentz transforms (according to the general equation given in the first paper in the original proposal) all outgoing rays before sending them out into the world. In other words, GenerateRay calls the child camera's GenerateRay, Lorentz transforms the resulting ray, and returns the transformed ray.
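
As a rough illustration of the direction part of that transform (a standalone sketch, not the actual plugin code; the Vec3 type and the function name are made up, and I'm assuming beta is the camera's velocity in units of c and that the direction is being taken from the camera's rest frame into the world frame), relativistic aberration of a unit ray direction looks something like this:

    #include <cmath>

    struct Vec3 { double x, y, z; };

    static double dot(const Vec3 &a, const Vec3 &b) {
        return a.x * b.x + a.y * b.y + a.z * b.z;
    }

    // Aberrate a unit direction dCam, given in the camera's rest frame, into
    // the frame in which the camera moves with velocity beta * c.  The
    // component parallel to beta follows relativistic velocity addition, the
    // perpendicular component picks up a factor of 1/gamma, and the result is
    // again a unit vector because light moves at c in both frames.
    Vec3 aberrate(const Vec3 &dCam, const Vec3 &beta) {
        double b2 = dot(beta, beta);
        if (b2 == 0.0) return dCam;                 // camera at rest: nothing to do
        double gamma = 1.0 / std::sqrt(1.0 - b2);
        double dDotB = dot(dCam, beta);
        double s = dDotB / b2;                      // parallel part is s * beta
        Vec3 dPar  = { s * beta.x, s * beta.y, s * beta.z };
        Vec3 dPerp = { dCam.x - dPar.x, dCam.y - dPar.y, dCam.z - dPar.z };
        double denom = 1.0 + dDotB;                 // 1 + dCam . beta
        return Vec3{ (dPar.x + beta.x + dPerp.x / gamma) / denom,
                     (dPar.y + beta.y + dPerp.y / gamma) / denom,
                     (dPar.z + beta.z + dPerp.z / gamma) / denom };
    }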

This is the only part of the project that worked with almost no problems. All the images in the demo page were generated with this. Each picture has its respective beta vector written below. Some of the effects visible are the "bulging" of the middle of the checkerboard due to Lorentz effects, the expanding field of view due to the finite speed of light, and the Lorentz length contraction.

Moving objects

I wanted to be able to define objects in the world that would have their own separate velocity instead of just having the camera move. I implemented relativistic_shape, which acts as a buffer between another arbitrary shape and the world. This was a bit trickier, as there were more functions to be overridden; for example, the bounding box functions had to return a Lorentz-transformed bounding box, and the differential geometries returned had to be Lorentz transformed.
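
The common ingredient in these overrides is a Lorentz boost of positions and times between the world frame and the shape's rest frame. Here is a standalone sketch of that boost (not the project's code; the Event and Vec3 types are invented for illustration, and time is measured in units where c = 1):

    #include <cmath>

    struct Vec3  { double x, y, z; };
    struct Event { double t, x, y, z; };   // a point in spacetime, with c = 1

    // Boost an event from the world frame into a frame moving with velocity
    // beta (in units of c) relative to the world, e.g. a shape's rest frame.
    // Standard boost:  t' = gamma * (t - r.beta)
    //                  r' = r + [ (gamma - 1) * (r.beta)/|beta|^2 - gamma * t ] * beta
    Event boost(const Event &e, const Vec3 &beta) {
        double b2 = beta.x * beta.x + beta.y * beta.y + beta.z * beta.z;
        if (b2 == 0.0) return e;
        double gamma = 1.0 / std::sqrt(1.0 - b2);
        double rDotB = e.x * beta.x + e.y * beta.y + e.z * beta.z;
        double k = (gamma - 1.0) * rDotB / b2 - gamma * e.t;
        return Event{ gamma * (e.t - rDotB),
                      e.x + k * beta.x,
                      e.y + k * beta.y,
                      e.z + k * beta.z };
    }

Boosting with -beta goes the other way, which is how hit points and differential geometry found in the rest frame would come back out to world coordinates.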

Theoretically, there should be no difference between the image of a camera moving with beta vector B and an object moving with beta vector -B. However, images rendered with the moving object seem to turn out grainier; I'm guessing this is because the distribution of rays is different depending on whether the rays are Lorentz transformed at the source (the camera) or at the destination (the object).

Problems with the above two

The most frustrating problem I've encountered while doing the above is that 3D objects (like cylinders) just don't seem to render correctly; part of the object would sometimes be inexplicably missing, even though the checkerboard images render flawlessly.

The second, more fundamental problem is that I originally intended my code to work transparently with the rest of the pbrt system. It turns out that in order to be fully accurate, the pbrt system needs to be slightly modified. For example, GenerateRay returns a normalized ray with an attached time value. However, two essential pieces of information are needed: the time value at which the scene is being rendered, and the point on the ray to which that time value is attached. The behavior differs depending on, say, whether the "zero time" is at the ray's origin or at the origin + the direction; and since the direction is normalized, there is no way to deduce which point of the ray to use. For now, I assumed that origin + direction is at time 0, which produced good enough results for a projective camera.
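
To spell out the arithmetic of that assumption (an illustration, not code from the project): for a camera ray with a normalized direction, the point at parameter s is at distance s from the camera, so its light takes s/c longer to reach the camera than light from the point at s = 1. Pinning the point at s = 1 to time 0 therefore gives

    // Emission time of the point at parameter s along a camera ray with a
    // normalized direction, under the assumption that origin + direction
    // (i.e. s = 1) is at time 0.  Points farther from the camera are seen
    // earlier, since their light took longer to arrive.
    double emissionTime(double s, double c) {
        return (1.0 - s) / c;
    }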

Relativistic Doppler shift

I also briefly tried to implement relativistic Doppler shifting as follows: I created two integrators, relativistic_surface_integrator and relativistic_volume_integrator, similar to the classes described above. They would take the Spectrum values returned from their respective inner integrators and Doppler shift them depending on the velocity of the object that the given ray first intersects. However, the Spectrum class is not really sophisticated enough to show this effect realistically. In a nutshell, the procedure was: 1) convert the spectrum to X, Y, Z values; 2) scale the X, Y, Z values by the Doppler factor D; 3) convert back to Spectrum. Since the Spectrum class currently has only three samples, while the Doppler shift has to shift all frequencies by the Doppler factor, it didn't really work well.
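
For reference, the Doppler factor itself is straightforward to compute; here is a standalone sketch (invented names, not the integrator code), assuming beta is the emitting object's velocity in units of c and toObserver is the unit direction from the object toward the camera, both in the camera's frame:

    #include <cmath>

    struct Vec3 { double x, y, z; };

    // Relativistic Doppler factor D such that f_observed = D * f_emitted
    // (equivalently, lambda_observed = lambda_emitted / D).  D > 1 means
    // blueshift (object approaching), D < 1 means redshift.
    double dopplerFactor(const Vec3 &beta, const Vec3 &toObserver) {
        double b2 = beta.x * beta.x + beta.y * beta.y + beta.z * beta.z;
        if (b2 == 0.0) return 1.0;
        double gamma = 1.0 / std::sqrt(1.0 - b2);
        double betaDotD = beta.x * toObserver.x + beta.y * toObserver.y +
                          beta.z * toObserver.z;
        return 1.0 / (gamma * (1.0 - betaDotD));
    }

The hard part described above is applying D: properly, every spectral sample's frequency should be multiplied by D (its wavelength divided by D), which a three-sample Spectrum can't really represent.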

The code

Here is the code.

Build instructions

1) Say you have a fresh pbrt install in $PBRTDIR. Replace $PBRTDIR/Makefile with Makefile.pbrt.
2) In build.sh, modify ~/class/cs348b/pbrtsrc/ to point to $PBRTDIR.
3) Typing "make" should call build.sh, which symlinks the files in core, integrators, shapes, and cameras into the pbrt directory and starts the build.
4) Example files are in checkerboard.pbrt.template and tram.pbrt.template. "./checkerboard.sh [beta_x] [beta_y] [beta_z]" generates a checkerboard.pbrt file with the camera moving at the given velocity (see the example after this list). Most of the parameters for relativistic_camera are self-explanatory: "relativistic" should be set to "true", "beta" is the velocity vector, and "c" is the speed of light. (This should generally be a low number to see good effects; leaving it at the default 3e12 mm/s tends to introduce floating-point errors.)
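
For example, to generate and render something like the 0.8c image at the top of the page (I'm guessing here that the forward axis of the template scene is +z; use whichever axis the template actually treats as forward):

    ./checkerboard.sh 0 0 0.8
    pbrt checkerboard.pbrt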