Assignment 2 Frequently Asked Questions (FAQ)


Q. Do we ever, under any circumstances, modify the resolution of the output image?

A. No.


Q. How do we translate two layers by different amounts?

A. You can't. Don't worry about it.


Q. How come my alpha values are gone when I save my movie?

A. QuickTime has no notion of alpha, so alpha values are discarded when you save your movie. Not much we can do about this.


Q. Does the rotate plug-in perform the rotation clockwise or counterclockwise?

A. Counterclockwise.


Q. I want to write some functions that will be shared among multiple plugins; how do I do this?

A. The best solution is to put them in a new .h file and #include that in your plugins. The submit script will grab all .h files when you submit your assignment, so we'll be able to build your plugins with your new functions.
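As a sketch, such a shared header might look like the following. The file name and the helper function are purely illustrative, not part of the assignment framework; the include guard is what matters, since it lets multiple plugins safely #include the same header.

```c
/* my_utils.h -- a hypothetical shared header (names are illustrative). */
#ifndef MY_UTILS_H
#define MY_UTILS_H

/* Clamp an integer into the valid byte range [0, 255];
 * handy when writing computed pixel channel values. */
static inline int clamp_byte(int v)
{
    if (v < 0)   return 0;
    if (v > 255) return 255;
    return v;
}

#endif /* MY_UTILS_H */
```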


Q. When translating or rotating the image, some pixels in the destination image won't have values written to them. What should I do with them? Should they be black?

A. The extra pixels should be clear, not black. Reread Porter & Duff if you're not sure what the difference is.
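To make the distinction concrete: since the assignment uses premultiplied alpha, a clear pixel has all four channels zero, while opaque black has zero color channels but full alpha. A sketch with a hypothetical RGBA pixel struct (the framework's actual pixel type may differ):

```c
/* Illustrative pixel type -- not the framework's actual struct. */
typedef struct { unsigned char r, g, b, a; } Pixel;

/* With premultiplied alpha, "clear" means every channel is zero. */
static const Pixel CLEAR = { 0, 0, 0, 0 };   /* fully transparent */
static const Pixel BLACK = { 0, 0, 0, 255 }; /* opaque black      */
```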


Q. When rotating, is the center of rotation at (0,0) or in the center of the frame or somewhere else?

A. The center of the frame.


Q. You never told us how to rotate a point around another point. How in the world do we do the rotation?

A. We could just tell you to work out the math yourselves, but since we're so nice, here are some tips to get you started:

You can start by representing your 2D points as a 3x1 vector. Initially, the point (x,y) would be represented as the vector [x,y,1]. In general, the 3x1 vector [x,y,z] represents the 2D point (x/z, y/z). This representation is called "homogeneous coordinates". We'll talk more about this in the lecture on transformations. For now, this is all you need to know.
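As a sketch (the type and function names are illustrative, not framework types), converting between 2D points and homogeneous vectors looks like this:

```c
/* A 2D point in homogeneous coordinates (illustrative names). */
typedef struct { double x, y, z; } Vec3;

/* Lift the 2D point (x, y) to the homogeneous vector [x, y, 1]. */
Vec3 from_point(double x, double y)
{
    Vec3 v = { x, y, 1.0 };
    return v;
}

/* Recover the 2D point (x/z, y/z) from a homogeneous vector. */
void to_point(Vec3 v, double *x, double *y)
{
    *x = v.x / v.z;
    *y = v.y / v.z;
}
```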

Once you've done that, you can construct 3x3 matrices to represent translation and rotation. The matrix that represents a translation of (tx, ty) is:

       [ 1 0 tx ]
       [ 0 1 ty ]
       [ 0 0  1 ]

The matrix that represents a counterclockwise rotation of A degrees in the plane is:

       [  cos(A) -sin(A)  0 ]
       [  sin(A)  cos(A)  0 ]
       [    0       0     1 ]

Just for completeness (and in case you find it useful for magnify), the matrix that represents a scaling factor of (sx, sy) is:

       [ sx  0  0 ]
       [  0 sy  0 ]
       [  0  0  1 ]
      

You can construct matrices like these for shears, skews, and other types of transformations. More information is available in Foley and van Dam, chapter 5.

Now, remember how you drew a rectangle rotated around some arbitrary point in space in assignment 1? It looked something like:

translate( x, y, 0 );
rotate( theta, 0, 0, 1 );
translate( -x, -y, 0 );
      

Each time you make one of these calls, one of the matrices above is constructed and multiplied on the left (you do remember how to multiply matrices, right?) by the "current" transformation matrix, resulting in a single matrix that performs those three transformations in reverse order.

In other words, you would compute some matrix like:

transform_matrix = translate_matrix( x, y, 0 ) * rotate_matrix( theta, 0, 0, 1 ) * translate_matrix( -x, -y, 0 );

Now, just left-multiply your 3x1 point vector by this matrix to get a new 3x1 vector, and enjoy your new point. Don't forget to divide through by the third coordinate before using it. Shake well.

This looks like:

new_point = transform_matrix * old_point;
new_point.X /= new_point.Z;
new_point.Y /= new_point.Z;
      

Q. No, I DON'T remember how to multiply matrices, smarty pants.

A. Okay, okay.

  dest[0][0] = m1[0][0] * m2[0][0] + m1[0][1] * m2[1][0] + m1[0][2] * m2[2][0];
  dest[0][1] = m1[0][0] * m2[0][1] + m1[0][1] * m2[1][1] + m1[0][2] * m2[2][1];
  dest[0][2] = m1[0][0] * m2[0][2] + m1[0][1] * m2[1][2] + m1[0][2] * m2[2][2];

  dest[1][0] = m1[1][0] * m2[0][0] + m1[1][1] * m2[1][0] + m1[1][2] * m2[2][0];
  dest[1][1] = m1[1][0] * m2[0][1] + m1[1][1] * m2[1][1] + m1[1][2] * m2[2][1];
  dest[1][2] = m1[1][0] * m2[0][2] + m1[1][1] * m2[1][2] + m1[1][2] * m2[2][2];

  dest[2][0] = m1[2][0] * m2[0][0] + m1[2][1] * m2[1][0] + m1[2][2] * m2[2][0];
  dest[2][1] = m1[2][0] * m2[0][1] + m1[2][1] * m2[1][1] + m1[2][2] * m2[2][1];
  dest[2][2] = m1[2][0] * m2[0][2] + m1[2][1] * m2[1][2] + m1[2][2] * m2[2][2];
is the computation for dest = m1*m2, where dest, m1, and m2 are all 3x3. Remember that matrix multiplication is not commutative (try reversing the order of the transformation matrices in your assignment 1 and see what I mean).


Q. What about multiplying a matrix times a vector?

A. Jeez, you really don't remember your linear algebra, do you?

  dest[0] = m[0][0] * v[0] + m[0][1] * v[1] + m[0][2] * v[2];
  dest[1] = m[1][0] * v[0] + m[1][1] * v[1] + m[1][2] * v[2];
  dest[2] = m[2][0] * v[0] + m[2][1] * v[1] + m[2][2] * v[2];
    

Q. I did just what you said, and I think the sin and cos math functions are giving me weird values.

A. sin and cos take their arguments in radians, not degrees. Try multiplying your angles by M_PI/180.0 (and don't forget to #include <math.h>).


Q. For the blur, what about pixels at the edge of the image? What do I do for pixels under the filter that fall outside the image?

A. One of two things is typically done in this situation: you can (virtually) mirror the image across its boundaries and use the mirrored pixels for out-of-range accesses, or you can simply ignore the out-of-range pixels when computing the filter's value.
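The mirroring approach can be sketched as an index-reflection helper (a sketch; it assumes the filter never reaches more than one image-width past the boundary, which holds for typical small blur kernels):

```c
/* Reflect an out-of-range index back into [0, n-1] by mirroring
 * the image across its boundaries. */
int mirror_index(int i, int n)
{
    if (i < 0)  return -i - 1;        /* -1 -> 0, -2 -> 1, ...      */
    if (i >= n) return 2 * n - i - 1; /* n -> n-1, n+1 -> n-2, ...  */
    return i;                         /* in range: leave it alone   */
}
```

You would call this on both the row and column index before reading a pixel under the filter.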


Q. How do we handle the case where a plugin takes two images as input and the two images are different resolutions?

A. Don't. You can assume that all input images in the entire pipeline are the same resolution.


Q. How do I compile the EFP_ example filters?

A. Try running "make examples". Once the example filters are compiled, you should be able to use them from within the effects program.


Q. For the translation plugin, which directions correspond to positive translations?

A. Positive translations in X correspond to moving the image to the right; positive translations in Y correspond to moving the image up.


Q. When magnifying an image, what point do we magnify around?

A. The center of the frame.


Q. I'm trying to use a two-input stage in the pipeline, but things come up blank! Help!

A. Here is how you might go about using the Average example filter:

Now, if things still don't make sense, reread "Using the Effects Framework", and hopefully that will help you figure things out.


Q. Do we assume premultiplied alpha for the pixels that are given to our plugins?

A. Yes.


Q. Remind me again: what's the deal with which image is A and which is B for the compositing operators?

A. For OVER, you should compute image2 OVER image1. For XOR and OUT, you should compute image1 {XOR,OUT} image2. We apologize for this being non-intuitive.
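Since the pixels are premultiplied, A OVER B works out to A + (1 - alphaA) * B in every channel, including alpha. A sketch with a hypothetical float pixel type (the framework's actual pixel format may differ):

```c
/* Premultiplied-alpha OVER: a OVER b = a + (1 - a.alpha) * b,
 * applied to every channel including alpha. Channels are floats
 * in [0, 1] here for clarity (illustrative type, not the
 * framework's). */
typedef struct { float r, g, b, a; } PixelF;

PixelF over(PixelF a, PixelF b)  /* computes a OVER b */
{
    float k = 1.0f - a.a;  /* how much of b shows through */
    PixelF out;
    out.r = a.r + k * b.r;
    out.g = a.g + k * b.g;
    out.b = a.b + k * b.b;
    out.a = a.a + k * b.a;
    return out;
}
```

For the assignment's OVER, then, you would pass image2's pixel as a and image1's pixel as b.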


Copyright © 1998 Pat Hanrahan