Notes on Reflection Mapping from Gene Miller:

Here is most of what I remember about illumination and reflection mapping in the 70's and 80's. Given the limits of my memory, I welcome corrections/additions from anyone who was present during that period.

Note: The images below were computed and recorded on Ektachrome slides at resolutions that varied from 512x512 to 3000x2000 pixels. The slides were recently transferred to the Kodak PhotoCD format. Click on the thumbnails below to view images at up to 1536x1024 resolution.

Jim Blinn's Ph.D. dissertation laid the groundwork for most of the realistic effects that were known in the 1970's, including reflection mapping.

Digital Effects Inc. (DEI) in New York produced a fair share of the animation and effects for TV from the late 1970's to the mid 1980's. Some of the advertising work required chrome or gold "flying logos". DEI was able to simulate this effect by computing an image of the 3-D tumbling logo in a synthetic environment, along with the following two elements for each reflective surface: (1) an optical matte or cut-out for that surface; (2) a venetian blind effect that moved faster than the tumbling surface. An optical printer was used to: (1) cut out the venetian blind effect to fit each surface; (2) optically flare the cut-out venetian blind to make it look really bright; (3) matte the effects onto the reflective surfaces. This was state-of-the-art CGI at the time, and it made advertising people very happy. I am sure that Judson Rosebush or Jeff Kleiser can provide a more accurate explanation.

Around 1980 Bob Hoffman started to develop texture mapping and bump mapping algorithms at Digital Effects, and I helped him integrate this code into the Digital Effects shader. We also started to examine the problem of simulating mirror reflections of extended light sources on curved surfaces. I read the 1980 Scientific American article describing the geometry of reflections on curved surfaces, and I tried to compute the shape of these reflections directly, but I sensed that a solution for general surfaces was beyond my ability.

We then started to look at the texture mapping approach, and we programmed an approximation of a mirror reflection into the Digital Effects shader. The normalization was not done correctly at first, but the results looked interesting. I then came across Jim Blinn's Ph.D. dissertation, and this helped us fix the math.
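For readers unfamiliar with the geometry: the direction looked up in a reflection map is the mirror reflection of the view direction about the surface normal, and the formula only behaves when both vectors are unit length -- presumably the kind of normalization problem mentioned above. A minimal sketch of the textbook math (not the actual Digital Effects shader code):

```python
import numpy as np

def reflect(view, normal):
    """Mirror-reflect the view direction about the surface normal.

    Both vectors are normalized first; the result, R = V - 2(V.N)N,
    is the direction looked up in the reflection map.
    """
    view = view / np.linalg.norm(view)
    normal = normal / np.linalg.norm(normal)
    return view - 2.0 * np.dot(view, normal) * normal

# A ray looking straight down onto an upward-facing surface
# bounces straight back up.
r = reflect(np.array([0.0, 0.0, -1.0]), np.array([0.0, 0.0, 1.0]))
```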

We also tested Jim Blinn's technique for the real-time animation of the illumination of static objects. This entailed loading a 16x16 representation of the illumination into an 8-bit color look-up table (CLUT), and applying this color map to a frame buffer that stores 8-bit encoded normals of the object. The lighting is animated by repeatedly updating the values in the CLUT. The first slide shows the L4 and L5 vertebrae. The reflection map is simply a 16x16 image of a woman's face (tiny image in lower left corner). The 2nd and 3rd slides are of a Greek athlete's head. The third slide shows a synthesized reflection map.
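A rough sketch of how the CLUT trick works -- my reconstruction, with invented details such as the hemisphere grid of quantized normals; not Blinn's actual code. The key point is that moving the light only requires rewriting the 256 table entries, never re-shading the image:

```python
import numpy as np

def quantized_normals(n=16):
    """Build a 16x16 grid of unit normals over the visible hemisphere
    (one normal per 8-bit index)."""
    u, v = np.meshgrid(np.linspace(-1.0, 1.0, n), np.linspace(-1.0, 1.0, n))
    z = np.sqrt(np.clip(1.0 - u**2 - v**2, 0.0, None))
    g = np.stack([u, v, z], axis=-1).reshape(-1, 3)          # 256 x 3
    return g / np.linalg.norm(g, axis=1, keepdims=True)

def rebuild_clut(normals, light_dir):
    """Diffuse-shade each quantized normal; this is the cheap
    per-frame update that animates the lighting."""
    light_dir = light_dir / np.linalg.norm(light_dir)
    return np.clip(normals @ light_dir, 0.0, 1.0)            # 256 intensities

normals = quantized_normals()
indices = np.random.randint(0, 256, size=(64, 64))           # stand-in frame
                                                             # buffer of normals
for frame in range(3):                                       # animate the light
    light = np.array([np.cos(frame), np.sin(frame), 1.0])
    clut = rebuild_clut(normals, light)
    shaded = clut[indices]                                   # image via lookup
```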


I saw the possibility of using the reflection in a sphere to color simulated objects starting in 1982 at MAGI SynthaVision. With the help of Christine Chang, I photographed a Christmas tree ornament in the MAGI parking lot.


Around that time, Ken Perlin was adding blending extensions to the MAGI solids-modeling ray-tracer. I saved the geometry and normals from one of Ken's early tests -- the "blobby dog" (glitches and all). The following three slides show the dog with simple flat shading, a blow-up of the scanned Christmas tree ornament (8 bits per pixel), and the reflection-mapped dog.
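The scanned ornament acts as what is now called a sphere map: a reflection direction indexes directly into the photograph of the ball. A sketch using the standard sphere-map formula, with the viewer looking down -z -- a textbook indexing, not necessarily the one used in the SynthaVision code:

```python
import numpy as np

def sphere_map_uv(r):
    """Map a unit reflection vector to (u, v) in [0, 1]^2 on a
    photographed-sphere image (standard sphere-map formula,
    viewer looking down -z)."""
    rx, ry, rz = r
    m = 2.0 * np.sqrt(rx * rx + ry * ry + (rz + 1.0) ** 2)
    return rx / m + 0.5, ry / m + 0.5

# A reflection aimed straight back at the viewer (+z) lands at the
# center of the ornament image.
u, v = sphere_map_uv((0.0, 0.0, 1.0))
```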


(Note: The geometry glitches in the algorithm were subsequently fixed by Ken.)

Soon after that, Ken Perlin and Josh Pines worked on adding image-based reflection mapping to the SynthaVision shader, using the optimization presented by Ken in a Siggraph paper. They ensured that this reflection table had enough dynamic range to store extremely bright light sources.

The following slides illustrate the first test of reflection mapping with the SynthaVision shader. I added a sunset and some luminous heavenly bodies to the Norelco shaver commercial background. I set up the SynthaVision camera to shoot 6 orthogonal views: front, back, left, right, up, down. I then stitched these 6 images into a single image for mapping. The Norelco shaver head is rendered with the reflection map above. This short animation was shown as part of the MAGI sampler at Siggraph '84.
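The six orthogonal views amount to what is now called a cube map: at shading time, the reflection vector's dominant axis selects one of the six views, and the remaining two components index into it. A sketch with illustrative face names and (u, v) conventions -- the layout of the original MAGI stitching was surely different:

```python
def cube_face_uv(r):
    """Pick which of six orthogonal views a reflection vector r hits,
    and where.  Face names and (u, v) orientations are illustrative,
    not a reconstruction of the SynthaVision layout.
    Returns (face, u, v) with u, v in [0, 1]."""
    x, y, z = r
    ax, ay, az = abs(x), abs(y), abs(z)
    if ax >= ay and ax >= az:                    # left/right views
        face = 'right' if x > 0 else 'left'
        u, v = y / ax, z / ax
    elif ay >= az:                               # up/down views
        face = 'up' if y > 0 else 'down'
        u, v = x / ay, z / ay
    else:                                        # front/back views
        face = 'front' if z > 0 else 'back'
        u, v = x / az, y / az
    return face, 0.5 * (u + 1.0), 0.5 * (v + 1.0)
```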


Here are two frames from the Dow Scrubbing Bubbles TV commercial using reflection mapping on all of the shiny curved surfaces.


Note: Lots of people deserve credit for the Scrubbing Bubbles spot, including Tom Bisogno, Dick Walsh, Larry Elin, Tom Miller, Elyse Vaintraub, and others.

In 1986 I specified and built the animation and video studio for Arcca Animation and Mattel in Toronto, Canada. Reflection mapping was used extensively to produce over 5 minutes per week of animated chrome robots matted into live action.

-- Gene Miller


Paul E. Debevec / debevec@cs.berkeley.edu