Conference and Journal Papers
Temporal Upsampling of Performance Geometry using Photometric Alignment.
Cyrus A. Wilson, Abhijeet Ghosh, Pieter Peers, Jen-Yuan Chiang, Jay Busch, Paul Debevec. To appear in ACM Transactions on Graphics, 2010.
Dynamic shape capture using multi-view photometric stereo.
Daniel Vlasic, Pieter Peers, Ilya Baran, Paul Debevec, Jovan Popović, Szymon Rusinkiewicz, Wojciech Matusik.
ACM Transactions on Graphics, 28(5), December 2009, pp. 1-11.
Creating a Photoreal Digital Actor: The Digital Emily Project.
Oleg Alexander, Mike Rogers, William Lambeth, Matt Chiang, Paul Debevec.
IEEE European Conference on Visual Media Production (CVMP), November 2009. (also to appear in IEEE Computer Graphics and Applications.)
Cosine Lobe Based Relighting from Gradient Illumination Photographs.
Graham Fyffe, Cyrus A. Wilson, Paul Debevec.
IEEE European Conference on Visual Media Production (CVMP), November 2009. (also to appear in Journal of Virtual Reality and Broadcasting.)
Image-based Separation of Diffuse and Specular Reflections using Environmental Structured Illumination.
Bruce Lamond, Pieter Peers, Abhijeet Ghosh, Paul Debevec.
IEEE International Conference on Computational Photography (ICCP), April 2009.
Compressive Light Transport Sensing.
Pieter Peers, Dhruv K. Mahajan, Bruce Lamond, Abhijeet Ghosh, Wojciech Matusik, Ravi Ramamoorthi, Paul Debevec.
ACM Transactions on Graphics, 28(1), January 2009, pp. 3:1-3:18.
Achieving Eye Contact in a One-to-Many 3D Video Teleconferencing System.
Andrew Jones, Magnus Lang, Graham Fyffe, Xueming Yu, Jay Busch, Ian McDowall, Mark Bolas, Paul Debevec.
ACM Transactions on Graphics, 28(3), July 2009, pp. 64:1-64:8.
Estimating Specular Roughness and Anisotropy from Second Order Spherical Gradient Illumination.
Abhijeet Ghosh, Tongbo Chen, Pieter Peers, Cyrus A. Wilson, Paul Debevec.
Computer Graphics Forum, 28(4), June-July 2009, pp. 1161-1170.
Facial Performance Synthesis Using Deformation-Driven Polynomial Displacement Maps.
Wan-Chun Ma, Andrew Jones, Jen-Yuan Chiang, Tim Hawkins, Sune Frederiksen, Pieter Peers, Marko Vukovic, Ming Ouhyoung, Paul Debevec.
ACM Transactions on Graphics, 27(5), December 2008, pp. 121:1-121:10.
Practical Modeling and Acquisition of Layered Facial Reflectance.
Abhijeet Ghosh, Tim Hawkins, Pieter Peers, Sune Frederiksen, Paul Debevec.
ACM Transactions on Graphics, 27(5), December 2008, pp. 139:1-139:10.
Rendering for an Interactive 360° Light Field Display.
Andrew Jones, Ian McDowall, Hideshi Yamada, Mark Bolas, Paul Debevec.
ACM Transactions on Graphics, 26(3), July 2007, pp. 40:1-40:10.
Post-production Facial Performance Relighting using Reflectance Transfer.
Pieter Peers, Naoki Tamura, Wojciech Matusik, Paul Debevec.
ACM Transactions on Graphics, 26(3), July 2007, pp. 52:1-52:10.
Rapid Acquisition of Specular and Diffuse Normal Maps from Polarized Spherical Gradient Illumination.
Wan-Chun Ma, Tim Hawkins, Pieter Peers, Charles-Felix Chabert, Malte Weiss, Paul Debevec.
Rendering Techniques 2007: 18th Eurographics Workshop on Rendering, June 2007, pp. 183-194.
Andrew Jones, Andrew Gardner, Mark Bolas, Ian McDowall, and Paul Debevec. Performance Geometry Capture for Spatially Varying Relighting. 3rd European Conference on Visual Media Production (CVMP 2006), November 2006.
Abstract
We present an image-based technique for relighting dynamic human performances under spatially varying illumination. Our system combines a time-multiplexed LED lighting basis with a geometric model recovered from high-speed structured light patterns. The geometric model is used to scale the intensity of each pixel differently according to its 3D position within the spatially varying illumination volume. This yields a first-order approximation of the correct appearance under the spatially varying illumination. A global illumination process removes indirect illumination from the original lighting basis and simulates spatially varying indirect illumination. We demonstrate this technique for a human performance under several spatially varying lighting environments.
Online Paper (PDF) / Web Site
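The first-order scaling step can be sketched in a few lines of numpy: each basis image is scaled per pixel by the desired illumination arriving at that pixel's recovered 3D position from the corresponding light direction. The array shapes and the illum_volume callable are illustrative assumptions, not the paper's interface.

    import numpy as np

    def relight_first_order(basis_images, positions, light_dirs, illum_volume):
        # basis_images : (L, H, W, 3), one photo per LED basis direction
        # positions    : (H, W, 3) per-pixel 3D points from structured light
        # light_dirs   : (L, 3) unit direction of each basis light
        # illum_volume : callable (point, direction) -> RGB of the desired
        #                spatially varying illumination (hypothetical)
        H, W = positions.shape[:2]
        out = np.zeros((H, W, 3))
        for img, d in zip(basis_images, light_dirs):
            # Scale each pixel by the desired light reaching its 3D position
            # from this basis direction; a first-order approximation, with
            # indirect light handled by the separate global illumination pass.
            scale = np.array([[illum_volume(positions[y, x], d)
                               for x in range(W)] for y in range(H)])
            out += img * scale
        return out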
Andrew Jones, Paul Debevec, Mark Bolas, and Ian McDowall, Concave Surround Optics for Rapid Multiview Imaging, 25th Army Science Conference, Orlando, Florida, November 2006.
Abstract
Many image-based modeling and rendering
techniques involve photographing a scene from an array
of different viewpoints. Usually, this is achieved by
moving the camera or the subject to successive positions,
or by photographing the scene with an array of cameras.
In this work, we present a system of mirrors to simulate
the appearance of camera movement around a scene
while the physical camera remains stationary. The
system is thus amenable to capturing dynamic events,
avoiding the need to construct and calibrate an array of
cameras. We demonstrate the system with a high speed
video of a dynamic scene. We show smooth camera
motion rotating 360 degrees around the scene. We
discuss the optical performance of our system and
compare it with alternative setups.
Online Paper (PDF) / Web Site
Sarah Tariq, Andrew Gardner, Ignacio Llamas, Andrew Jones, Paul Debevec, and Greg Turk. Efficient Estimation of Spatially Varying Subsurface Scattering Parameters, Vision, Modeling, and Visualization (VMV) 2006, Aachen, Germany, November 2006.
Abstract
We present an image-based technique to efficiently acquire spatially varying subsurface reflectance properties of a human face. The estimated properties can be used directly to render faces with spatially varying scattering, or can be used to estimate a robust average across the face. We demonstrate our technique with renderings of people's faces under novel, spatially-varying illumination and provide comparisons with current techniques. Our captured data consists of images of the face from a single viewpoint under two small sets of projected images. The first set, a sequence of phase-shifted periodic stripe patterns, provides a per-pixel profile of how light scatters from adjacent locations. The second set of structured light patterns is used to obtain face geometry. We subtract the minimum of each profile to remove the contribution of interreflected light from the rest of the face, and then match the observed reflectance profiles to scattering properties predicted by a scattering model using a lookup table. From these properties we can generate images of the subsurface reflectance of the face under any incident illumination, including local lighting. The rendered images exhibit realistic subsurface transport, including light bleeding across shadow edges. Our method works more than an order of magnitude faster than current techniques for capturing subsurface scattering information, and makes it possible for the first time to capture these properties over an entire face.
Online Paper (PDF) / Web Site
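The parameter-estimation step is essentially a nearest-neighbor search against a table of model-predicted profiles. A minimal sketch, with illustrative shapes (the paper's scattering model and table construction are richer):

    import numpy as np

    def fit_scattering(profiles, table_profiles, table_params):
        # profiles       : (N, K) observed scattering profile per pixel
        # table_profiles : (M, K) profiles predicted by a scattering model
        # table_params   : (M, P) parameters that generated each table row
        # Subtract each profile's minimum to remove light interreflected
        # from the rest of the face, keeping the local scattering component.
        local = profiles - profiles.min(axis=1, keepdims=True)
        # Nearest match in the lookup table gives the per-pixel parameters.
        d = ((local[:, None, :] - table_profiles[None, :, :]) ** 2).sum(-1)
        return table_params[np.argmin(d, axis=1)]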
Paul Debevec, Virtual Cinematography: Relighting through
Computation, Computer, vol. 39, no. 8, pp. 57-65, August 2006.
Abstract
Recording how scenes transform
incident illumination into radiant light is an active topic in
computational photography. Such techniques are making it possible to
create virtual images of a person or place from new viewpoints and in
any form of illumination.
Online Article (PDF) / IEEE Computer Web Site
Per Einarsson, Charles-Felix Chabert, Andrew Jones, Wan-Chun Ma, Bruce Lamond, Tim Hawkins, Mark Bolas, Sebastian Sylwan, and Paul Debevec. Relighting Human Locomotion with Flowed Reflectance Fields, 17th Eurographics Symposium on Rendering (EGSR), Nicosia, Cyprus, June 2006.
Abstract
We present an image-based approach for capturing the appearance of a walking or running person so they can be
rendered realistically under variable viewpoint and illumination. In our approach, a person walks on a treadmill
at a regular rate as a turntable slowly rotates the person’s direction. As this happens, the person is filmed with
a vertical array of high-speed cameras under a time-multiplexed lighting basis, acquiring a seven-dimensional
dataset of the person under variable time, illumination, and viewing direction in approximately forty seconds. We
process this data into a flowed reflectance field using an optical flow algorithm to correspond pixels in neighboring
camera views and time samples to each other, and we use image compression to reduce the size of this data. We then
use image-based relighting and a hardware-accelerated combination of view morphing and light field rendering to
render the subject under user-specified viewpoint and lighting conditions. To composite the person into a scene, we
use an alpha channel derived from back lighting and a retroreflective treadmill surface and a visual hull process
to render the shadows the person would cast onto the ground. We demonstrate realistic composites of several
subjects into real and virtual environments using our technique.
Online Paper (PDF) / Web Site
Andreas Wenger, Andrew Gardner, Chris Tchou, Jonas Unger, Tim Hawkins, Paul Debevec. Performance Relighting and Reflectance Transformation with Time-Multiplexed Illumination, ACM Transactions on Graphics (Proceedings of SIGGRAPH 2005). 24(3), pp. 756-764, July 2005.
Abstract
We present a technique for capturing an actor's live-action performance in such a way that the lighting and reflectance of the actor can be designed and modified in postproduction. Our approach is to illuminate the subject with a sequence of time-multiplexed basis lighting conditions, and to record these conditions with a high-speed video camera so that many conditions are recorded in the span of the desired output frame interval. We investigate several lighting bases for representing the sphere of incident illumination using a set of discrete LED light sources, and we estimate and compensate for subject motion using optical flow and image warping based on a set of tracking frames inserted into the lighting basis. To composite the illuminated performance into a new background, we include a time-multiplexed matte within the basis. We also show that the acquired data enables time-varying surface normals, albedo, and ambient occlusion to be estimated, which can be used to transform the actor's reflectance to produce both subtle and stylistic effects.
Online Paper (PDF) / Web Site
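The motion-compensation idea admits a compact sketch: optical flow computed between two tracking frames is scaled by each basis frame's temporal offset and used to warp it into alignment, assuming roughly linear motion over the interval. The function names and that linearity assumption are mine, not the paper's exact pipeline:

    import numpy as np
    from scipy.ndimage import map_coordinates

    def warp(image, flow):
        # Backward-warp a grayscale image by an (H, W, 2) flow field (x, y).
        H, W = image.shape
        ys, xs = np.mgrid[0:H, 0:W].astype(float)
        coords = np.stack([ys + flow[..., 1], xs + flow[..., 0]])
        return map_coordinates(image, coords, order=1, mode='nearest')

    def align_basis_frames(frames, tracker_flow, times):
        # frames       : list of (H, W) basis-lit frames between two trackers
        # tracker_flow : (H, W, 2) optical flow from tracker 0 to tracker 1
        # times        : fraction in [0, 1] of each frame within the interval
        # Scale the flow by each frame's offset and warp it back to the
        # first tracking frame so all lighting conditions align in time.
        return [warp(f, -t * tracker_flow) for f, t in zip(frames, times)]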
Tim Hawkins, Per Einarsson, Paul Debevec. Acquisition of time-varying participating media, ACM Transactions on Graphics (Proceedings of SIGGRAPH 2005). 24(3), pp. 812-815, July 2005.
Abstract
We present a technique for capturing time-varying volumetric data of participating media. A laser sheet is swept repeatedly through the volume, and the scattered light is imaged using a high-speed camera. Each sweep of the laser provides a near-simultaneous volume of density values. We demonstrate rendered animations under changing viewpoint and illumination, making use of measured values for the scattering phase function and albedo.
Online Paper (PDF) / Web Site
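Assembling the captured data is nearly a reshape: each sweep of the laser sheet yields one slice per high-speed frame, and consecutive sweeps become consecutive volumes. A minimal sketch assuming a fixed number of slices per sweep:

    import numpy as np

    def volumes_from_sweeps(frames, slices_per_sweep):
        # frames : (T, H, W) high-speed images, one laser-sheet slice each.
        # Each block of slices_per_sweep frames is one sweep, giving a
        # near-simultaneous (slices, H, W) volume of scattered-light values
        # (proportional to density given the measured albedo and phase).
        frames = np.asarray(frames)
        n = (frames.shape[0] // slices_per_sweep) * slices_per_sweep
        return frames[:n].reshape(-1, slices_per_sweep, *frames.shape[1:])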
Tim Hawkins, Per Einarsson, Paul Debevec. A Dual Light Stage, Rendering Techniques 2005: 16th Eurographics Workshop on Rendering. pp. 91-98, June 2005.
Abstract
We present a technique for capturing high-resolution 4D reflectance fields using the reciprocity property of light transport. In our technique we place the object inside a diffuse spherical shell and scan a laser across its surface. For each incident ray, the object scatters a pattern of light onto the inner surface of the sphere, and we photograph the resulting radiance from the sphere's interior using a camera with a fisheye lens. Because of reciprocity, the image of the inside of the sphere corresponds to the reflectance function of the surface point illuminated by the laser, that is, the color that point would appear to a camera along the laser ray when the object is lit from each direction on the surface of the sphere. The measured reflectance functions allow the object to be photorealistically rendered from the laser's viewpoint under arbitrary directional illumination conditions. Since each captured reflectance function is a high-resolution image, our data reproduces sharp specular reflections and self-shadowing more accurately than previous approaches. We demonstrate our technique by scanning objects with a wide range of reflectance properties and show accurate renderings of the objects under novel illumination conditions.
Online Paper (PDF) / Web Site
Paul Debevec, Chris Tchou, Andrew Gardner, Tim Hawkins,
Charis Poullis, Jessi Stumpfel, Andrew Jones, Nathaniel Yun,
Per Einarsson, Therese Lundgren, Marcos Fajardo, and Philippe Martinez. Estimating Surface Reflectance Properties of a Complex Scene under
Captured Natural Illumination, USC ICT Technical Report ICT-TR-06.2004, December 2004.
Abstract
We present a process for estimating spatially-varying surface
reflectance of a complex scene observed under natural illumination
conditions. The process uses a laser-scanned model of the scene’s
geometry, a set of digital images viewing the scene’s surfaces under
a variety of natural illumination conditions, and a set of corresponding
measurements of the scene's incident illumination in each photograph.
The process then employs an iterative inverse global illumination
technique to compute surface colors for the scene which,
when rendered under the recorded illumination conditions, best reproduce
the scene's appearance in the photographs. In our process
we measure BRDFs of representative surfaces in the scene to better
model the non-Lambertian surface reflectance. Our process uses a
novel lighting measurement apparatus to record the full dynamic
range of both sunlit and cloudy natural illumination conditions. We
employ Monte-Carlo global illumination, multiresolution geometry,
and a texture atlas system to perform inverse global illumination
on the scene. The result is a lighting-independent model of the
scene that can be re-illuminated under any form of lighting. We
demonstrate the process on a real-world archaeological site, showing
that the technique can produce novel illumination renderings
consistent with real photographs as well as reflectance properties
that are consistent with ground-truth reflectance measurements.
Online Paper (PDF) / Web Site
Jessi Stumpfel, Andrew Jones, Andreas Wenger, and Paul Debevec. Direct HDR Capture of the Sun and Sky, 3rd International Conference on Virtual Reality, Computer Graphics, Visualization and Interaction in Africa, Stellenbosch (Cape Town), South Africa, November 2004.
Abstract
We present a technique for capturing the extreme dynamic range of
natural illumination environments that include the sun and sky,
which has presented a challenge for traditional high dynamic range
photography processes. We find that through careful selection of
exposure times, aperture, and neutral density filters, this
full range can be covered in seven exposures with a standard
digital camera. We discuss the particular calibration issues such
as lens vignetting, infrared sensitivity, and spectral
transmission of neutral density filters which must be addressed.
We present an adaptive exposure range adjustment technique for
minimizing the number of exposures necessary. We demonstrate our
results by showing time-lapse renderings of a complex scene
illuminated by high-resolution, high dynamic range natural
illumination environments.
Online Paper (PDF) / Web Site
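A quick sanity check of the exposure-series idea: the range covered, in stops, is the log ratio of the longest to the shortest effective exposure, with the neutral density filter extending the short end for the sun. The settings below are illustrative, not the paper's actual table:

    import math

    shutters = [1/8000, 1/1000, 1/125, 1/15, 0.5, 4, 30]  # seconds, 7 shots
    aperture = 16                                         # fixed f-number
    nd = 1e-3                       # ND 3.0 filter transmission (~10 stops)

    def exposure(t, f, nd_factor=1.0):
        # Sensor exposure is proportional to t * transmission / f^2.
        return t * nd_factor / f**2

    stops = math.log2(exposure(shutters[-1], aperture) /
                      exposure(shutters[0], aperture, nd))
    print(f"series spans about {stops:.0f} stops")   # ~28 stops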
Timothy Hawkins, Andreas Wenger, Chris Tchou, Andrew Gardner, Frederik Goransson, and Paul Debevec. Animatable Facial Reflectance Fields, 15th Eurographics Symposium on Rendering, Norrkoping, Sweden, June 2004.
Abstract
We present a technique for creating an animatable image-based appearance model of a human face, able to
capture appearance variation over changing facial expression, head pose, view direction, and lighting condition.
Our capture process makes use of a specialized lighting apparatus designed to rapidly illuminate the subject
sequentially from many different directions in just a few seconds. For each pose, the subject remains still while
six video cameras capture their appearance under each of the directions of lighting. We repeat this process for
approximately 60 different poses, capturing different expressions, visemes, head poses, and eye positions. The
images for each of the poses and camera views are registered to each other semi-automatically with the help of
fiducial markers. The result is a model which can be rendered realistically under any linear blend of the captured
poses and under any desired lighting condition by warping, scaling, and blending data from the original images.
Finally, we show how to drive the model with performance capture data, where the pose is not necessarily a linear
combination of the original captured poses.
Online Paper (PDF) / Web Site
Jessi Stumpfel, Christopher Tchou, Nathan Yun, Philippe Martinez, Timothy Hawkins, Andrew Jones, Brian Emerson, Paul Debevec. Digital Reunification of the Parthenon and its Sculptures, 4th International Symposium on Virtual Reality, Archaeology and Intelligent Cultural Heritage, Brighton, UK, 2003.
Abstract
The location, condition, and number of the Parthenon sculptures
present a considerable challenge to archeologists and researchers
studying this monument. Although the Parthenon proudly stands on the
Athenian Acropolis after nearly 2,500 years, many of its sculptures
have been damaged or lost. Since the end of the 18th century, its
surviving sculptural decorations have been scattered to museums around
the world. We propose a strategy for digitally capturing a large
number of sculptures while minimizing impact on site and working under
time and resource constraints. Our system employs a custom structured
light scanner and adapted techniques for organizing, aligning and
merging the data. In particular this paper details our effort to
digitally record the Parthenon sculpture collection in the Basel
Skulpturhalle museum, which exhibits plaster casts of most of the
known existing pediments, metopes, and frieze. We demonstrate our
results by virtually placing the scanned sculptures on the Parthenon.
Online Paper (PDF) / Web Site
Paul Debevec. Image-Based Techniques for Digitizing Environments and Artifacts, Invited paper for The 4th International Conference on 3-D Digital Imaging and Modeling (3DIM), October 2003.
Abstract This paper presents an overview of techniques for generating
photoreal computer graphics models of real-world
places and objects. Our group's early efforts in modeling
scenes involved the development of Facade, an interactive
photogrammetric modeling system that uses geometric
primitives to model the scene, and projective texture
mapping to produce the scene appearance properties. Subsequent
work has produced techniques to model the incident
illumination within scenes, which we have shown to be
useful for realistically adding computer-generated objects
to image-based models. More recently, our work has focussed
on recovering lighting-independent models of scenes
and objects, capturing how each point on an object reflects
light. Our latest work combines three-dimensional range
scans, digital photographs, and incident illumination measurements
to produce lighting-independent models of complex
objects and environments.
Andrew Gardner, Chris Tchou, Tim Hawkins, Paul Debevec. Linear Light Source Reflectometry, ACM Transactions on Graphics (SIGGRAPH 2003). 22(3), pp. 749-758, 2003.
Abstract
This paper presents a technique for estimating the spatially-varying reflectance properties of a surface based on its appearance during a single pass of a linear light source. By using a linear light rather than a point light source as the illuminant, we are able to reliably observe and estimate the diffuse color, specular color, and specular roughness of each point of the surface. The reflectometry apparatus we use is simple and inexpensive to build, requiring a single direction of motion for the light source and a fixed camera viewpoint. Our model fitting technique first renders a reflectance table of how diffuse and specular reflectance lobes would appear under moving linear light source illumination. Then, for each pixel we compare its series of intensity values to the tabulated reflectance lobes to determine which reflectance model parameters most closely produce the observed reflectance values. Using two passes of the linear light source at different angles, we can also estimate per-pixel surface normals as well as the reflectance parameters. Additionally our system records a per-pixel height map for the object and estimates its per-pixel translucency. We produce real-time renderings of the captured objects using a custom hardware shading algorithm. We apply the technique to a test object exhibiting a variety of materials as well as to an illuminated manuscript with gold lettering. To demonstrate the technique's accuracy, we compare renderings of the captured models to real photographs of the original objects.
Online Paper (PDF) / Web Site
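The fitting stage reduces to a per-pixel table lookup: each pixel's intensity series over the light's pass is matched against series rendered from candidate reflectance parameters. A nearest-match sketch with assumed shapes (the paper additionally recovers normals, height, and translucency):

    import numpy as np

    def fit_reflectance(pixel_traces, table_traces, table_params):
        # pixel_traces : (N, T) intensity of each pixel during the pass
        # table_traces : (M, T) traces rendered for sampled combinations of
        #                diffuse color, specular color, and roughness
        # table_params : (M, P) the parameters behind each table row
        d = ((pixel_traces[:, None, :] - table_traces[None, :, :]) ** 2).sum(-1)
        return table_params[np.argmin(d, axis=1)]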
Jonas Unger, Andreas Wenger, Tim Hawkins, Andrew Gardner, Paul Debevec. Capturing and Rendering with Incident Light Fields, Eurographics Symposium on Rendering: 14th Eurographics Workshop on Rendering. pp. 141-149, June 2003.
Abstract This paper presents a process for capturing spatially and directionally varying illumination from a real-world scene and using this lighting to illuminate computer-generated objects. We use two devices for capturing such illumination. In the first we photograph an array of mirrored spheres in high dynamic range to capture the spatially varying illumination. In the second, we obtain higher resolution data by capturing images with a high dynamic range omnidirectional camera as it traverses across a plane. For both methods we apply the light field technique to extrapolate the incident illumination to a volume. We render computer-generated objects as illuminated by this captured illumination using a custom shader within an existing global illumination rendering system. To demonstrate our technique we capture several spatially-varying lighting environments with spotlights, shadows, and dappled lighting and use them to illuminate synthetic scenes. We also show comparisons to real objects under the same illumination.
Andreas Wenger, Tim Hawkins, Paul Debevec. Optimizing Color Matching in a Lighting Reproduction System for Complex Subject and Illuminant Spectra, Eurographics Symposium on Rendering: 14th Eurographics Workshop on Rendering, pp. 249-259, June 2003.
Abstract
This paper presents a technique for improving color matching results in an LED-based lighting reproduction system for complex light source spectra. In our technique, we use measurements of the spectral response curve of the camera system as well as one or more spectral reflectance measurements of the illuminated object to optimize the color matching. We demonstrate our technique using two LED-based light sources: an off-the-shelf 3-channel RGB LED light source and a custom-built 9-channel multi-spectral LED light source. We use our technique to reproduce complex lighting spectra including both fluorescent and tungsten illumination using a Macbeth color checker chart and a human face as test subjects. We show that by using knowledge of the camera spectral response and/or the spectral reflectance of the subjects, we can significantly improve the accuracy of the color matching using either the 3-channel or the 9-channel light, achieving acceptable matches for the 3-channel source and very close matches for the multi-spectral 9-channel source.
Online Paper (PDF) / Web Site
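At its core the color match is a small linear solve: choose nonnegative LED channel weights so the camera's response to the LED mix, off the subject's reflectance, equals its response to the target illuminant. A hedged numpy sketch; the paper's objective and solver may differ, and a true nonnegative solver (e.g. NNLS) would replace the final clip:

    import numpy as np

    def led_weights(cam_response, led_spectra, target_spectrum, reflectance=None):
        # cam_response    : (3, S) RGB sensitivity at S wavelength samples
        # led_spectra     : (C, S) emission spectrum of each LED channel
        # target_spectrum : (S,) illuminant spectrum to reproduce
        # reflectance     : optional (S,) subject reflectance weighting
        r = np.ones_like(target_spectrum) if reflectance is None else reflectance
        A = cam_response @ (led_spectra * r).T        # camera RGB per channel
        b = cam_response @ (target_spectrum * r)      # target camera RGB
        w, *_ = np.linalg.lstsq(A, b, rcond=None)
        return np.clip(w, 0.0, None)   # drive levels cannot be negative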
Paul Debevec, Andreas Wenger, Chris Tchou, Andrew Gardner, Jamie Waese, and Tim Hawkins. A Lighting Reproduction Approach to Live-Action Compositing, ACM Transactions on Graphics (SIGGRAPH 2002). 21(3), pp. 547-556, 2002.
Abstract We describe a process for compositing a live performance of an actor
into a virtual set wherein the actor is consistently illuminated by
the virtual environment. The Light Stage used in this work is a
two-meter sphere of inward-pointing RGB light emitting diodes focused
on the actor, where each light can be set to an arbitrary color and
intensity to replicate a real-world or virtual lighting environment.
We implement a digital two-camera infrared matting system to composite
the actor into the background plate of the environment without
affecting the visible-spectrum illumination on the actor. The color
response of the system is calibrated to produce correct color
renditions of the actor as illuminated by the environment. We
demonstrate moving-camera composites of actors into real-world
environments and virtual sets such that the actor is properly
illuminated by the environment into which they are composited.
Paul Debevec. A Tutorial on Image-Based Lighting, In IEEE Computer Graphics and Applications, Jan/Feb 2002.
Summary The basic steps in this tutorial show how image-based lighting can illuminate synthetic objects with measurements of real light, making objects appear as if they're actually in a real-world scene.
Online Paper (PDF)
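The tutorial's core recipe, reduced to a sketch: treat texels of a latitude-longitude HDR environment map as directional lights, each weighted by the solid angle it covers. The coarse sampling stride below is an assumption for brevity:

    import numpy as np

    def env_to_lights(env, step=16):
        # env : (H, W, 3) linear radiance, latitude-longitude; row 0 is up.
        # Returns (direction, rgb) pairs usable as directional lights.
        H, W, _ = env.shape
        lights = []
        for y in range(step // 2, H, step):
            theta = np.pi * (y + 0.5) / H            # polar angle from up
            for x in range(step // 2, W, step):
                phi = 2 * np.pi * (x + 0.5) / W      # azimuth
                d = np.array([np.sin(theta) * np.cos(phi),
                              np.cos(theta),
                              np.sin(theta) * np.sin(phi)])
                # sin(theta) corrects lat-long oversampling near the poles.
                dw = np.sin(theta) * (2 * np.pi / W) * (np.pi / H) * step * step
                lights.append((d, env[y, x] * dw))
        return lights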
Tim Hawkins, Jonathan Cohen, and Paul Debevec. A Photometric Approach to
Digitizing Cultural Artifacts, In 2nd International Symposium on Virtual
Reality, Archaeology, and Cultural Heritage (VAST 2001), Glyfada, Greece,
November 2001.
Abstract In this paper we present a photometry-based approach to the digital
documentation of cultural artifacts. Rather than representing an
artifact as a geometric model with spatially varying reflectance
properties, we instead propose directly representing the artifact in
terms of its reflectance field -- the manner in which it
transforms light into images. The principal device employed in our
technique is a computer-controlled lighting apparatus which quickly
illuminates an artifact from an exhaustive set of incident
illumination directions and a set of digital video cameras which
record the artifact's appearance under these forms of illumination.
From this database of recorded images, we compute linear combinations
of the captured images to synthetically illuminate the object under
arbitrary forms of complex incident illumination, correctly capturing
the effects of specular reflection, subsurface scattering,
self-shadowing, mutual illumination, and complex BRDFs often present
in cultural artifacts. We also describe a computer application that
allows users to realistically and interactively relight digitized
artifacts.
Online Paper (PDF) / Web Site
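Because light transport is linear in the incident illumination, relighting from such a database is literally a weighted sum of the captured images. A minimal sketch (the per-light solid-angle normalization is assumed to be folded into env):

    import numpy as np

    def relight(basis_images, light_dirs, env):
        # basis_images : (L, H, W, 3), one photo per incident light direction
        # light_dirs   : (L, 3) unit vector toward each light source
        # env          : callable direction -> RGB radiance of new lighting
        weights = np.array([env(d) for d in light_dirs])      # (L, 3)
        return np.einsum('lhwc,lc->hwc', basis_images, weights)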
Jonathan Cohen, Chris Tchou, Tim Hawkins, and Paul Debevec. Real-time High Dynamic Range Texture Mapping, Eurographics Rendering Workshop 2001, London, England, June 2001.
Abstract This paper presents a technique for representing and displaying high
dynamic-range texture maps (HDRTMs) using current graphics hardware. Dynamic
range in real-world environments often far exceeds the range representable
in 8-bit per-channel texture maps. The increased realism afforded by a high-dynamic
range representation provides improved fidelity and expressiveness for
interactive visualization of image-based models. Our technique allows for real-time
rendering of scenes with arbitrary dynamic range, limited only by available
texture memory.
In our technique, high-dynamic range textures are decomposed into sets of
8-bit textures. These 8-bit textures are dynamically reassembled by the graphics
hardware’s programmable multitexturing system or using multipass techniques
and framebuffer image processing. These operations allow the exposure level of
the texture to be adjusted continuously and arbitrarily at the time of rendering,
correctly accounting for the gamma curve and dynamic range restrictions of the
display device. Further, for any given exposure only two 8-bit textures must be
resident in texture memory simultaneously.
We present implementation details of this technique on various 3D graphics hardware
architectures. We demonstrate several applications, including high-dynamic
range panoramic viewing with simulated auto-exposure, real-time radiance environment
mapping, and simulated Fresnel reflection.
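The decomposition can be sketched in a few lines: each 8-bit texture holds the HDR texture scaled down by a fixed number of stops, and recombination at display time falls back to a coarser band wherever a finer one clipped. The two-band encoding here is illustrative; the paper's hardware encoding and two-texture residency scheme differ:

    import numpy as np

    def split_hdr(hdr, n_bands=2, stops_per_band=8):
        # Band k stores hdr / 2**(k * stops_per_band), clipped and quantized.
        return [np.round(255 * np.clip(hdr / 2.0 ** (k * stops_per_band), 0, 1))
                  .astype(np.uint8) for k in range(n_bands)]

    def display(bands, exposure, stops_per_band=8):
        # Recombine at an arbitrary exposure, the arithmetic the paper maps
        # onto multitexturing or multipass framebuffer operations.
        out = bands[0] / 255.0
        for k in range(1, len(bands)):
            v = bands[k] / 255.0 * 2.0 ** (k * stops_per_band)
            out = np.where(bands[k - 1] == 255, v, out)  # finer band clipped
        return np.clip(out * exposure, 0.0, 1.0)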
Paul E. Debevec. Pursuing Reality with Image-Based Modeling, Rendering, and Lighting, Keynote paper for the Second Workshop on 3D Structure from Multiple Images of Large-scale Environments and
applications to Virtual and Augmented Reality (SMILE2), Dublin, Ireland, June 2000.
Abstract This paper presents techniques and animations developed from 1991
to 2000 that use digital photographs of the real world to create 3D models, virtual
camera moves, and realistic computer animations. In these projects, images are
used to determine the structure, appearance, and lighting conditions of the scenes.
Early work in recovering geometry (and generating novel views) from silhouettes
and stereo correspondence is presented, which motivates Façade, an interactive
photogrammetric modeling system that uses geometric primitives to model the
scene. Subsequent work has been done to recover lighting and reflectance properties
of real scenes, to illuminate synthetic objects with light captured from the
real world, and to directly capture reflectance fields of real-world objects and people.
The projects presented include The Chevette Project (1991), Immersion 94
(1994), Rouen Revisited (1996), The Campanile Movie (1997), Rendering with
Natural Light (1998), Fiat Lux (1999), and the Light Stage (2000).
Online Paper (PDF)
Paul Debevec, Tim Hawkins, Chris Tchou, Haarm-Pieter Duiker, Westley Sarokin, Mark Sagar. Acquiring the Reflectance Field of a Human Face, Proceedings of SIGGRAPH 2000, Computer Graphics Proceedings, Annual Conference Series, pp. 145-156 (July 2000). ACM Press / ACM SIGGRAPH / Addison Wesley Longman. Edited by Kurt Akeley. ISBN 1-58113-208-5.
Abstract We present a method to acquire the reflectance field of a human
face and use these measurements to render the face under arbitrary
changes in lighting and viewpoint. We first acquire images of the
face from a small set of viewpoints under a dense sampling of incident
illumination directions using a light stage. We then construct
a reflectance function image for each observed image pixel
from its values over the space of illumination directions. From the
reflectance functions, we can directly generate images of the face
from the original viewpoints in any form of sampled or computed
illumination. To change the viewpoint, we use a model of skin reflectance
to estimate the appearance of the reflectance functions for
novel viewpoints. We demonstrate the technique with synthetic renderings
of a person's face under novel illumination and viewpoints.
Yizhou Yu, Paul Debevec, Jitendra Malik, and Tim Hawkins. Inverse Global Illumination: Recovering Reflectance Models of Real Scenes
From Photographs, Proceedings of SIGGRAPH 99, Computer Graphics Proceedings, Annual Conference Series, pp. 215-224 (August 1999, Los Angeles, California). Addison Wesley Longman. Edited by Alyn Rockwood. ISBN 0-20148-560-5.
Abstract In this paper we present a method for recovering the reflectance
properties of all surfaces in a real scene from a sparse set of
photographs, taking into account both direct and indirect
illumination.
The result is a lighting-independent model of the
scene's geometry and reflectance properties, which can be
rendered with arbitrary modifications to structure and lighting
via traditional rendering methods. Our technique models
reflectance with a low-parameter reflectance model, and allows
diffuse albedo to vary arbitrarily over surfaces while assuming
that non-diffuse characteristics remain constant across
particular regions. The method's input is a geometric model of
the scene and a set of calibrated high dynamic range photographs
taken with known direct illumination. The algorithm
hierarchically partitions the scene into a polygonal mesh, and
uses image-based rendering to construct estimates of both the
radiance and irradiance of each patch from the photographic data.
The algorithm computes the expected location of specular
highlights, and then analyzes the highlight areas in the images
by running a novel iterative optimization procedure to recover
the diffuse and specular reflectance parameters for each region.
Lastly, these parameters are used in constructing high-resolution
diffuse albedo maps for each surface.
The algorithm has been
applied to both real and synthetic data, including a synthetic
cubical room and a real meeting room. Re-renderings are produced
using a global illumination system under both original and novel
lighting, and with the addition of synthetic objects.
Side-by-side comparisons show success at predicting the
appearance of the scene under novel lighting conditions.
Online Paper (PDF) / Web Site
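The outer loop can be caricatured as a fixed-point iteration: render the scene with the current reflectance estimates, compare with the photographs, and update. The multiplicative albedo update below is a deliberate simplification of the paper's per-region diffuse/specular optimization:

    import numpy as np

    def inverse_gi(observed, render, albedo0, iters=10):
        # observed : (P,) patch radiances measured from the photographs
        # render   : callable albedo -> (P,) predicted radiances under the
        #            known lighting via a forward global illumination solver
        albedo = albedo0.copy()
        for _ in range(iters):
            predicted = render(albedo)
            # Scale each patch's albedo toward agreement; interreflections
            # couple the patches, so convergence is iterative.
            albedo *= observed / np.maximum(predicted, 1e-8)
            albedo = np.clip(albedo, 0.0, 1.0)
        return albedo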
Paul Debevec. Rendering Synthetic Objects Into Real Scenes: Bridging Traditional and
Image-Based Graphics With Global Illumination and High Dynamic Range
Photography, Proceedings of SIGGRAPH 98, Computer Graphics Proceedings, Annual Conference Series, pp. 189-198 (July 1998, Orlando, Florida). Addison Wesley. Edited by Michael Cohen. ISBN 0-89791-999-8.
Abstract We present a method that uses measured scene radiance and global
illumination in order to add new objects to light-based models with
correct lighting. The method uses a high dynamic range image-based model
of the scene, rather than synthetic light sources, to illuminate the new
objects. To compute the illumination, the scene is considered as three
components: the distant scene, the local scene, and the synthetic
objects. The distant scene is assumed to be photometrically unaffected
by the objects, obviating the need for reflectance model information.
The local scene is endowed with estimated reflectance model information
so that it can catch shadows and receive reflected light from the new
objects. Renderings are created with a standard global illumination
method by simulating the interaction of light amongst the three
components. A differential rendering technique allows for good results
to be obtained when only an estimate of the local scene reflectance
properties is known. We apply the general method to the problem of
rendering synthetic objects into real scenes. The light-based model is
constructed from an approximate geometric model of the scene and by
using a light probe to measure the incident illumination at the location
of the synthetic objects. The global illumination solution is then
composited into a photograph of the scene using the differential
rendering technique. We conclude by discussing the relevance of the
technique to recovering surface reflectance properties in uncontrolled
lighting situations. Applications of the method include visual effects,
interior design, and architectural visualization.
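Differential rendering composites the objects' photometric effect, rather than the raw rendering, into the photograph, so errors in the estimated local-scene reflectance largely cancel. A sketch following that formulation (array names are mine):

    import numpy as np

    def differential_composite(photo, with_objects, without_objects, obj_mask):
        # photo           : (H, W, 3) background photograph of the real scene
        # with_objects    : GI rendering of local scene plus synthetic objects
        # without_objects : GI rendering of the local scene alone
        # obj_mask        : (H, W) 1 where synthetic objects cover the frame
        m = obj_mask[..., None].astype(float)
        # Outside the objects, add the change they cause (shadows, bounce);
        # inside, use the rendered objects directly.
        background = photo + (with_objects - without_objects)
        return m * with_objects + (1 - m) * np.clip(background, 0.0, None)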
Paul E. Debevec and Yizhou Yu and George D. Borshukov. Efficient View-Dependent Image-Based Rendering with Projective
Texture-Mapping, Eurographics Rendering Workshop 1998, pp. 105-116 (June 1998, Vienna, Austria). Eurographics. Edited by George Drettakis and Nelson Max. ISBN 3-211-83213-0.
Abstract This paper presents how the image-based rendering technique of
view-dependent texture-mapping (VDTM) can be efficiently implemented
using projective texture mapping, a feature commonly available in
polygon graphics hardware. VDTM is a technique for generating novel
views of a scene with approximately known geometry making maximal use
of a sparse set of original views. The original presentation of VDTM
by Debevec, Taylor, and Malik required significant per-pixel
computation and did not scale well with the number of original
images. In our technique, we precompute for each polygon the set of
original images in which it is visible and create a "view
map" data structure that encodes the best texture map to use for
a regularly sampled set of possible viewing directions. To generate a
novel view, the view map for each polygon is queried to determine a
set of no more than three original images to blend together in order
to render the polygon with projective texture-mapping. Invisible
triangles are shaded using an object-space hole-filling method. We
show how the rendering process can be streamlined for implementation
on standard polygon graphics hardware. We present results of using the
method to render a large-scale model of the Berkeley bell tower and
its surrounding campus environment.
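The view map is a precomputed nearest-view table over regularly sampled directions, queried at render time for at most three views to blend. A minimal sketch with assumed shapes:

    import numpy as np

    def build_view_map(visible_views, view_dirs, sample_dirs):
        # visible_views : indices of original images that see this polygon
        # view_dirs     : (V, 3) unit viewing direction of each original image
        # sample_dirs   : (S, 3) regularly sampled candidate view directions
        vis = np.asarray(visible_views)
        cos = sample_dirs @ view_dirs[vis].T          # (S, |vis|) similarity
        return vis[np.argmax(cos, axis=1)]            # best view per sample

    def views_to_blend(view_map, sample_dirs, novel_dir, k=3):
        # Query the k samples nearest the novel view and blend their views.
        nearest = np.argsort(-(sample_dirs @ novel_dir))[:k]
        return np.unique(view_map[nearest])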
Paul E. Debevec, Camillo J. Taylor, Jitendra Malik, Golan Levin, George Borshukov, Yizhou Yu. Modeling and Rendering of Architecture with Interactive Photogrammetry and View-Dependent Texture Mapping, International Symposium on Circuits and Systems (ISCAS), Monterey, CA, June 1998.
Abstract In this paper we overview an approach to modeling and rendering architectural scenes from a sparse set of photographs. The approach is designed to require no special hardware and to produce real-time renderings on standard graphics hardware. The modeling method is an interactive photogrammetric modeling tool for recovering geometric models of architectural scenes from photographs. The tool is practical as an interactive method because it solves directly for architectural dimensions rather than for a multitude of vertex coordinates or depth measurements. The technique also determines the camera positions of the photographs, allowing the photographs to be taken from arbitrary unrecorded locations. The rendering method creates novel views of the scene based on the recovered model and the original photographs. A view-dependent texture mapping method allows all available image information to be used to produce the most photorealistic renderings. We focus on a number of results and
applications of the method.
Online Paper (PDF)
Paul E. Debevec and Jitendra Malik. Recovering High Dynamic Range Radiance Maps from Photographs, Proceedings of SIGGRAPH 97, Computer Graphics Proceedings, Annual Conference Series, pp. 369-378 (August 1997, Los Angeles, California). Addison Wesley. Edited by Turner Whitted. ISBN 0-89791-896-7.
Abstract We present a method of recovering high dynamic range radiance maps from
photographs taken with conventional imaging equipment. In our method,
multiple photographs of the scene are taken with different amounts of
exposure. Our algorithm uses these differently exposed photographs to
recover the response function of the imaging process, up to factor of
scale, using the assumption of reciprocity. With the known response
function, the algorithm can fuse the multiple photographs into a single,
high dynamic range radiance map whose pixel values are proportional to
the true radiance values in the scene. We demonstrate our method on
images acquired with both photochemical and digital imaging processes.
We discuss how this work is applicable in many areas of computer
graphics involving digitized photographs, including image-based
modeling, image compositing, and image processing. Lastly, we
demonstrate a few applications of having high dynamic range radiance
maps, such as synthesizing realistic motion blur and simulating the
response of the human visual system.
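Given the recovered response curve g, fusing the exposures is a weighted average in log irradiance, with a hat-shaped weight that discounts clipped and noisy pixels. A sketch of the fusion step (recovering g itself solves a small linear system and is omitted):

    import numpy as np

    def radiance_map(images, exposure_times, g):
        # images         : (J, H, W) uint8 exposures of a static scene
        # exposure_times : J shutter times in seconds
        # g              : (256,) recovered log film-irradiance per pixel value
        Z = np.asarray(images)
        w = np.minimum(Z, 255 - Z).astype(float)     # hat weighting function
        num = np.zeros(Z.shape[1:])
        for j, t in enumerate(exposure_times):
            num += w[j] * (g[Z[j]] - np.log(t))
        lnE = num / np.maximum(w.sum(axis=0), 1e-8)
        return np.exp(lnE)                           # relative radiance map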
Paul E. Debevec. Modeling and Rendering Architecture from Photographs, Ph.D. Thesis, University of California at Berkeley, 1996.
Abstract This thesis presents an approach for modeling and rendering existing architectural scenes from sparse sets of still photographs. The modeling approach, which combines both geometry-based and
image-based techniques, has two components. The first component is an interactive photogrammetric modeling method which facilitates the recovery of the basic geometry of the photographed scene. The
photogrammetric modeling approach is effective, convenient, and robust because it exploits the constraints that are characteristic of architectural scenes. The second component is a model-based stereo
algorithm, which recovers how the real scene deviates from the basic model. By making use of the model, this new technique robustly recovers accurate depth from widely-spaced image pairs.
Consequently, this approach can model large architectural environments with far fewer photographs than current image-based modeling approaches. For producing renderings, this thesis presents
view-dependent texture mapping, a method of compositing multiple views of a scene that better simulates geometric detail on basic models.
This approach can be used to recover models for use in either geometry-based or image-based rendering systems. This work presents results that demonstrate the approach's ability to create realistic
renderings of architectural scenes from viewpoints far from the original photographs. This thesis concludes with a presentation of how these modeling and rendering techniques were used to create the
interactive art installation Rouen Revisited, presented at the SIGGRAPH '96 art show.
Online Thesis (PDF) / Web Site
Paul E. Debevec and Camillo J. Taylor and Jitendra Malik. Modeling and Rendering Architecture from Photographs: A Hybrid
Geometry- and Image-Based Approach, Proceedings of SIGGRAPH 96, Computer Graphics Proceedings, Annual Conference Series, pp. 11-20 (August 1996, New Orleans, Louisiana). Addison Wesley. Edited by Holly Rushmeier. ISBN 0-201-94800-1.
Abstract We present a new approach for modeling and rendering existing
architectural scenes from a sparse set of still photographs. Our
modeling approach, which combines both geometry-based and image-based
techniques, has two components. The first component is a photogrammetric
modeling method which facilitates the recovery of the basic geometry of
the photographed scene. Our photogrammetric modeling approach is
effective, convenient, and robust because it exploits the constraints
that are characteristic of architectural scenes. The second component is
a model-based stereo algorithm, which recovers how the real scene
deviates from the basic model. By making use of the model, our stereo
technique robustly recovers accurate depth from widely-spaced image
pairs. Consequently, our approach can model large architectural
environments with far fewer photographs than current image-based
modeling approaches. For producing renderings, we present view-dependent
texture mapping, a method of compositing multiple views of a scene that
better simulates geometric detail on basic models. Our approach can be
used to recover models for use in either geometry-based or image-based
rendering systems. We present results that demonstrate our approach's
ability to create realistic renderings of architectural scenes from
viewpoints far from the original photographs.
Paul E. Debevec. How Far Can a Cantilevered Hightower Project?, Journal of Mathematical Behavior, Robert B. Davis, ed. June 1988.