Modeling and Rendering Architecture from Photographs
A hybrid geometry- and image-based approach
ABSTRACT
We present an approach for creating realistic synthetic views of
existing architectural scenes from a sparse set of still photographs.
Our approach, which combines both geometry-based and image-based
modeling and rendering techniques, has two components. The first
component is an easy-to-use photogrammetric modeling system which
facilitates the recovery of a basic geometric model of the
photographed scene. The modeling system is effective and robust
because it exploits the constraints that are characteristic of
architectural scenes. The second component is a model-based
stereo algorithm, which recovers how the real scene deviates from the
basic model. By making use of the model, our stereo approach can
robustly recover accurate depth from image pairs with large baselines.
Consequently, our approach can model large architectural environments
with far fewer photographs than current image-based modeling
approaches. For producing renderings, we present view-dependent
texture mapping, a method of compositing multiple views of a scene
that better simulates geometric detail on basic models. Our approach
can recover models for use in either
geometry-based or image-based rendering systems. We present results
that demonstrate our approach's ability to create realistic renderings
of architectural scenes from viewpoints far from the original
photographs.
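The intuition behind view-dependent texture mapping can be sketched as follows: each photograph contributes color to a surface point, weighted by how closely its viewing direction agrees with the novel view. This is a minimal illustration using a hypothetical inverse-angle weighting; the paper's actual compositing scheme may weight views differently.

```python
import numpy as np

def vdtm_blend(novel_dir, photo_dirs, photo_colors):
    """Blend per-photograph colors for one surface point, favoring
    photographs whose viewing direction is closest to the novel view.
    The inverse-angle weighting here is an illustrative heuristic."""
    novel_dir = novel_dir / np.linalg.norm(novel_dir)
    weights = []
    for d in photo_dirs:
        d = d / np.linalg.norm(d)
        # Smaller angle between the photo's view and the novel view
        # means that photo sees the surface more like the new viewpoint.
        cos_angle = np.clip(np.dot(novel_dir, d), -1.0, 1.0)
        angle = np.arccos(cos_angle)
        weights.append(1.0 / (angle + 1e-6))  # epsilon avoids division by zero
    w = np.array(weights)
    w /= w.sum()
    return (w[:, None] * np.asarray(photo_colors, dtype=float)).sum(axis=0)
```

When the novel viewpoint coincides with one of the original photographs, that photograph dominates the blend, so the rendering reproduces the input image; in between, the blend transitions smoothly.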
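The advantage of model-based stereo over standard stereo can be seen in a toy one-dimensional disparity calculation: warping the offset image through the approximate model leaves only a small residual disparity to search for, even when the baseline (and hence the raw disparity) is large. The focal length, baseline, and depths below are hypothetical numbers for illustration, not values from the paper.

```python
def disparities(f, b, d_true, d_model):
    """Compare the disparity a standard stereo matcher must recover
    (raw) with the residual disparity remaining after the offset
    image is warped through an approximate model (toy 1-D setup)."""
    # Standard stereo: disparity is proportional to inverse depth.
    raw = f * b / d_true
    # Model-based stereo: only the difference between true and
    # modeled inverse depth remains after the model warp.
    residual = f * b * (1.0 / d_true - 1.0 / d_model)
    return raw, residual
```

With a roughly correct basic model, the residual term is a small fraction of the raw disparity, which is why matching remains robust across wide-baseline image pairs.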
Paul E. Debevec / debevec@cs.berkeley.edu