Rendering Synthetic Objects into Real Scenes: Bridging Traditional and Image-based Graphics with Global Illumination and High Dynamic Range Photography
Paul Debevec
University of California at Berkeley1
ABSTRACT
We present a method that uses measured scene radiance and global illumination in order to add new objects to light-based models with correct lighting. The method uses a high dynamic range image-based model of the scene, rather than synthetic light sources, to illuminate the new objects. To compute the illumination, the scene is considered as three components: the distant scene, the local scene, and the synthetic objects. The distant scene is assumed to be photometrically unaffected by the objects, obviating the need for reflectance model information. The local scene is endowed with estimated reflectance model information so that it can catch shadows and receive reflected light from the new objects. Renderings are created with a standard global illumination method by simulating the interaction of light amongst the three components. A differential rendering technique allows for good results to be obtained when only an estimate of the local scene reflectance properties is known.

We apply the general method to the problem of rendering synthetic objects into real scenes. The light-based model is constructed from an approximate geometric model of the scene and by using a light probe to measure the incident illumination at the location of the synthetic objects. The global illumination solution is then composited into a photograph of the scene using the differential rendering technique. We conclude by discussing the relevance of the technique to recovering surface reflectance properties in uncontrolled lighting situations. Applications of the method include visual effects, interior design, and architectural visualization.

CR Descriptors: I.2.10 [Artificial Intelligence]: Vision and Scene Understanding - Intensity, color, photometry and thresholding; I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism - Color, shading, shadowing, and texture; I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism - Radiosity; I.4.1 [Image Processing]: Digitization - Scanning; I.4.8 [Image Processing]: Scene Analysis - Photometry, Sensor Fusion.
1 Introduction
Rendering synthetic objects into real-world scenes is an important application of computer graphics, particularly in architectural and visual effects domains. Oftentimes, a piece of furniture, a prop, or a digital creature or actor needs to be rendered seamlessly into a real scene. This difficult task requires that the objects be lit consistently with the surfaces in their vicinity, and that the interplay of light between the objects and their surroundings be properly simulated. Specifically, the objects should cast shadows, appear in reflections, and refract, focus, and emit light just as real objects would.
[Figure 1 diagram labels: Distant Scene (light-based, no reflectance model); Local Scene (estimated reflectance model); Synthetic Objects (known reflectance model); arrows indicate light exchanged among the components.]
1 Computer Science Division, University of California at Berkeley, Berkeley, CA 94720-1776. Email: debevec@cs.berkeley.edu. More information and additional results may be found at: http://www.cs.berkeley.edu/debevec/Research
Figure 1: The General Method. In our method for adding synthetic objects into light-based scenes, the scene is partitioned into three components: the distant scene, the local scene, and the synthetic objects. Global illumination is used to simulate the interplay of light amongst all three components, except that light reflected back at the distant scene is ignored. As a result, BRDF information for the distant scene is unnecessary. Estimates of the geometry and material properties of the local scene are used to simulate the interaction of light between it and the synthetic objects.

Currently available techniques for realistically rendering synthetic objects into scenes are labor intensive and not always successful. A common technique is to manually survey the positions of the light sources, and to instantiate a virtual light of equal color and intensity for each real light to illuminate the synthetic objects. Another technique is to photograph a reference object (such as a gray sphere) in the scene where the new object is to be rendered, and use its appearance as a qualitative guide in manually configuring the lighting environment. Lastly, the technique of reflection mapping is useful for mirror-like reflections. These methods typically require considerable hand-refinement and none of them easily simulates the effects of indirect illumination from the environment.
technique that produces perceptually accurate results even when the estimated BRDF is somewhat inaccurate. We demonstrate the general method for the specific case of rendering synthetic objects into particular views of a scene (such as background plates) rather than into a general image-based model. In this method, a light probe is used to acquire a high dynamic range panoramic radiance map near the location where the object will be rendered. A simple example of a light probe is a camera aimed at a mirrored sphere, a configuration commonly used for acquiring environment maps. An approximate geometric model of the scene is created (via surveying, photogrammetry, or 3D scanning) and mapped with radiance values measured with the light probe. The distant scene, local scene, and synthetic objects are rendered with global illumination from the same point of view as the background plate, and the results are composited into the background plate with a differential rendering technique.
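To make the mirrored-ball mapping concrete, the following minimal Python sketch converts a probe pixel into the world direction whose radiance it records. It assumes an orthographic view of an ideal specular sphere with the camera looking down the -z axis; the function name and conventions are illustrative assumptions, not the calibration actually used in the paper.

```python
import numpy as np

def probe_pixel_to_direction(u, v):
    """Map a normalized mirrored-ball pixel (u, v in [-1, 1]) to the unit
    direction from the ball toward the part of the environment seen there.

    Assumes an orthographic view of a perfectly specular sphere with the
    camera looking down the -z axis (an idealized light probe).
    """
    r2 = u * u + v * v
    if r2 > 1.0:
        return None                              # pixel falls outside the ball
    n = np.array([u, v, np.sqrt(1.0 - r2)])      # sphere normal at this pixel
    view = np.array([0.0, 0.0, -1.0])            # direction the camera ray travels
    # Mirror reflection: this is the direction to look up in the environment.
    refl = view - 2.0 * np.dot(view, n) * n
    return refl / np.linalg.norm(refl)
```

Directions near the silhouette of the ball (r2 near 1) map to the region directly behind the probe, which is why such regions are sampled poorly.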
1.1 Overview
The rest of this paper is organized as follows. In the next section we discuss work related to this paper. Section 3 introduces the basic technique of using acquired maps of scene radiance to illuminate synthetic objects. Section 4 presents the general method we will use to render synthetic objects into real scenes. Section 5 describes a practical technique based on this method using a light probe to measure incident illumination. Section 6 presents a differential rendering technique for rendering the local environment with only an approximate description of its reflectance. Section 7 presents a simple method to approximately recover the diffuse reflectance characteristics of the local environment. Section 8 presents results obtained with the technique. Section 9 discusses future directions for this work, and we conclude in Section 10.
Figure 2: An omnidirectional radiance map. This full dynamic range lighting environment was acquired by photographing a mirrored ball balanced on the cap of a pen sitting on a table. The environment contains natural, electric, and indirect light. The three views of this image adjusted to (a) +0 stops, (b) -3.5 stops, and (c) -7.0 stops show that the full dynamic range of the scene has been captured without saturation. As a result, the image usefully records the direction, color, and intensity of all forms of incident light.
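As a rough illustration of how such a radiance map is assembled from multiple exposures, the Python sketch below merges already-linearized images with a simple hat weighting. It omits the recovery of the camera response curve performed by the actual technique [9]; the function name and weighting are illustrative assumptions.

```python
import numpy as np

def merge_exposures(images, exposure_times):
    """Combine differently exposed, linearized images into one radiance map.

    `images` is a list of float arrays in [0, 1] that are assumed to be
    already linear in scene radiance; the full technique of [9] also
    recovers the camera response curve, which this sketch omits.
    """
    radiance_sum = None
    weight_sum = None
    for img, t in zip(images, exposure_times):
        # Trust mid-range pixels most; down-weight clipped or noisy ones.
        w = 1.0 - np.abs(2.0 * img - 1.0)
        est = img / t                            # radiance estimate from this exposure
        if radiance_sum is None:
            radiance_sum = np.zeros_like(img)
            weight_sum = np.zeros_like(img)
        radiance_sum += w * est
        weight_sum += w
    return radiance_sum / np.maximum(weight_sum, 1e-6)
```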
Figure 3: Illuminating synthetic objects with real light (Top row: a,b,c,d,e) With full dynamic range measurements of scene radiance from Fig. 2. (Bottom row: f,g,h,i,j) With low dynamic range information from a single photograph of the ball. The right sides of images (h,i,j) have been brightened by a factor of six to allow qualitative comparison to (c,d,e). The high dynamic range measurements of scene radiance are necessary to produce proper lighting on the objects.
Figure 4: Synthetic objects lit by two different environments. (a) A collection of objects is illuminated by the radiance information in Fig. 2. The objects exhibit appropriate interreflection. (b) The same objects are illuminated by different radiance information obtained in an outdoor urban environment on an overcast day. The radiance map used for the illumination is shown in the upper left of each image. Candle holder model courtesy of Gregory Ward Larson.
Figure 6: Rendering with a Combined Probe Image. The full dynamic range environment map shown at the top was assembled from two light probe images taken ninety degrees apart from each other. As a result, the only visible artifact is a small amount of the probe support visible on the floor. The map is shown at -4.5, 0, and +4.5 stops. The bottom rendering was produced using this lighting information, and exhibits diffuse and specular reflections, shadows from different sources of light, reflections, and caustics.
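The paper does not detail how the two probe images are merged; the following Python sketch shows one plausible blending scheme, with illustrative function and parameter names, in which each direction favors whichever probe samples it farther from both its silhouette region and its own camera reflection.

```python
import numpy as np

def blend_probes(dirs, probe_a, probe_b, axis_a, axis_b):
    """Blend two light-probe environment maps into one.

    dirs            : (N, 3) unit world directions to evaluate
    probe_a/probe_b : callables mapping a unit direction to RGB radiance,
                      from probes taken roughly ninety degrees apart
    axis_a/axis_b   : unit vectors from each probe toward its camera
    The cosine-based weighting is an illustrative choice, not the paper's.
    """
    out = np.zeros((len(dirs), 3))
    for i, d in enumerate(dirs):
        # Weight is small near a probe's camera reflection and silhouette.
        wa = 1.0 - abs(np.dot(d, axis_a))
        wb = 1.0 - abs(np.dot(d, axis_b))
        total = max(wa + wb, 1e-6)
        out[i] = (wa * np.asarray(probe_a(d)) + wb * np.asarray(probe_b(d))) / total
    return out
```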
$$LS_{final} = LS_{obj} - Err_{ls} = LS_b + (LS_{obj} - LS_{noobj})$$
In this form, we see that whenever $LS_{obj}$ and $LS_{noobj}$ are the same (i.e. the addition of the objects to the scene had no effect on the local scene) the final rendering of the local scene is equivalent to $LS_b$ (e.g. the background plate). When $LS_{obj}$ is darker than $LS_{noobj}$, light is subtracted from the background to form shadows, and when $LS_{obj}$ is brighter than $LS_{noobj}$, light is added to the background to produce reflected and focused light.
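A per-pixel sketch of this differential compositing step, written in Python with illustrative array names (all images assumed to be registered, linear radiance images), is:

```python
import numpy as np

def differential_composite(background, ls_obj, ls_noobj, object_mask):
    """Composite synthetic objects into a background plate.

    background  : the photograph (linear radiance)
    ls_obj      : rendering of the local scene plus the synthetic objects
    ls_noobj    : rendering of the local scene without the objects
    object_mask : 1 where the synthetic objects cover the pixel, else 0
    """
    mask = object_mask
    if mask.ndim == background.ndim - 1:
        mask = mask[..., None]               # broadcast a single-channel mask over RGB
    # LS_final = LS_b + (LS_obj - LS_noobj): shadows subtract light,
    # reflected and focused light add to it, relative to the background.
    local = background + (ls_obj - ls_noobj)
    # Where the synthetic objects themselves are visible, copy them
    # directly from the global illumination solution.
    final = np.where(mask > 0, ls_obj, local)
    return np.clip(final, 0.0, None)
```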
[Figure 5: views of the light-based room model; panel labels: Up, East, South.]
1. Assume a reflectance model for the local scene (for example, perfectly diffuse, with no spatial variation)
2. Choose approximate initial values for the parameters of the reflectance model
3. Compute a global illumination solution for the local scene with the current parameters using the observed lighting configuration or configurations.
4. Compare the appearance of the rendered local scene to its actual appearance in one or more views.
5. If the renderings are not consistent, adjust the parameters of the reflectance model and return to step 3.

Efficient methods of performing the adjustment in step 5 that exploit the properties of particular reflectance models are left as future work. However, assuming a diffuse-only model of the local scene in step 1 makes the adjustment in step 5 straightforward. We have:
$$L_{r1}(\theta_r, \phi_r) = \int_0^{2\pi}\!\int_0^{\pi/2} \rho_d \, L_i(\theta_i, \phi_i) \cos\theta_i \sin\theta_i \, d\theta_i \, d\phi_i = \rho_d \int_0^{2\pi}\!\int_0^{\pi/2} L_i(\theta_i, \phi_i) \cos\theta_i \sin\theta_i \, d\theta_i \, d\phi_i$$

$$L_{r2}(\theta_r, \phi_r) = \int_0^{2\pi}\!\int_0^{\pi/2} L_i(\theta_i, \phi_i) \cos\theta_i \sin\theta_i \, d\theta_i \, d\phi_i$$

The updated diffuse reflectance coefficient for each part of the local scene can be computed as:

$$\rho'_d = \frac{L_{r1}(\theta_r, \phi_r)}{L_{r2}(\theta_r, \phi_r)}$$
In this manner, we use the global illumination calculation to render each patch as a perfectly diffuse reflector, and compare the resulting radiance to the observed value. Dividing the two quantities yields the next estimate of the diffuse reflection coefficient $\rho'_d$. If there is no interreflection within the local scene, then the $\rho'_d$ estimates will make the renderings consistent. If there is interreflection, then the algorithm should be iterated until there is convergence. For a trichromatic image, the red, green, and blue diffuse reflectance values are computed independently. The diffuse characteristics of the background material used to produce Fig. 8(c) were estimated with this procedure.
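The iteration can be sketched as follows in Python. Here render_local_scene is an assumed stand-in for a global illumination call (e.g. via RADIANCE), and the update is written as the current albedo scaled by the ratio of observed to rendered radiance, which for a diffuse patch is algebraically equivalent to the $L_{r1}/L_{r2}$ ratio above.

```python
def estimate_diffuse_albedo(observed_radiance, render_local_scene,
                            initial_albedo, iterations=5):
    """Iteratively estimate per-patch diffuse reflectance.

    observed_radiance  : dict of patch -> radiance observed in the photograph
    render_local_scene : assumed callback; given a dict of patch -> albedo,
                         returns patch -> radiance from a global illumination
                         solution under the measured lighting (a stand-in for
                         a renderer such as RADIANCE)
    initial_albedo     : dict of patch -> initial reflectance guess
    """
    albedo = dict(initial_albedo)
    for _ in range(iterations):
        rendered = render_local_scene(albedo)
        updated = {}
        for patch, l_obs in observed_radiance.items():
            l_rend = rendered[patch]
            # Scale the current albedo by observed / rendered radiance;
            # interreflection makes repeated iterations necessary.
            updated[patch] = albedo[patch] * l_obs / max(l_rend, 1e-9)
        albedo = updated
    return albedo
```

For a trichromatic image this update would simply be applied to the red, green, and blue channels independently, as described above.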
8 Compositing Results
Fig. 5 shows a simple light-based model of a room constructed using the panoramic radiance map from Fig. 2. The room model begins at the height of the table and continues to the ceiling; its measurements and the position of the ball within it were measured manually. The table surface is visible on the bottom face. Since the room model is finite in size, the light sources are effectively local rather than infinite. The stretching on the south wall is due to the poor sampling toward the silhouette edge of the ball.

Figs. 4 and 6 show complex arrangements of synthetic objects lit entirely by a variety of light-based models. The selection and composition of the objects in the scene was chosen to exhibit a wide variety of light interactions, including diffuse and specular reflectance, multiple soft shadows, and reflected and focused light. Each rendering was produced using the RADIANCE system with two diffuse light bounces and a relatively high density of ambient sample points.

Fig. 8(a) is a background plate image into which the synthetic objects will be rendered. In 8(b) a calibration grid was placed on the table in order to determine the camera pose relative to the scene and to the mirrored ball, which can also be seen. The poses were determined using the photogrammetric method in [10]. In 8(c), a model of the local scene as well as the synthetic objects is geometrically matched and composited onto the background image. Note that the local scene, while the same average color as the table, is readily distinguishable at its edges and because it lacks the correct variations in albedo.

Fig. 8(d) shows the results of lighting the local scene model with the light-based model of the room, without the objects. This image will be compared to 8(c) in order to determine the effect the synthetic objects have on the local scene. Fig. 8(e) is a mask image in which the white areas indicate the location of the synthetic objects. If the distant or local scene were to occlude the objects, such regions would be dark in this image. Fig. 8(f) shows the difference between the appearance of the local scene rendered with (8(c)) and without (8(d)) the objects. For illustration purposes, the differences in radiance values have been offset so that zero difference is shown in gray. The objects have been masked out using image 8(e). This difference image encodes both the shadowing (dark areas) and the reflected and focused light (light areas) imposed on the local scene by the addition of the synthetic objects.

Fig. 8(g) shows the final result using the differential rendering method described in Section 6. The synthetic objects are copied directly from the global illumination solution 8(c) using the object mask 8(e). The effects the objects have on the local scene are included by adding the difference image 8(f) (without offset) to the background image. The remainder of the scene is copied directly from the background image 8(a). Note that in the mirror ball's reflection, the modeled local scene can be observed without the effects of differential rendering, a limitation of the compositing technique. In this final rendering, the synthetic objects exhibit a consistent appearance with the real objects present in the background image 8(a) in both their diffuse and specular shading, as well as the direction and coloration of their shadows. The somewhat speckled nature of the object reflections seen in the table surface is due to the stochastic sampling used in the global illumination calculation.
[Figure 7 diagram labels: light probe, distant scene, local scene, synthetic objects.]
Figure 7: Using a light probe. (a) The background plate of the scene (some objects on a table) is taken. (b) A light probe (in this case, the camera photographing a steel ball) records the incident radiance near the location where the synthetic objects are to be placed. (c) A simplified light-based model of the distant scene is created as a planar surface for the table and a finite box to represent the rest of the room. The scene is texture-mapped in high dynamic range with the radiance map from the light probe. The objects on the table, which were not explicitly modeled, become projected onto the table. (d) Synthetic objects and a BRDF model of the local scene are added to the light-based model of the distant scene. A global illumination solution of this configuration is computed with light coming from the distant scene and interacting with the local scene and synthetic objects. Light reflected back to the distant scene is ignored. The results of this rendering are composited (possibly with differential rendering) into the background plate from (a) to achieve the final result.
(g) Final result with differential rendering

Figure 8: Compositing synthetic objects into a real scene using a light probe and differential rendering
References
[1] Adelson, E. H., and Bergen, J. R. Computational Models of Visual Processing. MIT Press, Cambridge, Mass., 1991, ch. 1. The Plenoptic Function and the Elements of Early Vision.
[2] Azarmi, M. Optical Effects Cinematography: Its Development, Methods, and Techniques. University Microfilms International, Ann Arbor, Michigan, 1973.
[3] Blinn, J. F. Texture and reflection in computer generated images. Communications of the ACM 19, 10 (October 1976), 542–547.
[4] Chen, E. QuickTime VR - an image-based approach to virtual environment navigation. In SIGGRAPH 95 (1995).
[5] Chen, S. E. Incremental radiosity: An extension of progressive radiosity to an interactive image synthesis system. In SIGGRAPH 90 (1990), pp. 135–144.
[6] Cohen, M. F., Chen, S. E., Wallace, J. R., and Greenberg, D. P. A progressive refinement approach to fast radiosity image generation. In SIGGRAPH 88 (1988), pp. 75–84.
[7] Curless, B., and Levoy, M. A volumetric method for building complex models from range images. In SIGGRAPH 96 (1996), pp. 303–312.
[8] Dana, K. J., Ginneken, B., Nayar, S. K., and Koenderink, J. J. Reflectance and texture of real-world surfaces. In Proc. IEEE Conf. on Comp. Vision and Patt. Recog. (1997), pp. 151–157.
[9] Debevec, P. E., and Malik, J. Recovering high dynamic range radiance maps from photographs. In SIGGRAPH 97 (August 1997), pp. 369–378.
[10] Debevec, P. E., Taylor, C. J., and Malik, J. Modeling and rendering architecture from photographs: A hybrid geometry- and image-based approach. In SIGGRAPH 96 (August 1996), pp. 11–20.
[11] Debevec, P. E., Yu, Y., and Borshukov, G. D. Efficient view-dependent image-based rendering with projective texture-mapping. Tech. Rep. UCB//CSD-98-1003, University of California at Berkeley, 1998.
[12] Drettakis, G., Robert, L., and Bougnoux, S. Interactive common illumination for computer augmented reality. In 8th Eurographics Workshop on Rendering, St. Etienne, France (May 1997), J. Dorsey and P. Slusallek, Eds., pp. 45–57.
[13] Fielding, R. The Technique of Special Effects Cinematography. Hastings House, New York, 1968.
[14] Fournier, A., Gunawan, A., and Romanzin, C. Common illumination between real and computer generated scenes. In Graphics Interface (May 1993), pp. 254–262.
[15] Gershbein, R., Schröder, P., and Hanrahan, P. Textures and radiosity: Controlling emission and reflection with texture maps. In SIGGRAPH 94 (1994).
[16] Goral, C. M., Torrance, K. E., Greenberg, D. P., and Battaile, B. Modeling the interaction of light between diffuse surfaces. In SIGGRAPH 84 (1984), pp. 213–222.
[17] Gortler, S. J., Grzeszczuk, R., Szeliski, R., and Cohen, M. F. The Lumigraph. In SIGGRAPH 96 (1996), pp. 43–54.
[18] Heckbert, P. S. Survey of texture mapping. IEEE Computer Graphics and Applications 6, 11 (November 1986), 56–67.
[19] Kajiya, J. The rendering equation. In SIGGRAPH 86 (1986), pp. 143–150.
[20] Karner, K. F., Mayer, H., and Gervautz, M. An image based measurement system for anisotropic reflection. In EUROGRAPHICS Annual Conference Proceedings (1996).
[21] Koenderink, J. J., and van Doorn, A. J. Illuminance texture due to surface mesostructure. J. Opt. Soc. Am. 13, 3 (1996).
[22] Laveau, S., and Faugeras, O. 3-D scene representation as a collection of images. In Proceedings of 12th International Conference on Pattern Recognition (1994), vol. 1, pp. 689–691.
[23] Levoy, M., and Hanrahan, P. Light field rendering. In SIGGRAPH 96 (1996), pp. 31–42.
[24] McMillan, L., and Bishop, G. Plenoptic modeling: An image-based rendering system. In SIGGRAPH 95 (1995).
[25] Nakamae, E., Harada, K., and Ishizaki, T. A montage method: The overlaying of the computer generated images onto a background photograph. In SIGGRAPH 86 (1986), pp. 207–214.
[26] Porter, T., and Duff, T. Compositing digital images. In SIGGRAPH 84 (July 1984), pp. 253–259.
[27] Sato, Y., Wheeler, M. D., and Ikeuchi, K. Object shape and reflectance modeling from observation. In SIGGRAPH 97 (1997), pp. 379–387.
[28] Smith, T. G. Industrial Light and Magic: The Art of Special Effects. Ballantine Books, New York, 1986.
[29] Szeliski, R. Image mosaicing for tele-reality applications. In IEEE Computer Graphics and Applications (1996).
[30] Turk, G., and Levoy, M. Zippered polygon meshes from range images. In SIGGRAPH 94 (1994), pp. 311–318.
[31] Veach, E., and Guibas, L. J. Metropolis light transport. In SIGGRAPH 97 (August 1997), pp. 65–76.
[32] Ward, G. J. Measuring and modeling anisotropic reflection. In SIGGRAPH 92 (July 1992), pp. 265–272.
[33] Ward, G. J. The RADIANCE lighting simulation and rendering system. In SIGGRAPH 94 (July 1994), pp. 459–472.
[34] Watanabe, M., and Nayar, S. K. Telecentric optics for computational vision. In Proceedings of Image Understanding Workshop (IUW 96) (February 1996).
[35] Chen, Y., and Medioni, G. Object modeling from multiple range images. Image and Vision Computing 10, 3 (April 1992), 145–155.
9 Future work
The method proposed here suggests a number of areas for future work. One area is to investigate methods of automatically recovering more general reflectance models for the local scene geometry, as proposed in Section 7. With such information available, the program might also be able to suggest which areas of the scene should be considered as part of the local scene and which can safely be considered distant, given the position and reflectance characteristics of the desired synthetic objects. Some additional work could be done to allow the global illumination algorithm to compute the illumination solution more efficiently. One technique would be to have an algorithm automatically locate and identify concentrated light sources in the light-based model of the scene. With such knowledge, the algorithm could compute most of the direct illumination in a forward manner, which could dramatically increase the efficiency with which an accurate solution could be calculated. To the same end, use of the method presented in [15] to expedite the solution could be investigated. For the case of compositing moving objects into scenes, greatly increased efficiency could be obtained by adapting incremental radiosity methods to the current framework.
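As a rough illustration of the source-identification idea, a crude thresholding sketch in Python (all names and the threshold are illustrative assumptions, and a practical version would also cluster neighboring samples into discrete sources) might look like:

```python
import numpy as np

def find_concentrated_sources(env_map, dirs, solid_angles, threshold=0.9):
    """Pick out the brightest samples of an environment map as light sources.

    env_map      : (N,) scalar radiance per sample (e.g. luminance)
    dirs         : (N, 3) unit direction per sample
    solid_angles : (N,) solid angle subtended by each sample
    threshold    : fraction of the peak radiance above which a sample is
                   treated as part of a concentrated source (illustrative)
    Returns the indices of the source samples and their total power.
    """
    peak = env_map.max()
    src = np.nonzero(env_map >= threshold * peak)[0]
    power = float(np.sum(env_map[src] * solid_angles[src]))
    return src, power
```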
10 Conclusion
We have presented a general framework for adding new objects to light-based models with correct illumination. The method is based on using high dynamic range images of real scene radiance to synthetically illuminate new objects with arbitrary reflectance characteristics. We leverage this technique in a general method to simulate the interplay of light between synthetic objects and the light-based environment, including shadows, reflections, and caustics. The method can be implemented with standard global illumination techniques.

For the particular case of rendering synthetic objects into real scenes (rather than general light-based models), we have presented a practical instance of the method that uses a light probe to record incident illumination in the vicinity of the synthetic objects. In addition, we have described a differential rendering technique that can convincingly render the interplay of light between objects and the local scene when only approximate reflectance information for the local scene is available. Lastly, we presented an iterative approach for determining reflectance characteristics of the local scene based on measured geometry and observed radiance in uncontrolled lighting conditions. It is our hope that the techniques presented here will be useful in practice as well as comprise a useful framework for combining material-based and light-based graphics.
Acknowledgments
The author wishes to thank Chris Bregler, David Forsyth, Jianbo Shi, Charles Ying, Steve Chenney, and Andrean Kalemis for the various forms of help and advice they provided. Special gratitude is also due to Jitendra Malik for helping make this work possible. Discussions with Michael Naimark and Steve Saunders helped motivate this work. Tim Hawkins provided extensive assistance on improving and revising this paper and provided invaluable assistance with image acquisition. Gregory Ward Larson deserves great thanks for the RADIANCE lighting simulation system and his invaluable assistance and advice in using RADIANCE in this research, for assisting with reectance measurements, and for very helpful comments and suggestions on the paper. This research was supported by a Multidisciplinary University Research Initiative on