3D Rendering - Techniques and Challenges
Abstract: Computer-generated images and animations are becoming increasingly common. They are used in many different contexts, such as movies, mobile devices, medical visualization, architectural visualization and CAD. Advanced ways of describing surface and light source properties are important to ensure that artists are able to create realistic and stylish-looking images. Even when advanced rendering algorithms such as ray tracing are used, the time required for shading may account for a large part of the image creation time. Therefore both performance and flexibility are important in a rendering system. This paper gives a comparative study of various 3D rendering techniques and their challenges in a complete and systematic manner.
photographed, an interpolation scheme may be applied. Rendering using images as a modeling primitive is called image-based rendering.
Computer graphics researchers have recently turned to image-based rendering for the following reasons:
• Close to photo-realism.
• Rendering time is decoupled from scene complexity.
• Images are used as input.
• Exploits coherence.
• Pre-calculation of scene data/images.
Instead of constructing a scene with millions of polygons, in image-based rendering the scene is represented by a collection of photographs along with a greatly simplified geometric model. This simple representation allows traditional light transport simulations to be replaced with basic image-processing routines that combine multiple images to produce never-before-seen images from new vantage points.
Many IBR representations have been proposed in the literature. They basically fall into three categories [9]:
• Rendering with no geometry
• Rendering with implicit geometry
• Rendering with explicit geometry
3.1 Rendering with no geometry
We start with representative techniques for rendering with unknown scene geometry. These techniques typically rely on many input images and on the characterization of the 7D plenoptic function [10]. Common approaches under this class are
• Light field [11]
• Lumigraph [12]
• Concentric mosaics [13]
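For reference, the parameterizations behind these representations can be written down compactly; the notation below follows the standard formulations in [10], [11] and [12], and is added here only as a reminder rather than something stated explicitly in the text.

```latex
% 7D plenoptic function of Adelson and Bergen [10]: radiance seen from
% viewpoint (V_x, V_y, V_z), in direction (theta, phi), at wavelength
% lambda and time t.
\[
  P = P(\theta, \phi, \lambda, t, V_x, V_y, V_z)
\]
% For a static scene at a fixed wavelength, with radiance constant along
% a ray in free space, this reduces to the 4D light field / Lumigraph,
% usually parameterized by a ray's intersections with two parallel
% planes [11][12]:
\[
  L = L(u, v, s, t)
\]
```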
The light field is the radiance density function describing the flow of energy along all rays in 3D space. Since the description of a ray's position and orientation requires four parameters (e.g., two-dimensional positional information and two-dimensional angular information), the radiance is a 4D function. An image, on the other hand, is only two-dimensional, and light field imagery must therefore be captured and represented in 2D form. A variety of techniques have been developed to transform and capture the 4D radiance in a manner compatible with 2D [11] [12].
In Light Field Rendering [11], light fields are created from large arrays of both rendered and digitized images; the latter are acquired using a video camera mounted on a computer-controlled gantry. Once a light field has been created, new views may be constructed in real time by extracting 2D slices from the 4D light field of a scene in appropriate directions. The Lumigraph [12] is similar to light field rendering [11]; in addition to the features of light field rendering, it also allows us to include any geometric knowledge we may capture to improve rendering performance. Unlike the light field and Lumigraph, where cameras are placed on a two-dimensional grid, the 3D Concentric Mosaics [13] representation reduces the amount of data by capturing a sequence of images along a circular path.
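To make the slicing operation concrete, the sketch below resamples one output image from a regularly sampled two-plane light field. It is only an illustration under assumed conventions (array layout L[u, v, s, t, color], precomputed per-pixel plane intersections, bilinear weighting on the camera plane only), not the exact resampling scheme of [11].

```python
import numpy as np

def render_slice(lightfield, uv, st):
    """Resample one output image from a 4D light field L[u, v, s, t, color].

    lightfield : float array of shape (U, V, S, T, 3), the sampled radiance
    uv         : float array (H, W, 2), each output ray's intersection with
                 the camera (u, v) plane, in continuous grid units
    st         : float array (H, W, 2), the ray's intersection with the
                 focal (s, t) plane, in continuous grid units

    Sketch only: nearest-neighbour lookup on the (s, t) plane and bilinear
    interpolation over the four nearest cameras on the (u, v) plane.
    """
    U, V, S, T, _ = lightfield.shape
    u = np.clip(uv[..., 0], 0, U - 1)
    v = np.clip(uv[..., 1], 0, V - 1)
    s = np.clip(np.round(st[..., 0]).astype(int), 0, S - 1)
    t = np.clip(np.round(st[..., 1]).astype(int), 0, T - 1)

    u0, v0 = np.floor(u).astype(int), np.floor(v).astype(int)
    u1, v1 = np.minimum(u0 + 1, U - 1), np.minimum(v0 + 1, V - 1)
    fu, fv = (u - u0)[..., None], (v - v0)[..., None]

    # Blend the radiance seen from the four surrounding camera positions.
    c00 = lightfield[u0, v0, s, t]
    c10 = lightfield[u1, v0, s, t]
    c01 = lightfield[u0, v1, s, t]
    c11 = lightfield[u1, v1, s, t]
    return ((1 - fu) * (1 - fv) * c00 + fu * (1 - fv) * c10
            + (1 - fu) * fv * c01 + fu * fv * c11)
```

A full implementation would interpolate in all four dimensions (quadralinear interpolation) and, in the Lumigraph, would additionally correct the (s, t) lookup using the approximate scene geometry.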
Challenges: Because such rendering techniques do not rely on any geometric impostors, they tend to rely on oversampling to counter undesirable aliasing effects in the output display. Oversampling means more intensive data acquisition, more storage, and higher redundancy.
3.2 Rendering with implicit geometry
These rendering techniques rely on positional correspondences (typically across a small number of images) to render new views. This class is termed implicit because the geometry is not directly available. Common approaches under this class are
• View Interpolation [14],
• View Morphing [15],
• Joint View Interpolation [16].
View interpolation [14] uses optical flow (i.e., relative transforms between cameras) to directly generate intermediate views. The problem with this method is that the intermediate view may not necessarily be geometrically correct. View morphing [15] is a specialized version of view interpolation, except that the interpolated views are always geometrically correct; the geometric correctness is ensured by the linear camera motion. Computer vision techniques are usually used to generate such correspondences. M. Lhuillier et al. proposed a method [16] that automatically interpolates between two images and tackles the two most difficult problems of morphing caused by the lack of depth information: pixel matching and visibility handling.
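The flow-based idea can be sketched as follows; the splatting scheme, the reuse of a single flow field for both directions, and the hole handling are simplifying assumptions made here for illustration, and the visibility handling just mentioned is exactly what this naive version gets wrong.

```python
import numpy as np

def interpolate_view(img0, img1, flow01, alpha=0.5):
    """Synthesize an intermediate view between img0 and img1 (sketch only).

    img0, img1 : float arrays (H, W, 3)
    flow01     : float array (H, W, 2), per-pixel displacement (dx, dy)
                 taking pixels of view 0 onto view 1
    alpha      : position of the virtual camera; 0 gives view 0, 1 gives view 1
    """
    H, W, _ = img0.shape
    acc = np.zeros_like(img0)
    weight = np.zeros((H, W, 1))
    ys, xs = np.mgrid[0:H, 0:W]

    # Splat view 0 forward by alpha * flow and view 1 backward by
    # (1 - alpha) * flow.  Reusing -flow01 for view 1 assumes roughly
    # symmetric motion, which a real system would not do.
    for img, frac, sign in ((img0, alpha, 1.0), (img1, 1.0 - alpha, -1.0)):
        xt = np.round(xs + sign * frac * flow01[..., 0]).astype(int)
        yt = np.round(ys + sign * frac * flow01[..., 1]).astype(int)
        ok = (xt >= 0) & (xt < W) & (yt >= 0) & (yt < H)
        np.add.at(acc, (yt[ok], xt[ok]), img[ys[ok], xs[ok]])
        np.add.at(weight, (yt[ok], xt[ok]), 1.0)

    # Pixels that received no contribution stay black (the "hole" problem).
    return acc / np.maximum(weight, 1.0)
```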
Challenges: Representations that rely on implicit geometry require accurate image registration for high-quality view synthesis.
3.3 Rendering with explicit geometry
Representations that do not rely on geometry typically require a lot of images for rendering, and representations that rely on implicit geometry require accurate image registration for high-quality view synthesis. IBR representations that use explicit geometry generally have source descriptions. Such descriptions can be the scene geometry, texture maps, the surface reflection model, etc.
3.3.1 Scene Geometry as Depth Maps
These approaches use depth maps as the scene representation. Depth maps indicate the per-pixel depth
hardware is not difficult to develop and can dramatically increase the rendering speed.
No matter how much storage and memory increase in the future, sampling and compression will always be useful to keep the IBR data at a manageable size. The work on sampling and compression, however, has only just started. Many problems remain unsolved, such as the choice of sampling rate when a certain source description is available. A high compression ratio in IBR seems to rely heavily on how well the images can be predicted, which in turn depends, for example, on how well a certain source description can be reconstructed. Joint work between the signal processing community and the computer vision community is highly expected in this regard.
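As a toy illustration of the point about predictability, the sketch below codes a sequence of IBR views (for example, images captured along a concentric-mosaic path) by predicting each view from the previously reconstructed one and keeping only a quantized residual; the uniform quantizer and the simple previous-view predictor are assumptions made purely for illustration, not a proposed codec.

```python
import numpy as np

def residual_code(views, step=8.0):
    """Toy predictive coding of a sequence of IBR views.

    views : float array (N, H, W, 3), e.g. images along a capture path
    step  : quantization step applied to the prediction residuals

    Each view is predicted from the previously reconstructed view, so only
    the quantized residual would need to be stored.  Returns the
    reconstructed views and the residuals; the smaller the residuals, the
    further the data could be compressed.
    """
    recon = np.empty_like(views)
    residuals = np.empty_like(views)
    prediction = np.zeros_like(views[0])              # first view predicted as black
    for i, view in enumerate(views):
        r = np.round((view - prediction) / step) * step   # quantized residual
        residuals[i] = r
        recon[i] = prediction + r
        prediction = recon[i]                         # closed-loop prediction
    return recon, residuals
```

The better a view can be predicted from its neighbours (or from a reconstructed source description), the smaller and more compressible these residuals become.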
6. REFERENCES
[1] H. Gouraud, "Continuous Shading of Curved Surfaces", IEEE Transactions on Computers, C-20(6):623-629, June 1971.
[2] B. T. Phong, "Illumination for Computer Generated Pictures", Communications of the ACM, 18(6):311-317, June 1975.
[3] Chandan Singh and Ekta Walia, "Shading by Fast Bi-Quadratic Normal Vector Interpolation", ICGST International Journal on Graphics, Vision and Image Processing, Vol. 5, Issue 9, pp. 49-54, 2005.
[4] Chandan Singh and Ekta Walia, "Fast Hybrid Shading: An Application of Finite Element Methods in 3D Rendering", International Journal of Image and Graphics, Vol. 5, No. 4, pp. 789-810, 2005.
[5] Ekta Walia and Chandan Singh, "An Analysis of Linear and Non-Linear Interpolation Techniques for Three-Dimensional Rendering", IEEE Proceedings of Geometric Modeling and Imaging - New Trends (GMAI'06), 2006.
[6] Cohen, Greenberg, Immel and Brock, "An efficient radiosity approach for realistic image synthesis", IEEE Computer Graphics and Applications, 6, pp. 23-35, 1986.
[7] T. Whitted, "An improved illumination model for shaded display", Communications of the ACM, 23(6):343-349, June 1980.
[8] P. Dutre, P. Bekaert and K. Bala, "Advanced Global Illumination", AK Peters, Natick, MA, 2003.
[9] H. Y. Shum, S. B. Kang and S. C. Chan, "Survey of image-based representations and compression techniques", IEEE Transactions on Circuits and Systems for Video Technology, Vol. 13, No. 11, pp. 1020-1037, November 2003.
[10] E. H. Adelson and J. R. Bergen, "The plenoptic function and the elements of early vision", in Computational Models of Visual Processing, edited by Michael Landy and J. Anthony Movshon, The MIT Press, Cambridge, Mass., Chapter 1, 1991.
[11] M. Levoy and P. Hanrahan, "Light field rendering", Computer Graphics (SIGGRAPH'96), pp. 31-42, August 1996.
[12] S. J. Gortler, R. Grzeszczuk, R. Szeliski and M. F. Cohen, "The Lumigraph", Computer Graphics (SIGGRAPH'96), pp. 43-54, August 1996.
[13] H. Y. Shum and L. W. He, "Rendering with concentric mosaics", Computer Graphics (SIGGRAPH'99), pp. 299-306, August 1999.
[14] S. E. Chen and L. Williams, "View interpolation for image synthesis", Computer Graphics (SIGGRAPH'93), pp. 279-288, August 1993.
[15] S. M. Seitz and C. R. Dyer, "View morphing", Computer Graphics (SIGGRAPH'96), pp. 21-30, August 1996.
[16] M. Lhuillier and L. Quan, "Image interpolation by joint view triangulation", IEEE Conference on Computer Vision and Pattern Recognition, Vol. 2, pp. 139-145, Fort Collins, CO, June 1999.
[17] L. McMillan, "An Image-Based Approach to Three-Dimensional Computer Graphics", Ph.D. Thesis, Department of Computer Science, University of North Carolina at Chapel Hill, 1997.
[18] M. Oliveira and G. Bishop, "Relief textures", Technical Report TR99-015, UNC Computer Science, March 1999.
[19] J. Shade, S. Gortler, L. W. He and R. Szeliski, "Layered depth images", Computer Graphics (SIGGRAPH'98), pp. 231-242, August 1998.
[20] C. Chang, G. Bishop and A. Lastra, "LDI tree: A hierarchical representation for image-based rendering", Computer Graphics (SIGGRAPH'99), pp. 291-298, August 1999.
[21] C. Buehler, M. Bosse, L. McMillan, S. Gortler and M. Cohen, "Unstructured Lumigraph Rendering", Computer Graphics (SIGGRAPH'01), pp. 425-432, August 2001.
[22] S. Vedula, S. Baker and T. Kanade, "Spatio-Temporal View Interpolation", Proc. of the 13th ACM Eurographics Workshop on Rendering, June 2002.
[23] S. Vedula, S. Baker and T. Kanade, "Image-based spatio-temporal modeling and view interpolation of dynamic events", ACM Transactions on Graphics, 24(2):240-261, April 2005.