


NCCI 2010 -National Conference on Computational Instrumentation
CSIO Chandigarh, INDIA, 19-20 March 2010

3D RENDERING - TECHNIQUES AND CHALLENGES

Vishal Verma, Ekta Walia*


M L N College, Yamuna Nagar
*Maharishi Markandeshwar University, Mullana
me_vishaal@hotmail.com

Abstract: Computer generated images and animations are becoming more and more common. They are used in many
different contexts such as movies, mobiles, medical visualization, architectural visualization and CAD. Advanced
ways of describing surface and light source properties are important to ensure that artists are able to create realistic
and stylish looking images. Even when advanced rendering algorithms such as ray tracing are used, the time required
for shading may contribute a large part of the image creation time. Therefore both performance and flexibility are
important in a rendering system. This paper gives a comparative study of various 3D rendering techniques and their
challenges in a complete and systematic manner.

Keywords: 3D Rendering; Geometry Based Rendering; Image Based Rendering.

1. INTRODUCTION

In the real world, light sources emit photons that normally travel in straight lines until they interact with a surface or a volume. When a photon encounters a surface, it may be absorbed, reflected, or transmitted. Some of these photons may hit the retina of an observer, where they are converted into a signal that is then processed by the brain, thus forming an image. Similarly, photons may be caught by the sensor of a camera. In either case, the image is a 2D representation of the environment.

The formation of an image as a result of photons interacting with a 3D environment may be simulated on the computer. The environment is then replaced by a 3D geometric model, and the interaction of light with this model is simulated with one of a large number of available algorithms. The process of image synthesis by simulating light behavior is called rendering.

2. GEOMETRY BASED RENDERING ALGORITHMS

In geometry based rendering, the illumination of a scene has to be simulated by applying a shading model. As hardware systems provided more and more computing power, these models became more sophisticated. Gouraud shading [1] is a very simple technique that linearly interpolates color intensities calculated at the vertices of a rendered polygon across the interior of the polygon. Phong introduced a more accurate model [2] that is able to simulate specular highlights. He also proposed to interpolate normals instead of intensities across rendered polygons, thus enabling more accurate evaluation of the actual shading model. Many fast methods [3] [4] have also been proposed that approximate the quality of Phong shading. All of these models are local in the sense that they fail to model global illumination effects such as reflection. A comparative study of local illumination methods in terms of speed and visual quality is done by Walia and Singh [5].

There is a second class of illumination models that can be applied to polygonal scenes, the so-called global illumination models. Unlike the local methods, these methods are able to simulate the inter-reflections between surfaces. Diffuse inter-reflections can be simulated by the radiosity method [6], and specular reflections are handled by recursive ray-tracing techniques [7]. Many more advanced global illumination models [8] are also available. However, they are computationally too complex to be used for real-time image synthesis on available hardware.

The major problems with Geometry Based Rendering are:
• No guarantee of the correctness of the models.
• A lot of computation time is needed.
• Rendering algorithms are complex and therefore call for special hardware if interactive speeds are needed.
• Even if special hardware is used, the performance of the system is hard to measure, since the rendering time is highly dependent on the scene complexity.
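The difference between the two local models can be made concrete in a few lines of code. The following Python sketch is purely illustrative (the function names and coefficient values are our own, not taken from [1] or [2]): it evaluates the Phong reflection model at a point, then contrasts Gouraud-style interpolation of vertex intensities with Phong-style interpolation of normals along one polygon edge.

```python
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def phong_intensity(normal, light, view, ka=0.1, kd=0.7, ks=0.2, shininess=32):
    # Phong reflection model [2]: ambient + diffuse + specular terms
    n, l, v = normalize(normal), normalize(light), normalize(view)
    ndotl = max(0.0, dot(n, l))
    # mirror reflection of the light direction: r = 2(n.l)n - l
    r = tuple(2 * ndotl * nc - lc for nc, lc in zip(n, l))
    return ka + kd * ndotl + ks * max(0.0, dot(r, v)) ** shininess

def lerp(a, b, t):
    return tuple(x + t * (y - x) for x, y in zip(a, b))

n0, n1 = (0.0, 0.0, 1.0), (1.0, 0.0, 1.0)    # vertex normals of one edge
light, view = (0.2, 0.3, 1.0), (0.0, 0.0, 1.0)
t = 0.5                                       # midpoint of the edge

# Gouraud [1]: shade at the vertices, interpolate the resulting intensities
i0, i1 = phong_intensity(n0, light, view), phong_intensity(n1, light, view)
gouraud = (1 - t) * i0 + t * i1

# Phong shading: interpolate the normal, then re-evaluate the model per pixel
phong = phong_intensity(lerp(n0, n1, t), light, view)
```

Interpolating intensities can wash out a specular highlight that falls between vertices; interpolating normals re-evaluates the model at every pixel and preserves it, at correspondingly higher cost.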


3. IMAGE BASED RENDERING ALGORITHMS

Traditionally, a description of the 3D scene being rendered is provided by a detailed and complex model of the scene. To avoid the expense of modeling a complicated scene, it is sometimes more convenient to photograph a scene from different viewpoints. To create images for novel viewpoints that were not photographed, an interpolation scheme may be applied. Rendering using images as a modeling primitive is called image-based rendering.

Computer graphics researchers have recently turned to image-based rendering for the following reasons:
• Close to photorealism.
• Rendering time is decoupled from scene complexity.
• Images are used as input.
• Exploits coherence.
• Pre-calculation of scene data/images.

Instead of constructing a scene with millions of polygons, in Image Based Rendering the scene is represented by a collection of photographs along with a greatly simplified geometric model. This simple representation allows traditional light transport simulations to be replaced with basic image-processing routines that combine multiple images together to produce never-before-seen images from new vantage points.

Many IBR representations have been invented in the literature. They basically fall into three categories [9]:
• Rendering with no geometry
• Rendering with implicit geometry
• Rendering with explicit geometry

3.1 Rendering with no geometry
We start with representative techniques for rendering with unknown scene geometry. These techniques typically rely on many input images and also on the characterization of the 7D plenoptic function [10]. Common approaches under this class are
• Light field [11]
• Lumigraph [12]
• Concentric mosaics [13]

The light field is the radiance density function describing the flow of energy along all rays in 3D space. Since the description of a ray's position and orientation requires four parameters (e.g., two-dimensional positional information and two-dimensional angular information), the radiance is a 4D function. An image, on the other hand, is only two-dimensional, and light field imagery must therefore be captured and represented in 2D form. A variety of techniques have been developed to transform and capture the 4D radiance in a manner compatible with 2D [11] [12].

In Light Field Rendering [11], the light fields are created from large arrays of both rendered and digitized images. The latter are acquired using a video camera mounted on a computer-controlled gantry. Once a light field has been created, new views may be constructed in real time by extracting 2D slices from the 4D light field of a scene in appropriate directions. The Lumigraph [12] is similar to light field rendering [11]. In addition to the features of light field rendering, it also allows us to include any geometric knowledge we may capture to improve rendering performance. Unlike the light field and Lumigraph, where cameras are placed on a two-dimensional grid, the 3D Concentric Mosaics [13] representation reduces the amount of data by capturing a sequence of images along a circular path.

Challenges: Because such rendering techniques do not rely on any geometric impostors, they have a tendency to rely on oversampling to counter undesirable aliasing effects in the output display. Oversampling means more intensive data acquisition, more storage, and higher redundancy.

3.2 Rendering with implicit geometry
These techniques rely on positional correspondences (typically across a small number of images) to render new views. This class has the term implicit to express the fact that geometry is not directly available. Common approaches under this class are
• View Interpolation [14],
• View Morphing [15],
• Joint View Interpolation [16].

View interpolation [14] uses optical flow (i.e. relative transforms between cameras) to directly generate intermediate views. But the problem with this method is that the intermediate view may not necessarily be geometrically correct. View morphing [15] is a specialized version of view interpolation, except that the interpolated views are always geometrically correct. The geometric correctness is ensured because of the linear camera motion. Computer vision techniques are usually used to generate such correspondences. M. Lhuillier et al. proposed a new method [16] that automatically interpolates two images, tackling the two most difficult problems of morphing caused by the lack of depth information: pixel matching and visibility handling.

Challenges: Representations that rely on implicit geometry require accurate image registration for high-quality view synthesis.

3.3 Rendering with explicit geometry
Representations that do not rely on geometry typically require a lot of images for rendering, and representations that rely on implicit geometry require accurate image registration for high-quality view synthesis. IBR representations that use explicit geometry generally have source descriptions. Such descriptions can be the scene geometry, the texture maps, the surface reflection model, etc.
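To make the two-plane parameterization of Section 3.1 concrete, the following toy Python sketch (the grid size, radiance function, and nearest-neighbor lookup are our own illustrative choices, not the scheme of [11]) stores a discretized 4D light field and answers ray queries by table lookup rather than by simulating light transport:

```python
def build_light_field(radiance, n=8):
    # Tabulate radiance for every discrete ray through (u, v) on the
    # camera plane and (s, t) on the focal plane -- the 4D two-plane set-up.
    return {(u, v, s, t): radiance(u, v, s, t)
            for u in range(n) for v in range(n)
            for s in range(n) for t in range(n)}

def sample_ray(field, u, v, s, t, n=8):
    # Nearest-neighbor lookup; real systems use quadrilinear interpolation
    # over the 16 surrounding samples.
    key = tuple(min(n - 1, max(0, round(x))) for x in (u, v, s, t))
    return field[key]

# Toy "scene": radiance varies smoothly with the ray coordinates.
lf = build_light_field(lambda u, v, s, t: (u + v + s + t) / 28.0)
value = sample_ray(lf, 2.3, 4.6, 1.1, 7.0)   # one ray of a novel view
```

Rendering a novel view is then just one such lookup per output pixel, which is why rendering time is decoupled from scene complexity.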


3.3.1 Scene Geometry as Depth Maps
These approaches use depth maps as the scene representation. Depth maps indicate the per-pixel depth values of the reference views. Such a depth map is easily available for synthetic scenes, and can be obtained for real scenes via a range finder. Common approaches under this class are
• 3D Warping [17]
• Relief Texture [18]
• Layered Depth Images (LDI) [19]
• LDI tree [20]

When the depth information is available for every point in one or more images, 3D warping [17] techniques can be used to render nearby viewpoints. To improve the rendering speed of 3D warping, the warping process can be factored into a relatively simple pre-warping step and a traditional texture mapping step. The texture mapping step can be performed by standard graphics hardware. This is the idea behind relief texture [18]. A similar factoring algorithm was performed for the LDI [19], where the depth map is first warped to the output image with a visibility check, and colors are pasted afterwards. An LDI stores a view of the scene from a single input camera view, but with multiple pixels along each line of sight. Though the LDI has the simplicity of warping a single image, it does not consider the issue of sampling rate. Chang et al. [20] proposed LDI trees so that the sampling rates of the reference images are preserved by adaptively selecting an LDI in the LDI tree for each pixel.

3.3.2 Scene Geometry as Mesh Model
The mesh model is the most widely used component in model-based rendering. Despite the difficulty of obtaining such a model, if it is available in image-based rendering, we should make use of it to improve the rendering quality. Common approaches that use mesh models as the scene representation are
• Unstructured Lumigraph [21]
• Spatial-temporal view interpolation [22] [23]

Buehler et al. proposed unstructured Lumigraph rendering [21], where weighted light ray interpolation was used to obtain light rays in the novel view. One concern about the mesh model is that it has a finite resolution. To remove the granular effects in the rendered image due to finite resolution, a model smoothing algorithm was applied during rendering, which greatly improved the resultant image quality [22] [23].

3.3.3 Scene Geometry with Texture Maps
As texture maps are often obtained from real objects, a geometric model with texture mapping can produce very realistic scenes. Common approaches that use texture maps with scene geometry as the scene representation are
• View dependent texture map [24] [25]
• Image-based visual hull [26]

Debevec et al. proposed view dependent texture mapping (VDTM) [24], in which the reference views are generated from the texture map through a weighting scheme. The weights are determined by the angular deviation from the reference views to the virtual view to be rendered. Later, a more efficient implementation of VDTM was proposed in [25], where the per-pixel weight calculation was replaced by a per-polygon search in a pre-computed lookup table. The image-based visual hull (IBVH) algorithm [26] can be considered as another example of VDTM. In IBVH, the scene geometry is reconstructed through an image space visual hull [27] algorithm. Note that VDTM is in fact a special case of the later proposed unstructured Lumigraph rendering [21].

3.3.4 Scene Geometry with Reflection Model
Other than the texture map, the appearance of an object is also determined by the interaction of the light sources in the environment and the surface reflection model. Common approaches that use a reflection model with scene geometry as the scene representation are
• Reflection space IBR [28]
• Surface light field [29]

In [28], Cabral et al. proposed reflection space image-based rendering. Reflection space IBR records the total reflected radiance for each possible surface direction. This method assumes that if two surface points share the same surface direction, they have the same reflection pattern. This might not be true for multiple reasons, such as inter-reflections. Wood et al. proposed the improved surface light field [29], which also accounts for inter-reflections.

Challenges: Obtaining source descriptions from real images is hard even with state-of-the-art vision algorithms.

3.4 Sampling and Compression
Once the IBR representation of the scene has been determined, one may further reduce the data size through sampling and compression [9] [30]. The sampling analysis can tell the minimum number of images / light rays that are necessary to render the scene at a satisfactory quality. Compression, on the other hand, can further remove the redundancy inside and between the captured images. Due to the high redundancy in many IBR representations, an efficient IBR compression algorithm can easily reduce the data size by tens or hundreds of times.
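For a purely horizontal camera translation, the forward 3D warp of Section 3.3.1 reduces to shifting each reference pixel by its disparity and resolving visibility with a z-buffer. The following minimal Python sketch is our own illustration (scalar "images" as nested lists; the focal and baseline values are arbitrary), not the full algorithm of [17]:

```python
def warp_horizontal(ref, depth, baseline, focal):
    # Forward-map each reference pixel into the novel view:
    # disparity = focal * baseline / depth, so nearer points move farther.
    h, w = len(ref), len(ref[0])
    out = [[None] * w for _ in range(h)]           # None = hole (disocclusion)
    zbuf = [[float("inf")] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            d = depth[y][x]
            nx = x + int(focal * baseline / d)
            if 0 <= nx < w and d < zbuf[y][nx]:
                zbuf[y][nx] = d                    # nearer surface wins
                out[y][nx] = ref[y][x]
    return out

ref   = [[1, 2, 3, 4]]      # one scanline of "colors"
depth = [[4, 4, 2, 4]]      # per-pixel depth map
novel = warp_horizontal(ref, depth, baseline=1, focal=4)
```

The None entries are exactly the disocclusion holes that layered depth images [19] address by keeping multiple depth samples per line of sight.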

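The angular weighting behind VDTM (Section 3.3.3) can likewise be sketched in a few lines. The inverse-angle falloff and normalization below are our own illustrative choices, not the exact scheme of [24]:

```python
import math

def vdtm_weights(ref_dirs, novel_dir, eps=1e-6):
    # Weight each reference view inversely to its angular deviation
    # from the novel viewing direction, then normalize to sum to 1.
    def angle(a, b):
        d = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return math.acos(max(-1.0, min(1.0, d / (na * nb))))
    raw = [1.0 / (angle(d, novel_dir) + eps) for d in ref_dirs]
    total = sum(raw)
    return [w / total for w in raw]

# Two reference cameras; the novel view is close to the first one,
# so the first reference view dominates the blend.
weights = vdtm_weights([(1, 0, 0), (0, 1, 0)], (1, 0.1, 0))
```

The per-polygon lookup table of [25] precomputes such weights so they need not be evaluated per pixel at render time.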

4. IMPORTANCE OF IBR

Traditionally, virtual reality environments have been generated by rendering a geometrical model of the environment using techniques such as polygon rendering or ray-tracing. In order to get a convincing image, both the geometrical model and the rendering algorithms have to be complex, and therefore call for special hardware if interactive speeds are needed. Even if special hardware is used, the performance of the system is hard to measure, since the rendering time is highly dependent on the scene complexity. In simulators, for example, it is not acceptable to have a low frame rate when the view is complex, since it introduces a time lag in the control loop where persons play an important role. Creating systems able to perform well in worst cases is expensive. In addition, these models do not simulate the global illumination effects, and the global illumination models are computationally too complex to be used for real-time image synthesis on available hardware.

An alternative approach is Image Based Rendering. By sampling the light distribution in the scene to be rendered, typically by taking photographs from different positions and in different directions, it is able to present new views of the scene. The algorithms used are relatively fast, and several commercial implementations for use on ordinary personal computers exist, of which the QuickTime VR system [31] from Apple Computer is the best known today. Image-based systems have a fixed rendering time, independent of the scene complexity, which simplifies system construction. We found that all the IBR representations originate from the 7D plenoptic function [10], which describes the appearance of the world. As the 7D plenoptic function has too much data to handle, various approaches have been proposed to reduce the data size while still giving the viewer a good browsing experience. Such techniques are widely adopted in the real world. For example, Figure 2(a) shows the original image, while Figure 2(b) and Figure 2(c) show two views of the original image generated by 3D warping.

Figure 2(a): Original image. Figure 2(b) and Figure 2(c): Generated views through 3D warping.

In addition to the above, many other forces have contributed to the recent research work in the area of image-based rendering. Among these are:
• Our ability to render models has begun to outpace our capacity to create high-quality models.
• The limited computational capabilities and lack of powerful 3D graphics hardware support in mobile/handheld devices.
• The availability of inexpensive digital image acquisition hardware.
• Recent trends in computer graphics accelerator architectures.

An Image Based Rendering approach to visualizing real-world or synthetic scenes on mobile devices has been proposed in [32] [33] [34]. For mobile devices equipped with a wireless network, a client-server framework with IBR can be utilized to increase performance.

5. RESULTS AND CONCLUSION

We have surveyed the rendering techniques, which have two main classifications: Geometry Based Rendering and Image Based Rendering.
In geometry based rendering techniques, we found that the shading quality obtained from the Phong shading model is better than that of the Gouraud shading model, but it is computationally more expensive. A comparison is shown in Figure 1.

Figure 1(a): Image generation using Gouraud shading. Figure 1(b): Image generation using Phong shading.

As compared to Geometry Based Rendering, the rendering process in IBR is usually very fast and can be implemented in software. However, hardware acceleration will definitely be helpful for future high-resolution IBR rendering. As most operations in IBR rendering are simple mathematical operations, such as linear interpolation, and most of the IBR rendering process can be performed in parallel, we expect that such hardware is not difficult to develop and can dramatically increase the rendering speed.
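As noted above, many IBR operations reduce to per-pixel linear interpolation, which parallelizes trivially. A minimal Python sketch of blending two registered reference views (a cross-dissolve, the simplest special case of view interpolation; the scalar nested-list "images" are our own toy representation):

```python
def blend_views(img_a, img_b, t):
    # Per-pixel linear interpolation between two reference views.
    # Every pixel is independent, so this maps directly onto
    # parallel hardware.
    return [[(1 - t) * a + t * b for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(img_a, img_b)]

# A quarter of the way from view A to view B.
mid = blend_views([[0, 100]], [[100, 200]], 0.25)   # [[25.0, 125.0]]
```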


No matter how much storage and memory increase in the future, sampling and compression are always useful to keep the IBR data at a manageable size. The work on sampling and compression, however, has just started. There are still many unsolved problems, such as the sampling rate when a certain source description is available. A high compression ratio in IBR seems to rely heavily on how well the images can be predicted, which depends on, e.g., how well a certain source description can be reconstructed. Joint work between the signal processing community and the computer vision community is highly expected in this regard.

6. REFERENCES
[1] H. Gouraud, "Continuous Shading of Curved Surfaces", IEEE Transactions on Computers, C-20(6):623-629, June 1971.
[2] B. T. Phong, "Illumination for Computer Generated Pictures", Communications of the ACM, 18(6):311-317, June 1975.
[3] Chandan Singh, Ekta Walia, "Shading by Fast Bi-Quadratic Normal Vector Interpolation", ICGST International Journal on Graphics, Vision and Image Processing, Vol. 5, Issue 9, pp. 49-54, 2005.
[4] Chandan Singh, Ekta Walia, "Fast Hybrid Shading: An Application of Finite Element Methods in 3D Rendering", International Journal of Image and Graphics, Vol. 5, No. 4, pp. 789-810, 2005.
[5] Ekta Walia, Chandan Singh, "An Analysis of Linear and Non-Linear Interpolation Techniques for Three-Dimensional Rendering", IEEE Proceedings of Geometric Modeling and Imaging - New Trends (GMAI'06), 2006.
[6] Cohen, Greenberg, Immel, and Brock, "An efficient radiosity approach for realistic image synthesis", IEEE Computer Graphics and Applications 6, pp. 23-35, 1986.
[7] Turner Whitted, "An improved illumination model for shaded display", Communications of the ACM, 23(6):343-349, June 1980.
[8] P. Dutre, P. Bekaert, and K. Bala, "Advanced Global Illumination", AK Peters, Natick, MA, 2003.
[9] H. Y. Shum, S. B. Kang, and S. C. Chan, "Survey of image-based representations and compression techniques", IEEE Trans. on Circuits and Systems for Video Technology, vol. 13, no. 11, pp. 1020-1037, Nov. 2003.
[10] E. H. Adelson and J. R. Bergen, "The plenoptic function and the elements of early vision", in Computational Models of Visual Processing, edited by Michael Landy and J. Anthony Movshon, The MIT Press, Cambridge, Mass., Chapter 1, 1991.
[11] M. Levoy and P. Hanrahan, "Light field rendering", Computer Graphics (SIGGRAPH'96), pp. 31-42, August 1996.
[12] S. J. Gortler, R. Grzeszczuk, R. Szeliski and M. F. Cohen, "The Lumigraph", Computer Graphics (SIGGRAPH'96), pp. 43-54, August 1996.
[13] H. Y. Shum and L. W. He, "Rendering with concentric mosaics", Computer Graphics (SIGGRAPH'99), pp. 299-306, August 1999.
[14] S. E. Chen and L. Williams, "View interpolation for image synthesis", Computer Graphics (SIGGRAPH'93), pp. 279-288, August 1993.
[15] S. M. Seitz and C. M. Dyer, "View morphing", Computer Graphics (SIGGRAPH'96), pp. 21-30, August 1996.
[16] M. Lhuillier and L. Quan, "Image interpolation by joint view triangulation", IEEE Conference on Computer Vision and Pattern Recognition, vol. 2, pp. 139-145, Fort Collins, CO, June 1999.
[17] L. McMillan, "An Image-Based Approach to Three-Dimensional Computer Graphics", Ph.D. Thesis, Department of Computer Science, University of North Carolina at Chapel Hill, 1997.
[18] M. Oliveira and G. Bishop, "Relief textures", Technical report, UNC Computer Science TR99-015, March 1999.
[19] J. Shade, S. Gortler, L. W. He and R. Szeliski, "Layered depth images", Computer Graphics (SIGGRAPH'98), pp. 231-242, August 1998.
[20] C. Chang, G. Bishop and A. Lastra, "LDI tree: A hierarchical representation for image-based rendering", Computer Graphics (SIGGRAPH'99), pp. 291-298, August 1999.
[21] C. Buehler, M. Bosse, L. McMillan, S. Gortler, and M. Cohen, "Unstructured Lumigraph Rendering", Computer Graphics (SIGGRAPH'01), pp. 425-432, August 2001.
[22] S. Vedula, S. Baker and T. Kanade, "Spatio-Temporal View Interpolation", Proc. of the 13th ACM Eurographics Workshop on Rendering, June 2002.
[23] S. Vedula, S. Baker, and T. Kanade, "Image-based spatio-temporal modeling and view interpolation of dynamic events", ACM Transactions on Graphics, 24(2):240-261, April 2005.


[24] P. Debevec, C. J. Taylor and J. Malik, "Modeling and rendering architecture from photographs: a hybrid geometry- and image-based approach", Computer Graphics (SIGGRAPH'96), pp. 11-20, August 1996.
[25] P. Debevec, Y. Yu, and G. Borshukov, "Efficient view-dependent image-based rendering with projective texture-mapping", Proc. 9th Eurographics Workshop on Rendering, pp. 105-116, 1998.
[26] W. Matusik, C. Buehler, R. Raskar, S. Gortler, and L. McMillan, "Image-based Visual Hulls", Computer Graphics (SIGGRAPH'00), pp. 369-374, July 2000.
[27] A. Laurentini, "The Visual Hull Concept for Silhouette Based Image Understanding", IEEE PAMI, Vol. 16, No. 2, pp. 150-162, 1994.
[28] B. Cabral, M. Olano, P. Nemec, "Reflection space image based rendering", Computer Graphics (SIGGRAPH'99), pp. 165-171, August 1999.
[29] D. N. Wood, D. I. Azuma, K. Aldinger, B. Curless, T. Duchamp, D. H. Salesin and W. Stuetzle, "Surface light fields for 3D photography", Computer Graphics (SIGGRAPH'00), pp. 287-296, July 2000.
[30] Cha Zhang, "On Sampling of Image Based Rendering Data", Ph.D. Thesis, Department of Electrical and Computer Engineering, Carnegie Mellon University, Pittsburgh, PA, June 2004.
[31] S. E. Chen, "QuickTime VR - An Image-Based Approach to Virtual Environment Navigation", Computer Graphics (SIGGRAPH'95), pp. 29-38, August 1995.
[32] C. Chang, S. Ger, "Enhancing 3D graphics on mobile devices by image-based rendering", Proceedings of the Third IEEE Pacific Rim Conference on Multimedia: Advances in Multimedia Information Processing, Taiwan, pp. 1105-1111, 2002.
[33] A. Boukerche, F. Jing, R. B. de Araujo, "A 3D image-based rendering technique for mobile handheld devices", Proceedings of the 2006 International Symposium on a World of Wireless, Mobile and Multimedia Networks (WoWMoM'06), Buffalo, pp. 325-331, 2006.
[34] E. Pilav, B. Brkic, "Real-time Image Based Rendering Using Limited Resources", CESCG 2008.
