Module 4: 3D Viewing and Visible Surface Detection
Three-Dimensional Viewing
4.1 Overview of Three-Dimensional Viewing Concepts
When we model a three-dimensional scene, each object in the scene is typically defined
with a set of surfaces that form a closed boundary around the object interior.
In addition to procedures that generate views of the surface features of an object, graphics
packages sometimes provide routines for displaying internal components or cross-
sectional views of a solid object.
Many processes in three-dimensional viewing, such as the clipping routines, are similar
to those in the two-dimensional viewing pipeline.
But three-dimensional viewing involves some tasks that are not present in two-dimensional viewing.
This coordinate reference defines the position and orientation for a view plane (or projection plane) that corresponds to a camera film plane, as shown in the figure below.
Three parallel-projection views of an object, showing relative proportions from different viewing
positions
A perspective projection, in contrast, causes objects farther from the viewing position to be displayed smaller than objects of the same size that are nearer to the viewing position.
Depth Cueing
Depth information is important in a three-dimensional scene so that we can easily identify,
for a particular viewing direction, which is the front and which is the back of each
displayed object.
There are several ways in which we can include depth information in the two-
dimensional representation of solid objects.
A simple method for indicating depth with wire-frame displays is to vary the brightness of line segments according to their distances from the viewing position; this technique is termed depth cueing.
The lines closest to the viewing position are displayed with the
highest intensity, and lines farther away are displayed with decreasing intensities.
Depth cueing is applied by choosing a maximum and a minimum intensity value
and a range of distances over which the intensity is to vary.
Another application of depth cueing is modeling the effect of the atmosphere on the perceived intensity of objects.
We could also display nonvisible lines as dashed lines, or we could remove the nonvisible lines from the display.
Surface Rendering
We set the lighting conditions by specifying the color and location of the light
sources, and we can also set background illumination effects.
Surface properties of objects include whether a surface is
transparent or opaque and whether the surface is smooth or rough.
We set values for parameters to model surfaces such as glass, plastic, wood-grain
patterns, and the bumpy appearance of an orange.
Three-dimensional views of a scene can also be obtained by reflecting a raster image from a vibrating, flexible mirror. The vibrations of the mirror are synchronized with the display of the scene on the cathode ray tube (CRT).
As the mirror vibrates, the focal length varies so that each point in the scene is reflected
to a spatial position corresponding to its depth.
Stereoscopic devices present two views of a scene: one for the left eye and the other for
the right eye.
The viewing positions correspond to the eye positions of the viewer. These two views are
typically displayed on alternate refresh cycles of a raster monitor
Figure above shows the general processing steps for creating and transforming a three-
dimensional scene to device coordinates.
Once the scene has been modeled in world coordinates, a viewing-coordinate system is
selected and the description of the scene is converted to viewing coordinates
A right-handed viewing-coordinate system, with axes xview, yview, and zview, is defined relative to a right-handed world-coordinate frame.
An additional scalar parameter is used to set the position of the view plane at some coordinate value zvp along the zview axis.
This parameter value is usually specified as a distance from the viewing origin along the direction of viewing, which is often taken to be in the negative zview direction.
Vector N can be specified in various ways. In some graphics systems, the direction for N
is defined to be along the line from the world-coordinate origin to a selected point
position.
Other systems take N to be in the direction from a reference point Pref to the viewing origin P0.
Specifying the view-plane normal vector N as the direction from a selected reference point Pref to
the viewing-coordinate origin P0.
Once we have chosen a view-plane normal vector N, we can set the direction for the
view-up vector V.
This vector is used to establish the positive direction for the yview axis.
Because the view-plane normal vector N defines the direction for the zview axis, vector V
should be perpendicular to N.
But, in general, it can be difficult to determine a direction for V that is precisely
perpendicular to N.
Therefore, viewing routines typically adjust the user-defined orientation of vector V so that V is projected onto a plane that is perpendicular to the view-plane normal vector N.
With a left-handed system, increasing zview values are interpreted as being farther from
the viewing position along the line of sight.
But right-handed viewing systems are more common, because they have the same
orientation as the world-reference frame.
Because the view-plane normal N defines the direction for the zview axis and the view-up
vector V is used to obtain the direction for the yview axis, we need only determine the
direction for the xview axis.
Using the input values for N and V, we can compute a third vector, U, that is perpendicular to both N and V.
Vector U then defines the direction for the positive xview axis.
We determine the correct direction for U by taking the vector cross product of V and N
so as to form a right-handed viewing frame.
The vector cross product of N and U also produces the adjusted value for V,
perpendicular to both N and U, along the positive yview axis.
Following these procedures, we obtain the following set of unit axis vectors for a right-
handed viewing coordinate system.
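In symbols, these unit axis vectors are

$$\mathbf{n} = \frac{\mathbf{N}}{|\mathbf{N}|} = (n_x, n_y, n_z), \qquad \mathbf{u} = \frac{\mathbf{V}\times\mathbf{n}}{|\mathbf{V}\times\mathbf{n}|} = (u_x, u_y, u_z), \qquad \mathbf{v} = \mathbf{n}\times\mathbf{u} = (v_x, v_y, v_z)$$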
The coordinate system formed with these unit vectors is often described as a uvn
viewing-coordinate reference frame
For the rotation transformation, we can use the unit vectors u, v, and n to form the composite rotation matrix that superimposes the viewing axes onto the world frame. This transformation matrix is
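$$\mathbf{R} = \begin{bmatrix} u_x & u_y & u_z & 0 \\ v_x & v_y & v_z & 0 \\ n_x & n_y & n_z & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}$$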
where the elements of matrix R are the components of the uvn axis vectors.
The coordinate transformation matrix is then obtained as the product of the preceding
translation and rotation matrices:
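$$\mathbf{M}_{WC,VC} = \mathbf{R}\cdot\mathbf{T} = \begin{bmatrix} u_x & u_y & u_z & -\mathbf{u}\cdot\mathbf{P}_0 \\ v_x & v_y & v_z & -\mathbf{v}\cdot\mathbf{P}_0 \\ n_x & n_y & n_z & -\mathbf{n}\cdot\mathbf{P}_0 \\ 0 & 0 & 0 & 1 \end{bmatrix}$$

(Here T is the translation that moves the viewing origin P0 to the world-coordinate origin, and R is the rotation matrix above.)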
Translation factors in this matrix are calculated as the vector dot product of each of the u,
v, and n unit vectors with P0, which represents a vector from the world origin to the
viewing origin.
Front, side, and rear orthogonal projections of an object are called elevations; and a top orthogonal projection is called a plan view.
We can also form orthogonal projections that display more than one face of an object.
Such views are called axonometric orthogonal projections.
The most commonly used axonometric projection is the isometric projection, which is
generated by aligning the projection plane (or the object) so that the plane intersects each
coordinate axis in which the object is defined, called the principal axes, at the same
distance from the origin
For three-dimensional viewing, the clipping window is positioned on the view plane with its edges parallel to the xview and yview axes, as shown in the figure below. If we want to use some other shape or orientation for the clipping window, we must develop our own viewing procedures.
The edges of the clipping window specify the x and y limits for the part of the scene that
we want to display.
These limits are used to form the top, bottom, and two sides of a clipping region called
the orthogonal-projection view volume.
Because projection lines are perpendicular to the view plane, these four boundaries are
planes that are also perpendicular to the view plane and that pass through the edges of the
clipping window to form an infinite clipping region, as in Figure below.
These two planes are called the near-far clipping planes, or the front-back clipping
planes.
The near and far planes allow us to exclude objects that are in front of or behind the part
of the scene that we want to display.
When the near and far planes are specified, we obtain a finite orthogonal view volume
that is a rectangular parallelepiped, as shown in Figure below along with one possible
placement for the view plane
Once we have established the limits for the view volume, coordinate descriptions inside
this rectangular parallelepiped are the projection coordinates, and they can be mapped
into a normalized view volume without any further projection processing.
Some graphics packages use a unit cube for this normalized view volume, with each of
the x, y, and z coordinates normalized in the range from 0 to 1.
Also, z-coordinate positions for the near and far planes are denoted as znear and zfar, respectively. The figure below illustrates this normalization transformation.
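Assuming the symmetric normalized range from −1 to 1 (rather than a unit cube), this normalization can be written as a single scale-and-translate matrix:

$$\mathbf{M}_{\text{ortho,norm}} = \begin{bmatrix} \dfrac{2}{xw_{\max}-xw_{\min}} & 0 & 0 & -\dfrac{xw_{\max}+xw_{\min}}{xw_{\max}-xw_{\min}} \\ 0 & \dfrac{2}{yw_{\max}-yw_{\min}} & 0 & -\dfrac{yw_{\max}+yw_{\min}}{yw_{\max}-yw_{\min}} \\ 0 & 0 & \dfrac{-2}{z_{\text{near}}-z_{\text{far}}} & \dfrac{z_{\text{near}}+z_{\text{far}}}{z_{\text{near}}-z_{\text{far}}} \\ 0 & 0 & 0 & 1 \end{bmatrix}$$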
Objects are then displayed with foreshortening effects, and projections of distant objects
are smaller than the projections of objects of the same size that are closer to the view
plane
The projection line intersects the view plane at the coordinate position (xp, yp, zvp), where
zvp is some selected position for the view plane on the zview axis.
We can write equations describing coordinate positions along this perspective-projection
line in parametric form as
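Taking the line from the spatial position (x, y, z) to the projection reference point (xprp, yprp, zprp), the parametric equations are

$$x' = x - (x - x_{prp})u, \qquad y' = y - (y - y_{prp})u, \qquad z' = z - (z - z_{prp})u, \qquad 0 \le u \le 1$$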
On the view plane, z’ = zvp and we can solve the z’ equation for parameter u at this
position along the projection line:
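$$u = \frac{z_{vp} - z}{z_{prp} - z}$$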
Substituting this value of u into the equations for x’ and y’, we obtain the general
perspective-transformation equations
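$$x_p = x\left(\frac{z_{prp}-z_{vp}}{z_{prp}-z}\right) + x_{prp}\left(\frac{z_{vp}-z}{z_{prp}-z}\right), \qquad y_p = y\left(\frac{z_{prp}-z_{vp}}{z_{prp}-z}\right) + y_{prp}\left(\frac{z_{vp}-z}{z_{prp}-z}\right)$$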
Case 2:
Sometimes the projection reference point is fixed at the coordinate origin, and
(xprp, yprp, zprp) = (0, 0, 0) :
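$$x_p = x\left(\frac{z_{vp}}{z}\right), \qquad y_p = y\left(\frac{z_{vp}}{z}\right)$$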
Case 3:
If the view plane is the uv plane and there are no restrictions on the placement of the
projection reference point, then we
have zvp = 0:
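$$x_p = x\left(\frac{z_{prp}}{z_{prp}-z}\right) - x_{prp}\left(\frac{z}{z_{prp}-z}\right), \qquad y_p = y\left(\frac{z_{prp}}{z_{prp}-z}\right) - y_{prp}\left(\frac{z}{z_{prp}-z}\right)$$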
Case 4:
With the uv plane as the view plane and the projection reference point on the zview axis,
the perspective equations are
xprp = yprp = zvp = 0:
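$$x_p = x\left(\frac{z_{prp}}{z_{prp}-z}\right) = \frac{x}{1 - z/z_{prp}}, \qquad y_p = y\left(\frac{z_{prp}}{z_{prp}-z}\right) = \frac{y}{1 - z/z_{prp}}$$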
The view plane is usually placed between the projection reference point and the scene, but, in general, the view plane could be placed anywhere except at the projection reference point.
If the projection reference point is between the view plane and the scene, objects are inverted on the view plane (see the figure below).
Perspective effects also depend on the distance between the projection reference point
and the view plane, as illustrated in Figure below.
If the projection reference point is close to the view plane, perspective effects are emphasized; that is, closer objects will appear much larger than more distant objects of the same size.
Similarly, as the projection reference point moves farther from the view plane, the difference in the size of near and far objects decreases.
The displayed view of a scene includes only those objects within the pyramid, just as we cannot see objects beyond our peripheral vision, which are outside the cone of vision.
By adding near and far clipping planes that are perpendicular to the zview axis (and parallel to the view plane), we chop off parts of the infinite, perspective-projection view volume to form a truncated pyramid, or frustum, view volume.
But with a perspective projection, we could also use the near clipping plane to take out
large objects close to the view plane that could project into unrecognizable shapes within
the clipping window.
Similarly, the far clipping plane could be used to cut out objects far from the projection reference point that might project to small blots on the view plane.
The perspective transformation is performed in two steps. First, homogeneous coordinates are computed as Ph = Mpers · P, where Ph is the column-matrix representation of the homogeneous point (xh, yh, zh, h) and P is the column-matrix representation of the coordinate position (x, y, z, 1).
Second, after other processes have been applied, such as the normalization transformation and clipping routines, homogeneous coordinates are divided by parameter h to obtain the true transformation-coordinate positions.
The following matrix gives one possible way to formulate a perspective-
projection matrix.
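One such formulation, written so that the homogeneous divide by h = zprp − z reproduces the general perspective equations given earlier, is

$$\mathbf{M}_{\text{pers}} = \begin{bmatrix} z_{prp}-z_{vp} & 0 & -x_{prp} & x_{prp}\,z_{vp} \\ 0 & z_{prp}-z_{vp} & -y_{prp} & y_{prp}\,z_{vp} \\ 0 & 0 & s_z & t_z \\ 0 & 0 & -1 & z_{prp} \end{bmatrix}$$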
Parameters sz and tz are the scaling and translation factors for normalizing the projected values of z-coordinates.
Specific values for sz and tz depend on the normalization range we select.
Because the frustum centerline intersects the view plane at the coordinate location (xprp,
yprp, zvp), we can express the corner positions for the clipping window in terms of the
window dimensions:
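$$xw_{\min} = x_{prp} - \frac{\text{width}}{2}, \quad xw_{\max} = x_{prp} + \frac{\text{width}}{2}, \quad yw_{\min} = y_{prp} - \frac{\text{height}}{2}, \quad yw_{\max} = y_{prp} + \frac{\text{height}}{2}$$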
For a given projection reference point and view-plane position, the field-of-view angle determines the height of the clipping window. From the right triangles in the diagram of the figure below, we see that
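$$\tan\left(\frac{\theta}{2}\right) = \frac{\text{height}/2}{z_{prp} - z_{vp}}$$

so that height = 2(zprp − zvp) tan(θ/2).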
Therefore, the diagonal elements with the value zprp − zvp could be replaced by either of the following two expressions:
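$$z_{prp} - z_{vp} = \frac{\text{height}}{2}\cot\left(\frac{\theta}{2}\right), \qquad \text{or equivalently} \qquad z_{prp} - z_{vp} = \frac{\text{height}/2}{\tan(\theta/2)}$$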
In this case, we can first transform the view volume to a symmetric frustum and then to a
normalized view volume.
An oblique perspective-projection view volume can be converted to a symmetric frustum by applying a z-axis shearing-transformation matrix.
This transformation shifts all positions on any plane that is perpendicular to the z axis by an amount that is proportional to the distance of the plane from a specified z-axis reference position.
The computations for the shearing transformation, as well as for the perspective and
normalization transformations, are greatly reduced if we take the projection reference
point to be the viewing-coordinate origin.
Taking the projection reference point as (xprp, yprp, zprp) = (0, 0, 0), we obtain the
elements of the required shearing matrix as
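A sketch of these elements, obtained by requiring the shear x' = x + sh_zx · z, y' = y + sh_zy · z to move the clipping-window center onto the zview axis:

$$sh_{zx} = -\frac{xw_{\min}+xw_{\max}}{2\,z_{vp}}, \qquad sh_{zy} = -\frac{yw_{\min}+yw_{\max}}{2\,z_{vp}}, \qquad \mathbf{M}_{z\,\text{shear}} = \begin{bmatrix} 1 & 0 & sh_{zx} & 0 \\ 0 & 1 & sh_{zy} & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}$$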
Similarly, with the projection reference point at the viewing-coordinate origin and with
the near clipping plane as the view plane, the perspective-projection matrix is simplified
to
Concatenating the simplified perspective-projection matrix with the shear matrix we have
Because the centerline of the rectangular parallelepiped view volume is now the zview
axis, no translation is needed in the x and y normalization transformations: We require
only the x and y scaling parameters relative to the coordinate origin.
The scaling matrix for accomplishing the xy normalization is
And the elements of the normalized transformation matrix for a general perspective-
projection are
glMatrixMode (GL_MODELVIEW);
We set the modelview mode with the statement above; the viewing matrix is then formed and concatenated with the current modelview matrix.
gluLookAt (x0, y0, z0, xref, yref, zref, Vx, Vy, Vz);
Viewing parameters are specified with the above GLU function.
This function designates the origin of the viewing reference frame as the world-
coordinate position P0 = (x0, y0, z0), the reference position as Pref =(xref, yref, zref), and
the view-up vector as V = (Vx, Vy, Vz).
If we do not invoke the gluLookAt function, the default OpenGL viewing parameters are
P0 = (0, 0, 0)
Pref = (0, 0, −1)
V = (0, 1, 0)
#include <GL/glut.h>
GLint winWidth = 600, winHeight = 600; // Initial display-window size.
GLfloat x0 = 100.0, y0 = 50.0, z0 = 50.0; // Viewing-coordinate origin.
GLfloat xref = 50.0, yref = 50.0, zref = 0.0; // Look-at point.
GLfloat Vx = 0.0, Vy = 1.0, Vz = 0.0; // View-up vector.
/* Set coordinate limits for the clipping window: */
GLfloat xwMin = -40.0, ywMin = -60.0, xwMax = 40.0, ywMax = 60.0;
/* Set positions for near and far clipping planes: */
GLfloat dnear = 25.0, dfar = 125.0;
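The portion of the program between these declarations and the end of main is not included in this excerpt. A minimal sketch of what it typically contains is given below, assuming the usual structure of such an example: the names displayFcn and reshapeFcn match the callbacks registered in main, while init, the window title, and the square that is drawn are illustrative assumptions rather than part of the original listing.

void init (void)
{
    glClearColor (1.0, 1.0, 1.0, 0.0);  /* White display window. */

    /* Establish the viewing transformation. */
    glMatrixMode (GL_MODELVIEW);
    gluLookAt (x0, y0, z0, xref, yref, zref, Vx, Vy, Vz);

    /* Specify a perspective-projection view volume. */
    glMatrixMode (GL_PROJECTION);
    glFrustum (xwMin, xwMax, ywMin, ywMax, dnear, dfar);
}

void displayFcn (void)
{
    glClear (GL_COLOR_BUFFER_BIT);

    /* Draw a green square in the xy plane as a sample object. */
    glColor3f (0.0, 1.0, 0.0);
    glBegin (GL_QUADS);
        glVertex3f (0.0, 0.0, 0.0);
        glVertex3f (100.0, 0.0, 0.0);
        glVertex3f (100.0, 100.0, 0.0);
        glVertex3f (0.0, 100.0, 0.0);
    glEnd ( );

    glFlush ( );
}

void reshapeFcn (GLint newWidth, GLint newHeight)
{
    glViewport (0, 0, newWidth, newHeight);
    winWidth = newWidth;
    winHeight = newHeight;
}

int main (int argc, char **argv)
{
    glutInit (&argc, argv);
    glutInitDisplayMode (GLUT_SINGLE | GLUT_RGB);
    glutInitWindowPosition (50, 50);
    glutInitWindowSize (winWidth, winHeight);
    glutCreateWindow ("Perspective View of a Square");

    init ( );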
    glutDisplayFunc (displayFcn);
    glutReshapeFunc (reshapeFcn);
    glutMainLoop ( );
}
We can simplify the back-face test by considering the direction of the normal vector N for
a polygon surface. If Vview is a vector in the viewing direction from our camera position,
as shown in Figure below, then a polygon is a back face if
Vview · N > 0
In a right-handed viewing system with the viewing direction along the negative zv axis
(Figure below), a polygon is a back face if the z component, C, of its normal vector N
satisfies C < 0.
Also, we cannot see any face whose normal has z component C = 0, because our viewing
direction is grazing that polygon. Thus, in general, we can label any polygon as a back
face if its normal vector has a z component value that satisfies the inequality
C <= 0
Similar methods can be used in packages that employ a left-handed viewing system. In these packages, plane parameters A, B, C, and D can be calculated from polygon vertex coordinates specified in a clockwise direction.
Inequality 1 then remains a valid test for points behind the polygon.
By examining parameter C for the different plane surfaces describing an object, we can
immediately identify all the back faces.
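As a small illustration (a sketch, not part of the original text; the Vec3 type and function names are assumed), the back-face test reduces to a simple sign check:

/* Hypothetical vector type for the sketch. */
typedef struct { GLfloat x, y, z; } Vec3;

/* General test: the polygon with outward normal N is a back face
   if it points away from the viewer, i.e., if Vview . N > 0.      */
int isBackFace (Vec3 Vview, Vec3 N)
{
    return (Vview.x * N.x + Vview.y * N.y + Vview.z * N.z > 0.0);
}

/* Right-handed viewing system, viewing along the negative z-view axis:
   the test reduces to checking the z component of the normal.          */
int isBackFaceRH (Vec3 N)
{
    return (N.z <= 0.0);
}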
For other objects, such as the concave polyhedron in Figure below, more tests must be
carried out to determine whether there are additional faces that are totally or partially
obscured by other faces
In general, back-face removal can be expected to eliminate about half of the polygon
surfaces in a scene from further visibility tests.
Figure above shows three surfaces at varying distances along the orthographic projection line
from position (x, y) on a view plane.
Depth-Buffer Algorithm
➔Initialize the depth buffer and frame buffer so that for all buffer positions (x, y),
depthBuff (x, y) = 1.0, frameBuff (x, y) = backgndColor
➔Process each polygon in a scene, one at a time, as follows:
For each projected (x, y) pixel position of a polygon, calculate the depth z (if not already
known).
If z < depthBuff (x, y), compute the surface color at that position and set
depthBuff (x, y) = z, frameBuff (x, y) = surfColor (x, y)
After all surfaces have been processed, the depth buffer contains depth values for the visible
surfaces and the frame buffer contains the corresponding color values for those surfaces.
Given the depth values for the vertex positions of any polygon in a scene, we can calculate the depth at any other point on the plane containing the polygon.
At surface position (x, y), the depth is calculated from the plane equation as
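$$z = \frac{-Ax - By - D}{C}$$

where A, B, C, and D are the plane parameters for the polygon surface.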
If the depth of position (x, y) has been determined to be z, then the depth z' of the next position (x + 1, y) along the scan line is obtained as
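$$z' = \frac{-A(x+1) - By - D}{C} = z - \frac{A}{C}$$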
The ratio −A/C is constant for each surface, so succeeding depth values across a scan
line are obtained from preceding values with a single addition.
We can implement the depth-buffer algorithm by starting at a top vertex of the polygon.
Then, we could recursively calculate the x-coordinate values down a left edge of the
polygon.
The x value for the beginning position on each scan line can be calculated from the
beginning (edge) x value of the previous scan line as
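$$x' = x - \frac{1}{m}$$

where m is the slope of the edge, and the corresponding depth on that scan line is

$$z' = z + \frac{A/m + B}{C}$$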
If we are processing down a vertical edge, the slope is infinite and the recursive
calculations reduce to
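$$x' = x, \qquad z' = z + \frac{B}{C}$$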
One slight complication with this approach is that while pixel positions are at integer (x, y) coordinates, the actual point of intersection of a scan line with the edge of a polygon may not be.
As a result, it may be necessary to adjust the intersection point by rounding its fractional
part up or down, as is done in scan-line polygon fill algorithms.
An alternative approach is to use a midpoint method or Bresenham-type algorithm for
determining the starting x values along edges for each scan line.
The method can be applied to curved surfaces by determining depth and color values at
each surface projection point.
In addition, the basic depth-buffer algorithm often performs needless calculations.
Objects are processed in an arbitrary order, so that a color can be computed for a surface
point that is later replaced by a closer surface.
We can also apply depth-buffer visibility testing using some other initial value for the maximum depth, and this initial value is chosen with the OpenGL function:
glClearDepth (maxDepth);
Parameter maxDepth can be set to any value between 0.0 and 1.0.
Projection coordinates in OpenGL are normalized to the range from −1.0
to 1.0, and the depth values between the near and far clipping planes are
further normalized to the range from 0.0 to 1.0.
As an option, we can adjust these normalization values with
glDepthRange (nearNormDepth, farNormDepth);
By default, nearNormDepth = 0.0 and farNormDepth = 1.0.
But with the glDepthRange function, we can set these two parameters to
any values within the range from 0.0 to 1.0, including nearNormDepth >
farNormDepth
Another option available in OpenGL is the test condition that is to be used for the depth-buffer routines. We specify a test condition with the following function:
glDepthFunc (testCondition);
Parameter testCondition can be assigned any one of the following eight symbolic constants: GL_LESS, GL_GREATER, GL_EQUAL, GL_NOTEQUAL, GL_LEQUAL, GL_GEQUAL, GL_NEVER (no points are processed), and GL_ALWAYS.
The default value for parameter testCondition is GL_LESS.
We can also set the status of the depth buffer so that it is in a read-only state or in a read-
write state. This is accomplished with
glDepthMask (writeStatus);
When writeStatus = GL_TRUE (the default value), we can both read from and write to the depth buffer.
With writeStatus = GL_FALSE, the write mode for the depth buffer is disabled and we can retrieve values only for comparison in depth testing.
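Putting these pieces together, a typical depth-buffer setup in an OpenGL program looks like the following sketch (standard OpenGL/GLUT calls, not taken verbatim from this excerpt):

glutInitDisplayMode (GLUT_SINGLE | GLUT_RGB | GLUT_DEPTH);   /* Request a depth buffer.          */
glEnable (GL_DEPTH_TEST);                                    /* Turn on depth-buffer testing.    */
glClearDepth (1.0);                                          /* Initialize to the maximum depth. */
glClear (GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);         /* Clear color and depth buffers.   */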
4.14 Questions
1. Explain the 3D viewing pipeline.
2. Explain the 3D viewing parameters.
3. Explain the process of transformation from world to viewing coordinates.
4. Explain perspective projections.
5. Explain the different perspective-projection view volumes.
6. Explain the OpenGL 3D viewing functions.
7. Explain the classification of visible-surface detection algorithms.
8. Explain the depth-buffer algorithm.
9. Explain the OpenGL functions for visible-surface detection.