Computer Graphics Notes: Clipping, 3D Geometric Transformations, Color and Illumination Models
MODULE 3
3.1 Clipping:
3.1.1 Clipping window
3.1.2 Normalization and viewport transformations
3.1.3 Clipping algorithms: 2D point clipping; 2D line clipping (Cohen-Sutherland line clipping); polygon fill-area clipping (Sutherland-Hodgman polygon clipping algorithm)
Module 3 Clipping
✓ An alternative method for specifying the orientation of the viewing frame is to give a
rotation angle relative to either the x or y axis in the world frame.
✓ The first step in the transformation sequence is to translate the viewing origin to the
world origin.
✓ Next, we rotate the viewing system to align it with the world frame.
✓ Given the orientation vector V, we can calculate the components of unit vectors v = (vx,
vy) and u = (ux, uy) for the yview and xview axes, respectively.
✓ The composite two-dimensional transformation from world coordinates to viewing coordinates is then MWC,VC = R · T, where T is the translation matrix that takes the viewing origin to the world origin and R is the rotation matrix that aligns the axes of the two systems.
✓ The figure below shows how a viewing-coordinate frame is moved into coincidence with the world frame by
(a) applying a translation matrix T to move the viewing origin to the world origin, then
(b) applying a rotation matrix R to align the axes of the two systems.
➢ Once the clipping window has been established, the scene description is processed
through the viewing routines to the output device.
➢ Thus, we simply rotate (and possibly translate) objects to a desired position and set up the
clipping window all in world coordinates.
A triangle (a), with a selected reference point and orientation vector, is translated and rotated to position (b) within a clipping window.
✓ To transform the world-coordinate point (xw, yw) into the same relative position within the viewport, we require that

(xv − xvmin) / (xvmax − xvmin) = (xw − xwmin) / (xwmax − xwmin)
(yv − yvmin) / (yvmax − yvmin) = (yw − ywmin) / (ywmax − ywmin)

✓ Solving these expressions for the viewport position (xv, yv), we have

xv = sx · xw + tx
yv = sy · yw + ty

where the scaling factors are

sx = (xvmax − xvmin) / (xwmax − xwmin)
sy = (yvmax − yvmin) / (ywmax − ywmin)

and the translation factors are

tx = (xwmax · xvmin − xwmin · xvmax) / (xwmax − xwmin)
ty = (ywmax · yvmin − ywmin · yvmax) / (ywmax − ywmin)
✓ We could obtain the transformation from world coordinates to viewport coordinates with the following sequence (a code sketch follows this list):
1. Scale the clipping window to the size of the viewport using a fixed-point position of
(xwmin, ywmin).
2. Translate (xwmin, ywmin) to (xvmin, yvmin).
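As a minimal illustration of this mapping, the following C sketch computes the scaling and translation factors and transforms a single world-coordinate point to viewport coordinates. The Rect structure and the windowToViewport name are illustrative, not from the original notes; note that tx = xvmin − sx · xwmin is algebraically equal to the translation factor given above.

CODE (illustrative):
#include <stdio.h>

/* Axis-aligned rectangle, used for both the clipping window and the viewport. */
typedef struct { float xmin, ymin, xmax, ymax; } Rect;

/* Map a world-coordinate point (xw, yw) into viewport coordinates. */
void windowToViewport (Rect win, Rect view, float xw, float yw,
                       float *xv, float *yv)
{
   /* Scaling factors sx, sy and translation factors tx, ty. */
   float sx = (view.xmax - view.xmin) / (win.xmax - win.xmin);
   float sy = (view.ymax - view.ymin) / (win.ymax - win.ymin);
   float tx = view.xmin - sx * win.xmin;
   float ty = view.ymin - sy * win.ymin;

   *xv = sx * xw + tx;
   *yv = sy * yw + ty;
}

int main (void)
{
   Rect win  = { 0.0f, 0.0f, 100.0f, 100.0f };   /* clipping window     */
   Rect view = { 0.0f, 0.0f, 1.0f, 1.0f };       /* normalized viewport */
   float xv, yv;
   windowToViewport (win, view, 25.0f, 50.0f, &xv, &yv);
   printf ("viewport point: (%f, %f)\n", xv, yv); /* prints (0.25, 0.5) */
   return 0;
}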
✓ The scaling transformation in step (1) can be represented with the two-dimensional matrix

S = | sx  0   xwmin (1 − sx) |
    | 0   sy  ywmin (1 − sy) |
    | 0   0   1              |

✓ The two-dimensional matrix representation for the translation of the lower-left corner of the clipping window to the lower-left viewport corner is

T = | 1  0  xvmin − xwmin |
    | 0  1  yvmin − ywmin |
    | 0  0  1             |

✓ And the composite matrix representation for the transformation to the normalized viewport is

Mwindow,viewport = T · S = | sx  0   tx |
                           | 0   sy  ty |
                           | 0   0   1  |

with tx and ty as given earlier.
✓ The matrix for the normalization transformation is obtained by substituting −1 for xvmin
and yvmin and substituting +1 for xvmax and yvmax.
✓ Similarly, after the clipping algorithms have been applied, the normalized square with edge
length equal to 2 is transformed into a specified viewport.
✓ This time, we get the transformation matrix by substituting −1 for xwmin and ywmin and
substituting +1 for xwmax and ywmax
✓ Typically, the lower-left corner of the viewport is placed at a coordinate position specified
relative to the lower-left corner of the display window. Figure below demonstrates the
positioning of a viewport within a display window.
2D Point Clipping
✓ For a clipping rectangle in standard position, we save a two-dimensional point P = (x, y) for display if the following inequalities are satisfied:

xwmin ≤ x ≤ xwmax
ywmin ≤ y ≤ ywmax

✓ If any one of these four inequalities is not satisfied, the point is clipped.

2D Line Clipping
✓ A line-clipping algorithm processes each line in a scene through a series of tests and
intersection calculations to determine whether the entire line or any part of it is to be saved.
✓ The expensive part of a line-clipping procedure is in calculating the intersection positions
of a line with the window edges.
✓ Therefore, a major goal for any line-clipping algorithm is to minimize the intersection
calculations.
✓ To do this, we can first perform tests to determine whether a line segment is completely
inside the clipping window or completely outside.
✓ It is easy to determine whether a line is completely inside a clipping window, but it is more
difficult to identify all lines that are entirely outside the window.
✓ One way to formulate the equation for a straight-line segment is to use the following
parametric representation, where the coordinate positions (x0, y0) and (xend, yend) designate
the two line endpoints:
x = x0 + u (xend − x0)
y = y0 + u (yend − y0),    0 ≤ u ≤ 1

Cohen-Sutherland Line Clipping
✓ A possible ordering for the clipping window boundaries corresponding to the bit positions
in the Cohen-Sutherland endpoint region code.
✓ Thus, for this ordering, the rightmost position (bit 1) references the left clipping-window
boundary, and the leftmost position (bit 4) references the top window boundary.
✓ A value of 1 (or true) in any bit position indicates that the endpoint is outside that window
border. Similarly, a value of 0 (or false) in any bit position indicates that the endpoint is not
outside (it is inside or on) the corresponding window edge.
✓ Sometimes, a region code is referred to as an “out” code because a value of 1 in any bit
position indicates that the spatial point is outside the corresponding clipping boundary.
✓ The nine binary region codes for identifying the position of a line endpoint, relative to the
clipping-window boundaries.
✓ Bit values in a region code are determined by comparing the coordinate values (x, y) of an
endpoint to the clipping boundaries.
✓ Bit 1 is set to 1 if x < xwmin (left boundary), bit 2 if x > xwmax (right), bit 3 if y < ywmin (bottom), and bit 4 if y > ywmax (top).
✓ To determine a boundary intersection for a line segment, we can use the slope-intercept form of the line equation.
✓ For a line with endpoint coordinates (x0, y0) and (xend, yend), the y coordinate of the
intersection point with a vertical clipping border line can be obtained with the calculation
y = y0 + m(x − x0)
Where the x value is set to either xwmin or xwmax, and the slope of
the line is calculated as
m = (yend − y0)/(xend − x0).
✓ Similarly, if we are looking for the intersection with a horizontal border, the x coordinate can be calculated as

x = x0 + (y − y0)/m,    with y set either to ywmin or to ywmax.
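To make the region-code tests concrete, here is a minimal C sketch of the Cohen-Sutherland encoding step and the trivial accept/reject tests; the function and constant names are illustrative, not from the original notes.

CODE (illustrative):
/* Bit positions: 1 = left, 2 = right, 3 = below, 4 = above. */
#define CS_INSIDE 0x0
#define CS_LEFT   0x1
#define CS_RIGHT  0x2
#define CS_BOTTOM 0x4
#define CS_TOP    0x8

typedef unsigned char RegionCode;

/* Compute the 4-bit region code of an endpoint against the clip window. */
RegionCode encode (float x, float y,
                   float xwmin, float ywmin, float xwmax, float ywmax)
{
   RegionCode code = CS_INSIDE;
   if (x < xwmin) code |= CS_LEFT;
   else if (x > xwmax) code |= CS_RIGHT;
   if (y < ywmin) code |= CS_BOTTOM;
   else if (y > ywmax) code |= CS_TOP;
   return code;
}

/* Trivial tests on a segment with endpoint codes c0 and c1:
 * - accept (completely inside) if both codes are 0000;
 * - reject (completely outside) if the codes share a 1 bit,
 *   i.e., both endpoints are outside the same window border. */
int triviallyAccepted (RegionCode c0, RegionCode c1) { return (c0 | c1) == 0; }
int triviallyRejected (RegionCode c0, RegionCode c1) { return (c0 & c1) != 0; }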
Polygon Fill-Area Clipping: Sutherland-Hodgman
➔ When we cannot identify a fill area as being completely inside or completely outside the clipping window, we need to locate the polygon intersection positions with the clipping boundaries.
➔ One way to implement convex-polygon clipping is to create a new vertex list at each
clipping boundary, and then pass this new vertex list to the next boundary clipper.
➔ The output of the final clipping stage is the vertex list for the clipped polygon.
The vertices passed from each boundary clipper to the next are selected according to four cases, as follows (see the sketch after this list):
1. If the first input vertex is outside this clipping-window border and the second vertex is inside,
both the intersection point of the polygon edge with the window border and the second vertex are
sent to the next clipper.
2. If both input vertices are inside this clipping-window border, only the second vertex is sent to
the next clipper.
3. If the first vertex is inside this clipping-window border and the second vertex is outside, only
the polygon edge-intersection position with the clipping-window border is sent to the next clipper.
4. If both input vertices are outside this clipping-window border, no vertices are sent to the next
clipper.
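A compact C sketch of these four cases for a single boundary clipper follows; inside and intersect are assumed helper routines whose implementations depend on which window border the clipper handles, and all names here are illustrative.

CODE (illustrative):
typedef struct { float x, y; } Point;

/* Assumed helpers, one pair per window border:
 * inside(p)       -- nonzero if p is on the visible side of this border;
 * intersect(a, b) -- intersection of edge a->b with this border line. */
extern int   inside (Point p);
extern Point intersect (Point p1, Point p2);

/* Clip the polygon edge (v1 -> v2) against one window border, appending
 * the output vertices to out[]; returns the new output count. */
int clipEdge (Point v1, Point v2, Point out[], int n)
{
   if (!inside (v1) && inside (v2)) {        /* case 1: out -> in  */
      out[n++] = intersect (v1, v2);
      out[n++] = v2;
   }
   else if (inside (v1) && inside (v2)) {    /* case 2: in  -> in  */
      out[n++] = v2;
   }
   else if (inside (v1) && !inside (v2)) {   /* case 3: in  -> out */
      out[n++] = intersect (v1, v2);
   }
   /* case 4: out -> out -- no output vertex */
   return n;
}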
Example (figure omitted).
Module 3 3D Geometric Transformations
Three-Dimensional Translation
✓ A position P = (x, y, z) in three-dimensional space is translated to a new location P' = (x', y', z') by adding translation distances tx, ty, and tz to the Cartesian coordinates of P:

x' = x + tx,    y' = y + ty,    z' = z + tz

or, in matrix form, P' = T · P, where T is the 4 × 4 translation matrix

T = | 1  0  0  tx |
    | 0  1  0  ty |
    | 0  0  1  tz |
    | 0  0  0  1  |
CODE:
typedef GLfloat Matrix4x4 [4][4];

/* Construct the 4 x 4 identity matrix. */
void matrix4x4SetIdentity (Matrix4x4 matIdent4x4)
{
   GLint row, col;
   for (row = 0; row < 4; row++)
      for (col = 0; col < 4; col++)
         matIdent4x4 [row][col] = (row == col);
}

void translate3D (GLfloat tx, GLfloat ty, GLfloat tz)
{
   Matrix4x4 matTransl3D;

   /* Initialize translation matrix to identity. */
   matrix4x4SetIdentity (matTransl3D);

   /* Set the translation terms in the fourth column. */
   matTransl3D [0][3] = tx;
   matTransl3D [1][3] = ty;
   matTransl3D [2][3] = tz;
}
Three-Dimensional Rotation
✓ For rotation about the z axis through an angle θ:

x' = x cos θ − y sin θ
y' = x sin θ + y cos θ
z' = z
✓ Transformation equations for rotations about the other two coordinate axes can be obtained with a cyclic permutation of the coordinate parameters x, y, and z:

x → y → z → x

Along the x axis:
y' = y cos θ − z sin θ
z' = y sin θ + z cos θ
x' = x

Along the y axis:
z' = z cos θ − x sin θ
x' = z sin θ + x cos θ
y' = y

✓ To rotate an object about an axis that is parallel to one of the coordinate axes:
1. Translate the object so that the rotation axis coincides with the parallel coordinate axis.
2. Perform the specified rotation about that axis.
3. Translate the object so that the rotation axis is moved back to its original position.
✓ When an object is to be rotated about an axis that is not parallel to one of the coordinate axes, we must perform some additional transformations. We can accomplish the required rotation in five steps:
1. Translate the object so that the rotation axis passes through the coordinate origin.
2. Rotate the object so that the axis of rotation coincides with one of the coordinate axes.
3. Perform the specified rotation about the selected coordinate axis.
4. Apply inverse rotations to bring the rotation axis back to its original orientation.
5. Apply the inverse translation to bring the rotation axis back to its original spatial position.
• An axis of rotation can be defined with two coordinate positions P1 = (x1, y1, z1) and P2 = (x2, y2, z2). The axis vector is V = P2 − P1, and the unit axis vector is

u = V / |V| = (a, b, c)

where the components a, b, and c are the direction cosines for the rotation axis.
• The first step in the rotation sequence is to set up the translation matrix that repositions the rotation axis so that it passes through the coordinate origin. This matrix moves P1 to the origin:

T = | 1  0  0  −x1 |
    | 0  1  0  −y1 |
    | 0  0  1  −z1 |
    | 0  0  0   1  |
• Because rotation calculations involve sine and cosine functions, we can use standard
vector operations to obtain elements of the two rotation matrices.
• A vector dot product can be used to determine the cosine term, and a vector cross product
can be used to calculate the sine term.
• Rotation of u around the x axis into the x z plane is accomplished by rotating u’ (the
projection of u in the y z plane) through angle α onto the z axis.
• If we represent the projection of u in the yz plane as the vector u' = (0, b, c), then the cosine of the rotation angle α can be determined from the dot product of u' and the unit vector uz along the z axis:

cos α = (u' · uz) / (|u'| |uz|) = c / d,    where d = |u'| = sqrt(b² + c²)
• The cross-product of u' and uz gives the sine term:

u' × uz = ux · b,    so that    sin α = b / d

• Having determined the values for cos α and sin α in terms of the components of vector u, we can write the matrix for rotation of this vector about the x axis into the xz plane:

Rx(α) = | 1  0     0     0 |
        | 0  c/d  −b/d   0 |
        | 0  b/d   c/d   0 |
        | 0  0     0     1 |
• Rotation of unit vector u” (vector u after rotation into the x z plane) about the y axis.
Positive rotation angle β aligns u” with vector uz .
• We can determine the cosine of rotation angle β from the dot product of unit vectors u'' = (a, 0, d) and uz. Thus,

cos β = (u'' · uz) / (|u''| |uz|) = d
• From the cross-product u'' × uz, we find that

sin β = −a

so the rotation matrix about the y axis is

Ry(β) = | d  0  −a  0 |
        | 0  1   0  0 |
        | a  0   d  0 |
        | 0  0   0  1 |
• The specified rotation angle θ can now be applied as a rotation about the z axis as follows:

Rz(θ) = | cos θ  −sin θ  0  0 |
        | sin θ   cos θ  0  0 |
        | 0       0      1  0 |
        | 0       0      0  1 |

• The transformation matrix for rotation about an arbitrary axis can then be expressed as the composition of these seven individual transformations:

R(θ) = T⁻¹ · Rx⁻¹(α) · Ry⁻¹(β) · Rz(θ) · Ry(β) · Rx(α) · T
• The composite matrix for any sequence of three-dimensional rotations is of the form

| r11  r12  r13  0 |
| r21  r22  r23  0 |
| r31  r32  r33  0 |
| 0    0    0    1 |

where the upper-left 3 × 3 submatrix is orthogonal.
• Assuming that the rotation axis is not parallel to any coordinate axis, we could form a set of local unit vectors: u'z along the rotation axis u, with u'y and u'x chosen perpendicular to u'z and to each other so that the three vectors form a right-handed local frame.
• If we express the elements of the unit local vectors for the rotation axis as

u'x = (u'x1, u'x2, u'x3)
u'y = (u'y1, u'y2, u'y3)
u'z = (u'z1, u'z2, u'z3)

then the required composite matrix, which is equal to the product Ry(β) · Rx(α), is

R = | u'x1  u'x2  u'x3  0 |
    | u'y1  u'y2  u'y3  0 |
    | u'z1  u'z2  u'z3  0 |
    | 0     0     0     1 |
Quaternion Methods for 3D Rotations
✓ A rotation through angle θ about an axis with unit vector u can be represented with the unit quaternion

q = (s, v),    where s = cos(θ/2) and v = u sin(θ/2)

✓ A point position P to be rotated is represented with the quaternion p = (0, p), where p = (x, y, z). Rotation of the point is then carried out with the quaternion operation

p' = q p q⁻¹,    where q⁻¹ = (s, −v)

✓ The second term in this ordered pair is the rotated point position p', which is evaluated with vector dot and cross-products as

p' = s²p + v(p · v) + 2s(v × p) + v × (v × p)
Three-Dimensional Scaling
✓ The simplest scaling transformation is scaling relative to the coordinate origin, P' = S · P, where scaling parameters sx, sy, and sz are assigned any positive values:

S = | sx  0   0   0 |
    | 0   sy  0   0 |
    | 0   0   sz  0 |
    | 0   0   0   1 |

✓ Explicit expressions for the scaling transformation relative to the origin are

x' = x · sx,    y' = y · sy,    z' = z · sz
✓ Because some graphics packages provide only a routine that scales relative to the
coordinate origin, we can always construct a scaling transformation with respect to any
selected fixed position (xf , yf , zf ) using the following transformation sequence:
1. Translate the fixed point to the origin.
2. Apply the scaling transformation relative to the coordinate origin
3. Translate the fixed point back to its original position.
✓ This sequence of transformations is demonstrated in the code below; the composite fixed-point scaling matrix is

T(xf, yf, zf) · S(sx, sy, sz) · T(−xf, −yf, −zf) = | sx  0   0   (1 − sx) xf |
                                                   | 0   sy  0   (1 − sy) yf |
                                                   | 0   0   sz  (1 − sz) zf |
                                                   | 0   0   0   1           |
CODE:
class wcPt3D
{
   private:
      GLfloat x, y, z;

   public:
      /* Default constructor: initialize position as (0.0, 0.0, 0.0). */
      wcPt3D ( ) {
         x = y = z = 0.0;
      }
      void setCoords (GLfloat xCoord, GLfloat yCoord, GLfloat zCoord) {
         x = xCoord;
         y = yCoord;
         z = zCoord;
      }
      GLfloat getx ( ) const {
         return x;
      }
      GLfloat gety ( ) const {
         return y;
      }
      GLfloat getz ( ) const {
         return z;
      }
};
typedef GLfloat Matrix4x4 [4][4];

void scale3D (GLfloat sx, GLfloat sy, GLfloat sz, wcPt3D fixedPt)
{
   Matrix4x4 matScale3D;

   /* Initialize scaling matrix to identity, then set the elements of the
      composite fixed-point scaling matrix. */
   matrix4x4SetIdentity (matScale3D);
   matScale3D [0][0] = sx;
   matScale3D [0][3] = (1 - sx) * fixedPt.getx ( );
   matScale3D [1][1] = sy;
   matScale3D [1][3] = (1 - sy) * fixedPt.gety ( );
   matScale3D [2][2] = sz;
   matScale3D [2][3] = (1 - sz) * fixedPt.getz ( );
}
Three-Dimensional Shears
➔ These transformations can be used to modify object shapes.
➔ In three dimensions, we can also generate shears relative to the z axis.
➔ A general z-axis shearing transformation relative to a selected reference position zref is produced with the following matrix:

Mzshear = | 1  0  shzx  −shzx · zref |
          | 0  1  shzy  −shzy · zref |
          | 0  0  1      0           |
          | 0  0  0      1           |

so that x' = x + shzx (z − zref), y' = y + shzy (z − zref), and z' = z.

A unit cube (a) is sheared relative to the origin (b) by this matrix with zref = 0 and shzx = shzy = 1.
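As a small illustration, the following C sketch (the names are illustrative) applies this z-axis shear to a single point:

CODE (illustrative):
#include <stdio.h>

/* Apply a z-axis shear with parameters shzx, shzy and reference position
 * zref to the point (x, y, z) in place. */
void shearZ (float shzx, float shzy, float zref, float *x, float *y, float z)
{
   *x += shzx * (z - zref);
   *y += shzy * (z - zref);
   /* z is unchanged by a z-axis shear. */
}

int main (void)
{
   float x = 0.0f, y = 0.0f;
   shearZ (1.0f, 1.0f, 0.0f, &x, &y, 1.0f);  /* shzx = shzy = 1, zref = 0 */
   printf ("sheared point: (%f, %f, 1.0)\n", x, y);  /* (1, 1, 1) */
   return 0;
}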
❖ Affine transformations (in two dimensions, three dimensions, or higher dimensions) have
the general properties that parallel lines are transformed into parallel lines, and finite points
map to finite points.
❖ Translation, rotation, scaling, reflection, and shear are examples of affine transformations.
❖ Another example of an affine transformation is the conversion of coordinate descriptions for a scene from one reference system to another, because this transformation can be described as a combination of translation and rotation.
OpenGL Matrix Stacks
We have two functions available in OpenGL for processing the matrices in a stack:

glPushMatrix ( );
which copies the current matrix at the top of the active stack and stores that copy in the second stack position;

glPopMatrix ( );
which destroys the matrix at the top of the stack, so that the second matrix in the stack becomes the current matrix.
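A typical usage pattern (illustrative; the drawing calls are placeholders) saves the current modelview matrix, applies a transformation for one object, and restores the saved matrix so that the transformation does not affect subsequent objects:

CODE (illustrative):
glMatrixMode (GL_MODELVIEW);

glPushMatrix ( );                    /* save the current matrix        */
glTranslatef (1.0, 0.0, 0.0);        /* transformation for this object */
/* ... issue drawing calls for the translated object here ... */
glPopMatrix ( );                     /* restore the saved matrix       */
/* Objects drawn after this point are unaffected by the translation. */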
Module 3 Illumination and color model
❖ We specify a point source for a scene by giving its position and the color of the emitted light. Light rays are generated along radially diverging paths from the single-color source position.
❖ This light-source model is a reasonable approximation for sources whose dimensions are
small compared to the size of objects in the scene
❖ We can simulate an infinitely distant light source by assigning it a color value and a fixed
direction for the light rays emanating from the source.
❖ The vector for the emission direction and the light-source color are needed in the
illumination calculations, but not the position of the source.
Radial Intensity Attenuation
➢ As radiant energy from a light source travels outward, its amplitude is attenuated by the factor 1/dl², where dl is the distance the light has traveled: a surface close to the light source receives a higher incident light intensity from that source than a more distant surface.
➢ However, using an attenuation factor of 1/dl² with a point source does not always produce realistic pictures.
➢ The factor 1/dl² tends to produce too much intensity variation for objects that are close to the light source, and very little variation for objects that are farther away.
➢ Graphics packages therefore attenuate intensities with an inverse quadratic function of dl that includes adjustable coefficients. We can express this intensity-attenuation function as

fradatten(dl) = 1 / (a0 + a1 dl + a2 dl²)

➢ The numerical values for the coefficients a0, a1, and a2 can then be adjusted to produce optimal attenuation effects.
➢ We cannot apply this intensity-attenuation calculation to a point source at "infinity," because the distance to the light source is indeterminate.
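A direct C rendering of this attenuation function (the coefficient values in the comment are illustrative):

CODE (illustrative):
/* Radial intensity attenuation: f(d) = 1 / (a0 + a1*d + a2*d*d). */
float radialAttenuation (float d, float a0, float a1, float a2)
{
   return 1.0f / (a0 + a1 * d + a2 * d * d);
}

/* Example: with a0 = 1, a1 = 0.1, a2 = 0.01, a surface at distance 10
 * receives radialAttenuation(10, 1, 0.1, 0.01) = 1/3 of the source
 * intensity. */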
Directional Light Sources and Spotlight Effects
➔ A directional light source (spotlight) is confined to a cone of light with axis direction Vlight and angular extent θl. If Vobj is the unit vector from the light position toward an object, then

Vobj · Vlight = cos α

where angle α is the angular distance of the object from the light direction vector.
➔ If we restrict the angular extent of any light cone so that 0° < θl ≤ 90°, then the object is within the spotlight if cos α ≥ cos θl.
➔ If Vobj · Vlight < cos θl, however, the object is outside the light cone.
Angular Intensity Attenuation
• For a directional light source, we can also attenuate the light intensity angularly about the source, often with a function of the form cos^al φ, where the attenuation exponent al is assigned some positive value and angle φ is measured from the cone axis.
• The greater the value for the attenuation exponent al, the smaller the value of the angular intensity-attenuation function for a given value of angle φ > 0°.
• There are several special cases to consider in the implementation of the angular-attenuation function: there is no angular attenuation if the light source is not directional (not a spotlight), and an object outside the light cone receives no light from that source.
• We can express the general equation for angular attenuation as

fangatten = 1.0,                  if the source is not a spotlight
          = 0.0,                  if Vobj · Vlight = cos α < cos θl
          = (Vobj · Vlight)^al,   otherwise
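The three cases translate directly into C (the vector type and names are illustrative; both vectors are assumed to be unit length):

CODE (illustrative):
#include <math.h>

typedef struct { float x, y, z; } Vec3;

static float dot3 (Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

/* Angular attenuation for a spotlight with cone half-angle thetaL (radians)
 * and exponent al.  vObj: unit vector from source to object;
 * vLight: unit cone-axis direction.  isSpot = 0 for a point source. */
float angularAttenuation (Vec3 vObj, Vec3 vLight, float thetaL,
                          float al, int isSpot)
{
   float cosAlpha;
   if (!isSpot)
      return 1.0f;                 /* no angular attenuation          */
   cosAlpha = dot3 (vObj, vLight);
   if (cosAlpha < cosf (thetaL))
      return 0.0f;                 /* object outside the light cone   */
   return powf (cosAlpha, al);     /* attenuate with cos^al           */
}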
✓ To simulate a light source that covers a large surface area, one way is to model the light surface as a grid of directional point emitters.
✓ We can set the direction for the point sources so that objects behind the light-emitting
surface are not illuminated.
✓ We could also include other controls to restrict the direction of the emitted light near the
edges of the source
✓ The Warn model provides a method for producing studio lighting effects using sets of
point emitters with various parameters to simulate the barn doors, flaps, and spotlighting
controls employed by photographers.
✓ Spotlighting is achieved with the cone of light discussed earlier, and the flaps and barn
doors provide additional directional control
Ambient Light
➢ This produces a uniform ambient lighting that is the same for all objects, and it
approximates the global diffuse reflections from the various illuminated surfaces.
➢ Reflections produced by ambient-light illumination are simply a form of diffuse reflection,
and they are independent of the viewing direction and the spatial orientation of a surface.
➢ However, the amount of the incident ambient light that is reflected depends on surface
optical properties, which determine how much of the incident energy is reflected and how
much is absorbed
Diffuse Reflection
➢ The incident light on the surface is scattered with equal intensity in all directions,
independent of the viewing position.
➢ Such surfaces are called ideal diffuse reflectors. They are also referred to as Lambertian reflectors, because the reflected radiant light energy from any point on the surface is calculated with Lambert's cosine law.
➢ This law states that the amount of radiant energy coming from any small surface area dA in a direction φN relative to the surface normal is proportional to cos φN.
➢ The intensity of light in this direction can be computed as the radiant energy per unit time divided by the projection of the surface area in the radiation direction:

intensity = (radiant energy per unit time) / (dA cos φN)
➢ The below figure illustrates this effect, showing a beam of light rays incident on two equal-
area plane surface elements with different spatial orientations relative to the illumination
direction from a distant source
A surface that is perpendicular to the direction of the incident light (a) is more illuminated than
an equal-sized surface at an oblique angle (b) to the incoming light direction.
➢ At any surface position, we can denote the unit normal vector as N and the unit direction vector to a point source as L, so that the angle of incidence θ between the incoming light direction and the surface normal satisfies cos θ = N · L.
➢ We can model the amount of incident light on a surface from a source with intensity Il as

Il,incident = Il cos θ = Il (N · L)

➢ The diffuse reflection equation for single point-source illumination at a surface position can then be expressed in the form

Il,diff = kd Il (N · L),    if N · L > 0
        = 0.0,              if N · L ≤ 0

➢ The unit direction vector L to a nearby point light source is calculated using the surface position Psurf and the light-source position Psource:

L = (Psource − Psurf) / |Psource − Psurf|

➢ Combining ambient and point-source diffuse reflections, the total diffuse intensity is

Idiff = ka Ia + kd Il (N · L),    if N · L > 0
      = ka Ia,                    if N · L ≤ 0

where both ka and kd depend on surface material properties and are assigned values in the range from 0 to 1.0 for monochromatic lighting effects.
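A minimal C sketch of the ambient-plus-diffuse calculation at one surface point (the names and the normalize helper are illustrative):

CODE (illustrative):
#include <math.h>

typedef struct { float x, y, z; } Vec3;

static float dot3 (Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

static Vec3 normalize3 (Vec3 v)
{
   float len = sqrtf (dot3 (v, v));
   Vec3 u = { v.x / len, v.y / len, v.z / len };
   return u;
}

/* Ambient + diffuse intensity at a surface point.
 * n: unit surface normal; pSurf: surface position; pSource: light position;
 * Ia, Il: ambient and point-source intensities; ka, kd in [0, 1]. */
float diffuseIntensity (Vec3 n, Vec3 pSurf, Vec3 pSource,
                        float Ia, float Il, float ka, float kd)
{
   Vec3 toLight = { pSource.x - pSurf.x,
                    pSource.y - pSurf.y,
                    pSource.z - pSurf.z };
   Vec3 L = normalize3 (toLight);       /* unit vector toward the source */
   float nDotL = dot3 (n, L);

   if (nDotL > 0.0f)
      return ka * Ia + kd * Il * nDotL; /* lit: ambient + diffuse        */
   return ka * Ia;                      /* facing away: ambient only     */
}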
Specular Reflection
The specular-reflection angle equals the angle of the incident light, with the two angles measured on opposite sides of the unit normal surface vector N. In this geometry:
1. N is the unit normal surface vector,
2. R is the unit vector in the direction of ideal specular reflection,
3. L is the unit vector directed toward the point light source, and
4. V is the unit vector pointing to the viewer from the selected surface position.
✓ Plots of cos^ns φ using five different values for the specular exponent ns (figure).
✓ Using the spectral-reflection function W(θ), we can write the Phong specular-reflection model as

Il,spec = W(θ) Il cos^ns φ

where Il is the intensity of the light source, and φ is the viewing angle relative to the specular-reflection direction R.
✓ Because V and R are unit vectors in the viewing and specular-reflection directions, we can calculate the value of cos φ with the dot product V · R.
✓ In addition, no specular effects are generated for the display of a surface if V and L are on the same side of the normal vector N, or if the light source is behind the surface.
✓ Setting W(θ) to a constant specular-reflection coefficient ks, we can determine the intensity of the specular reflection due to a point light source at a surface position with the calculation

Il,spec = ks Il (V · R)^ns,    if V · R > 0 and N · L > 0
        = 0.0,                 otherwise
✓ The direction for R, the reflection vector, can be computed from the directions for vectors
L and N.
✓ The projection of L onto the direction of the normal vector has a magnitude equal to the dot product N · L, which is also equal to the magnitude of the projection of unit vector R onto the direction of N.
✓ Therefore, from this diagram, we see that

R + L = (2 N · L) N

and the specular-reflection vector is obtained as

R = (2 N · L) N − L
✓ A somewhat simplified Phong model is obtained using the halfway vector H between L and V to calculate the range of specular reflections:

H = (L + V) / |L + V|
✓ If we replace V · R in the Phong model with the dot product N · H, this simply replaces the empirical cos φ calculation with the empirical cos α calculation, where α is the angle between N and H.
✓ For nonplanar surfaces, N · H requires less computation than V · R, because the calculation of R at each surface point involves the variable vector N.
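A minimal C sketch of the specular term using the halfway vector (all input vectors are assumed to be unit length; the names are illustrative):

CODE (illustrative):
#include <math.h>

typedef struct { float x, y, z; } Vec3;

static float dot3 (Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

/* Specular intensity with the halfway-vector form of the Phong model.
 * n: unit normal; l: unit vector to the light; v: unit vector to the viewer;
 * ks: specular coefficient; ns: specular exponent; Il: source intensity. */
float specularIntensity (Vec3 n, Vec3 l, Vec3 v,
                         float ks, float ns, float Il)
{
   /* Halfway vector H = (L + V) / |L + V|. */
   Vec3 h = { l.x + v.x, l.y + v.y, l.z + v.z };
   float len = sqrtf (dot3 (h, h));
   float nDotH;
   h.x /= len;  h.y /= len;  h.z /= len;

   nDotH = dot3 (n, h);
   if (nDotH <= 0.0f || dot3 (n, l) <= 0.0f)
      return 0.0f;                 /* light behind surface, or no highlight */
   return ks * Il * powf (nDotH, ns);
}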
Texture and Specular Color (OpenGL)
➔ When texturing is applied, texture patterns are combined only with the nonspecular color, and then the two colors are combined.
➔ We select this two-color option with

glLightModeli (GL_LIGHT_MODEL_COLOR_CONTROL, GL_SEPARATE_SPECULAR_COLOR);
Color Models
3.3.5 Properties of Light
✓ We can characterize light as radiant energy, but we also need other concepts to describe
our perception of light.
✓ Each frequency value within the visible region of the electromagnetic spectrum
corresponds to a distinct spectral color.
✓ At the low-frequency end (approximately 3.8 × 10^14 hertz) are the red colors, and at the high-frequency end (approximately 7.9 × 10^14 hertz) are the violet colors.
✓ In the wave model of electromagnetic radiation, light can be described as oscillating
transverse electric and magnetic fields propagating through space.
✓ The electric and magnetic fields are oscillating in directions that are perpendicular to each
other and to the direction of propagation.
✓ For one spectral color (a monochromatic wave), the wavelength and frequency are
inversely proportional to each other, with the proportionality constant as the speed of light
(c):
c = λf
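For example, red light with wavelength λ ≈ 700 nm has frequency f = c/λ ≈ (3 × 10^8 m/s) / (7 × 10^−7 m) ≈ 4.3 × 10^14 hertz, consistent with the low-frequency end of the visible range quoted above.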
✓ A light source such as the sun or a standard household light bulb emits all frequencies
within the visible range to produce white light.
✓ When white light is incident upon an opaque object, some frequencies are reflected and
some are absorbed.
✓ If low frequencies are predominant in the reflected light, the object is described as red. In
this case, we say that the perceived light has a dominant frequency (or dominant
wavelength) at the red end of the spectrum.
✓ The dominant frequency is also called the hue, or simply the color, of the light.
➢ The figure below shows the energy distribution for a light source with a dominant frequency near the red end of the frequency range.
Primary Colors
❖ The hues that we choose for the sources are called the primary colors, and the color
gamut for the model is the set of all colors that we can produce from the primary colors.
❖ Two primaries that produce white are referred to as complementary colors.
❖ Examples of complementary color pairs are red and cyan, green and magenta, and blue
and yellow
The RGB Color Model
❖ The origin of the RGB color cube represents black, and the diagonally opposite vertex, with coordinates (1, 1, 1), is white. The RGB color scheme is an additive model.
❖ Each color point within the unit cube can be represented as a weighted vector sum of the primary colors, using unit vectors R, G, and B:

C(λ) = (R, G, B) = R R + G G + B B

where parameters R, G, and B are assigned values in the range from 0 to 1.0.
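❖ For example, (1, 1, 0) adds the red and green primaries at full strength to produce yellow, and (0.5, 0.5, 0.5) is a medium gray.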
❖ Chromaticity coordinates for the National Television System Committee (NTSC) standard RGB phosphors are listed below:

Color   x       y
R       0.670   0.330
G       0.210   0.710
B       0.140   0.080
❖ The figure below shows the approximate color gamut for the NTSC standard RGB primaries.
The CMY and CMYK Color Models
❖ In the CMY model, the spatial position (1, 1, 1) represents black, because all components of the incident light are subtracted.
❖ The origin represents white light.
❖ Equal amounts of each of the primary colors produce shades of gray along the main
diagonal of the cube.
❖ A combination of cyan and magenta ink produces blue light, because the red and green
components of the incident light are absorbed.
❖ Similarly, a combination of cyan and yellow ink produces green light, and a combination
of magenta and yellow ink yields red light.
❖ The CMY printing process often uses a collection of four ink dots, which are arranged in
a close pattern somewhat as an RGB monitor uses three phosphor dots.
❖ Thus, in practice, the CMY color model is referred to as the CMYK model, where K is
the black color parameter.
❖ One ink dot is used for each of the primary colors (cyan, magenta, and yellow), and one ink dot is black.
❖ We convert from an RGB representation to a CMY representation using the matrix transformation

| C |   | 1 |   | R |
| M | = | 1 | − | G |
| Y |   | 1 |   | B |

where the white point in RGB space is represented as the unit column vector.
❖ And we convert from a CMY color representation to an RGB representation using the matrix transformation

| R |   | 1 |   | C |
| G | = | 1 | − | M |
| B |   | 1 |   | Y |

where white in CMY space is represented as the unit column vector.
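These two conversions are one subtraction per component in C (the function names are illustrative):

CODE (illustrative):
/* RGB -> CMY: subtract each component from 1 (white). */
void rgbToCmy (float r, float g, float b, float *c, float *m, float *y)
{
   *c = 1.0f - r;
   *m = 1.0f - g;
   *y = 1.0f - b;
}

/* CMY -> RGB: the inverse is the same subtraction. */
void cmyToRgb (float c, float m, float y, float *r, float *g, float *b)
{
   *r = 1.0f - c;
   *g = 1.0f - m;
   *b = 1.0f - y;
}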