Unit 3 - Computer Graphics & Multimedia - WWW - Rgpvnotes.in
Subject Name: Computer Graphics & Multimedia
Subject Code: IT-601
Semester: 6th
Downloaded from www.rgpvnotes.in
Syllabus: 2D & 3D co-ordinate systems, Translation, Rotation, Scaling, Reflection, Inverse transformation, Composite
transformation, world coordinate system, screen coordinate system, parallel and perspective projection, representation
of 3D objects on a 2D screen, Point Clipping, Line Clipping algorithms, Polygon Clipping algorithms, introduction to Hidden
Surface elimination, basic illumination model, diffuse reflection, specular reflection, Phong shading, Gouraud shading,
ray tracing, color models like RGB, YIQ, CMY, HSV.
Unit-III
2D & 3D Co-ordinate system:
A two-dimensional Cartesian coordinate system is formed by two mutually perpendicular axes that intersect at a point
called the origin. In the right-handed system, one axis (the x-axis) is directed to the right and the other (the y-axis) is
directed vertically upwards. The coordinates of any point on the xy-plane are determined by two real numbers x and y,
which are the orthogonal projections of the point onto the respective axes. The x-coordinate of the point is called its
abscissa, and the y-coordinate is called its ordinate.
A three-dimensional Cartesian coordinate system is formed by a point called the origin and a basis consisting of three
mutually perpendicular vectors. These vectors define the three coordinate axes: the x−, y−, and z−axis. They are also
known as the abscissa, ordinate and applicate axis, respectively. The coordinates of any point in space are determined
by three real numbers: x, y, z.
2-D Transformation
In many applications, changes in orientations, size, and shape are accomplished with geometric transformations that
alter the coordinate descriptions of objects.
Basic geometric transformations are:
Translation, Rotation, Scaling
Other transformations: Reflection, Shear
Translation:
We translate a 2D point by adding translation distances, tx and ty, to the original coordinate position (x,y):
x ′ = x + tx
y′ = y + ty
Alternatively, translation can also be specified by the following transformation matrix:
[1  0  tx]
[0  1  ty]
[0  0  1 ]
Then we can rewrite the formula as:
[x′]   [1  0  tx] [x]
[y′] = [0  1  ty] [y]
[1 ]   [0  0  1 ] [1]
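As an illustrative sketch (plain Python, with a point represented as an (x, y) tuple; the function name is ours, not from the notes), the homogeneous matrix form above can be applied like this:

```python
def translate(point, tx, ty):
    """Translate a 2D point by (tx, ty) using the homogeneous 3x3 matrix."""
    x, y = point
    T = [[1, 0, tx],   # the translation matrix shown above
         [0, 1, ty],
         [0, 0, 1]]
    v = [x, y, 1]      # homogeneous column vector [x, y, 1]
    xp, yp, w = [sum(T[r][c] * v[c] for c in range(3)) for r in range(3)]
    return (xp, yp)

print(translate((2, 3), 5, -1))  # -> (7, 2)
```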
Scaling:
We scale a 2D point by multiplying scaling factor, Sx and Sy, to the original coordinate position (x,y):
x ′ = xSx
y′ = ySy
Alternatively, scaling can also be specified by the following transformation matrix:
[Sx  0  0]
[0  Sy  0]
[0   0  1]
Then we can rewrite the formula as:
[x′]   [Sx  0  0] [x]
[y′] = [0  Sy  0] [y]
[1 ]   [0   0  1] [1]
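A minimal sketch of scaling about the origin (the function name is illustrative):

```python
def scale(point, sx, sy):
    """Scale a 2D point about the origin by factors sx and sy."""
    x, y = point
    return (x * sx, y * sy)  # equivalent to multiplying by the scaling matrix

print(scale((2, 3), 2, 0.5))  # -> (4, 1.5)
```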
Rotation:
To rotate an object about the origin (0,0), we specify the rotation angle. Positive and negative values for the rotation
angle define counter clockwise and clockwise rotations respectively. The following is the computation of this rotation for
a point:
x ′ = x cos θ − y sin θ
y ′ = x sin θ + y cos θ
Alternatively, this rotation can also be specified by the following transformation matrix:
[cos θ  −sin θ  0]
[sin θ   cos θ  0]
[0       0      1]
Then we can rewrite the formula as:
[x′]   [cos θ  −sin θ  0] [x]
[y′] = [sin θ   cos θ  0] [y]
[1 ]   [0       0      1] [1]
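The rotation equations can be sketched directly in plain Python (angle in radians, counter-clockwise positive, as in the formulas above):

```python
import math

def rotate(point, theta):
    """Rotate a 2D point about the origin; theta in radians, CCW positive."""
    x, y = point
    c, s = math.cos(theta), math.sin(theta)
    return (x * c - y * s, x * s + y * c)

# Rotating (1, 0) by 90 degrees CCW should land (up to rounding) on (0, 1).
x, y = rotate((1, 0), math.pi / 2)
print(abs(x) < 1e-9, abs(y - 1) < 1e-9)  # -> True True
```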
Reflection:
Reflection about the x axis:
x′ = x
y′ = −y
[x′]   [1   0  0] [x]
[y′] = [0  −1  0] [y]
[1 ]   [0   0  1] [1]
Reflection about the y axis:
x′ = −x
y′ = y
[x′]   [−1  0  0] [x]
[y′] = [ 0  1  0] [y]
[1 ]   [ 0  0  1] [1]
Figure 3.2 Reflection about the y axis
Reflection about the origin:
x′ = −x
y′ = −y
[x′]   [−1   0  0] [x]
[y′] = [ 0  −1  0] [y]
[1 ]   [ 0   0  1] [1]
Figure 3.3 Reflection about the origin
Reflection about the diagonal line y = x:
x′ = y
y′ = x
[x′]   [0  1  0] [x]
[y′] = [1  0  0] [y]
[1 ]   [0  0  1] [1]
Reflection about the diagonal line y = −x:
x′ = −y
y′ = −x
[x′]   [ 0  −1  0] [x]
[y′] = [−1   0  0] [y]
[1 ]   [ 0   0  1] [1]
Figure 3.5 Reflection about the line y = −x
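These reflections reduce to sign flips and coordinate swaps; a minimal sketch in plain Python (function names are illustrative):

```python
def reflect_x(p):       return (p[0], -p[1])   # about the x axis
def reflect_y(p):       return (-p[0], p[1])   # about the y axis
def reflect_origin(p):  return (-p[0], -p[1])  # about the origin
def reflect_yx(p):      return (p[1], p[0])    # about the line y = x
def reflect_neg_yx(p):  return (-p[1], -p[0])  # about the line y = -x

print(reflect_x((2, 3)), reflect_yx((2, 3)))  # -> (2, -3) (3, 2)
```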
Shearing:
X-direction shear, with a shearing parameter shx, relative to the x-axis:
x′ = x + shx · y
y′ = y
[x′]   [1  shx  0] [x]
[y′] = [0   1   0] [y]
[1 ]   [0   0   1] [1]
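A quick sketch of the x-direction shear (the function name is ours):

```python
def shear_x(point, shx):
    """X-direction shear relative to the x axis: x' = x + shx*y, y' = y."""
    x, y = point
    return (x + shx * y, y)

print(shear_x((1, 2), 0.5))  # -> (2.0, 2)
```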
Composite Transformations:
Two successive scalings with respect to the origin are multiplicative: scaling by (sx1, sy1) followed by (sx2, sy2) is equivalent to a single scaling by (sx1·sx2, sy1·sy2).
Rotations
If we rotate a shape with two successive rotation angles, θ1 and θ2, about the origin, it is equal to rotating the shape once by the angle θ1 + θ2 about the origin.
This additive property can be demonstrated by multiplying the composite transformation matrices: R(θ2)·R(θ1) = R(θ1 + θ2).
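The additive property of rotations can be checked numerically; a small sketch multiplying two 2x2 rotation matrices (angles chosen arbitrarily):

```python
import math

def rot(theta):
    """2x2 rotation matrix for angle theta (radians)."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s], [s, c]]

def matmul(A, B):
    """Multiply two 2x2 matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

t1, t2 = 0.3, 0.5
composite = matmul(rot(t2), rot(t1))   # rotate by t1, then by t2
single = rot(t1 + t2)                  # one rotation by t1 + t2
print(all(abs(composite[i][j] - single[i][j]) < 1e-12
          for i in range(2) for j in range(2)))  # -> True
```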
Figure 3.15 Front View, Top View, Side View and Isometric Projection
Oblique projections:
Oblique projections are obtained by projecting along parallel lines that are not perpendicular to the projection plane.
Rotation
Rotation in three dimensions is considerably more complex than rotation in two dimensions. In 2-D, a rotation is
prescribed by an angle of rotation θ and a center of rotation, say P. In three dimensions, rotations require the
prescription of an angle of rotation and an axis of rotation. The canonical rotations are defined when one of the
positive x, y or z coordinate axes is chosen as the axis of rotation. Then the construction of the rotation transformation
proceeds just like that of a rotation in two dimensions about the origin. The corresponding matrix transformations are:
Rotation about the z axis:
[cos θ  −sin θ  0  0]
[sin θ   cos θ  0  0]
[0       0      1  0]
[0       0      0  1]
Rotation about the y axis:
[ cos θ  0  sin θ  0]
[ 0      1  0      0]
[−sin θ  0  cos θ  0]
[ 0      0  0      1]
Rotation about the x axis:
[1  0       0       0]
[0  cos θ  −sin θ   0]
[0  sin θ   cos θ   0]
[0  0       0       1]
Note that the direction of a positive angle of rotation is chosen in accordance with the right-hand rule with respect to
the axis of rotation.
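As a sketch, rotation about the z axis leaves z unchanged and applies the 2D rotation to (x, y) (the function name is illustrative):

```python
import math

def rotate_z(point, theta):
    """Rotate a 3D point about the z axis (right-hand rule, theta in radians)."""
    x, y, z = point
    c, s = math.cos(theta), math.sin(theta)
    return (x * c - y * s, x * s + y * c, z)

# Rotating (1, 0, 5) by 180 degrees about z gives approximately (-1, 0, 5).
x, y, z = rotate_z((1, 0, 5), math.pi)
print(abs(x + 1) < 1e-9, abs(y) < 1e-9, z)  # -> True True 5
```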
Point Clipping
Point clipping is essentially the evaluation of the following inequalities:
xmin ≤ x ≤ xmax and ymin ≤ y ≤ ymax
where xmin, xmax, ymin and ymax define the clipping window. A point (x, y) is considered inside the window when all
the inequalities evaluate to true.
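The point-clipping test is a direct transcription of the two inequalities:

```python
def clip_point(x, y, xmin, ymin, xmax, ymax):
    """True if (x, y) satisfies xmin <= x <= xmax and ymin <= y <= ymax."""
    return xmin <= x <= xmax and ymin <= y <= ymax

print(clip_point(5, 5, 0, 0, 10, 10))   # -> True  (inside the window)
print(clip_point(15, 5, 0, 0, 10, 10))  # -> False (right of the window)
```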
Parametric Line Clipping
1. Set tmin = 0 and tmax = 1.
2. Calculate the values of tL, tR, tT, and tB (the t-values at which the line crosses the left, right, top and bottom window edges).
o if a t-value is less than 0 or greater than 1, ignore it and go to the next edge
o otherwise classify the t-value as an entering or exiting value (using the inner product to classify)
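The steps above follow the parametric approach; a sketch in the style of the Liang-Barsky algorithm, where each window edge yields a t-value that is classified as entering (p < 0) or exiting (p > 0) and used to tighten tmin/tmax (variable names are ours):

```python
def liang_barsky(x0, y0, x1, y1, xmin, ymin, xmax, ymax):
    """Clip a segment to a window; returns clipped endpoints or None."""
    dx, dy = x1 - x0, y1 - y0
    t_min, t_max = 0.0, 1.0
    # (p, q) pairs for the left, right, bottom and top window edges
    for p, q in ((-dx, x0 - xmin), (dx, xmax - x0),
                 (-dy, y0 - ymin), (dy, ymax - y0)):
        if p == 0:
            if q < 0:
                return None          # parallel to this edge and outside it
            continue
        t = q / p
        if p < 0:                    # entering value: raise t_min
            if t > t_max:
                return None
            t_min = max(t_min, t)
        else:                        # exiting value: lower t_max
            if t < t_min:
                return None
            t_max = min(t_max, t)
    return (x0 + t_min * dx, y0 + t_min * dy,
            x0 + t_max * dx, y0 + t_max * dy)

# A horizontal segment crossing a 10x10 window is trimmed to the window:
print(liang_barsky(-5, 5, 15, 5, 0, 0, 10, 10))  # -> (0.0, 5.0, 10.0, 5.0)
```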
Polygon Clipping
An algorithm that clips a polygon must deal with many different cases. One case is particularly noteworthy: a concave
polygon may be clipped into two separate polygons. All in all, the task of clipping seems rather complex. Each edge of
the polygon must be tested against each edge of the clip rectangle; new edges must be added, and existing edges must
be discarded, retained, or divided. Multiple polygons may result from clipping a single polygon. We need an organized
way to deal with all these cases.
Sutherland-Hodgman Polygon Clipping
The polygon is clipped against each window boundary in turn: the vertex list output by one clipping stage becomes the input to the next, and at each stage vertices and edge-boundary intersection points are kept or discarded according to whether they lie inside the current boundary.
Weiler-Atherton Algorithm
• General clipping algorithm for concave polygons with holes
• Produces multiple polygons (with holes)
• Make linked list data structure
• Traverse to make new polygons
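A compact sketch of the Sutherland-Hodgman approach, clipping against the four window boundaries in succession (helper names are illustrative; degenerate polygons are not handled):

```python
def clip_polygon(subject, xmin, ymin, xmax, ymax):
    """Sutherland-Hodgman: clip a polygon (list of (x, y)) to a rectangle."""

    def clip_edge(points, inside, intersect):
        out = []
        for i, cur in enumerate(points):
            prev = points[i - 1]                 # previous vertex (wraps around)
            if inside(cur):
                if not inside(prev):
                    out.append(intersect(prev, cur))  # entering the boundary
                out.append(cur)
            elif inside(prev):
                out.append(intersect(prev, cur))      # leaving the boundary
        return out

    def x_cross(p, q, x):   # intersection with a vertical boundary x = const
        t = (x - p[0]) / (q[0] - p[0])
        return (x, p[1] + t * (q[1] - p[1]))

    def y_cross(p, q, y):   # intersection with a horizontal boundary y = const
        t = (y - p[1]) / (q[1] - p[1])
        return (p[0] + t * (q[0] - p[0]), y)

    pts = subject
    pts = clip_edge(pts, lambda p: p[0] >= xmin, lambda a, b: x_cross(a, b, xmin))
    pts = clip_edge(pts, lambda p: p[0] <= xmax, lambda a, b: x_cross(a, b, xmax))
    pts = clip_edge(pts, lambda p: p[1] >= ymin, lambda a, b: y_cross(a, b, ymin))
    pts = clip_edge(pts, lambda p: p[1] <= ymax, lambda a, b: y_cross(a, b, ymax))
    return pts

# A triangle poking out of the right side of a 10x10 window:
print(clip_polygon([(5, 2), (15, 5), (5, 8)], 0, 0, 10, 10))
```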
Back-Face Detection
In a solid object, there are surfaces which are facing the viewer (front faces) and there are surfaces which are opposite
to the viewer (back faces). These back faces contribute to approximately half of the total number of surfaces. Since we
cannot see these surfaces anyway, to save processing time, we can remove them before the clipping process with a
simple test. Each surface has a normal vector. If this vector is pointing in the direction of the centre of projection, it is a
front face and can be seen by the viewer. If it is pointing away from the centre of projection, it is a back face and cannot
be seen by the viewer. The test is very simple: if the z component of the normal vector is positive, it is a back face;
if the z component is negative, it is a front face. Note that this technique only caters well for non-overlapping convex
polyhedra. For other cases, where there are concave polyhedra or overlapping objects, we still need to apply other
methods to determine whether the obscured faces are partially or completely hidden by other objects (e.g. using the
Depth-Buffer Method or the Depth-Sort Method).
Back-Face Detection (object-space test)
A fast and simple object-space method for identifying the back faces of a polyhedron is based on "inside-outside"
tests. A point (x, y, z) is "inside" a polygon surface with plane parameters A, B, C, and D if Ax + By + Cz + D < 0.
When an inside point is along the line of sight to the surface, the polygon must be a back face (we are inside that face
and cannot see the front of it from our viewing position).
We can simplify this test by considering the normal vector N to a polygon surface, which has Cartesian components (A,
B, C).
In general, if V is a vector in the viewing direction from the eye (or "camera") position, then this polygon is a back
face if
V · N > 0
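The dot-product test is one line of code; a sketch (the viewing direction here is an assumed example, along +z):

```python
def is_back_face(normal, view_dir):
    """Back-face test: V . N > 0 means the face points away from the viewer."""
    return sum(v * n for v, n in zip(view_dir, normal)) > 0

V = (0, 0, 1)                        # assumed viewing direction, along +z
print(is_back_face((0, 0, 1), V))    # -> True  (normal points away: back face)
print(is_back_face((0, 0, -1), V))   # -> False (normal faces the viewer)
```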
Figure 3.41 The direction of light is measured from the surface normal
Then the intensity of the diffuse reflection from a point light source is
I = Ld kd cos θ
where Ld is the intensity of the (point) light source, kd is the diffuse reflection coefficient of the object's material, and θ
is the angle between the normal to the surface and the light source direction vector.
If the normal vector to the surface N and the light source direction vector L are both normalized, then the above
equation can be simplified to
I = Ld kd (N · L)
If a light source is an infinite distance from the object then L will be the same for all points on the object — the light
source becomes a directional light source. In this case less computation can be performed.
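The diffuse term above can be sketched directly (the clamp to zero, for lights behind the surface, is a common convention we add here):

```python
import math

def diffuse(Ld, kd, normal, light_dir):
    """Diffuse term I = Ld * kd * (N . L), with N and L normalized.
    Clamped to zero when the light is behind the surface."""
    def unit(v):
        n = math.sqrt(sum(c * c for c in v))
        return tuple(c / n for c in v)
    N, L = unit(normal), unit(light_dir)
    return Ld * kd * max(0.0, sum(a * b for a, b in zip(N, L)))

# Light hitting the surface head-on (theta = 0, so cos theta = 1):
print(diffuse(1.0, 0.8, (0, 0, 1), (0, 0, 1)))  # -> 0.8
```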
If the direction of (specular) reflection R and the viewpoint direction V are normalized, then the equation becomes
I = Ls ks (R · V)^n
where Ls is the intensity of the light source, ks is the specular reflection coefficient, and n is the specular-reflection (shininess) exponent.
Ray Tracing
Ray tracing is a technique for rendering three-dimensional graphics with very complex light interactions. This means you
can create pictures full of mirrors, transparent surfaces, and shadows, with stunning results. We discuss ray tracing in
this introductory graphics article because it is a very simple method to both understand and implement. It is based on
the idea that you can model reflection and refraction by recursively following the path that light takes as it bounces
through an environment.
Figure 3.45 Tracing rays from the light source to the eye
Tracing rays forward from the light source is wasteful, since most rays never reach the eye. To save ourselves this wasted effort, we trace only those rays that are guaranteed to hit the view window and
reach the eye. It seems at first that it is impossible to know beforehand which rays reach the eye. After all, any given ray
can bounce around the room many times before reaching the eye. However, if we look at the problem backwards, we
see that it has a very simple solution. Instead of tracing the rays starting at the light source, we trace them backwards,
starting at the eye.
Consider any point on the view window whose color we're trying to determine. Its color is given by the color of the
light ray that passes through that point on the view window and reaches the eye. We can just as well follow the ray
backwards by starting at the eye and passing through the point on its way out into the scene. The two rays will be
identical, except for their direction: if the original ray came directly from the light source, then the backwards ray will go
directly to the light source; if the original bounced off a table first, the backwards ray will also bounce off the table. You
can see this by looking at Figure again and just reversing the directions of the arrows. So the backwards method does
the same thing as the original method, except it doesn't waste any effort on rays that never reach the eye.
This, then, is how ray tracing works in computer graphics. For each pixel on the view window, we define a ray that
extends from the eye to that point. We follow this ray out into the scene as it bounces off different objects. The
final color of the ray (and therefore of the corresponding pixel) is given by the colors of the objects hit by the ray as it
travels through the scene.
Figure 3.46 We trace a new ray from each ray-object intersection directly towards the light source
In the figure we see two rays, a and b, which intersect the purple sphere. To determine the color of a, we follow the new
ray a' directly towards the light source. The color of a will then depend on several factors, discussed in Color and
Shading below. As you can see, b will be shadowed because the ray b' towards the light source is blocked by the sphere
itself. Ray a would have also been shadowed if another object blocked the ray a'.
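The shadow test in the figure (following a ray from an intersection point toward the light and checking for blockers) can be sketched with a ray-sphere intersection; the scene values below are made up for illustration:

```python
import math

def sphere_blocks(origin, direction, center, radius):
    """True if the ray origin + t*direction (0 < t < 1) hits the sphere,
    i.e. an object lies between the surface point and the light."""
    oc = tuple(o - c for o, c in zip(origin, center))
    a = sum(d * d for d in direction)
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4 * a * c          # discriminant of the quadratic in t
    if disc < 0:
        return False                  # ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / (2 * a)   # nearer intersection
    return 1e-6 < t < 1.0             # epsilon avoids self-intersection

# Shadow test: a unit sphere sits between the point and the light.
point, light = (0, 0, 0), (10, 0, 0)
to_light = tuple(l - p for l, p in zip(light, point))
print(sphere_blocks(point, to_light, (5, 0, 0), 1.0))  # -> True (in shadow)
```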
Color Model
A color model is an abstract mathematical model describing the way colors can be represented as tuples of numbers,
typically as three or four values or color components. When this model is associated with a precise description of how
the components are to be interpreted (viewing conditions, etc.), the resulting set of colors is called color space.
The CIE (Commission Internationale de l'Éclairage, the International Commission on Illumination) produced two models
for defining color:
• 1931: Measured on 10 subjects, on samples subtending 2 degrees of the field of view
• 1964: Measured on a larger number of subjects, on samples subtending 10 degrees of the field of view
• The CIE 1931 model is the most commonly used
• It defines three primary "colors" X, Y and Z that can be used to describe all visible colors, as well as a standard
white, called C.
• The range of colors that can be described by combinations of other colors is called a color gamut.
- Since it is impossible to find three visible colors whose gamut contains all visible colors, the CIE's three primary
colors are imaginary. They cannot be seen, but they can be used to define other visible colors.