MC0072 Sem3 Feb 2011 Smu
Set-1
C) Video controller
Ans:1
Raster Display System:
In some graphics systems a separate processor is used to interpret the commands in the display file.
Such a processor is known as a display processor. The display processor accesses the display information and processes it once during every refresh cycle.
In raster scan display systems, the purpose of the display processor is to free the CPU from the graphics routine tasks. Here, the display processor is provided with a separate memory area as shown in Fig. 2.3.
The main task of the display processor is to digitize a picture definition given in an application program
into a set of pixel-intensity values for storage in the frame buffer. This digitization process is known as
scan conversion.
Display processors are also designed to perform a number of additional operations. These operations include:
· Generating various line styles (dashed, dotted or solid)
· Displaying color areas
· Performing certain transformations
· Manipulating displayed objects
Fig. 2.8 shows the architecture of a random scan display system with a display processor. This architecture is similar to the display-processor-based raster system architecture except for the frame buffer. In a random scan display no local memory is provided for scan conversion algorithms, since that functionality is typically implemented using PLAs (Programmable Logic Arrays) or microcode.
In random scan displays, the display processor has its own instruction set and instruction address register. Hence it is also called a Display Processing Unit (DPU) or Graphics Controller. It performs the instruction fetch, decode and execute cycles found in any computer. To provide a flicker-free display, the display processor has to execute its program 30 to 60 times per second.
The program executed by the display processor and the graphics package reside in the main
memory. The main memory is shared by the general CPU and the display processor.
Video Controller:
We know that the video controller receives the intensity information of each pixel from the frame buffer and displays it on the screen. Let us see the internal organization of a video controller.
Fig. 2.4 shows the internal organization of a video controller. It consists of a raster-scan generator, x and y address registers and a pixel value register. The raster-scan generator produces the deflection signals that generate the raster scan. It also controls the x and y
address registers which in turn define the memory location to be accessed next. Frame buffer locations, and the corresponding screen positions, are referenced in Cartesian coordinates. For many graphics monitors, the coordinate origin is defined at the lower left screen corner. The screen surface is then represented as the first quadrant of a two-dimensional system, with positive x values increasing to the right and positive y values increasing from bottom to top. (On some personal computers, the coordinate origin is referenced at the upper left corner of the screen, so the y values are inverted.) Scan lines are then labeled from ymax at the top of the screen to 0 at the bottom. Along each scan line, screen pixel positions are labeled from 0 to xmax. During
each fetch the pixel value is read and is used to control the intensity of the CRT beam.
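To make the addressing concrete, here is a small Python sketch (the 640 x 480 resolution, one intensity value per pixel and the function name are assumptions, not from the text) of how a Cartesian pixel position with the origin at the lower left corner maps to a linear frame buffer location:

# Minimal sketch: mapping Cartesian pixel coordinates (origin at the
# lower-left corner) to a linear frame-buffer address, assuming a
# 640 x 480 display with one byte of intensity per pixel.
XMAX, YMAX = 640, 480

def framebuffer_address(x, y, base=0):
    # Scan line 0 is at the bottom of the screen, but memory is laid out
    # top-to-bottom, so the y coordinate is inverted.
    row = (YMAX - 1) - y
    return base + row * XMAX + x

# The pixel at the lower-left corner (0, 0) maps to the start of the
# last row of the frame buffer.
print(framebuffer_address(0, 0))      # 306560
print(framebuffer_address(639, 479))  # 639 (top-right corner)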
2. What is the DDA line drawing algorithm? Explain it with a suitable example and discuss the merits and demerits of the algorithm.
Ans:2
1. Read the line end points (x1,y1 ) and (x2,y2) such that they are not equal.
2. ∆x = |x2 - x1| and ∆y = |y2 - y1|
3. If ∆x ≥ ∆y then
Length = ∆x
else
Length = ∆y
end if
4. ∆x = (x2 - x1)/Length
∆y = (y2 - y1)/Length
5. x = x1 + 0.5 * sign(∆x)
y = y1 + 0.5 * sign(∆y)
[Here the sign function makes the algorithm work in all quadrants. It returns -1, 0, 1 depending on whether its argument is <0, =0, >0 respectively. The factor 0.5 makes it possible to round the values in the Integer function rather than truncating them.]
6. i = 1 [Initialize counter]
7. while (i ≤ Length)
{
Plot (Integer(x), Integer(y))
x = x + ∆x
y = y + ∆y
i = i + 1
}
8. stop
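A minimal Python sketch of the simple DDA steps above (the function name, the list output in place of a real frame buffer, and the reading of Integer() as the floor function so the 0.5 bias rounds correctly for negative directions are assumptions):

import math

# Simple DDA rasterizer following steps 1-8 above.
def sign(v):
    return (v > 0) - (v < 0)

def rasterize_dda(x1, y1, x2, y2):
    dx, dy = abs(x2 - x1), abs(y2 - y1)
    length = max(dx, dy)                     # step 3
    dx = (x2 - x1) / length                  # step 4: one of dx, dy becomes +/-1
    dy = (y2 - y1) / length
    x = x1 + 0.5 * sign(dx)                  # step 5: bias so Integer() rounds
    y = y1 + 0.5 * sign(dy)
    points, i = [], 1                        # step 6
    while i <= length:                       # step 7
        points.append((math.floor(x), math.floor(y)))
        x += dx
        y += dy
        i += 1
    return points

# Ex. 3.1: the line from (0, 0) to (4, 6); Ex. 3.2: from (0, 0) to (-6, -6)
print(rasterize_dda(0, 0, 4, 6))
print(rasterize_dda(0, 0, -6, -6))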
Ex.3.1: Consider the line from (0,0) to (4,6). Use the simple DDA algorithm to rasterize this line.
Length = |6 - 0| = 6 (since |y2 - y1| > |x2 - x1|)
∆x = (4 - 0)/6 = 0.667
∆y = (6 - 0)/6 = 1
x = 0 + 0.5 * sign(0.667) = 0.5
y = 0 + 0.5 * sign(1) = 0.5
Table 3.1
The results are plotted as shown in Fig. 4.6. It shows that the rasterized line lies on both sides of the actual line, i.e. the algorithm is orientation dependent.
Ex. 3.2: Consider the line from (0,0) to (-6,-6). Use the simple DDA algorithm to rasterize this
line.
Sol :
Length = |-6 - 0| = 6
∆x = (-6 - 0)/6 = -1 and ∆y = (-6 - 0)/6 = -1
Fig. 3.7
The results are plotted as shown in Fig. 3.7. It shows that the rasterized line lies on the actual line and it is a 45° line.
Merits of the DDA algorithm:
1. It is the simplest algorithm and it does not require special skills for implementation.
2. It is a faster method for calculating pixel positions than the direct use of the equation y = mx + b. It eliminates the multiplication in the equation by making use of raster characteristics, so that appropriate increments are applied in the x or y direction to find the pixel positions along the line path.
A) Reflection
B) Shear
Ans:3
Reflection:
A reflection is a transformation that produces a mirror image of an object relative to an axis of reflection. In two dimensions, a reflection is equivalent to rotating the object 180° about the reflection axis: reflection about the x axis keeps the x values and negates the y values, while reflection about the y axis negates the x values and keeps the y values.
Shear :
A transformation that slants the shape of an object is called a shear transformation. Two common shearing transformations are used: one shifts the x coordinate values and the other shifts the y coordinate values. However, in both cases only one coordinate (x or y) changes its values and the other preserves its values.
1 X shear:
The x shear preserves the y coordinates, but changes the x values, which causes vertical lines to tilt right or left as shown in Fig. 6.7. The transformation matrix for x shear is given as
[ 1     0   0 ]
[ Shx   1   0 ]
[ 0     0   1 ]
so that x' = x + Shx · y and y' = y.
Fig. 6.7
2 Y Shear:
The y shear preserves the x coordinates, but changes the y values, which causes horizontal lines to transform into lines which slope up or down, as shown in Fig. 6.8.
Fig. 6.8
The transformation matrix for y shear is given as
[ 1    Shy   0 ]
[ 0    1     0 ]
[ 0    0     1 ]          …. (6.19)
so that x' = x and y' = y + Shy · x.
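As an illustrative sketch (assuming the row-vector convention [x' y' 1] = [x y 1] · M used with equation (6.12), and an assumed shear factor of 2), the following Python code applies the x-shear matrix above and a reflection about the x axis to the corners of a unit square:

import numpy as np

# Homogeneous 3x3 matrices, row-vector convention: P' = P @ M.
SHX = 2                                   # assumed x-shear factor
x_shear   = np.array([[1,   0, 0],
                      [SHX, 1, 0],
                      [0,   0, 1]])
reflect_x = np.array([[1,  0, 0],         # reflection about the x axis
                      [0, -1, 0],
                      [0,  0, 1]])

points = np.array([[0, 0, 1],
                   [1, 0, 1],
                   [1, 1, 1],
                   [0, 1, 1]])            # unit square, homogeneous form

print(points @ x_shear)                   # x' = x + 2*y, y' = y
print(points @ reflect_x)                 # x' = x, y' = -y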
A rotation matrix for any axis that does not coincide with a coordinate axis can be set up as a
composite transformation involving combinations of translation and the coordinate-axes
rotations.
In a special case where an object is to be rotated about an axis that is parallel to one of the
coordinate axes we can obtain the resultant coordinates with the following transformation
sequence.
1. Translate the object so that the rotation axis coincides with the parallel coordinate axis.
2. Perform the specified rotation about that axis.
3. Translate the object so that the rotation axis is moved back to its original position.
When an object is to be rotated about an axis that is not parallel to one of the coordinate
axes, we have to perform some additional transformations. The sequence of these
transformations is given below.
1. Translate the object so that the rotation axis specified by the unit vector u passes through the coordinate origin. (See Fig. 6.23 (a) and (b))
2. Rotate the object so that the axis of rotation coincides with one of the coordinate axes. Usually the z axis is preferred. To make the axis of rotation coincide with the z axis we first rotate the unit vector u about the x axis to bring it into the xz plane and then rotate it about the y axis to align it with the z axis. (See Figs. 6.23 (c) and (d))
3. Perform the specified rotation about the z axis.
4. Apply the inverse rotation about the y axis and then about the x axis to bring the rotation axis back to its original orientation.
5. Apply the inverse translation to move the rotation axis back to its original position.
As shown in Fig. 6.23 (a), the rotation axis is defined by two coordinate points P1 and P2, and the unit vector u is defined along the rotation axis as
V = P2 – P1
u = V/|V| = (a, b, c)
The components a, b and c of the unit vector u are the direction cosines of the rotation axis, and they can be defined as
a = (x2 – x1)/|V|,  b = (y2 – y1)/|V|,  c = (z2 – z1)/|V|
Fig. 6.23
As mentioned earlier, the first step in the transformation sequence is to translate the object so that the rotation axis passes through the coordinate origin. This can be accomplished by a translation T that moves point P1 to the origin.
Now we have to perform the rotation of the unit vector u about the x axis. The rotation of u around the x axis into the xz plane is accomplished by rotating u' (the projection of u on the yz plane) through an angle α onto the z axis. The cosine of the rotation angle α can be determined from the dot product of u' and the unit vector uz = (0, 0, 1) along the z axis:
cos α = (u' · uz)/(|u'| |uz|) = c/d
since |uz| = 1 and |u'| = d = √(b² + c²). Similarly, sin α = b/d.
Fig. 6.24
By substituting the values cos α = c/d and sin α = b/d we obtain the rotation matrix Rx for this alignment step.
Next we have to perform a rotation about the y axis. This can be achieved by rotating the aligned vector (which now lies in the xz plane with components (a, 0, d)) through an angle β onto the z axis. Using similar equations we can determine cos β and sin β from the dot product with uz:
cos β = d and sin β = a
But we have d = √(b² + c²) and, since u is a unit vector, √(a² + d²) = √(a² + b² + c²) = 1. By substituting the values of cos β and sin β we obtain the rotation matrix Ry for this step. Let
Rxy = Rx · Ry
With the transformation matrices T and Rxy we can align the rotation axis with the positive z axis. Now the specified rotation through angle θ can be achieved by the rotation transformation Rz(θ) about the z axis.
To complete the required rotation about the given axis, we have to transform the rotation axis back to its original position. This can be achieved by applying the inverse transformations Rxy⁻¹ and T⁻¹. The overall transformation matrix for rotation about an arbitrary axis can then be expressed as the concatenation of five individual transformations, i.e. the translation T, the alignment rotation Rxy = Rx · Ry, the rotation Rz(θ) about the z axis, and the inverses Rxy⁻¹ and T⁻¹ applied in sequence.
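The composition described above can be sketched numerically. The Python code below is an illustrative sketch only: it assumes the row-vector convention P' = P @ M with 4 x 4 homogeneous matrices, an axis for which d ≠ 0, and sample axis/angle values; in a column-vector convention the factors would appear in the reverse order.

import numpy as np

# Illustrative sketch of rotation about an arbitrary axis (P1, P2).
def translate(tx, ty, tz):
    m = np.eye(4)
    m[3, :3] = [tx, ty, tz]          # row-vector convention: translation in the last row
    return m

def rot_z(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, s, 0, 0], [-s, c, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]])

def rotate_about_axis(p1, p2, theta):
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    a, b, c = (p2 - p1) / np.linalg.norm(p2 - p1)   # direction cosines of the axis
    d = np.sqrt(b * b + c * c)                      # assumed non-zero here
    T = translate(*(-p1))                           # step 1: axis through the origin
    Rx = np.array([[1, 0, 0, 0],                    # step 2a: into the xz plane,
                   [0, c / d, b / d, 0],            # cos(alpha)=c/d, sin(alpha)=b/d
                   [0, -b / d, c / d, 0],
                   [0, 0, 0, 1]])
    Ry = np.array([[d, 0, a, 0],                    # step 2b: onto the z axis,
                   [0, 1, 0, 0],                    # cos(beta)=d, sin(beta)=a
                   [-a, 0, d, 0],
                   [0, 0, 0, 1]])
    Rxy = Rx @ Ry
    # steps 3-5: rotate about z, then undo the alignment and the translation
    return T @ Rxy @ rot_z(theta) @ np.linalg.inv(Rxy) @ np.linalg.inv(T)

M = rotate_about_axis((1, 1, 1), (2, 3, 4), np.pi / 2)
print(np.round(np.array([1, 1, 1, 1]) @ M, 6))      # a point on the axis stays fixed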
Ans:4
Before discussing specific line drawing algorithms it is useful to note the general requirements for such algorithms. These requirements specify the desired characteristics of a line.
· The line should appear as a straight line and it should start and end accurately.
· The line should be displayed with constant brightness along its length independent of its length
and orientation.
Fig. 3.5
As shown in Fig. 3.5 (a), horizontal and vertical lines are straight and have the same width. The 45° line is straight but its width is not constant. On the other hand, a line with any other orientation
is neither straight nor of constant width. Such cases are due to the finite resolution of the display, and we have to accept approximate pixels in such situations, as shown in Fig. 3.5 (c).
The brightness of a line depends on its orientation. We can observe that the effective spacing between pixels for the 45° line is greater than for the vertical and horizontal lines, which makes the vertical and horizontal lines appear brighter than the 45° line. Complex calculations are required to provide equal brightness along lines of varying length and orientation. Therefore, to draw lines rapidly some compromises are made. One such rapid, approximate method is the simple DDA (Digital Differential Analyzer) algorithm given below:
1. Read the line end points (x1,y1 ) and (x2,y2) such that they are not equal.
2. ∆x = |x2 - x1| and ∆y = |y2 - y1|
3. If ∆x ≥ ∆y then
Length = ∆x
else
Length = ∆y
end if
4. ∆x = (x2 - x1)/Length
∆y = (y2 - y1)/Length
5. x = x1 + 0.5 * sign(∆x)
y = y1 + 0.5 * sign(∆y)
[Here the sign function makes the algorithm work in all quadrants. It returns -1, 0, 1 depending on whether its argument is <0, =0, >0 respectively. The factor 0.5 makes it possible to round the values in the Integer function rather than truncating them.]
6. i = 1 [Initialize counter]
7. while (i ≤ Length)
{
Plot (Integer(x), Integer(y))
x = x + ∆x
y = y + ∆y
i = i + 1
}
8. stop
Ex.3.1: Consider the line from (0,0) to (4,6). Use the simple DDA algorithm to rasterize this line.
Length = |6 - 0| = 6 (since |y2 - y1| > |x2 - x1|)
∆x = (4 - 0)/6 = 0.667
∆y = (6 - 0)/6 = 1
x = 0 + 0.5 * sign(0.667) = 0.5
y = 0 + 0.5 * sign(1) = 0.5
Table 3.1
The results are plotted as shown in Fig. 4.6. It shows that the rasterized line lies on both sides of the actual line, i.e. the algorithm is orientation dependent.
Ex. 3.2: Consider the line from (0,0) to (-6,-6). Use the simple DDA algorithm to rasterize this
line.
Sol :
Length = |-6 - 0| = 6
∆x = (-6 - 0)/6 = -1 and ∆y = (-6 - 0)/6 = -1
Fig. 3.7
The results are plotted as shown in Fig. 3.7. It shows that the rasterized line lies on the actual line and it is a 45° line.
Bresenham’s line algorithm uses only integer addition and subtraction and multiplication by 2,
and we know that the computer can perform the operations of integer addition and subtraction
very rapidly. The computer is also time-efficient when performing integer multiplication by
powers of 2. Therefore, it is an efficient method for scan-converting straight lines.
The basic principle of Bresenham’s line algorithm is to select the optimum raster locations to
represent a straight line. To accomplish this, the algorithm always increments either x or y by
one unit depending on the slope of line. The increment in the other variable is determined by
examining the distance between the actual line location and the nearest pixel. This distance is
called decision variable or the error. This is illustrated in the Fig.3.8
Fig. 3.8
As shown in the Fig. 3.8 the line does not pass through all raster points (pixels). It passes through
raster point (0,0) and subsequently crosses three pixels. It is seen that the intercept of the line with the line x = 1 is closer to the line y = 0, i.e. pixel (1,0), than to the line y = 1, i.e. pixel (1,1). Hence, in this case, the raster point at (1,0) better represents the path of the line than that at (1,1). The intercept of the line with the line x = 2 is closer to the line y = 1, i.e. pixel (2,1), than to the line y = 0, i.e. pixel (2,0). Hence the raster point at (2,1) better represents the path of the line, as shown in Fig. 3.8.
e = DB – DA or DA – DB
Let us define e = DB – DA. Now if e > 0, then it implies that DB > DA, i.e., the pixel above the line is closer to the true line. If DB < DA (i.e. e < 0) then we can say that the pixel below the line is closer to
the true line. Thus by checking only the sign of error term it is possible to determine the better
pixel to represent the line path.
while (e ≥ 0)
{
y = y + 1
e = e - 2 * ∆x
}
x = x + 1
e = e + 2 * ∆y
When e ≥ 0, the error is updated as e = e - 2∆x and y is incremented by 1; this is repeated until the error becomes negative. When e < 0, the error is updated as e = e + 2∆y. In both cases x is incremented by 1. Let us see Bresenham's line drawing algorithm.
1. Read the line end points (x1, y1) and (x2, y2) such that they are not equal.
2. ∆x = |x2 - x1| and ∆y = |y2 - y1|
3. Initialize the starting point
x = x1
y = y1
4. e = 2 * ∆y - ∆x
[ Initialize value of decision variable or error to compensate for non zero intercepts]
5. i = 1 [initialize counter]
6. Plot (x, y)
7. while (e ≥ 0)
{
y = y + 1
e = e - 2 * ∆x
}
x = x + 1
e = e + 2 * ∆y
8. i = i + 1
9. If (i ≤ ∆x) then go to step 6
10. stop
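A minimal Python sketch of the first-octant algorithm above (the function name and list output are assumptions; it plots one point per unit step in x, including both end points). Run on the end points of Ex. 3.3 it generates the raster points for the line from (5, 5) to (13, 9):

# Bresenham's algorithm for lines in the first octant (0 <= slope <= 1).
def bresenham_first_octant(x1, y1, x2, y2):
    dx, dy = abs(x2 - x1), abs(y2 - y1)
    x, y = x1, y1
    e = 2 * dy - dx            # initial decision variable (step 4)
    points = []
    for _ in range(dx + 1):    # one point per unit step in x
        points.append((x, y))
        while e >= 0:          # choose the upper pixel while the error is non-negative
            y += 1
            e -= 2 * dx
        x += 1
        e += 2 * dy
    return points

# Ex. 3.3: the line from (5, 5) to (13, 9)
print(bresenham_first_octant(5, 5, 13, 9))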
Ex 3.3: Consider the line from (5,5) to (13,9). Use the Bresenham’s algorithm to rasterize the
line.
∆x = |13-5| = 8
∆y = |9-5| = 4
x = 5, y = 5
e = 2*∆y - ∆x = 2*4 - 8 = 0
Fig. 3.9
Bresenham's algorithm as given above only works for the first octant. The generalized Bresenham's algorithm requires modification for lines lying in the other octants. Such an algorithm can be easily developed by considering the quadrant in which the line lies and its slope. When the absolute magnitude of the slope of the line is greater than 1, y is incremented by one and Bresenham's decision variable or error is used to determine when to increment x. The x and y incremental values depend on the quadrant in which the line exists. This is illustrated in Fig. 3.10.
1. Read the line end points (x1, y1) and (x2, y2) such that they are not equal.
2. ∆x = |x2 - x1| and ∆y = |y2 - y1|
3. Initialize the starting point
x = x1
y = y1
4. s1 = sign (x2-x1)
s2 = sign (y2-y1)
[Sign function returns -1, 0, 1 depending on whether its argument is <0, =0, >0 respectively]
5. If ∆y > ∆x then
Exchange ∆x and ∆y
Ex_change = 1
else
Ex_change = 0
end if
[Interchange ∆x and ∆y depending on the slope of the line and set the Ex_change flag accordingly]
6. e= 2 * ∆y- ∆x
[Initialize value of decision variable or error to compensate for non zero intercepts]
7. i=1
8. Plot ( x, y)
9. while (e ≥ 0)
{
if (Ex_change = 1) then
x = x + s1
else
y = y + s2
end if
e = e - 2 * ∆x
}
10. if (Ex_change = 1) then
y = y + s2
else
x = x + s1
end if
e = e + 2 * ∆y
11. i = i + 1
12. If (i ≤ ∆x) then go to step 8
13. stop
Set-2
C) Color table
Ans:1
Video Mixing:
The video controller provides the facility of video mixing, in which it accepts information of two images simultaneously: one from the frame buffer and the other from a television camera, recorder or other source. This is illustrated in Fig. 2.7. The video controller merges the two received images to form a composite image.
There are two types of video mixing. In the first, a graphics image is set into a video image. Here, mixing is accomplished with hardware that treats a designated pixel value in the frame buffer as a flag to indicate that the video signal should be shown instead of the signal from the frame buffer. Normally, the designated pixel value corresponds to the background color of the frame buffer image.
In the second type of mixing, the video image is placed on top of the frame buffer image. Here, wherever the background color of the video image appears, the frame buffer is shown; otherwise the video image is shown.
Frame Buffer:
In raster scan displays a special area of memory is dedicated to graphics only. This memory area is called the frame buffer. It holds the set of intensity values for all the screen points. The stored intensity values are retrieved from the frame buffer and displayed on the screen one row (scan line) at a time.
Usually, the frame buffer is implemented using random access semiconductor memory. However, the frame buffer can also be implemented using shift registers. Conceptually, a shift register
is operated in first-in-first-out (FIFO) fashion, i.e. similar to a queue. We know that when the queue is full, if we want to add a new data bit, the first data bit is pushed out from the bottom and the new data bit is added at the top. Here, the data ejected out of the queue can be interpreted as the intensity value of a pixel on a scan line.
Fig. 3.2 shows the implementation of a frame buffer using shift registers. As shown in Fig. 3.2, one shift register is required per pixel on a scan line and the length of each shift register in bits is equal to the number of scan lines. Here, there are 8 pixels per scan line and in all 5 scan lines. The synchronization between the output of the shift registers and the video scan rate is maintained so that the data corresponding to a particular scan line is displayed correctly.
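A toy Python sketch of the shift-register idea described above (8 pixels per scan line and 5 scan lines as in Fig. 3.2; the deque-based model and function name are illustrative assumptions): each register is treated as a FIFO whose ejected bit is the intensity for the current scan line and is immediately recirculated.

from collections import deque

# Toy model of a shift-register frame buffer: one register per pixel
# position on a scan line, register length = number of scan lines.
PIXELS_PER_LINE, SCAN_LINES = 8, 5
registers = [deque([0] * SCAN_LINES) for _ in range(PIXELS_PER_LINE)]

def refresh_one_scan_line():
    # Eject one bit from every register (the current scan line) and
    # recirculate it so the picture repeats on the next frame.
    line = []
    for reg in registers:
        bit = reg.popleft()
        line.append(bit)
        reg.append(bit)
    return line

registers[3][0] = 1                      # light one pixel on the first scan line
for _ in range(SCAN_LINES):
    print(refresh_one_scan_line())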
Color tables:
In color displays, 24 bits per pixel are commonly used, where 8 bits represent 256 levels for each color. Here it is necessary to read 24 bits for each pixel from the frame buffer, which is very time consuming. To avoid this, the video controller uses a look-up table (LUT) to store many entries of pixel values in RGB format. With this facility, it is now necessary only to read an index into the look-up table from the frame buffer for each pixel. This index specifies one of the entries in the look-up table, and the specified entry is then used to control the intensity or color of the CRT.
Usually, a look-up table has 256 entries. Therefore, the index into the look-up table has 8 bits, and hence for each pixel the frame buffer has to store 8 bits instead of 24 bits. Fig. 2.6 shows the organization of a color (video) look-up table.
There are several advantages in storing color codes in a look-up table. Use of a color table can provide a "reasonable" number of simultaneous colors without requiring large frame buffers. For most applications, 256 or 512 different colors are sufficient for a single picture. Also, table entries can be changed at any time, allowing a user to experiment easily with different color combinations in a design, scene, or graph without changing the attribute settings for the graphics data structure. In visualization and image-processing applications, color tables are
a convenient means for setting color thresholds so that all pixel values above or below a specified threshold can be set to the same color. For these reasons, some systems provide both capabilities for color-code storage, so that a user can elect either to use color tables or to store color codes directly in the frame buffer.
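A small Python sketch of the indexed-color idea (the table contents, the tiny 8 x 4 display and the function name are assumed): the frame buffer stores one 8-bit index per pixel, and the controller expands it through the look-up table to a 24-bit RGB value. For a 1024 x 1024 display, for example, this needs about 1 MB of indices plus a 768-byte table instead of 3 MB of direct 24-bit color.

# Indexed color: 8-bit indices in the frame buffer, 256-entry RGB look-up table.
lut = [(0, 0, 0)] * 256
lut[1] = (255, 0, 0)        # assumed entries: index 1 -> red
lut[2] = (0, 255, 0)        # index 2 -> green

frame_buffer = [[0] * 8 for _ in range(4)]   # tiny assumed 8 x 4 "display"
frame_buffer[2][5] = 1
frame_buffer[0][3] = 2

def pixel_color(x, y):
    # The controller reads the index and uses the LUT entry to drive the CRT.
    return lut[frame_buffer[y][x]]

print(pixel_color(5, 2))    # (255, 0, 0)
print(pixel_color(3, 0))    # (0, 255, 0)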
Ans:2
Stroke method
This method uses small line segments to generate a character. The small series of line segments is drawn like the strokes of a pen to form a character, as shown in Fig. 5.19.
We can build our own stroke-method character generator with calls to the line drawing algorithm. Here it is necessary to decide which line segments are needed for each character and then to draw these segments using the line drawing algorithm.
Starburst method:
In this method a fixed pattern of line segments is used to generate characters. As shown in Fig. 5.20, there are 24 line segments. Out of these 24 line segments, the segments required to display a particular character are highlighted. This method of character generation is called the starburst method because of its characteristic appearance.
Fig. 5.20
Fig. 5.20 shows the starburst patterns for the characters A and M. The pattern for a particular character is stored in the form of a 24-bit code, each bit representing one line segment. A bit is set to one to highlight the corresponding line segment; otherwise it is set to zero. For example, the 24-bit code for character A is 0011 0000 0011 1100 1110 0001 and for character M is 0000 0011 0000 1100 1111 0011.
Disadvantages of the starburst method:
1. 24 bits are required to represent a character; hence more memory is required.
2. It requires code conversion software to display a character from its 24-bit code.
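As a small illustration of how such a 24-bit code could be used (a Python sketch; the bit ordering and function name are assumptions, and the mapping from segment numbers to actual line-segment end points, which the text does not give, is omitted):

# Decode a 24-bit starburst character code: bit i tells whether line
# segment i is highlighted. Bit ordering (MSB = segment 0) is assumed.
CODE_A = 0b001100000011110011100001   # 24-bit code for 'A' from the text

def segments_on(code, n_segments=24):
    return [i for i in range(n_segments) if (code >> (n_segments - 1 - i)) & 1]

print(segments_on(CODE_A))   # indices of the highlighted segments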
Bitmap method:
The third method for character generation is the bitmap method. It is also called the dot matrix method because characters are represented by an array of dots in matrix form. It is a two-dimensional array having columns and rows. A 5 × 7 array is commonly used to represent characters, as shown in Fig. 5.21. However, 7 × 9 and 9 × 13 arrays are also used. Higher resolution devices such as inkjet or laser printers may use character arrays that are over 100 × 100.
Each dot in the matrix is a pixel. The character is placed on the screen by copying pixel values
from the character array into some portion of the screen’s frame buffer. The value of the pixel
controls the intensity of the pixel.
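A minimal Python sketch of the bitmap method (the 5 x 7 pattern for 'T' below is a made-up example, not the book's Fig. 5.21): the character array is simply copied into a region of the frame buffer.

# Bitmap (dot matrix) character generation: copy a 5 x 7 character
# array into the frame buffer at position (row, col).
CHAR_T = ["11111",
          "00100",
          "00100",
          "00100",
          "00100",
          "00100",
          "00100"]

def draw_char(frame_buffer, pattern, row, col):
    for dy, bits in enumerate(pattern):
        for dx, bit in enumerate(bits):
            frame_buffer[row + dy][col + dx] = int(bit)   # pixel value controls intensity

fb = [[0] * 16 for _ in range(10)]
draw_char(fb, CHAR_T, 1, 2)
for line in fb:
    print("".join(str(p) for p in line))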
Ans:3
Homogeneous Coordinates and Matrix Representation of 2D Transformations:
In the design and picture formation process, we may often need to perform translations, rotations, and scaling to fit the picture components into their proper positions. In the previous section we have seen that each of the basic transformations can be expressed in the general matrix form
P' = P · M1 + M2 …… (6.12)
where M1 is a 2 × 2 array containing the multiplicative factors and M2 is a two-element matrix containing the translational terms.
For translation: M1 is the identity matrix and M2 is the translation vector.
For rotation: M1 contains the rotation terms and M2 = 0 (for rotation about the origin).
For scaling: M1 contains the scaling factors and M2 = 0 (for scaling about the origin).
To produce a sequence of transformations with this equation we must calculate the transformed coordinates one step at a time. To combine the steps into a single matrix multiplication, each two dimensional point (x, y) is represented by the homogeneous coordinate triple (xh, yh, W), where x = xh/W and y = yh/W.
For two dimensional transformations the homogeneous parameter W can be any non-zero value, but it is convenient to have W = 1. Therefore, each two dimensional position can be represented with the homogeneous coordinates (x, y, 1).
Summing it all up, we can say that homogeneous coordinates allow combined transformations, eliminating the calculation of intermediate coordinate values and thus saving both the time required for the transformation and the memory needed to store the intermediate coordinate values. Let us see the homogeneous coordinate representations of the three basic transformations.
For translation:
[x'  y'  1] = [x  y  1] ·
[ 1    0    0 ]
[ 0    1    0 ]
[ tx   ty   1 ]               …. (6.13)
Therefore, we have
P' = [x + tx   y + ty   1]    …. (6.14)
For rotation about the origin through angle θ:
[x'  y'  1] = [x  y  1] ·
[  cos θ   sin θ   0 ]
[ -sin θ   cos θ   0 ]
[  0       0       1 ]        …. (6.15)
Therefore, we have
P' = [x cos θ - y sin θ   x sin θ + y cos θ   1]    …. (6.16)
For scaling about the origin with factors Sx and Sy:
[x'  y'  1] = [x  y  1] ·
[ Sx   0    0 ]
[ 0    Sy   0 ]
[ 0    0    1 ]
= [x·Sx   y·Sy   1]           …. (6.17)
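The benefit claimed above, combining a whole sequence of transformations into a single matrix, can be sketched in Python using the row-vector forms of equations (6.13) to (6.17); the particular translation, angle and scale factors below are assumed sample values.

import numpy as np

# 3x3 homogeneous matrices, row-vector convention: P' = P @ M.
def translation(tx, ty):
    return np.array([[1, 0, 0], [0, 1, 0], [tx, ty, 1]], float)

def rotation(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, s, 0], [-s, c, 0], [0, 0, 1]], float)

def scaling(sx, sy):
    return np.array([[sx, 0, 0], [0, sy, 0], [0, 0, 1]], float)

# One composite matrix replaces three separate transformation steps:
# scale first, then rotate, then translate.
M = scaling(2, 2) @ rotation(np.pi / 2) @ translation(3, 1)
P = np.array([1, 0, 1])            # the point (1, 0) in homogeneous form
print(P @ M)                       # same result as applying the three steps in turn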
Ans:4
Projection:
After converting the description of objects from world coordinates to viewing coordinates, we
can project the three dimensional objects onto the two dimensional view plane. There are two
basic ways of projecting objects onto the view plane: parallel projection and perspective projection.
1 Parallel Projection:
In parallel projection, the z coordinate is discarded and parallel lines from each vertex on the object are extended until they intersect the view plane. The point of intersection is the projection of the vertex. We connect the projected vertices by line segments which correspond to the connections on the original object.
As shown in Fig. 7.8, a parallel projection preserves relative proportions of objects but does not produce realistic views.
2 Perspective Projection:
The perspective projection, on the other hand, produces realistic views but does not preserve relative proportions. In perspective projection, the lines of projection are not parallel. Instead, they all converge at a single point called the center of projection or projection reference point. The object positions are transformed to the view plane along these converging projection lines, and the projected view of an object is determined by calculating the intersection of the converging projection lines with the view plane, as shown in Fig. 7.9.
Fig. 7.10
Parallel projections are basically categorized into two types, depending on the relation between
the direction of projection and the normal to the view plane. When the direction of the projection
is normal (perpendicular) to the view plane, we have an orthographic parallel projection.
Otherwise, we have an oblique parallel projection. Fig. 7.10 illustrates the two types of parallel
projection.
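As an illustrative sketch (assuming a view plane at z = 0 and a center of projection on the z axis at an assumed distance d; the function names are not from the text), the Python code below contrasts the two projections for a single vertex:

# Orthographic parallel projection vs. perspective projection of a vertex
# onto the view plane z = 0, with the center of projection at (0, 0, d).
def parallel_project(x, y, z):
    # Direction of projection is along the view-plane normal (the z axis),
    # so the z coordinate is simply discarded.
    return (x, y)

def perspective_project(x, y, z, d=10.0):
    # Projection lines converge at (0, 0, d); similar triangles give the scaling.
    t = d / (d - z)
    return (x * t, y * t)

vertex = (4.0, 2.0, -10.0)
print(parallel_project(*vertex))      # (4.0, 2.0)
print(perspective_project(*vertex))   # farther objects appear smaller: (2.0, 1.0)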