TOPIC 1 - Basic Methods of Raster Graphics


1. BASIC METHODS OF RASTER GRAPHICS


1.1 Fundamentals Of Computer Graphics
Introduction
Computer graphics is a sub-field of computer science which studies methods for digitally synthesizing
and manipulating visual content. It is the use of a computer to produce and manipulate pictorial images
on a video screen. Although the term often refers to the study of three-dimensional computer graphics, it
also encompasses two-dimensional graphics and image processing.
History of Computer Graphics
The phrase “Computer Graphics” was coined in 1960 by William Fetter, a graphic designer for Boeing.
The field of computer graphics developed with the emergence of computer graphics hardware. Early
projects like the Whirlwind and SAGE Projects introduced the CRT as a viable display and interaction
interface and introduced the light pen as an input device.
Initial 1960s developments
Further advances in computing led to greater advancements in interactive computer graphics. In 1959, the
TX-2 computer was developed at MIT's Lincoln Laboratory. The TX-2 integrated a number of new man-
machine interfaces. A light pen could be used to draw sketches on the computer using Ivan Sutherland's
revolutionary Sketchpad software.
Further 1961 developments
Also in 1961 another student at MIT, Steve Russell, created the first video game, Spacewar. Written for
the DEC PDP-1, Spacewar was an instant success and copies started flowing to other PDP-1 owners and
eventually even DEC got a copy. E. E. Zajac, a scientist at Bell Telephone Laboratory (BTL), created a
film called "Simulation of a Two-Gyro Gravity-Gradient Attitude Control System" in 1963. In this computer-generated
film, Zajac showed how the attitude of a satellite could be altered as it orbits the Earth. He created the
animation on an IBM 7090 mainframe computer.
Ralph Baer, a supervising engineer at Sanders Associates, came up with a home video game in 1966 that
was later licensed to Magnavox and called the Odyssey. Though very simple, and built from fairly
inexpensive electronic parts, it allowed the player to move points of light around on a screen. It was the
first consumer computer graphics product.
David C. Evans was director of engineering at Bendix Corporation's computer division from 1953 to
1962, after which he worked for the next five years as a visiting professor at Berkeley. There he
continued his interest in computers and how they interfaced with people. In 1966, the University of Utah
recruited Evans to form a computer science program, and computer graphics quickly became his primary
interest. This new department would become the world's primary research center for computer graphics.
Also in 1966, Sutherland at MIT invented the first computer controlled head-mounted display (HMD).
Called the Sword of Damocles because of the hardware required for support, it displayed two separate
wireframe images, one for each eye. This allowed the viewer to see the computer scene in stereoscopic
3D. After receiving his Ph.D. from MIT, Sutherland became Director of Information Processing at ARPA
(Advanced Research Projects Agency), and later became a professor at Harvard.
In 1967 Sutherland was recruited by Evans to join the computer science program at the University of
Utah. There he perfected his HMD. Twenty years later, NASA would re-discover his techniques in their
virtual reality research. At Utah, Sutherland and Evans were highly sought after consultants by large
companies but they were frustrated at the lack of graphics hardware available at the time so they started
formulating a plan to start their own company.

1970s
Many of the most important early breakthroughs in computer graphics research occurred at the University
of Utah in the 1970s. A student by the name of Edwin Catmull started at the University of Utah in 1970
and signed up for Sutherland's computer graphics class. Catmull had just come from The Boeing
Company and had been working on his degree in physics. Growing up on Disney, Catmull loved
animation yet quickly discovered that he did not have the talent for drawing. Now Catmull (along with
many others) saw computers as the natural progression of animation and they wanted to be part of the
revolution. The first animation that Catmull saw was his own. He created an animation of his hand
opening and closing. It became one of his goals to produce a feature length motion picture using
computer graphics. In the same class, Fred Parke created an animation of his wife's face. Because of
Evans's and Sutherland's presence, the University of Utah was gaining quite a reputation as the place to be
for computer graphics research, so Catmull went there to learn 3D animation.
As the University of Utah computer graphics laboratory was attracting people from all over, John
Warnock was one of those early pioneers; he would later found Adobe Systems and create a revolution in
the publishing world with his PostScript page description language. Tom Stockham led the image
processing group at UU which worked closely with the computer graphics lab. Jim Clark was also there;
he would later found Silicon Graphics, Inc.
1980s
In the early 1980s, the availability of bit-slice and 16-bit microprocessors started to revolutionise high
resolution computer graphics terminals which now increasingly became intelligent, semi-standalone and
standalone workstations. Graphics and application processing were increasingly migrated to the
intelligence in the workstation, rather than continuing to rely on central mainframe and mini-computers.
Typical of the early move to high resolution computer graphics intelligent workstations for the computer-
aided engineering market were the Orca 1000, 2000 and 3000 workstations, developed by Orcatech of
Ottawa, a spin-off from Bell-Northern Research, and led by an early workstation pioneer David John
Pearson. The Orca 3000 was based on Motorola 68000 and AMD bit-slice processors and had Unix as its
operating system. It was targeted squarely at the sophisticated end of the design engineering sector.
Artists and graphic designers began to see the personal computer, particularly the Commodore Amiga
and Macintosh, as a serious design tool, one that could save time and draw more accurately than other
methods. In the late 1980s, SGI computers were used to create some of the first fully computer-generated
short films at Pixar. The Macintosh remains a highly popular tool for computer graphics among graphic
design studios and businesses. Modern computers, dating from the 1980s, often use graphical user
interfaces (GUIs) to present data and information with symbols, icons and pictures, rather than text.
Graphics are one of the five key elements of multimedia technology.
1990s
3D graphics became more popular in the 1990s in gaming, multimedia and animation. At the end of the
1980s and the beginning of the 1990s, the very first computer graphics TV series were created in France: "La
Vie des bêtes" by studio Mac Guff Ligne (1988); Les Fables Géométriques by J.-Y. Grall, Georges Lacroix
and Renato (studio Fantome, 1990–1993); and Quarxs, the first HDTV computer graphics series, by
Maurice Benayoun and François Schuiten (studio Z-A production, 1991–1993). In 1995, Toy Story, the
first full-length computer-generated animation film, was released in cinemas worldwide. In 1996, Quake,
one of the first fully 3D games, was released. Since then, computer graphics have only become more
detailed and realistic, due to more powerful graphics hardware and 3D modeling software.

Applications of Computer Graphics


1. Computer Aided Design - also known as computer-aided design and drafting (CADD), is the use
of computer technology for the process of design and design documentation. Computer Aided
Drafting describes the process of drafting with a computer. CADD software, or environments,
provide the user with input tools that streamline the design, drafting, documentation, and
manufacturing processes.
2. Computer art - is any art in which computers play a role in production or display of the artwork.
Such art can be an image, sound, animation, video, CD-ROM, DVD-ROM, videogame, web site,
algorithm, performance or gallery installation.

3. Image processing - in computer science, image processing is any form of signal processing for
which the input is an image, such as a photograph or video frame; the output of image processing
may be either an image or a set of characteristics or parameters related to the image.
4. Presentation graphics program - is a computer software package used to display information,
normally in the form of a slide show. It typically includes three major functions: an editor that
allows text to be inserted and formatted, a method for inserting and manipulating graphic images
and a slide-show system to display the content. Examples are Microsoft PowerPoint, Corel
Presentations and Google Docs.
5. Visualization - is any technique for creating images, diagrams, or animations to communicate a
message. Visualization through visual imagery has been an effective way to communicate both
abstract and concrete ideas since the dawn of man.

6. Entertainment – Computer graphics are now used in creating motion pictures, music videos and
television shows. Sometimes the graphics scenes are displayed by themselves and sometimes
graphics objects are combined with actors and live scenes.
7. Computational biology involves the development and application of data-analytical and
theoretical methods, mathematical modeling and computational simulation techniques to the study
of biological, behavioral, and social systems.
8. A computer simulation, a computer model, or a computational model - is a computer program, or
network of computers, that attempts to simulate an abstract model of a particular system.
9. Virtual reality (VR) - is a term that applies to computer-simulated environments that can simulate
physical presence in places in the real world, as well as in imaginary worlds
10. Computer animation - is the process used for generating animated images by using computer
graphics. The more general term computer generated imagery encompasses both static scenes and
dynamic images, while computer animation only refers to moving images
11. A video game is an electronic game that involves interaction with a user interface to generate
visual feedback on a video device. The word video in video game traditionally referred to a raster
display device, but following popularization of the term "video game", it now implies any type of
display device. The electronic systems used to play video games are known as platforms;
examples of these are personal computers and video game consoles. These platforms range from
large mainframe computers to small handheld devices. Specialized video games such as arcade
games, while previously common, have gradually declined in use.

1.2 Concepts, Terms And Definitions


Introduction
Before examining the different specialisms that make up computer graphics, this chapter provides a quick
review of the basic terms and definitions that are common to most areas of computer graphics.
Low level concepts
All of computer graphics is based upon the properties of the screen or display device. The fundamental
thing that you need to know about displays is that they are divided into lots of small squares called pixels
(“PICture ELements”).

The simplest way to think of how this works is to stick to black and white. Each pixel can be set to black
or white (i.e. turned on or off). This allows patterns of dots to be created on the screen.

Memory Mapping
Drawing on the screen is therefore simply a matter of setting the right pixels either on or off. Each pixel
on the screen corresponds to an address in the computer’s memory - this is known as memory mapping
and the display is said to be a “memory mapped display.” Effectively, each pixel is numbered -
sequentially.

By writing values to the correct locations in memory (this used to be called “poking the address”) the
appearance of the screen can be controlled by a programmer. Conversely, by inspecting the contents of a
memory location (“peeking”), a program can find out if a pixel is turned on or off.
The portion of memory that is associated with the display is known as the “video memory”. In the PC
architecture, this memory is usually physically located on the graphics card, which is why you can buy
8Mb graphics cards, 16Mb graphics cards etc.
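The poke/peek idea can be sketched with an ordinary array standing in for video memory. This is a simplified model: the array, its size and the function names are illustrative, not a real hardware interface.

```cpp
#include <array>

// A 20 x 15 monochrome "display": one byte per pixel, 0 = off, 1 = on.
// In a real memory-mapped display this block would sit at a fixed
// hardware address; here it is just ordinary memory.
constexpr int Xsize = 20;
constexpr int Ysize = 15;
std::array<unsigned char, Xsize * Ysize> video_memory{};

// "Poke": write a value into video memory, turning a pixel on or off.
void poke(int location, unsigned char value) {
    video_memory[location] = value;
}

// "Peek": read video memory to find out whether a pixel is on.
unsigned char peek(int location) {
    return video_memory[location];
}
```

A program draws by poking values into the right locations, and reads the screen back by peeking at them.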
Resolution
The screen in this example is composed of 300 pixels arranged in 15 rows of 20. It is said to have a
resolution of “20 by 15” (20 x 15). This is actually very small by today’s standards. Typically a display
will have a resolution of 1024x768 or even more.
Screen size
Resolution is NOT the same thing as screen size. Our 20x15 display could have been 5cm across (as a
mobile phone display), 40cm across (as a monitor) or 20m (as a projection system) but the resolution
would always be 20x15.

Coordinates
Whilst the computer “thinks” of its display as a simple list of addresses, it is much more convenient (for
reasons which will become clear later) for us to think in terms of coordinates and leave it to the computer
to convert the coordinates to memory location itself.

Each pixel can be referred to by a pair of numbers known as its coordinates - an x coordinate (which
gives the column number) and a y coordinate (which gives the row number). The coordinates are always
written as a pair of numbers, separated by a comma and enclosed in brackets. This system of geometry is
known as “Cartesian geometry” and the coordinates are spoken of as being “Cartesian Coordinates.”
Example. Simple coordinate example
In Figure above - “Coordinates” - pixel (7,2) is set ON.
The computer converts coordinates to memory addresses by subtracting the y coordinate from the number
of rows on the display minus 1 (i.e. Ysize -1, this effectively turns the y axis upside-down to make sure
that the origin is at the bottom), multiplying that by the number of columns on the display
(Xsize), and adding the x coordinate.
location = ((Ysize - 1 - y) * Xsize) + x
For pixel (7,2) on the 20x15 display, y = 2 maps to row 15 - 1 - 2 = 12, so the location is ((12 * 20) + 7) = 247
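The textual recipe above (flip the y axis, multiply by the number of columns, add x) can be written as a small helper. This is a sketch; the function and parameter names are illustrative.

```cpp
// Convert Cartesian coordinates (origin at the bottom-left) into a linear
// memory location for a display of Xsize columns and Ysize rows.
int coord_to_location(int x, int y, int Xsize, int Ysize) {
    // Flip the y axis so that y = 0 is the bottom row, then index row-major.
    return ((Ysize - 1 - y) * Xsize) + x;
}
```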
The origin

In this above example, the origin or (0,0) is in the bottom-left of the screen. It is also possible to have the
origin in the top-left, in which case the rows (y coordinates) are numbered downwards.
Colour
In discussion so far, we have been restricted to black and white (monochrome) i.e. each pixel can only be
on or off. Colour can be represented in most of today’s computers. Typically, instead of each pixel being
represented by one bit (on or off) each pixel will be represented by 24 bits - 3 x 8 bit bytes. Each byte
represents a number between 0 and 255 and each byte is associated with one primary colour - red, blue,
green.
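With three bytes per pixel, the three primaries are commonly packed into one integer. A minimal sketch follows; the packing order shown is one common convention, not the only one.

```cpp
#include <cstdint>

// Pack 8-bit red, green and blue values into a single 24-bit colour.
std::uint32_t pack_rgb(std::uint8_t r, std::uint8_t g, std::uint8_t b) {
    return ((std::uint32_t) r << 16) | ((std::uint32_t) g << 8) | b;
}

// Recover each component again: shift the wanted byte down, then mask it off.
std::uint8_t red_of(std::uint32_t c)   { return (c >> 16) & 0xFF; }
std::uint8_t green_of(std::uint32_t c) { return (c >> 8) & 0xFF; }
std::uint8_t blue_of(std::uint32_t c)  { return c & 0xFF; }
```

Each byte still holds a value between 0 and 255 for its primary; packing merely stores the three bytes side by side in one word.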

Image files
From Figure above - “Colour smiley” it is hopefully only a small step to see how Figure below -
“smiley.bmp” works. The pattern of bits in the computer’s memory forms what is known as a bitmap.

By simply writing those values out to a data file and prepending information about the width and depth of
the image, a picture can be saved onto disk, and this is indeed the approach taken by all the common
graphical formats seen on the WWW such as .gif, .jpeg, etc.
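A minimal illustration of "dimensions first, then the pixel values" is the plain PGM text format, chosen here only because it is the simplest to write; the formats discussed below apply the same idea with binary headers and compression.

```cpp
#include <ostream>
#include <sstream>  // for the usage example
#include <vector>

// Write a width x height greyscale image (one byte per pixel) as a plain
// PGM ("P2") stream: a small header carrying the dimensions and maximum
// grey level, followed by the bitmap values themselves.
void write_pgm(std::ostream& out, int width, int height,
               const std::vector<unsigned char>& pixels) {
    out << "P2\n" << width << " " << height << "\n255\n";
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x)
            out << (int) pixels[y * width + x] << " ";
        out << "\n";
    }
}
```

Pointing `out` at a file stream saves the picture to disk; any PGM-aware viewer can then reconstruct the grid of pixels from the header.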
Image file formats
Image file formats are standardized means of organizing and storing digital images. Image files are
composed of digital data in one of these formats that can be rasterized for use on a computer display
or printer. An image file format may store data in uncompressed, compressed, or vector formats.
Once rasterized, an image becomes a grid of pixels, each of which has a number of bits to designate
its color equal to the color depth of the device displaying it.
Image file sizes
In raster images, image file size is positively correlated with the number of pixels in an image and the
color depth, or bits per pixel, of the image. Images can be compressed in various ways, however.
Compression uses an algorithm that stores an exact representation or an approximation of the original
image in a smaller number of bytes that can be expanded back to its uncompressed form with a
corresponding decompression algorithm. Considering different compressions, it is common for two
images of the same number of pixels and color depth to have a very different compressed file size.
Considering exactly the same compression, number of pixels, and color depth for two images,
different graphical complexity of the original images may also result in very different file sizes after
compression due to the nature of compression algorithms. With some compression formats, images
that are less complex may result in smaller compressed file sizes. This characteristic sometimes
results in a smaller file size for some lossless formats than lossy formats. For example, simple images
may be losslessly compressed into a GIF or PNG format and result in a smaller file size than a lossy
JPEG format.
Vector images, unlike raster images, can be any dimension independent of file size. File size
increases only with the addition of more vectors.
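The pixel-count and colour-depth relationship for raster files can be made concrete with one line of arithmetic. This is a sketch of the raw size only; real files add headers, palettes and row padding on top of it.

```cpp
// Raw (uncompressed) raster size in bytes: pixels times bits per pixel,
// divided by the 8 bits in a byte.
long long raw_image_bytes(long long width, long long height, int bits_per_pixel) {
    return width * height * bits_per_pixel / 8;
}
```

A 1024 x 768 image at 24 bits per pixel therefore needs 1024 x 768 x 3 = 2,359,296 bytes before any compression, which is why compression matters so much for raster formats.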
Image file compression
There are two types of image file compression algorithms: lossless and lossy.
1. Lossless compression algorithms reduce file size while preserving a perfect copy of the original
uncompressed image. Lossless compression generally, but not exclusively, results in larger files than
lossy compression. Lossless compression should be used to avoid accumulating stages of re-
compression when editing images.
2. Lossy compression algorithms preserve a representation of the original uncompressed image that
may appear to be a perfect copy, but it is not a perfect copy. Oftentimes lossy compression is able to
achieve smaller file sizes than lossless compression. Most lossy compression algorithms allow for
variable compression that trades image quality for file size.
Major graphic file formats
1. Raster formats
a. GIF - stands for graphics interchange format, a bit-mapped graphics file format used by the
World Wide Web, CompuServe and many bulletin board systems. GIF supports color and
various resolutions. It also includes data compression, but because it is limited to 256
colors, it is better suited to images with few colors, such as illustrations, than to color
photos.
b. JPEG - Short for Joint Photographic Experts Group, and pronounced jay-peg. JPEG is a
lossy compression technique for color images. Although it can reduce file sizes to about
5% of their normal size, some detail is lost in the compression.
c. TIFF - Acronym for tagged image file format, one of the most widely supported file
formats for storing bit-mapped images on personal computers (both PCs and Macintosh
computers). Other popular formats are BMP and PCX. TIFF graphics can be any
resolution, and they can be black and white, gray-scaled, or color. Files in TIFF format
often end with a .tif extension.
d. MPEG - Short for Moving Picture Experts Group, and pronounced m-peg, is a working
group of the ISO. The term also refers to the family of digital video compression standards
and file formats developed by the group. MPEG generally produces better-quality video
than competing formats, such as Video for Windows, Indeo and QuickTime. MPEG files
previously on PCs needed hardware decoders (codecs) for MPEG processing. Today,
however, PCs can use software-only codecs including products from RealNetworks,
QuickTime or Windows Media Player. MPEG algorithms compress data to form small
bits that can be easily transmitted and then decompressed. MPEG achieves its high
compression rate by storing only the changes from one frame to another, instead of each
entire frame. The video information is then encoded using a technique called Discrete
Cosine Transform (DCT). MPEG uses a type of lossy compression, since some data is
removed. But the diminishment of data is generally imperceptible to the human eye.
e. PNG - Short for Portable Network Graphics, and pronounced ping, a new bit-mapped
graphics format similar to GIF. In fact, PNG was approved as a standard by the World
Wide Web consortium to replace GIF because GIF uses a patented data compression
algorithm called LZW. In contrast, PNG is completely patent- and license-free. The most
recent versions of Netscape Navigator and Microsoft Internet Explorer now support PNG.
f. BMP - The standard bit-mapped graphics format used in the Windows environment. By
convention, graphics files in the BMP format end with a .BMP extension. BMP files store
graphics in a format called device-independent bitmap (DIB).
2. Vector formats
As opposed to the raster image formats above (where the data describes the characteristics of each
individual pixel), vector image formats contain a geometric description which can be rendered
smoothly at any desired display size. At some point, all vector graphics must be rasterized in order to
be displayed on digital monitors. However, vector images can be displayed with analog CRT
technology such as that used in some electronic test equipment, medical monitors, radar displays,
laser shows and early video games. Plotters are printers that use vector data rather than pixel data to
draw graphics.
a. CGM - CGM (Computer Graphics Metafile) is a file format for 2D vector graphics, raster
graphics, and text, and is defined by ISO/IEC 8632. All graphical elements can be
specified in a textual source file that can be compiled into a binary file or one of two text
representations. CGM provides a means of graphics data interchange for computer
representation of 2D graphical information independent from any particular application,
system, platform, or device. It has been adopted to some extent in the areas of technical
illustration and professional design, but has largely been superseded by formats such as
SVG and DXF.
b. Gerber Format (RS-274X) - RS-274X Extended Gerber Format was developed by Gerber
Systems Corp., now Ucamco. This is a 2D bi-level image description format. It is the de-
facto standard format used by printed circuit board or PCB software. It is also widely used
in other industries requiring high-precision 2D bi-level images.
c. SVG - SVG (Scalable Vector Graphics) is an open standard created and developed by the
World Wide Web Consortium to address the need (and attempts of several corporations)
for a versatile, scriptable and all-purpose vector format for the web and otherwise. The
SVG format does not have a compression scheme of its own, but due to the textual nature
of XML, an SVG graphic can be compressed using a program such as gzip. Because of its
scripting potential, SVG is a key component in web applications: interactive web pages
that look and act like applications.

1.3: Output Primitives
Graphic SW and HW provide subroutines to describe a scene in terms of basic geometric structures
called output primitives. Those functions in a graphic package that we use to describe the various picture
components are called the graphic output primitives or simply primitives. Output primitives are
combined to form complex structures. Output primitives describe the geometry of objects and are typically
referred to as geometric primitives. Examples: point, line, text, filled region, images, quadric surfaces,
spline curves. Each of the output primitives has its own set of attributes.
Graphics Definitions
Point - A location in space, 2D or 3D. Sometimes denotes one pixel
Vertex - Point in 3D
Edge - Line in 3D connecting two vertices
Polygon/Face/Facet - Arbitrary shape formed by connected vertices. Fundamental unit of 3D computer
graphics
Output primitives attributes
Points
Attributes: Size, Color
Lines
Attributes: Color, Thickness, Type

Polylines (open)
A set of line segments joined end to end.
Attributes: Color, Thickness, Type

Polylines (closed)
A polyline with the last point connected to the first point.
Attributes: Color, Thickness, Type

Polygons
A closed polyline whose segments enclose a region that can be filled.
Attributes: Fill color, Thickness, Fill pattern

Text
Attributes: Font, Color, Size, Spacing, Orientation.
Font:
• Type (Helvetica, Times, Courier etc.)
• Size (10 pt, 14 pt etc.)
• Style (Bold, Italic, Underlined)
Images
Attributes: Image Size, Image Type, Color Depth.
Image Type:
• Binary (only two levels)
• Monochrome
• Color
Color Depth:
• Number of bits used to represent color.
Scan Conversion
The process of converting output primitives into frame-buffer updates: choosing which pixels receive which
intensity value.
Constraints
• Straight lines should appear straight
• Primitives should start and end accurately
• Primitives should have a consistent brightness along their length
• They should be drawn rapidly

Line Drawing
What is a Line
• Straight path connecting two points.
• Extremely small width, consistent density.
• Beginning and end on points.
A line, or straight line, is an (infinitely) thin, (infinitely) long, straight geometrical object, i.e. a curve that is
long and straight. A line is an infinitely-extending one-dimensional figure that has no curvature. A line
segment is a part of a line that is bounded by two distinct end points and contains every point on the line
between its end points.
What is an ideal line?
Must appear straight and continuous.
This is only truly possible for axis-aligned and 45-degree lines.
Must pass through (interpolate) both defining end points.
Must have uniform density and intensity,
consistent within a line and over all lines.
Must be efficient, drawn quickly.

A straight line segment is defined by the coordinate positions for the endpoints of the segment. Given
two points, in Euclidean geometry, one can always find exactly one line that passes through the two
points. A line may have three forms with respect to slope:
• Slope = 1
• Slope > 1
• Slope < 1

Line Equations
The Cartesian slope-intercept equation for the straight line is
y = m·x + b -------------------(1)
where m = the slope of the line and b = the y intercept (a constant)
Our aim is to scan convert a line segment usually defined by its endpoints: (x0, y0) and (xend, yend).

Therefore, we need to draw the line segment: y = m·x + b

for x0 ≤ x ≤ xend and y0 ≤ y ≤ yend
Where
m = (yend − y0)/(xend − x0) = δy/δx -------------------(2)
At point (x0, y0)
b = y0 − m·x0
  = y0 − x0·(yend − y0)/(xend − x0) -------------------(3)
At point (xend, yend)
b = yend − m·xend
  = yend − xend·(yend − y0)/(xend − x0) -------------------(4)
From Eq. 2, for any given x interval δx along a line, we can compute the corresponding y interval δy.
δy = m·δx -------------------(5)
Similarly, for any given y interval δy along a line, we can compute the corresponding x interval δx.
δx = δy/m -------------------(6)
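Equations 2 and 3 can be checked directly in code. This is a small sketch; the function names are illustrative, and a vertical line (xend = x0) is assumed away.

```cpp
// Slope of the line through (x0, y0) and (xend, yend), as in Eq. 2.
double slope(double x0, double y0, double xend, double yend) {
    return (yend - y0) / (xend - x0);           // m = dy / dx
}

// Intercept from Eq. 3: b = y0 - m * x0.
double intercept(double x0, double y0, double xend, double yend) {
    return y0 - slope(x0, y0, xend, yend) * x0;
}
```

For the line through (0, 1) and (4, 3), slope gives 0.5 and intercept gives 1, i.e. y = 0.5·x + 1.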
Does it Work?
Now check: if |m| < 1, then starting at the first point, simply increment x by 1 (unit increment) until it
reaches the ending point, and calculate the y for each x using equation 5 (δy = m·δx). Conversely, if
|m| > 1, then increment y by 1 until it reaches the ending point, and calculate the x corresponding to each
y using equation 6 (δx = δy/m).

If |m| is less than 1 (|m| < 1), this means that for every subsequent pixel on the line there will be a unit
increment in the x direction and an increment of less than 1 in the y direction, and vice versa for slope
greater than 1 (|m| > 1).
If |m| = 1 then δx = δy. In this case a smooth line with slope m is generated between the specified
points.
Line Drawing Algorithms
• Line drawing is fundamental to computer graphics.
• We must have fast and efficient line drawing functions.
• Rasterization problem: given only the two end points, how do we compute the intermediate pixels,
so that the set of pixels closely approximates the ideal line?

Analytical Method

Directly based on the analytical equation of a line. Involves floating point multiplication and addition.
Requires round-off function.
Incremental Algorithms
I have got a pixel on the line (Current Pixel). How do I get the next pixel on the line?
Compute one point based on the previous point:
(x0, y0), …, (xk, yk), (xk+1, yk+1), …

Incrementing along x

Assumes that the next pixel to be set is on the next column of pixels (Incrementing the value of x !). Not
valid if slope of the line is large.
Digital Differential Analyzer (DDA) Algorithm
Digital Differential Analyzer Algorithm is an incremental algorithm. The basis of the DDA method is to
take unit steps along one coordinate and compute the corresponding values along the other coordinate.
The unit steps are always along the coordinate of greatest change; e.g. if dx = 10 and dy = 5, then we
would take unit steps along x and compute the steps along y.

As we know that
• m = δy/δx
• δy = m·δx
• δx = (1/m)·δy = δy/m
We consider first a line with positive slope, as shown in the figure above (first quadrant).
If the slope is less than or equal to 1 (m ≤ 1), we sample at unit x intervals (δx = 1) and compute
successive y values as
yk+1 − yk = m (if δy = yk+1 − yk and δx = 1)
yk+1 = yk + m -----------------(1)
The subscript k takes integer values starting from 0, for the first point, and increases by 1 until the final
endpoint is reached. Since m can be any real number between 0.0 and 1.0, each calculated y value must be
rounded to the nearest integer corresponding to a screen pixel position in the x column we are processing.
For lines with a positive slope greater than 1.0 (m > 1), we reverse the roles of x and y.
That is, we sample at unit y intervals (δy= 1) and calculate consecutive x values as:
• δx = 1/m (if δx = xk+1 − xk and δy = 1)
• xk+1 − xk = 1/m
• xk+1 = xk + 1/m -----------------(2)
In this case, each computed x value is rounded to the nearest pixel position along the current y scan line.
Equations 1 and 2 are based on the assumption that lines are to be processed from the left endpoint to the
right endpoint of the figure above.
If this processing is reversed, so that the starting endpoint is at the right (fourth quadrant), then
we have δx = −1 and
yk+1 = yk − m -------------------(3)
When the slope is greater than 1, i.e. m > 1, we have δy = −1 and
xk+1 = xk − 1/m -----------------(4)

Similar calculations are carried out using equations 1 through 4 to determine pixel positions along a line
with negative slope.
• Thus, if |m| < 1 and the starting endpoint is at the left, we set δx = 1 and calculate y values with
Eq. 1.
• When the starting endpoint is at the right (for the same slope, i.e. |m| < 1), we set δx = −1 and obtain
y positions using Eq. 3.
• For a negative slope with |m| > 1, we use δy = −1 and Eq. 4, or
• we use δy = 1 and Eq. 2.
The DDA technique is very simple.
1) Find the differences dx and dy between the x coordinates and the y coordinates of the endpoints of the line.
2) If |dx| is greater than |dy|, then |dx| will be the step; otherwise |dy| will be the step.
if |dx|>|dy| then
step = |dx|
else
step = |dy|
It is now simple to see that step is the total number of pixels required for the line.
3) The next step is to divide dx and dy by step to get xIncrement and yIncrement, the increments
required at each step to find the next pixel value.
xIncrement = dx/step
yIncrement = dy/step
4) Finally, a loop is required that will run step times.
• In the loop, draw the pixel, then add xIncrement to x1 and yIncrement to y1.
• Summing up all of the above in the algorithm, we get:
dx = xEnd − x0 // find the difference of xEnd and x0
dy = yEnd − y0 // find the difference of yEnd and y0
x1 = x0 // set initial value of x1
y1 = y0 // set initial value of y1
if |dx| > |dy| then // find step
step = |dx|
else
step = |dy|
xIncrement = dx/step // calculate xIncrement
yIncrement = dy/step // calculate yIncrement
for counter = 1 to step // loop to draw pixels
Begin
draw_Pixel(round(x1), round(y1)) // function to draw pixel
x1 = x1 + xIncrement // increment x1
y1 = y1 + yIncrement // increment y1
End
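The pseudocode above translates almost line for line into C++. In this sketch, draw_Pixel is replaced by collecting the rounded points into a vector so the result can be inspected; the names are illustrative.

```cpp
#include <algorithm>
#include <cmath>
#include <cstdlib>
#include <utility>
#include <vector>

// DDA: take unit steps along the coordinate of greatest change and
// accumulate the fractional increment along the other, rounding each point.
std::vector<std::pair<int, int>> dda_line(int x0, int y0, int xEnd, int yEnd) {
    int dx = xEnd - x0;                                // difference of xEnd and x0
    int dy = yEnd - y0;                                // difference of yEnd and y0
    int step = std::max(std::abs(dx), std::abs(dy));   // number of unit steps
    double xIncrement = step ? (double) dx / step : 0.0;
    double yIncrement = step ? (double) dy / step : 0.0;
    std::vector<std::pair<int, int>> points;
    double x1 = x0, y1 = y0;
    for (int counter = 0; counter <= step; ++counter) { // include both endpoints
        points.push_back({(int) std::lround(x1), (int) std::lround(y1)});
        x1 += xIncrement;                               // increment x1
        y1 += yIncrement;                               // increment y1
    }
    return points;
}
```

For the segment (0, 0) to (4, 2), step is 4 and yIncrement is 0.5, so successive y values are rounded to the nearest row as the loop walks the columns.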
Criticism of the Algorithm
 The most serious criticism of the algorithm is its use of floating-point calculations.
 When the points to be drawn have integer coordinates, why use floating-point
calculations, which require more space and have a higher computational
cost?
 Therefore there is a need for an algorithm based on integer
calculations.
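To make the floating-point cost concrete, the DDA steps above can be written without any console dependency. The following is a minimal sketch (the function name dda_points is ours); it returns the rounded pixel positions instead of plotting them:

```cpp
#include <cmath>
#include <utility>
#include <vector>

// DDA sketch: choose the larger coordinate difference as the step count,
// then accumulate fractional increments and round at every step.
std::vector<std::pair<int, int>> dda_points(double x0, double y0,
                                            double xEnd, double yEnd)
{
    double dx = xEnd - x0, dy = yEnd - y0;
    int steps = (std::fabs(dx) > std::fabs(dy)) ? (int)std::fabs(dx)
                                                : (int)std::fabs(dy);
    if (steps == 0) steps = 1;                    // degenerate (single-point) line
    double xInc = dx / steps, yInc = dy / steps;

    std::vector<std::pair<int, int>> pts;
    double x = x0, y = y0;
    for (int k = 0; k <= steps; ++k) {            // steps + 1 pixels in total
        pts.push_back({(int)std::lround(x), (int)std::lround(y)});
        x += xInc;
        y += yInc;
    }
    return pts;
}
```

Note that every iteration still performs two floating-point additions and two roundings, which is exactly the cost the criticism above targets.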
C++ code implementation of the Digital Differential Analyzer (DDA) Algorithm
#include <cmath>
#include <iostream>
#include <windows.h>

// Position the console cursor at (x, y).
void gotoxy(int x, int y)
{
    COORD ord;
    ord.X = x;
    ord.Y = y;
    SetConsoleCursorPosition(GetStdHandle(STD_OUTPUT_HANDLE), ord);
}

// DDA line drawing: step along the major axis, adding the per-step
// increments and rounding to the nearest pixel.
void Draw_Line(int left, int top, int right, int bottom)
{
    double dx = right - left;
    double dy = bottom - top;
    int step = (std::fabs(dx) > std::fabs(dy)) ? (int)std::fabs(dx)
                                               : (int)std::fabs(dy);
    if (step == 0) step = 1;                 // degenerate (single-point) line
    double xInc = dx / step;
    double yInc = dy / step;
    double x = left, y = top;
    for (int counter = 0; counter <= step; counter++)
    {
        gotoxy((int)(x + 0.5), (int)(y + 0.5));
        std::cout << "x";
        x += xInc;
        y += yInc;
    }
}

int main()
{
    Draw_Line(0, 0, 6, 3);
    gotoxy(0, 10);                           // move the cursor below the drawing
    return 0;
}
Bresenham Line Drawing Algorithm
The Bresenham line algorithm is an algorithm that determines which points in an n-dimensional raster
should be plotted in order to form a close approximation to a straight line between two given points. It is
commonly used to draw lines on a computer screen, as it uses only integer addition and subtraction all of
which are very economical operations in standard computer architectures. Bresenham's algorithm finds
the closest integer coordinates to the actual line, using only integer math. Assuming that the slope is
positive and less than 1, moving 1 step in the x direction,
 y either stays the same, or
 Increases by 1.
A decision function is required to resolve this choice.
If the current point is (xi, yi), the next point can be either (xi+1, yi) or (xi+1, yi+1).
The actual position on the line is
(xi+1, m(xi+1)+b)

To illustrate Bresenham’s approach, we first consider the scan-conversion process for lines with positive
slope less than 1.0. Pixel positions along a line path are then determined by sampling at unit x intervals.
Starting from the left endpoint (x0, y0) of a given line, we step to each successive column (x position) and
plot the pixel whose scan-line y value is closest to the line path.
The figure below demonstrates the kth step in this process. Assuming we have determined that the pixel at
(xk, yk) is to be displayed, we next need to decide which pixel to plot in column xk+1 = xk + 1 ------(1)
Our choices are the pixels at positions
 (xk+ 1, yk) and
 (xk+ 1, yk+ 1).

At sampling position xk + 1, we label the vertical pixel separations from the mathematical line path as dlower
and dupper. The y coordinate on the mathematical line at pixel column position xk + 1 is calculated as:
 y = mx + b
 y = m(xk + 1) + b
 {Replace x by xk + 1}

Then
dlower = y – yk
= m(xk + 1) + b – yk ----------(2)
{Replace y by m(xk + 1) + b}
And
dupper = yk+1 – y
= yk+1 – [m(xk + 1) + b]
{Replace y by m(xk + 1) + b}
= yk+1 – m(xk + 1) – b -----------(3)
To determine which pixel is closest, a test is performed based on the difference of the two
separations, i.e. dlower – dupper

So, subtracting Eq#3 from Eq#2:
dlower – dupper = m(xk + 1) + b – yk – [(yk + 1) – m(xk + 1) – b]
= m(xk + 1) + b – yk – yk – 1 + m(xk + 1) + b
= 2m(xk + 1) – 2yk + 2b – 1 ------------------------(4)
The decision parameter Pk for the kth step can be obtained by re-arranging Eq#4:
dlower – dupper = 2m(xk + 1) – 2yk + (2b – 1)
= 2(Δy/Δx)(xk + 1) – 2yk + (2b – 1)
Δx(dlower – dupper) = 2Δy(xk + 1) – 2Δx yk + Δx(2b – 1) ----(5)
As Δx(dlower – dupper) = Pk, Eq#5 becomes:
Pk = 2Δy(xk + 1) – 2Δx yk + Δx(2b – 1)
Pk = 2Δy xk + 2Δy – 2Δx yk + Δx(2b – 1)
Pk = 2Δy xk – 2Δx yk + 2Δy + Δx(2b – 1) ------------------------(6)
As 2Δy + Δx(2b – 1) is a constant term, Eq#6 becomes:
Pk = 2Δy xk – 2Δx yk + C --------------------------------------------(7)
Note that Δx and Δy are independent of pixel position, so they are treated as constants. Coordinate
changes along the line occur in unit steps in either the x or y direction. Therefore we can obtain the values of
successive decision parameters using incremental integer calculations.
At step k+1, the decision parameter Pk+1 can be calculated from Eq#7 as:
Pk+1 = 2Δy xk+1 – 2Δx yk+1 + C -----------------(8)
Subtracting Eq#7 from Eq#8:
Pk+1 – Pk = 2Δy xk+1 – 2Δx yk+1 + C – [2Δy xk – 2Δx yk + C]
Pk+1 – Pk = 2Δy xk+1 – 2Δx yk+1 + C – 2Δy xk + 2Δx yk – C
Pk+1 – Pk = 2Δy xk+1 – 2Δy xk – 2Δx yk+1 + 2Δx yk
Pk+1 – Pk = 2Δy(xk+1 – xk) – 2Δx(yk+1 – yk)
And
Pk+1 = Pk + 2Δy(xk+1 – xk) – 2Δx(yk+1 – yk) ------------------(9)
But from Eq#1, xk+1 = xk + 1
So Eq#9 becomes:
Pk+1 = Pk + 2Δy(xk + 1 – xk) – 2Δx(yk+1 – yk)
Pk+1 = Pk + 2Δy – 2Δx(yk+1 – yk) -------------------------------(10)
where yk+1 – yk is either 0 or 1.
So
Pk+1 = Pk + 2Δy -------------------------------------------------(11)
for yk+1 – yk = 0, and the point is (xk+1, yk).
And
Pk+1 = Pk + 2Δy – 2Δx ------------------------------------(12)
for yk+1 – yk = 1, and the point is (xk+1, yk+1).
How is the first decision parameter P0 calculated?
Consider Eq#6:
Pk = 2Δy xk + 2Δy – 2Δx yk + Δx(2b – 1)
Pk = 2Δy(xk + 1) – 2Δx yk + Δx(2b – 1) ----[A]
For P0, use the starting point/pixel (x1, y1), so Eq#A becomes:
P0 = 2Δy(x1 + 1) – 2Δx y1 + Δx(2b – 1) -----[B]
As b = y – mx,
b = y1 – mx1 for (x1, y1)
So Eq#B changes to:
P0 = 2Δy x1 + 2Δy – 2Δx y1 + Δx[2(y1 – mx1) – 1]
P0 = 2Δy x1 + 2Δy – 2Δx y1 + Δx(2y1 – 2mx1 – 1) ----------[C]
But m = Δy/Δx
So Eq#C becomes:
P0 = 2Δy x1 + 2Δy – 2Δx y1 + Δx(2y1 – 2(Δy/Δx)x1 – 1)
P0 = 2Δy x1 + 2Δy – 2Δx y1 + 2Δx y1 – 2Δy x1 – Δx
P0 = 2Δy – Δx --------------------------------------------------------[D]
Bresenham Algorithm
BresenhamLine(int x1, y1, x2, y2) //Line drawing procedure for m < 1
Begin //Start of Line drawing procedure
Int dx = x2 - x1 //Calculate dx
Int dy = y2 - y1 //Calculate dy
Int p = 2*dy - dx //Calculate initial decision parameter
Int incrE = 2*dy //increment used when p <= 0
Int incrNE = 2*(dy - dx) //increment used when p > 0
if (x1 > x2) //Determine the start point
Begin //Start of if body
x = x2
y = y2
x2 = x1
End //End of if body
else
Begin //Start of else body
x = x1
y = y1
End //End of else body
Plot_Pixel(x, y) //Function to draw pixel
while (x < x2)
Begin //Start of loop body
if (p <= 0)
Begin //Start of if body
p += incrE
x++
End //End of if body
else
Begin //Start of else body
p += incrNE
x++
y++
End //End of else body
Plot_Pixel(x, y) //Function to draw pixel
End //End of loop
End //End of Line drawing procedure
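The pseudocode above can also be sketched portably in C++ (the function name bresenham_points is ours); it collects the chosen pixels for a line with slope between 0 and 1 instead of plotting them:

```cpp
#include <utility>
#include <vector>

// Bresenham for 0 <= slope <= 1, left to right: the integer decision
// parameter p chooses between an E move (y stays) and an NE move (y rises).
std::vector<std::pair<int, int>> bresenham_points(int x1, int y1,
                                                  int x2, int y2)
{
    int dx = x2 - x1, dy = y2 - y1;
    int p = 2 * dy - dx;            // initial decision parameter, Eq#D
    int incrE = 2 * dy;             // used when p <= 0, Eq#11
    int incrNE = 2 * (dy - dx);     // used when p > 0, Eq#12

    std::vector<std::pair<int, int>> pts;
    int x = x1, y = y1;
    pts.push_back({x, y});
    while (x < x2) {
        if (p <= 0) p += incrE;
        else { p += incrNE; ++y; }
        ++x;
        pts.push_back({x, y});
    }
    return pts;
}
```

The loop body performs only integer additions and comparisons, which is the whole point of the method.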
C++ implementation of Bresenham’s line drawing Algorithm

#include <iostream>
#include <windows.h>

// Position the console cursor at (x, y).
void gotoxy(int x, int y)
{
    COORD ord;
    ord.X = x;
    ord.Y = y;
    SetConsoleCursorPosition(GetStdHandle(STD_OUTPUT_HANDLE), ord);
}

// Bresenham line drawing for slopes between 0 and 1 (left to right).
void Bresenham_Line(int left, int top, int right, int bottom)
{
    int dx = right - left;
    int dy = bottom - top;
    int P = 2 * dy - dx;            // initial decision parameter
    int incrE = 2 * dy;             // increment used when P <= 0
    int incrNE = 2 * (dy - dx);     // increment used when P > 0
    int x = left, y = top;

    gotoxy(x, y);
    std::cout << "x";               // plot the starting pixel

    while (x < right)
    {
        if (P <= 0)
        {
            P += incrE;             // move E: y stays the same
        }
        else
        {
            P += incrNE;            // move NE: y increases by 1
            y++;
        }
        x++;
        gotoxy(x, y);
        std::cout << "x";
    }
}

int main()
{
    Bresenham_Line(0, 0, 10, 6);
    gotoxy(0, 10);                  // move the cursor below the drawing
    return 0;
}

Circle Drawing Algorithms


What is a Circle?
A circle is the set of points in a plane that are equidistant from a given point O, called the center. The
distance r from the center to any point on the circle is called the radius. Twice the radius is known as the diameter.
The angle a circle subtends from its center is a full angle, equal to 360° or 2π radians.

Circumference of a Circle
A circle has the maximum possible area for a given perimeter, and the minimum possible perimeter for a
given area. The perimeter C of a circle is called the circumference, and is given by C = 2πr.

Circle Drawing Techniques


First of all, for circle drawing we need to know what the input will be. The input will be one center
point (xc, yc) and a radius r. Using these two inputs, there are a number of ways to draw a circle. They
range from very simple to complex in their level of understanding, and conversely from inefficient to
efficient in time complexity. We examine them one by one, giving a comparative study.
Circle Drawing Using Cartesian Coordinates
This technique uses the equation for a circle with radius r centered at (0, 0):
x² + y² = r² ---------------------(I)
An obvious choice is to plot y
against different values of x.
Using the above equation a circle can be easily drawn.

The value of x varies from xc – r to xc + r, and y is calculated using the formula given below.

Eq#I for points (x, y) and center (xc, yc) can be modified as:

(x – xc)² + (y – yc)² = r² ----------------(II)
To calculate y:

y = yc ± √(r² – (x – xc)²) ---------------(III)
Algorithm
Using this technique, a simple algorithm will be:
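The following is a possible C++ sketch of this technique (the function name circle_cartesian is ours); it steps x across the circle and solves Eq#III for the two y values at each column:

```cpp
#include <cmath>
#include <utility>
#include <vector>

// Step x across the circle and solve (x - xc)^2 + (y - yc)^2 = r^2 for y,
// plotting the upper and lower halves at each column.
std::vector<std::pair<int, int>> circle_cartesian(int xc, int yc, int r)
{
    std::vector<std::pair<int, int>> pts;
    for (int x = xc - r; x <= xc + r; ++x) {
        double dy = std::sqrt((double)r * r - (double)(x - xc) * (x - xc));
        pts.push_back({x, (int)std::lround(yc + dy)});   // upper half
        pts.push_back({x, (int)std::lround(yc - dy)});   // lower half
    }
    return pts;
}
```

Each column costs a multiplication and a square root, which is the inefficiency noted below.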

Drawbacks/Shortcomings
 This works, but is inefficient because of the multiplication and square root operations.
 It also creates large gaps in the circle for values of x close to r, as shown in the figure.

Circle Drawing Using Polar Coordinates


A better approach, which eliminates the unequal spacing shown in the previous figure, is to calculate points along
the circular boundary using polar coordinates r and θ. Expressing the circle equation in parametric polar
form yields the pair of equations

x = xc + r cos θ, y = yc + r sin θ --------------------(I)
Using Eq#I the circle can be plotted by calculating the x and y coordinates as θ takes values from 0 to 360
degrees or 0 to 2π.
When these equations are used with a fixed angular step size, a circle is plotted with equidistant points on the
circumference. The step size chosen for θ depends on the application and the display device. Larger
angular separations along the circumference can be connected with straight-line segments to approximate
the circular path. For a more continuous boundary on a raster display, we can set the step size at 1/r. This
plots pixel positions that are approximately one unit apart.
Algorithm
Now let us see how this technique can be summed up in algorithmic form.
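A possible C++ sketch of the polar technique (the function name circle_polar is ours), using the 1/r step size suggested above:

```cpp
#include <cmath>
#include <utility>
#include <vector>

// Plot points x = xc + r*cos(theta), y = yc + r*sin(theta), advancing
// theta by 1/r radians so consecutive pixels are roughly one unit apart.
std::vector<std::pair<int, int>> circle_polar(int xc, int yc, int r)
{
    const double PI = 3.14159265358979323846;
    std::vector<std::pair<int, int>> pts;
    double step = 1.0 / r;
    for (double theta = 0.0; theta < 2.0 * PI; theta += step)
        pts.push_back({(int)std::lround(xc + r * std::cos(theta)),
                       (int)std::lround(yc + r * std::sin(theta))});
    return pts;
}
```

Every step calls cos and sin, the trigonometric cost criticized below.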

Again, this is a very simple technique, and it also solves the problem of unequal spacing. But unfortunately the
technique is still inefficient in terms of calculations, because it involves
 Floating-point calculations and
 Trigonometric calculations,
both of which are time consuming.
The calculations can be reduced by considering the symmetry of circles.
The shape of the circle is similar in each quadrant. We can generate the circle section in the second quadrant
of the xy-plane by noting that the two circle sections are symmetric with respect to the y axis, and circle
sections in the third and fourth quadrants can be obtained from sections in the first and second quadrants
by considering symmetry about the x axis. We can take this one step further and note that there is also
symmetry between octants. Circle sections in adjacent octants within one quadrant are symmetric with
respect to the 45° line dividing the two octants. These symmetry conditions are illustrated in the given figure.

Eight Octants Symmetry

Optimizing the Algorithm
Now this algorithm can be optimized by using symmetric octants as:

Inefficiency Still Prevails


 Hence we have reduced the calculations by half by considering the symmetric octants of the circle.
 But as we discussed earlier, inefficiency is still there, and that is due to the use of floating-point
calculations.
 In the next algorithm we will try to remove this problem.
Midpoint Circle Algorithm
As in the Bresenham line drawing algorithm, we derive a decision parameter that helps us determine
whether or not to increment the y coordinate against an increment of the x coordinate (or vice versa for slope
> 1). Similarly, here we will try to derive a decision parameter which gives us the closest pixel position. Let
us consider only the first octant of a circle of radius r centered on the origin. We begin by plotting the point
(0, r) and continue while x < y, stopping once x ≥ y.

The decision at each step is whether to choose:


 The pixel to the right of the current pixel or
 The pixel which is to the right and below the current pixel. (8-way stepping)
Assume:
P = (xk, yk) is the current pixel.
Q = (xk+1, yk) is the pixel to the right.
R = (xk+1, yk–1) is the pixel to the right and below.

To apply the midpoint method, we define a circle function:


fcircle(x, y) = x² + y² – r²
The following relations can be observed:
fcircle(x, y) < 0, if (x, y) is inside the circle boundary.
fcircle(x, y) = 0, if (x, y) is on the circle boundary.
fcircle(x, y) > 0, if (x, y) is outside the circle boundary.

The circle function tests given above are performed for the midpoints between pixels near the circle
path at each sampling step. Thus, the circle function is the decision parameter in the midpoint algorithm,
and we can set up incremental calculations for this function as we did in the line algorithm. The figure given
below shows the midpoint between the two candidate pixels at sampling position xk + 1. Assuming we
have just plotted the pixel at (xk, yk), we next need to determine whether the pixel at position (xk + 1, yk)
or the one at position (xk + 1, yk – 1) is closer to the circle. Our decision parameter is the circle function
evaluated at the midpoint between these two pixels:
Pk = fcircle(xk + 1, yk – ½)
Pk = (xk + 1)² + (yk – ½)² – r² ………..…...(1)
If Pk < 0, this midpoint is inside the circle and the pixel on scan line yk is closer to the circle boundary.
Otherwise, the midpoint is outside or on the circle boundary, and we select the pixel on scan line
yk – 1.

Successive decision parameters are obtained using incremental calculations.


We obtain a recursive expression for the next decision parameter by evaluating the circle function at
sampling position xk+1 + 1 = xk + 2:
Pk+1 = fcircle(xk+1 + 1, yk+1 – ½)
Pk+1 = [(xk + 1) + 1]² + (yk+1 – ½)² – r² .……….…....…(2)
Subtracting (1) from (2), we get:
Pk+1 – Pk = [(xk + 1) + 1]² + (yk+1 – ½)² – r² – [(xk + 1)² + (yk – ½)² – r²]
Pk+1 = Pk + 2(xk + 1) + (y²k+1 – y²k) – (yk+1 – yk) + 1 …(3)
where yk+1 is either yk or yk – 1, depending on the sign of Pk.
Therefore, if Pk < 0 (negative), then yk+1 will be yk and the formula to calculate Pk+1 will be:
Pk+1 = Pk + 2xk+1 + 1 ------------------------------(4)
Otherwise, if Pk ≥ 0 (positive), then yk+1 will be yk – 1 and the formula to calculate Pk+1 will be:
Pk+1 = Pk + 2xk+1 + 1 – 2yk+1 -------------------(5)
How is the initial decision parameter P0 calculated?
As in the line algorithm, we must determine the starting value of Pk.
For this, the starting pixel position is (0, r).
Therefore, putting this value in the equation, we get:
P0 = fcircle(0 + 1, r – ½)
P0 = (0 + 1)² + (r – ½)² – r²
P0 = 1 + r² – r + ¼ – r²
P0 = 5/4 – r
If the radius r is specified as an integer, we can simply round P0 to:
P0 = 1 – r ……………………………….……(6)
Since all increments are integers, the algorithm can finally be summed up as follows.
Midpoint Circle Drawing Algorithm
Step 1: Input radius r and circle center (Xc, Yc), and obtain the first point on the circumference of a circle
centered on the origin as (X0, Y0) = (0, r)
Step 2: Calculate the initial value of the decision parameter as
P0 = 5/4 – r
Step 3: At each Xk position, starting at k = 0, perform the following test:
If Pk < 0, the next point to plot is (Xk+1, Yk) and
Pk+1 = Pk + 2Xk+1 + 1
Otherwise the next point is (Xk+1, Yk–1) and
Pk+1 = Pk + 2Xk+1 + 1 – 2Yk+1
where 2Xk+1 = 2Xk + 2 and 2Yk+1 = 2Yk – 2
Step 4: Determine symmetry points in the other seven octants
Step 5: Move each pixel position (X, Y) onto the circular path centered on (Xc, Yc) and plot the coordinate
values as
X = X + Xc, Y = Y + Yc
Step 6: Repeat steps 3 through 5 until X >= Y
C++ code implementing the midpoint circle drawing algorithm
#include <iostream>
#include <windows.h>

// Position the console cursor at (x, y).
void gotoxy(int x, int y)
{
    COORD ord;
    ord.X = x;
    ord.Y = y;
    SetConsoleCursorPosition(GetStdHandle(STD_OUTPUT_HANDLE), ord);
}

// Plot the eight symmetric points for the octant offsets (x, y).
void Plot_Octants(int xC, int yC, int x, int y, char c)
{
    gotoxy(xC + x, yC + y); std::cout << c;
    gotoxy(xC + x, yC - y); std::cout << c;
    gotoxy(xC - x, yC + y); std::cout << c;
    gotoxy(xC - x, yC - y); std::cout << c;
    gotoxy(xC + y, yC + x); std::cout << c;
    gotoxy(xC - y, yC + x); std::cout << c;
    gotoxy(xC + y, yC - x); std::cout << c;
    gotoxy(xC - y, yC - x); std::cout << c;
}

void Circle(int Radius, int xC, int yC)
{
    int P = 1 - Radius;                  // initial decision parameter, P0 = 1 - r
    int x = 0;
    int y = Radius;

    Plot_Octants(xC, yC, x, y, 'i');     // plot the starting points

    while (x <= y)
    {
        x++;
        if (P < 0)
        {
            P += 2 * x + 1;              // midpoint inside: keep y
        }
        else
        {
            P += 2 * (x - y) + 1;        // midpoint outside: step y down
            y--;
        }
        Plot_Octants(xC, yC, x, y, 'x');
    }
}

int main()
{
    Circle(7, 5, 5);
    Circle(10, 20, 20);
    gotoxy(0, 0);
    return 0;
}
Midpoint Ellipse Drawing Algorithm
Step 1: Input the radii rx, ry and ellipse center (Xc, Yc), and obtain the first point on the circumference of an
ellipse centered on the origin as (X0, Y0) = (0, ry)
Step 2: Calculate the initial value of the decision parameter in region 1 as
P10 = ry² – rx²ry + 1/4 rx²
Step 3: At each Xk position in region 1, starting at k = 0, perform the following test:
If P1k < 0, the next point to plot is (Xk+1, Yk) and
P1k+1 = P1k + 2ry²Xk+1 + ry²
Otherwise the next point is (Xk+1, Yk–1) and
P1k+1 = P1k + 2ry²Xk+1 – 2rx²Yk+1 + ry²
with
2ry²Xk+1 = 2ry²Xk + 2ry²
2rx²Yk+1 = 2rx²Yk – 2rx²
Step 4: Calculate the initial value of the decision parameter in region 2 as
P20 = ry²(X0 + 1/2)² + rx²(Y0 – 1)² – rx²ry²
Step 5: At each Yk position in region 2, starting at k = 0,
perform the following test:
If P2k > 0, the next point to plot is (Xk, Yk–1) and
P2k+1 = P2k – 2rx²Yk+1 + rx²
Otherwise the next point is (Xk+1, Yk–1) and
P2k+1 = P2k + 2ry²Xk+1 – 2rx²Yk+1 + rx²
Step 6: Determine symmetry points in the other three quadrants
Step 7: Move each pixel position (X, Y) onto the elliptical path centered on
(Xc, Yc) and plot the coordinate values as
X = X + Xc, Y = Y + Yc
Step 8: Repeat the steps for region 1 until 2ry²X >= 2rx²Y
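The eight steps above can be sketched in C++ as follows (the function name ellipse_quadrant is ours; it traces only the first quadrant and returns the pixel list instead of plotting, with the decision parameters kept in doubles to preserve the 1/4 and 1/2 terms):

```cpp
#include <utility>
#include <vector>

// First-quadrant midpoint ellipse trace: region 1 steps in x until
// 2*ry^2*x >= 2*rx^2*y, then region 2 steps in y down to the x axis.
std::vector<std::pair<int, int>> ellipse_quadrant(int rx, int ry)
{
    std::vector<std::pair<int, int>> pts;
    double rx2 = (double)rx * rx, ry2 = (double)ry * ry;
    int x = 0, y = ry;
    double px = 0, py = 2 * rx2 * y;       // running 2*ry^2*x and 2*rx^2*y

    pts.push_back({x, y});                 // first point (0, ry)

    double p1 = ry2 - rx2 * ry + 0.25 * rx2;   // P1_0
    while (px < py) {                      // region 1: step in x
        ++x; px += 2 * ry2;
        if (p1 < 0) p1 += ry2 + px;
        else { --y; py -= 2 * rx2; p1 += ry2 + px - py; }
        pts.push_back({x, y});
    }

    double p2 = ry2 * (x + 0.5) * (x + 0.5)    // P2_0
              + rx2 * (double)(y - 1) * (y - 1) - rx2 * ry2;
    while (y > 0) {                        // region 2: step in y
        --y; py -= 2 * rx2;
        if (p2 > 0) p2 += rx2 - py;
        else { ++x; px += 2 * ry2; p2 += rx2 - py + px; }
        pts.push_back({x, y});
    }
    return pts;
}
```

The other three quadrants follow by symmetry about the two axes, as in step 6.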

1.4: Clipping Methods


The primary use of clipping in computer graphics is to remove objects, lines, or line segments that are
outside the viewing pane. The viewing transformation is insensitive to the position of points relative to
the viewing volume − especially those points behind the viewer − and it is necessary to remove these
points before generating the view.
Point Clipping
Clipping a point against a given window is very easy. Consider the following figure, where the rectangle
indicates the window. Point clipping tells us whether a given point (X, Y) is within the window
or not, by comparing it with the minimum and maximum coordinates of the window.
The X coordinate of the given point is inside the window if X lies in the range Wx1 ≤ X ≤ Wx2. In the same
way, the Y coordinate of the given point is inside the window if Y lies in the range Wy1 ≤ Y ≤ Wy2.
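The two range tests can be written directly (the function name clip_point is ours):

```cpp
// A point (x, y) survives clipping iff Wx1 <= X <= Wx2 and Wy1 <= Y <= Wy2.
bool clip_point(double x, double y,
                double wx1, double wy1, double wx2, double wy2)
{
    return wx1 <= x && x <= wx2 && wy1 <= y && y <= wy2;
}
```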

Line Clipping
The concept of line clipping is same as point clipping. In line clipping, we will cut the portion of line
which is outside of window and keep only the portion that is inside the window.
Cohen-Sutherland Line Clippings
This algorithm uses the clipping window as shown in the following figure. The minimum coordinate for
the clipping region is (XWmin,YWmin) and the maximum coordinate for the clipping region
is (XWmax,YWmax).

We will use 4 bits to divide the entire region. These 4 bits represent the Top, Bottom, Right, and Left of
the region as shown in the following figure. Here, the TOP and LEFT bits are set to 1 for
the TOP-LEFT corner.

There are 3 possibilities for the line −


 The line can be completely inside the window.
 The line can be completely outside the window (such a line is completely removed from the
region).
 The line can be partially inside the window (we find the intersection points and draw only the
portion of the line that is inside the region).
Algorithm
Step 1 − Assign a region code to each endpoint.
Step 2 − If both endpoints have the region code 0000, then accept this line.
Step 3 − Else, perform the logical AND operation on both region codes.
Step 3.1 − If the result is not 0000, then reject the line.
Step 3.2 − Else, clipping is needed.
Step 3.2.1 − Choose an endpoint of the line that is outside the window.
Step 3.2.2 − Find the intersection point with the window boundary (based on the region code).
Step 3.2.3 − Replace the endpoint with the intersection point and update the region code.
Step 3.2.4 − Repeat step 2 until the line is either trivially accepted or trivially rejected.
Step 4 − Repeat step 1 for other lines.
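The region codes and the trivial tests of steps 2 and 3 can be sketched as follows (the helper names region_code, trivially_accept and trivially_reject are ours; the bit assignment follows the Top/Bottom/Right/Left scheme described above):

```cpp
// 4-bit region code, one bit per side of the clipping window.
enum { LEFT = 1, RIGHT = 2, BOTTOM = 4, TOP = 8 };

// Compute the Cohen-Sutherland region code of a point against the window.
int region_code(double x, double y,
                double xmin, double ymin, double xmax, double ymax)
{
    int code = 0;
    if (x < xmin) code |= LEFT;         // to the left of the window
    else if (x > xmax) code |= RIGHT;   // to the right of the window
    if (y < ymin) code |= BOTTOM;       // below the window
    else if (y > ymax) code |= TOP;     // above the window
    return code;
}

// Step 2: both endpoints inside, accept without clipping.
bool trivially_accept(int c1, int c2) { return (c1 | c2) == 0; }

// Step 3/3.1: logical AND non-zero, both ends share an outside half-plane.
bool trivially_reject(int c1, int c2) { return (c1 & c2) != 0; }
```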
Cyrus-Beck Line Clipping Algorithm
This algorithm is more efficient than Cohen-Sutherland algorithm. It employs parametric line
representation and simple dot products.

Parametric equation of the line is −
P0P1: P(t) = P0 + t(P1 − P0)
Let Ni be the outward normal to edge Ei. Now pick any arbitrary point PEi on edge Ei; then the dot product
Ni.[P(t) – PEi] determines whether the point P(t) is “inside”, “outside”, or
“on” the clip edge.
The point P(t) is inside if Ni.[P(t) – PEi] < 0
The point P(t) is outside if Ni.[P(t) – PEi] > 0
The point P(t) is on the edge if Ni.[P(t) – PEi] = 0 (intersection point)
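The dot-product classification can be sketched directly (the type Vec2 and the function name side are ours):

```cpp
// Minimal 2D vector type for the dot-product test.
struct Vec2 { double x, y; };

double dot(Vec2 a, Vec2 b) { return a.x * b.x + a.y * b.y; }

// Sign of Ni . (P(t) - PEi): < 0 inside, > 0 outside, 0 on the edge Ei.
double side(Vec2 p, Vec2 pEdge, Vec2 nOut)
{
    return dot(nOut, {p.x - pEdge.x, p.y - pEdge.y});
}
```

For example, against the right window edge x = 10 (outward normal (1, 0), edge point (10, 0)), a point with x < 10 is classified inside.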
Polygon Clipping (Sutherland-Hodgman Algorithm)
A polygon can also be clipped by specifying the clipping window. The Sutherland-Hodgman polygon
clipping algorithm is used for polygon clipping. In this algorithm, all the vertices of the polygon are
clipped against each edge of the clipping window.
First the polygon is clipped against the left edge of the clipping window to get new vertices of the
polygon. These new vertices are used to clip the polygon against the right edge, top edge, and bottom edge of
the clipping window, as shown in the following figure.

While processing an edge of a polygon against the clipping window, an intersection point is found if the edge is
not completely inside the clipping window, and the partial edge from the intersection point to the outside
is clipped. The following figures show the left, right, top and bottom edge clippings −

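One pass of the procedure, clipping against a single window edge, can be sketched as follows (the function name clip_left and the half-plane x ≥ xmin are our choices; the other three passes differ only in the inside test and the intersection formula):

```cpp
#include <cstddef>
#include <vector>

struct Pt { double x, y; };

// Clip a polygon against the half-plane x >= xmin (the "left edge" pass).
std::vector<Pt> clip_left(const std::vector<Pt>& poly, double xmin)
{
    std::vector<Pt> out;
    for (std::size_t i = 0; i < poly.size(); ++i) {
        Pt s = poly[i];
        Pt e = poly[(i + 1) % poly.size()];
        bool sIn = s.x >= xmin, eIn = e.x >= xmin;
        if (sIn != eIn) {                          // edge crosses the boundary
            double t = (xmin - s.x) / (e.x - s.x);
            out.push_back({xmin, s.y + t * (e.y - s.y)});
        }
        if (eIn) out.push_back(e);                 // keep endpoints that are inside
    }
    return out;
}
```

Running the four passes in sequence over the output of the previous pass yields the fully clipped polygon.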
Text Clipping
Various techniques are used to provide text clipping in a computer graphics. It depends on the methods
used to generate characters and the requirements of a particular application. There are three methods for
text clipping which are listed below −
 All or none string clipping
 All or none character clipping
 Text clipping
The following figure shows all or none string clipping −

In the all-or-none string clipping method, we either keep the entire string or reject it, based on
the clipping window. As shown in the above figure, STRING2 is entirely inside the clipping window, so
we keep it; STRING1, being only partially inside the window, is rejected.
The following figure shows all or none character clipping −

This clipping method is based on characters rather than the entire string. In this method, if the string is
entirely inside the clipping window, then we keep it. If it is partially outside the window, then −
 We reject only the portion of the string that is outside the window.
 If a character is on the boundary of the clipping window, then we discard that entire character
and keep the rest of the string.
The following figure shows text clipping −

This clipping method is also based on characters rather than the entire string. In this method, if the string is
entirely inside the clipping window, then we keep it. If it is partially outside the window, then −
 We reject only the portion of the string that is outside the window.
 If a character is on the boundary of the clipping window, then we discard only that portion of the
character that is outside of the clipping window.
Bitmap Graphics
A bitmap is a collection of pixels that describes an image. It is a type of computer graphics that the
computer uses to store and display pictures. In this type of graphics, images are stored bit by bit and
hence it is named Bit-map graphics. For better understanding let us consider the following example
where we draw a smiley face using bit-map graphics.

Now we will see how this smiley face is stored bit by bit in computer graphics.

By observing the original smiley face closely, we can see that there are two blue lines which are
represented as B1, B2 and E1, E2 in the above figure.

In the same way, the smiley is represented using the combination of bits A4, B5, C6, D6, E5, and F4
respectively.
The main disadvantages of bitmap graphics are −
 A bitmap image cannot be resized cleanly; if we try to resize it, the pixels get blurred.
 Colored bitmaps can be very large.

1.5: Fill Methods.


Defining and Filling Regions of Pixels
Methods of defining a region
i) Pixel-defined: specifies pixels in a color or geometric range
ii) Symbolic: provides a property which pixels in the region must have
Examples of symbolic:
• Closeness to some pixel
• Within a circle of radius R
• Within a specified polygon
Polygon Filling Algorithm
A polygon is an ordered list of vertices, as shown in the following figure. To fill a polygon with a
particular color, you need to determine the pixels falling on the border of the polygon and those which
fall inside the polygon. In this chapter, we will see how we can fill polygons using different techniques.

Scan Line Algorithm


This algorithm works by intersecting a scan line with the polygon edges and filling the polygon between pairs of
intersections. The following steps depict how this algorithm works.
Step 1 − Find Ymin and Ymax for the given polygon.

Step 2 − Intersect the scan line with each edge of the polygon, from Ymin to Ymax. Name each
intersection point of the polygon. As per the figure shown above, they are named p0, p1, p2, p3.
Step 3 − Sort the intersection points in increasing order of the X coordinate, i.e. (p0, p1), (p1, p2), and
(p2, p3).
Step 4 − Fill between alternate pairs of intersection points (the spans that lie inside the polygon) and skip the pairs in between.
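Steps 2 through 4 for a single scan line can be sketched as follows (the Edge type and the function name spans_on_scanline are ours; the half-open y test is our choice, so a vertex shared by two edges is counted once):

```cpp
#include <algorithm>
#include <cstddef>
#include <utility>
#include <vector>

struct Edge { double x1, y1, x2, y2; };

// For one scan line y, collect the x intersections with the polygon edges,
// sort them, and pair them off into interior spans to fill.
std::vector<std::pair<int, int>> spans_on_scanline(const std::vector<Edge>& edges,
                                                   double y)
{
    std::vector<double> xs;
    for (const Edge& e : edges) {
        double ylo = std::min(e.y1, e.y2), yhi = std::max(e.y1, e.y2);
        if (y >= ylo && y < yhi)    // half-open rule: skip horizontal edges too
            xs.push_back(e.x1 + (y - e.y1) * (e.x2 - e.x1) / (e.y2 - e.y1));
    }
    std::sort(xs.begin(), xs.end());
    std::vector<std::pair<int, int>> spans;     // (xstart, xend) pairs to fill
    for (std::size_t i = 0; i + 1 < xs.size(); i += 2)
        spans.push_back({(int)xs[i], (int)xs[i + 1]});
    return spans;
}
```

Repeating this for every scan line from Ymin to Ymax fills the polygon.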
Flood Fill Algorithm

Sometimes we come across an object whose area and boundary we want to fill with different
colors. We can paint such objects with a specified interior color instead of searching for a particular
boundary color, as in the boundary fill algorithm.
Instead of relying on the boundary of the object, flood fill relies on the fill color. In other words, it replaces the
interior color of the object with the fill color. When no more pixels of the original interior color exist,
the algorithm is complete.
Once again, this algorithm relies on the four-connected or eight-connected method of filling in the pixels.
But instead of looking for the boundary color, it looks for all adjacent pixels that are a part of the
interior.

Boundary Fill Algorithm


The boundary fill algorithm works as its name suggests. This algorithm picks a point inside an object and
fills outward until it hits the boundary of the object. The color of the boundary and the fill color must be
different for this algorithm to work.
In this algorithm, we assume that the color of the boundary is the same for the entire object. The boundary fill
algorithm can be implemented with 4-connected pixels or 8-connected pixels.
4-Connected Polygon
In this technique 4-connected pixels are used, as shown in the figure. We put the pixels above,
below, to the right, and to the left of the current pixel, and this process continues until we find a
boundary with a different color.

Algorithm
Step 1 − Initialize the value of the seed point (seedx, seedy), fcol and dcol.
Step 2 − Define the boundary values of the polygon.
Step 3 − Check if the current seed point has the default color; if so, repeat steps 4 and 5 until the
boundary pixels are reached.
If getpixel(x, y) = dcol then repeat steps 4 and 5
Step 4 − Change the default color to the fill color at the seed point.
setPixel(seedx, seedy, fcol)
Step 5 − Recursively follow the procedure with the four neighborhood points.
FloodFill (seedx – 1, seedy, fcol, dcol)
FloodFill (seedx + 1, seedy, fcol, dcol)
FloodFill (seedx, seedy - 1, fcol, dcol)
FloodFill (seedx, seedy + 1, fcol, dcol)
Step 6 − Exit
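The recursive steps above can be sketched on a character grid standing in for the frame buffer (the grid representation and the function name flood_fill4 are ours):

```cpp
#include <vector>

// Recursive 4-connected fill: replace the default colour dcol with fcol,
// spreading through the four direct neighbours of each filled pixel.
void flood_fill4(std::vector<std::vector<char>>& grid,
                 int x, int y, char fcol, char dcol)
{
    if (y < 0 || y >= (int)grid.size()) return;     // off the grid
    if (x < 0 || x >= (int)grid[y].size()) return;
    if (grid[y][x] != dcol) return;                 // boundary or already filled
    grid[y][x] = fcol;
    flood_fill4(grid, x - 1, y, fcol, dcol);
    flood_fill4(grid, x + 1, y, fcol, dcol);
    flood_fill4(grid, x, y - 1, fcol, dcol);
    flood_fill4(grid, x, y + 1, fcol, dcol);
}
```

The same structure with eight recursive calls gives the 8-connected variant described below.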
There is a problem with this technique. Consider the case shown below, where we tried to fill the
entire region. Here, the image is filled only partially. In such cases, the 4-connected pixel technique cannot
be used.

8-Connected Polygon
In this technique 8-connected pixels are used, as shown in the figure. We put pixels above,
below, and to the right and left of the current pixel, as in the 4-connected technique.
In addition to this, we also put pixels on the diagonals, so that the entire area around the current pixel is
covered. This process continues until we find a boundary with a different color.

Algorithm
Step 1 − Initialize the value of the seed point (seedx, seedy), fcol and dcol.
Step 2 − Define the boundary values of the polygon.
Step 3 − Check if the current seed point has the default color; if so, repeat steps 4 and 5 until the boundary
pixels are reached.
If getpixel(x, y) = dcol then repeat steps 4 and 5
Step 4 − Change the default color to the fill color at the seed point.
setPixel(seedx, seedy, fcol)
Step 5 − Recursively follow the procedure with the eight neighborhood points.
FloodFill (seedx – 1, seedy, fcol, dcol)
FloodFill (seedx + 1, seedy, fcol, dcol)
FloodFill (seedx, seedy - 1, fcol, dcol)
FloodFill (seedx, seedy + 1, fcol, dcol)
FloodFill (seedx – 1, seedy + 1, fcol, dcol)
FloodFill (seedx + 1, seedy + 1, fcol, dcol)
FloodFill (seedx + 1, seedy - 1, fcol, dcol)
FloodFill (seedx – 1, seedy - 1, fcol, dcol)
Step 6 − Exit
The 4-connected pixel technique failed to fill the area as marked in the following figure which won’t
happen with the 8-connected technique.

Inside-Outside Test
This method is also known as the counting number method. While filling an object, we often need to
identify whether a particular point is inside the object or outside it. There are two methods by which we
can identify whether a particular point is inside an object or outside:
 Odd-Even Rule
 Nonzero winding number rule
Odd-Even Rule
In this technique, we count the edge crossings along a line from the point (x, y) to infinity. If the
number of intersections is odd, then the point (x, y) is an interior point; and if the number of intersections
is even, then the point (x, y) is an exterior point. The following example depicts this concept.

From the above figure, we can see that from the point (x, y), the number of intersection points on the left
side is 5 and on the right side is 3. Both counts are odd, so the point
is considered to be inside the object.
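The odd-even rule can be sketched by casting the ray toward +x and toggling an inside flag at each crossing (the type P and the function name inside_odd_even are ours):

```cpp
#include <cstddef>
#include <vector>

struct P { double x, y; };

// Odd-even rule: cast a ray from (x, y) toward +infinity along x and
// count edge crossings; an odd count means the point is interior.
bool inside_odd_even(const std::vector<P>& poly, double x, double y)
{
    bool in = false;
    std::size_t n = poly.size();
    for (std::size_t i = 0, j = n - 1; i < n; j = i++) {
        bool crosses = (poly[i].y > y) != (poly[j].y > y);
        if (crosses) {
            // x coordinate where the edge crosses the scan line at height y
            double xint = poly[j].x + (y - poly[j].y) *
                          (poly[i].x - poly[j].x) / (poly[i].y - poly[j].y);
            if (x < xint) in = !in;    // crossing to the right of the point
        }
    }
    return in;
}
```

The strict/non-strict comparison pair in the crossing test is a standard trick so that a vertex shared by two edges is counted exactly once.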
Nonzero Winding Number Rule
This method is also used with simple polygons to test whether a given point is interior or not. It can be
understood with the help of a pin and a rubber band. Fix the pin on one of the edges of the
polygon, tie the rubber band to it, and then stretch the rubber band along the edges of the polygon.
When all the edges of the polygon are covered by the rubber band, check the pin which has been
fixed at the point to be tested. If we find at least one wind at the point, we consider it within the polygon;
otherwise we can say that the point is not inside the polygon.

In an alternative method, give directions to all the edges of the polygon. Draw a scan line from the
point to be tested towards the left, in the negative X direction.
 Give the value 1 to all the edges which are going in the upward direction, and -1 to all the others as direction
values.
 Check the direction values of the edges the scan line passes through, and sum them up.
 If the total sum of the direction values is non-zero, then the point to be tested is an interior
point; otherwise it is an exterior point.
 In the above figure, if we sum up the direction values of the edges the scan line passes through, the
total is 1 – 1 + 1 = 1, which is non-zero. So the point is said to be an interior point.
