Final Lecture Notes - 2022 Computer Graphics CE 273
Compiled by
EXPECTED OUTCOMES
COURSE PRESENTATION
The course is presented through lectures, tutorials and activities supported with
handouts. The tutorials will take the form of problem solving and discussions and will
constitute an integral part of each lecture. In this way, the application of computing
theories and skills can be directly demonstrated. Students can best understand
and appreciate the subject by attending all lectures and laboratory sessions, by
practicing, by reading references and handouts, and by completing all assignments on
schedule.
POLICIES
Cheating means “submitting, without proper attribution, any computer code that is
directly traceable to the computer code written by another person.”
Or, more comprehensively:
“Any form of cheating, including concealed notes during exams, copying or allowing others to
copy from an exam, students substituting for one another in exams, submission of another
person’s work for evaluation, preparing work for another person’s submission, unauthorized
collaboration on an assignment, submission of the same or substantially similar work for two
courses without the permission of the professors. Plagiarism is a form of Academic Misconduct
that involves taking either direct quotes or slightly altered, paraphrased material from a source
without proper citations and thereby failing to credit the original author. Cutting and pasting
from any source including the Internet, as well as purchasing papers, are forms of plagiarism.”
a. Hughes, J.F. (2014), Computer graphics: principles and practice, Upper Saddle
River, New Jersey: Addison-Wesley.
b. Hurlbut, J. (2017), Building Virtual Reality with A-Frame, Packt Publishing Ltd.
Edition: 1st, Pages: 186.
c. Marschner, S. and Shirley, P. (2016), Fundamentals of computer graphics, Boca
Raton: CRC Press, Taylor & Francis Group.
d. Akenine-Moller, T. and E. Haines (2002), Real-Time Rendering, A.K. Peters.
e. Angel, E. (2005), Interactive Computer Graphics: A Top-Down Approach with
OpenGL, Addison Wesley.
f. Farin, G. and D. Hansford (2004), Practical Linear Algebra: A Geometry Toolbox,
AK Peters.
g. Mozingo, D. (2016), A-Frame: A WebVR framework for building virtual reality
experiences, O'Reilly Media, Inc. Edition: 1st, 100 pp.
COURSE ASSESSMENT
ATTENDANCE
UMaT rules and regulations state that attendance is MANDATORY for every student. A
total of FIVE (5) attendance checks shall be taken at random and will contribute 10% of the
assessment. The only acceptable excuse for absence is one authorized by the Dean of
Students on the prescribed form. However, a student may also ask my permission to be
absent from a particular class for a tangible reason. A student who misses all five random
attendance checks will NOT be allowed to take the final exams.
OFFICE HOURS
CHAPTER 1
1.1 Introduction
Displaying an image of any size on the computer screen would be a difficult task on its own;
it is simplified by using computer graphics. Graphics on the computer are produced
using various algorithms and techniques. These notes describe how a rich visual
experience is provided to the user by explaining how all of this is processed by the
computer.
Computer graphics is the creation of pictures with the help of a computer. The end
product of computer graphics is a picture; it may be a business graph, a drawing or an
engineering design.
Today, computer graphics is entirely different from its early form: it is interactive, and the
user can control the structure of an object through various input devices.
1.2 Objectives
Suppose a shoe manufacturing company wants to show its shoe sales over five years.
Storing this as raw figures requires a vast amount of information, so a lot of time and
memory would be needed, and the result would be hard for a common person to understand.
In this situation, graphics is a better alternative. Graphics tools include charts and graphs.
Using graphs, data can be represented in pictorial form, and a picture can be understood
easily with a single look.
For some training applications, particular systems are designed. For example, Flight
Simulator.
Flight Simulator
It helps in giving training to the pilots of airplanes. These pilots spend much of their
training not in a real aircraft but on the ground at the controls of a Flight Simulator.
Advantages
a. Fuel Saving
b. Safety
c. Ability to familiarize trainees with a large number of the world's airports.
1.5.4 Architecture
Architects can explore alternative solutions to design problems at an interactive
graphics terminal. In this way, they can test many more solutions than would be
possible without the computer.
1.5.5 Presentation Graphics
Examples of presentation graphics are bar charts, line graphs, pie charts and other
displays showing relationships between multiple parameters. Presentation graphics is
commonly used to summarize:
a. Financial Reports
b. Statistical Reports
c. Mathematical Reports
d. Scientific Reports
e. Economic Data for research reports
f. Managerial Reports
g. Consumer Information Bulletins
h. And other types of reports
1.5.7 Entertainment
Computer Graphics are now commonly used in making motion pictures, music videos
and television shows.
1.5.8 Visualization
Visualization is used by scientists, engineers, medical personnel and business analysts
to study large amounts of information.
In interactive computer graphics, the user has some control over the picture, i.e., the
user can make changes to the produced image. One example is the ping-pong game.
Interactive computer graphics requires two-way communication between the
computer and the user: the user sees the image and can make changes by sending
commands with an input device.
Advantages:
a. Higher Quality
b. More precise results or products
c. Greater Productivity
d. Lower analysis and design cost
e. Significantly enhances our ability to understand data and to perceive trends.
Fig. 1.2 Illustration of the Work Within A Frame Buffer and T.V Monitor
Frame Buffer: A digital frame buffer is a large, contiguous piece of computer memory
used to hold, or map, the image displayed on the screen.
a. At a minimum, there is 1 memory bit for each pixel in the raster. This amount of
memory is called a bit plane.
b. A 1024 x 1024 raster therefore requires 2^20 memory bits (2^10 = 1024, so
2^20 = 1024 x 1024), i.e., 1,048,576 bits, in a single bit plane.
c. The picture is built up in the frame buffer one bit at a time.
d. Because a memory bit has only two states (binary 0 or 1), a single bit plane yields a
black-and-white (monochrome) display.
e. The frame buffer is a digital device, while the raster CRT is an analog device.
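As a small illustration of the idea (a sketch, not taken from the course text), a single-bit-plane frame buffer can be modelled in JavaScript as one memory entry per pixel; WIDTH, HEIGHT, setPixel and getPixel are illustrative names, not part of any library:

// A minimal sketch of a monochrome (1-bit-per-pixel) frame buffer.
const WIDTH = 1024, HEIGHT = 1024;
const frameBuffer = new Uint8Array(WIDTH * HEIGHT);   // one entry per pixel (0 = off, 1 = on)

function setPixel(x, y, value) {
  frameBuffer[y * WIDTH + x] = value ? 1 : 0;          // map (x, y) to a memory address
}

function getPixel(x, y) {
  return frameBuffer[y * WIDTH + x];
}

// The picture is built up one bit at a time:
setPixel(0, 0, 1);
setPixel(512, 384, 1);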
CHAPTER 2
GRAPHICS SYSTEMS
a. Input
b. Processing
c. Display / Output
How data collected by input devices is processed and shown as output is illustrated in
Fig. 2.1.
This hardware and software framework is more than four decades old but is still useful in
graphics. Fig. 2.2 illustrates how the conceptual framework for the interactive graphics
process starts and ends:
Display devices are also known as output devices. The most commonly used output
device in a graphics system is the video monitor.
a. CRT
b. Random Scan
c. Raster Scan
d. Color CRT
e. DVST (Direct View Storage Tube)
f. Flat Panel Display
g. Plasma Panel Display
h. LCD (Liquid Crystal Display)
2.4.1 Cathode Ray Tube (CRT)
CRT stands for Cathode Ray Tube. CRT is the technology used in traditional computer
monitors and televisions. The image on a CRT display is created by firing electrons from
the back of the tube toward phosphor located on the front of the screen.
Once the electrons hit the phosphor, it lights up, and the image is projected on the
screen. The color you view on the screen is produced by a blend of red, blue and
green light. Fig. 2.3 shows the named parts of the Cathode Ray Tube, invented by
Karl Ferdinand Braun in 1897.
Components of CRT:
2.4.2 Random Scan Display
A random scan system uses an electron beam which operates like a pencil to create
a line image on the CRT screen. The picture is constructed out of a sequence of
straight-line segments. Each line segment is drawn on the screen by directing the
beam to move from one point on the screen to the next, where its x and y coordinates
define each point. After drawing the picture, the system cycles back to the first line
and redraws all the lines of the image 30 to 60 times each second. The process of
drawing an image on a random scan display is shown in Fig. 2.4.
Advantages:
1. A CRT has the electron beam directed only to the parts of the screen where
an image is to be drawn.
2. Produce smooth line drawings.
3. High Resolution
Disadvantages:
A raster scan display is based on intensity control of pixels in the form of a rectangular
box called a raster on the screen. Information about on and off pixels is stored in a refresh
buffer or frame buffer. Televisions in our homes are based on the raster scan method. The
raster scan system can store information about each pixel position, so it is suitable for
realistic display of objects. Raster scan provides a refresh rate of 60 to 80 frames per
second.
The frame buffer is also known as the raster or bitmap. In the frame buffer the positions are
called picture elements or pixels. Beam retracing is of two types: horizontal retrace and
vertical retrace. At the end of each scan line the beam returns to the left side of the screen
to begin the next line; this is called horizontal retrace. When the beam reaches the bottom
right of the screen, it returns to the top left corner; this is called vertical retrace, as shown
in Fig. 2.5.
Fig. 2.5 Photograph of Raster Scan Display
1. Interlaced Scanning
2. Non-Interlaced Scanning
In non-interlaced (progressive) scanning, each horizontal line of the screen is traced in turn
from top to bottom. At a low refresh rate this may cause flicker, so the displayed object can
appear to fade. This problem can be reduced by interlaced scanning: first all the odd-numbered
lines are traced by the electron beam, then in the next cycle the even-numbered lines are
traced.
A non-interlaced display using a refresh rate of 30 frames per second gives flicker; an
interlaced display achieves an effective refresh rate of 60 fields per second. Fig. 2.6
shows the difference between interlaced and non-interlaced scanning.
Advantages:
1. Realistic image
2. Millions of different colors can be generated
3. Shadow Scenes are possible.
Disadvantages:
1. Low Resolution
2. Expensive
A color CRT monitor displays color pictures by using a combination of phosphors that
emit different-colored light. There are two popular approaches for producing color
displays with a CRT:
Beam-Penetration Method: The beam-penetration method has been used with random-scan
monitors. In this method, the CRT screen is coated with two layers of phosphor, red and
green, and the displayed color depends on how far the electron beam penetrates the phosphor
layers. This method produces only four colors: red, green, orange and yellow. A beam
of slow electrons excites only the outer red layer, so the screen shows red; a beam of
high-speed electrons excites the inner green layer, so the screen shows green. Fig. 2.7
shows how the electrons excite the phosphor coating.
Advantages:
1. Inexpensive
Disadvantages:
Shadow-Mask Method:
Construction: A shadow mask CRT has 3 phosphor color dots at each pixel position.
This type of CRT has 3 electron guns, one for each color dot and a shadow mask grid
just behind the phosphor coated screen.
Shadow mask grid is pierced with small round holes in a triangular pattern.
Fig. 2.8 shows the delta-delta shadow mask method commonly used in color CRT systems
and illustrates how the electron beams produce the RGB phosphor dots.
When the three beams pass through a hole in the shadow mask, they activate a
dotted triangle, which occurs as a small color spot on the screen.
The phosphor dots in the triangles are organized so that each electron beam can
activate only its corresponding color dot when it passes through the shadow mask.
Advantage:
1. Realistic image
2. Millions of different colors can be generated
3. Shadow scenes are possible
Disadvantage:
DVST terminals also use the random scan approach to generate the image on the
CRT screen. The term "storage tube" refers to the ability of the screen to retain the
image which has been projected against it, thus avoiding the need to rewrite the
image constantly.
Fig. 2.10, provides insight of how an image is generated on a CRT Screen in the Direct
View Storage Tube.
1. No refreshing is needed.
2. High Resolution
3. Very low cost
Disadvantage:
A flat-panel display refers to a class of video devices that have reduced volume,
weight and power requirements compared to a CRT.
Examples: small TV monitors, calculators, pocket video games, laptop computers and
advertisement boards in elevators. Flat panel displays are of two types, as shown in
Fig. 2.11.
1. Emissive Display: The emissive displays are devices that convert electrical energy
into light. Examples are Plasma Panel, thin film electroluminescent display and LED
(Light Emitting Diodes).
2. Non-Emissive Display: The Non-Emissive displays use optical effects to convert
sunlight or light from some other source into graphics patterns. Examples are LCD
(Liquid Crystal Device).
1. Cathode: It consists of fine wires which deliver negative voltage to the gas cells. The
voltage is applied along the negative axis.
2. Anode: It also consists of fine wires which deliver positive voltage. The voltage is
applied along the positive axis.
3. Fluorescent cells: These consist of small pockets of gas (e.g., neon); when voltage is
applied, the gas emits light.
4. Glass plates: These plates act as capacitors. Once voltage is applied, a cell glows
continuously.
The gas glows when there is a significant voltage difference between the horizontal and
vertical wires. The voltage level is kept between 90 and 120 volts. A plasma panel does
not require refreshing. Erasing is done by reducing the voltage to 90 volts.
Each cell of the plasma panel has two states, so a cell is said to be stable. A displayable
point on the plasma panel is made by the crossing of a horizontal and a vertical conductor.
The resolution of a plasma panel can be up to 512 x 512 pixels. Fig. 2.12 shows the state
of a cell in a plasma panel display.
Fig. 2.12 Illustration of the State of a Cell in a Plasma Display Panel
Advantage:
1. High Resolution
2. Large screen size is also possible.
3. Less Volume
4. Less weight
5. Flicker Free Display
Disadvantage:
1. Poor Resolution
2. Wiring requirements for the anode and the cathode are complex.
3. Its addressing is also complex.
In an LED, a matrix of diodes is organized to form the pixel positions in the display and
picture definition is stored in a refresh buffer. Data is read from the refresh buffer and
converted to voltage levels that are applied to the diodes to produce the light
pattern in the display.
Liquid crystal displays are devices that produce a picture by passing polarized
light from the surroundings, or from an internal light source, through a liquid-crystal
material that transmits the light.
An LCD uses liquid-crystal material between two glass plates; the conductors on the two
plates run at right angles to each other, and the liquid is filled between the plates. One
glass plate consists of rows of conductors arranged in the vertical direction, and the other
consists of rows of conductors arranged in the horizontal direction. A pixel position is
determined by the intersection of a vertical and a horizontal conductor; this position is an
active part of the screen.
Advantage:
Disadvantage:
Display File Memory: It is used for generation of the picture. It is used for identification
of graphic entities.
Display Controller
1. It handles interrupt
2. It maintains timings
3. It is used for interpretation of instructions.
Display Generator
The raster scan system is a combination of processing units. It consists of the
central processing unit (CPU) and a special-purpose processor called the display controller.
The display controller controls the operation of the display device. It is also called a video
controller.
Working: The video controller in the output circuitry generates the horizontal and
vertical drive signals so that the monitor can sweep its beam across the screen during
raster scans. Fig. 2.15 illustrates the architecture of a raster display system with a display
processor.
As the figure shows, two registers (an X register and a Y register) are used to store the
coordinates of the screen pixels. Assume that the y values of adjacent scan lines
increase by 1 in the upward direction, starting from 0 at the bottom of the screen to
ymax at the top, and that along each scan line the screen pixel positions, or x values, are
incremented by 1 from 0 at the leftmost position to xmax at the rightmost position.
The origin is at the lower left corner of the screen, as in a standard Cartesian
coordinate system. A diagram of the X-Y coordinates is shown in Fig. 2.16.
Fig. 2.16 Diagram of the X-Y Coordinates
The X register is set to 0 and the Y register is set to ymax. This (x, y) address is translated
into a memory address of the frame buffer where the color value for this pixel position is
stored.
The controller receives this color value (a binary number) from the frame buffer, breaks it
up into three parts and sends each part to a separate Digital-to-Analog
Converter (DAC).
These voltages, in turn, control the intensities of the three electron beams that are focused
at the (x, y) screen position by the horizontal and vertical drive signals.
This process is repeated for each pixel along the top scan line, each time
incrementing the X register by 1.
As pixels on the first scan line are generated, the X register is incremented through
xmax.
Then x register is reset to 0, and y register is decremented by 1 to access the next scan
line.
Pixels along each scan line are then processed, and the procedure is repeated for each
successive scan line until pixels on the last scan line (y = 0) are generated.
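The scan-out procedure described above can be sketched in code. This is only an illustration of the register logic, assuming the frame buffer layout used earlier; readColor and driveBeams are hypothetical stand-ins for the DAC and deflection hardware, not real driver calls:

// Sketch of the video controller's refresh loop (illustrative only).
const XMAX = 1023, YMAX = 1023;

function refreshFrame(frameBuffer, readColor, driveBeams) {
  let y = YMAX;                                    // y register starts at the top scan line
  while (y >= 0) {
    let x = 0;                                     // x register reset to 0 for each scan line
    while (x <= XMAX) {
      const value = readColor(frameBuffer, x, y);  // fetch the pixel value from the frame buffer
      driveBeams(x, y, value);                     // send it (via the DACs) to the electron beams
      x = x + 1;                                   // increment the x register by 1
    }
    y = y - 1;                                     // decrement y to move to the next scan line
  }
}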
For a display system employing a color look-up table frame buffer value is not directly
used to control the CRT beam intensity.
It is used as an index to find the three pixel-color value from the look-up table. This
lookup operation is done for each pixel on every display cycle.
Since the time available to display or refresh a single pixel on the screen is very short,
accessing the frame buffer separately for every pixel intensity value would take more time
than is allowed. The process from the frame buffer to the monitor is illustrated in Fig. 2.17
as the refresh cycle.
Multiple adjacent pixel values are fetched from the frame buffer in a single access and
stored in a register.
After every allowable time gap, one pixel value is shifted out from the register to
control the beam intensity for that pixel.
The procedure is repeated with the next block of pixels, and so on, thus the whole
group of pixels will be processed.
Image representation is essentially the description of pixel colors. There are three
primary colors: R (red), G (green) and B (blue). Each primary color can take on different
intensity levels, and combining them produces a variety of colors. Using direct coding, we
may allocate 3 bits per pixel, with one bit for each primary color. The 3-bit representation
allows each primary to vary independently between two intensity levels: 0 (off) or 1 (on).
Hence each pixel can take on one of eight colors. Table 2.2 shows the bit patterns of the
RGB colors.
A widely accepted industry standard uses 3 bytes, or 24 bits, per pixel, with one byte
for each primary color. This way, each primary color can have 256 different
intensity levels, so a pixel can take on a color from 256 x 256 x 256, or about 16.7 million,
possible choices. The 24-bit format is commonly referred to as the true-color
representation.
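As a small illustration of 24-bit direct coding (a sketch, not from the text), one byte per primary can be packed into a single integer value:

// Pack and unpack a 24-bit RGB pixel value (one byte per primary, 0-255 each).
function packRGB(r, g, b) {
  return (r << 16) | (g << 8) | b;
}
function unpackRGB(pixel) {
  return { r: (pixel >> 16) & 0xFF, g: (pixel >> 8) & 0xFF, b: pixel & 0xFF };
}

console.log(packRGB(255, 0, 0).toString(16));  // "ff0000" - pure red
console.log(unpackRGB(0x00FF00));              // { r: 0, g: 255, b: 0 } - pure green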
The lookup table approach reduces the storage requirement. In this approach, pixel values
do not code colors directly. Instead, they are addresses or indices into a
table of color values. The color of a particular pixel is determined by the color value
in the table entry that the value of the pixel references. Fig. 2.18 shows a look-up table with
256 entries. The entries have addresses 0 through 255, and each entry contains a 24-bit RGB
color value. Pixel values are now 1 byte each. The color of a pixel whose value is i, where
0 ≤ i ≤ 255, is determined by the color value in the table entry whose address is i. This reduces
the storage requirement of a 1000 x 1000 image to one million bytes plus 768 bytes for
the color values in the look-up table. Fig. 2.18 illustrates the RGB look-up table.
Fig. 2.18 Tabular illustration of the RGB Lookup Table
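The indirection described above can be sketched as follows (an illustrative example, not part of the text): each pixel stores a 1-byte index and the table stores the 24-bit colors.

// Sketch of a 256-entry color lookup table (palette).
const lookupTable = new Array(256).fill(0x000000); // each entry is a 24-bit RGB value
lookupTable[0] = 0x000000;   // black
lookupTable[1] = 0xFF0000;   // red
lookupTable[2] = 0x00FF00;   // green

// Pixel values are 1-byte indices into the table rather than colors themselves.
const pixels = new Uint8Array(1000 * 1000);  // one million bytes for a 1000 x 1000 image
pixels[0] = 2;                               // pixel 0 references table entry 2

function colorOfPixel(i) {
  return lookupTable[pixels[i]];             // the index selects the 24-bit color
}
console.log(colorOfPixel(0).toString(16));   // "ff00", i.e. 0x00FF00 (green)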
CHAPTER 3
INPUT-OUTPUT DEVICES
Input devices are the hardware used to transfer input to the computer. The data can be in
the form of text, graphics, sound and images. Output devices display data from the memory
of the computer. Output can be text, numeric data, lines, polygons and other objects. The
information processing cycle from the input devices, through the processing units, to the
output devices is shown in Fig. 3.1.
1. Keyboard
2. Mouse
3. Trackball
4. Spaceball
5. Joystick
6. Light Pen
7. Digitizer
8. Touch Panels
9. Voice Recognition
10. Image Scanner
3.1.1 Keyboard:
The most commonly used input device is a keyboard. The data is entered by pressing
the set of keys. All keys are labeled. A keyboard with 101 keys is called a QWERTY
keyboard.
The keyboard has alphabetic as well as numeric keys. Some special keys are also
available.
1. Numeric Keys: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9
2. Alphabetic keys: a to z (lower case), A to Z (upper case)
3. Special Control keys: Ctrl, Shift, Alt
4. Special Symbol Keys: ; , " ? @ ~ ? :
5. Cursor Control Keys: ↑ → ← ↓
6. Function Keys: F1 F2 F3....F9.
7. Numeric Keyboard: It is on the right-hand side of the keyboard and used for
fast entry of numeric data.
Function of Keyboard:
Advantage:
Disadvantage:
3.1.2 Mouse:
A Mouse is a pointing device and used to position the pointer on the screen. It is a
small palm size box. There are two or three depression switches on the top. The
movement of the mouse along the x-axis helps in the horizontal movement of the
cursor and the movement along the y-axis helps in the vertical movement of the
cursor on the screen. The mouse cannot be used to enter text. Therefore, they are
used in conjunction with a keyboard. Drawing of the X (horizontal) and Y (Vertical
Movement) of the Mouse is displayed in Fig. 3.2.
Advantage:
1. Easy to use
2. Not very expensive
3.1.3 Trackball
Advantage:
1. Trackball is stationary, so it does not require much space to use it.
2. Compact Size
3.1.4 Spaceball:
It is similar to a trackball, but it can move in six directions, whereas a trackball can move in
two directions only. The movement is recorded by strain gauges, which respond to the
pressure applied when the ball is pushed or pulled in various directions. The ball has
a diameter of around 7.5 cm and is mounted in the base using rollers. One-third of
the ball is inside the box; the rest is outside.
Applications:
3.1.5 Joystick:
3.1.6 Light Pen:
A Light Pen (similar to an ordinary pen) is a pointing device used to select a displayed
menu item or to draw pictures on the monitor screen. It consists of a photocell and an
optical system placed in a small tube. When its tip is moved over the monitor screen
and the pen button is pressed, its photocell sensing element detects the screen location
and sends the corresponding signal to the CPU. An image of a light pen is shown in
Fig. 3.5 below:
Uses:
3.1.7 Digitizers:
The digitizer is an operator input device which contains a large, smooth board (similar in
appearance to a mechanical drawing board) and an electronic tracking device, which can be
moved over the surface to follow existing lines. The electronic tracking device contains a
switch for the user to record the desired x and y coordinate positions. The coordinates can
be entered into the computer memory or stored on an off-line storage medium such as
magnetic tape. Fig. 3.6 shows the image of a digitizer.
Fig. 3.6 Photograph of a Digitizer
Advantages:
Disadvantages:
1. Costly
2. Suitable only for applications which required high-resolution graphics.
3.1.8 Touch Panels:
A touch panel is a type of display screen that has a touch-sensitive transparent panel
covering the screen. A touch screen registers input when a finger or other object
comes in contact with the screen. When the wave signals are interrupted by some contact
with the screen, that location is recorded. Touch screens have long been used in military
applications.
3.1.9 Voice Recognition:
Voice recognition is one of the newest and most complex input techniques used to
interact with the computer. The user inputs data by speaking into a microphone. The
simplest form of voice recognition is a one-word command spoken by one person;
each command is isolated with pauses between the words.
Advantage:
Disadvantages:
3.1.10 Image Scanner:
It is an input device. The data or text is written on paper, and the paper is fed to the scanner.
The written information is converted into electronic format, which is stored in the computer.
The input documents can contain text, handwritten material, pictures, etc.
By storing a document in the computer, it becomes safe for a longer period of time: the
document is kept permanently for the future, can be changed when we need, and can be
printed when needed.
Scanning can be of black-and-white or colored pictures. On a stored picture, 2D or
3D rotations, scaling and other operations can be applied.
1. Flat Bed Scanner: It resembles a photocopy machine. It has a glass plate on its
top, which is further covered with a lid. The document to be scanned is kept on the
glass plate, and light is passed underneath the glass plate. The light moves from left
to right, and scanning is done line by line. The process is repeated until the complete
document is scanned. A document of 4" x 6" can be scanned within 20-25 seconds.
A photo of the named parts of a flat bed scanner is shown in Fig. 3.7.
Fig. 3. 7 Image Scanner - Flat Bed Scanner
2. Hand Held Scanner: It has a number of LEDs (Light Emitting Diodes) arranged in a
small case. It is called a hand-held scanner because it can be held in the hand while it
performs scanning. For scanning, the scanner is moved over the document from the top
towards the bottom, with its light on, and is dragged very slowly over the document. If
the dragging of the scanner over the document is not steady, the conversion will not be
correct. The front and side views of a hand-held scanner are shown in Fig. 3.8.
3.2 Output Devices
1. Printers
2. Plotters
3.2.1 Printers:
Printer is the most important output device, which is used to print data on paper.
Types of Printers:
There are many types of printers which are classified on various criteria as shown in
Fig. 3.10 Classification of Printers:
Fig. 3.10 Diagrammatic Display of the Classifications of Printers
1. Impact Printers: The printers that print the characters by striking against the ribbon
and onto the papers are known as Impact Printers.
1. Character Printers
2. Line Printers
2. Non-Impact Printers: The printers that print the characters without striking against
the ribbon and onto the papers are called Non-Impact Printers. These printers print a
complete page at a time, therefore, also known as Page Printers.
1. Laser Printers
2. Inkjet Printers
A dot matrix printer prints in the form of dots. The printer has a head which contains nine
pins, arranged one below the other. Each pin can be activated independently; all or only
some of the pins are activated at a time. When a pin is not activated, its tip stays in the
head; when a pin fires, it comes out of the print head. In nine-pin printers, characters are
formed from a 5 x 7 matrix of dots.
Fig. 3.11 shows the image of a dot matrix printer.
Advantage:
1. Dot Matrix Printers prints output as dots, so it can print any shape of the
character. This allows the printer to print special character, charts, graphs,
etc.
2. Dot Matrix Printers come under the category of impact printers. The printing is
done when the hammer pin strikes the inked ribbon. The impressions are
printed on paper. By placing multiple copies of carbon, multiple copies of
output can be produced.
3. It is suitable for printing of invoices of companies.
The print head lies on a wheel, and the pins corresponding to characters are arranged like
the petals of a daisy; that is why it is called a daisy wheel printer. A daisy wheel image is
shown in Fig. 3.12.
Fig. 3.12 Daisy Wheel Printer
Advantage:
Disadvantage:
Drum Printer: These are line printers, which print one line at a time. The printer consists
of a solid cylindrical drum with characters embossed on it in the form of vertical bands.
The characters are arranged in circular form, and each band consists of a set of characters.
Each line on the drum consists of 132 characters; because there are 96 lines, the total
number of characters is (132 * 96) = 12,672.
Chain Printer: These are also line printers, used to print one line at a time. The chain
consists of links, and each link contains one character. The printer can use any character-set
size, i.e., 48, 64 or 96 characters. The printer also consists of a number of hammers.
Advantages:
Disadvantages:
Inkjet Printer: These printers use a special ink called electrostatic ink. The printer head has
special nozzles which drop ink onto the paper; the head contains up to 64 nozzles. The
dropped ink is deflected by an electrostatic plate fixed outside the nozzle, and the deflected
ink settles on the paper. Fig. 3.13 contains the image of an inkjet printer.
Advantages:
Disadvantages:
These are non-impact page printers. They use laser light to produce the dots
needed to form the characters to be printed on a page, hence the name laser
printers.
Step1: The bits of data sent by the processing unit act as triggers to turn the laser beam
on and off.
Step2: The output device has a drum which is cleaned and given a positive electric
charge. To print a page, the modulated laser beam passing from the laser scans
back and forth across the surface of the drum. The laser beam changes the electric
charge on just those parts of the drum surface that it exposes, creating a difference in
charge between the exposed and unexposed areas.
An image of a laser printer is depicted in Fig. 3.14.
Step3: The laser-exposed parts of the drum attract an ink powder known as toner.
Step4: The drum then rolls over the paper, and the toner is transferred from the drum to
the paper.
Step5: Heat and pressure fuse the toner permanently onto the paper.
Step6: The drum rotates back to the cleaner, where a rubber blade cleans off the
excess ink and prepares the drum to print the next page.
Plotters
Advantage:
Drum Plotter: It consists of a drum. The paper on which the design is made is kept on the
drum, which can rotate in both directions. The plotter has one or more pens and pen
holders, mounted perpendicular to the drum surface. The pens are kept in the holders,
which can move from left to right as well as from right to left. The graph-plotting program
controls the movement of the pens and the drum. Fig. 3.16 displays a drum plotter.
Flatbed Plotter: It is used to draw complex designs, graphs and charts. The flatbed plotter
can be kept on a table. The plotter consists of pens and holders; each pen can draw
characters of various sizes, and there can be one or more pens and pen-holding
mechanisms. Each pen holds ink of a different color, which helps to produce multicolor
designs. The plotting area is also variable; it can vary from A4 up to 21' x 52'. Fig. 3.17
shows the image of a flatbed plotter.
It is used to draw
1. Cars
2. Ships
3. Airplanes
4. Shoe and dress designing
5. Road and highway design
CHAPTER 4
SCAN CONVERSION
The circuitry of the video display device of the computer is capable of converting
binary values (0, 1) into pixel-on and pixel-off information: 0 is represented by pixel
off, and 1 is represented by pixel on. Using this ability, a graphics computer can
represent a picture as a pattern of discrete dots.
Any model of graphics can be reproduced with a dense matrix of dots or points. Most
human beings think of graphics objects as points, lines, circles and ellipses. Many
algorithms have been developed for generating such graphical objects.
1. Point
2. Line
3. Sector
4. Arc
5. Ellipse
6. Rectangle
7. Polygon
8. Characters
9. Filled Regions
This process of conversion is also called rasterization. The implementation of these
algorithms varies from one computer system to another: some are implemented in
software, some in hardware or firmware, and some in various combinations of hardware,
firmware and software.
The term pixel is a short form of picture element. It is also called a point or dot. It is
the smallest picture unit accepted by display devices. A picture is constructed from
hundreds of such pixels. Pixels are generated using commands; lines, circles, arcs,
characters and curves are drawn with closely spaced pixels. To display a digit or a letter,
a matrix of pixels is used.
The closer the dots or pixels are, the better the quality of the picture: the closer the
dots, the crisper the picture, and the picture will not appear jagged and unclear if pixels
are closely spaced. So, the quality of the picture is directly proportional to the density
of pixels on the screen.
Pixels are also defined as the smallest addressable unit or element of the screen. Each
pixel can be assigned an address as shown in Fig. 4.1.
P (5, 5) used to represent a pixel in the 5th row and the 5th column. Each pixel has
some intensity value which is represented in memory of computer called a frame
buffer. Frame Buffer is also called a refresh buffer. This memory is a storage area for
storing pixels values using which pictures are displayed. It is also called as digital
memory. Inside the buffer, image is stored as a pattern of binary digits either 0 or 1. So
there is an array of 0 or 1 used to represent the picture. In black and white monitors,
black pixels are represented using 1's and white pixels are represented using 0's. In
case of systems having one bit per pixel frame buffer is called a bitmap. In systems
with multiple bits per pixel it is called a pixmap.
A pixel on the graphics display does not represent a mathematical point; it represents
a region which theoretically can contain an infinite number of points. Scan-converting
a point involves illuminating the pixel that contains the point.
1. Lines should appear straight: We must approximate the line by choosing addressable
points close to it. If we choose well, the line will appear straight; if not, we shall produce
a jagged line. Fig. 4.4 shows the output of a poor line-generating algorithm.
2. Lines should terminate accurately: Unless lines are plotted accurately, they may
terminate at the wrong place. Fig. 4.6 shows a drawing of an uneven line.
3. Lines should have constant density: The dots along a line should be equally spaced so
that the line has uniform brightness.
4. Line density should be independent of line length and angle: This can be done by
computing an approximate line-length estimate and using a line-generation
algorithm that keeps the line density constant to within the accuracy of this estimate.
It is the simplest form of scan conversion. First of all, take the two endpoints P1 and P2,
where P1 has coordinates (x1, y1) and P2 has coordinates (x2, y2).
Example: A line with starting point as (0, 0) and ending point (6, 18) is given. Calculate
value of intermediate points and slope of line. Fig. 4.7 illustrates Graph.
1. P2 (1,3)
2. P3 (2,6)
3. P4 (3,9)
4. P5 (4,12)
5. P6 (5,15)
6. P7 (6,18)
yi = m·xi + b ...................... equation 1
xi+1 = xi + 1/m, yi+1 = yi + 1
Repeat until y reaches y2.
Advantage:
Disadvantage:
DDA Algorithm:
Step1: Start Algorithm
Step2: Declare x1,y1,x2,y2,dx,dy,x,y as integer variables.
Step3: Enter value of x1,y1,x2,y2.
Step4: Calculate dx = x2-x1
Step5: Calculate dy = y2-y1
Step6: If ABS (dx) > ABS (dy)
Then step = abs (dx)
Else step = abs (dy)
Step7: xinc=dx/step
yinc=dy/step
assign x = x1
assign y = y1
Step8: Set pixel (x, y)
Step9: x = x + xinc
y = y + yinc
Set pixels (Round (x), Round (y))
Step10: Repeat step 9 until x = x2
Step11: End Algorithm
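A direct translation of these steps into code is sketched below (an illustration, not production code; it assumes a setPixel(x, y) helper like the one used earlier in these notes):

// DDA line drawing, following steps 1-11 above.
function ddaLine(x1, y1, x2, y2, setPixel) {
  const dx = x2 - x1;
  const dy = y2 - y1;
  const step = Math.abs(dx) > Math.abs(dy) ? Math.abs(dx) : Math.abs(dy);
  const xinc = dx / step;                    // per-step increments (floating point)
  const yinc = dy / step;
  let x = x1, y = y1;
  setPixel(Math.round(x), Math.round(y));    // plot the starting point
  for (let i = 0; i < step; i++) {
    x += xinc;
    y += yinc;
    setPixel(Math.round(x), Math.round(y));  // plot each subsequent point
  }
}

// Example from the text: from (2, 3) to (6, 15).
ddaLine(2, 3, 6, 15, (x, y) => console.log(x, y));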
Example: A line is drawn from (2, 3) to (6, 15) using DDA. How many points will
be needed to generate such a line?
▪ x1=2
▪ y1=3
▪ x2= 6
▪ y2=15
▪ dx = 6 - 2 = 4
▪ dy = 15 - 3 = 12
▪ m = dy/dx = 12/4 = 3. Since |dy| > |dx|, step = |dy| = 12, so 12 steps are needed;
including the starting point, 13 points are generated.
This algorithm is used for scan converting a line. It was developed by Bresenham. It is
an efficient method because it involves only integer addition, subtraction, and
multiplication by 2. These operations can be performed very rapidly, so lines can
be generated quickly.
In this method, the next pixel selected is the one which has the least distance from the true
line.
Assume a pixel P1'(x1', y1'); then select subsequent pixels as we work our way to the
right, one pixel position at a time in the horizontal direction, toward P2'(x2', y2').
The line is best approximated by those pixels that fall the least distance from the path
between P1' and P2'. Fig. 4.9 illustrates scan-converting a line.
Fig 4. 9 Illustration of Scan Converting a line
To choose the next pixel between the bottom pixel S and the top pixel T:
If S is chosen, we have xi+1 = xi + 1 and yi+1 = yi.
If T is chosen, we have xi+1 = xi + 1 and yi+1 = yi + 1.
The actual y coordinate of the line at x = xi + 1 is
y = m(xi + 1) + b
The distance from S to the actual line in the y direction is
s = y - yi
The distance from T to the actual line in the y direction is
t = (yi + 1) - y
Now consider the difference between these two distances, s - t: when s - t < 0, S is closer
to the true line; when s - t ≥ 0, T is closer.
This difference is
s - t = (y - yi) - [(yi + 1) - y]
      = 2y - 2yi - 1
      = 2m(xi + 1) + 2b - 2yi - 1
Define the decision variable di = Δx(s - t). Since Δx > 0, di has the same sign as (s - t), and
substituting m = Δy/Δx removes the fraction:
di = Δx(2m(xi + 1) + 2b - 2yi - 1)
   = 2Δy·xi + 2Δy + Δx(2b - 1) - 2Δx·yi
di = 2Δy·xi - 2Δx·yi + c, where c = 2Δy + Δx(2b - 1)
If di ≥ 0, T is chosen (y is incremented); otherwise S is chosen.
We can write the decision variable di+1 for the next step as
di+1 = 2Δy·xi+1 - 2Δx·yi+1 + c
di+1 - di = 2Δy·(xi+1 - xi) - 2Δx·(yi+1 - yi)
Special Cases
Since xi+1 = xi + 1:
If S is chosen (yi+1 = yi): di+1 = di + 2Δy
If T is chosen (yi+1 = yi + 1): di+1 = di + 2Δy - 2Δx
Finally, we calculate d1
d1 = Δx[2m(x1 + 1) + 2b - 2y1 - 1]
d1 = Δx[2(mx1 + b - y1) + 2m - 1]
Since (x1, y1) lies on the line, mx1 + b - y1 = 0 and m = Δy/Δx, so
d1 = 2Δy - Δx
Advantage:
Disadvantage:
1. This algorithm is meant for basic line drawing only. Anti-aliasing is not a part of
Bresenham's line algorithm, so to draw smooth lines you should look into a
different algorithm.
Bresenham's Line Algorithm:
Step1: Start Algorithm
Step2: Declare variables x1, y1, x2, y2, dx, dy, i1, i2, d, x, y and xend.
Step3: Enter values of x1, y1, x2, y2.
Step4: Calculate dx = x2 - x1 and dy = y2 - y1, then
Calculate i1 = 2*dy
Calculate i2 = 2*(dy - dx)
Calculate d = i1 - dx
Step5: Consider (x, y) as the starting point and xend as the maximum possible value of x.
If dx < 0
Then x = x2, y = y2, xend = x1
If dx > 0
Then x = x1, y = y1, xend = x2
Step6: Plot the point (x, y).
Step7: If x >= xend
Stop.
Step8: If d < 0
Then d = d + i1
If d ≥ 0
Then d = d + i2
Increment y = y + 1
Step9: Increment x = x + 1
Step10: Plot the point (x, y).
Step11: Go to step 7
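The steps above can be sketched in code for the case of a slope between 0 and 1 (an illustration assuming a setPixel(x, y) helper, not a general-purpose implementation):

// Bresenham's line algorithm for slopes between 0 and 1, using only integer arithmetic.
function bresenhamLine(x1, y1, x2, y2, setPixel) {
  const dx = Math.abs(x2 - x1);
  const dy = Math.abs(y2 - y1);
  const i1 = 2 * dy;
  const i2 = 2 * (dy - dx);
  let d = i1 - dx;                      // initial decision variable d1 = 2*dy - dx
  // Start from the left-most endpoint so x always increases.
  let [x, y, xend] = x1 > x2 ? [x2, y2, x1] : [x1, y1, x2];
  setPixel(x, y);
  while (x < xend) {
    x = x + 1;
    if (d < 0) {
      d = d + i1;                       // keep the same y (pixel S chosen)
    } else {
      d = d + i2;                       // move up one row (pixel T chosen)
      y = y + 1;
    }
    setPixel(x, y);
  }
}

// Example from the text: from (1, 1) to (8, 5); reproduces the points in Table 4.1.
bresenhamLine(1, 1, 8, 5, (x, y) => console.log(x, y));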
Example: The starting and ending positions of the line are (1, 1) and (8, 5). Find the
intermediate points. Table 4.1 shows the formula used to obtain the d values.
Solution:
x1=1
y1=1
x2=8
y2=5
dx= x2-x1=8-1=7
dy=y2-y1=5-1=4
I1=2* ∆y=2*4=8
I2=2*(∆y-∆x)=2*(4-7)=-6
d = I1-∆x=8-7=1
Table 4.1 Computation of d = d + I1 or I2
x y d=d+I1 or I2
1 1 d+I2=1+(-6)=-5
2 2 d+I1=-5+8=3
3 2 d+I2=3+(-6)=-3
4 3 d+I1=-3+8=5
5 3 d+I2=5+(-6)=-1
6 4 d+I1=-1+8=7
7 4 d+I2=7+(-6)=1
8 5
4.9 The DDA Algorithm and Bresenham's Line Algorithm have many differences.
Table 4.2 summarizes the major differences between the two:
DDA Algorithm | Bresenham's Line Algorithm
1. DDA Algorithm uses floating point, i.e., real arithmetic. | 1. Bresenham's Line Algorithm uses fixed point, i.e., integer arithmetic.
2. DDA Algorithm uses multiplication and division in its operations. | 2. Bresenham's Line Algorithm uses only subtraction and addition in its operations.
3. DDA Algorithm is slower than Bresenham's Line Algorithm in line drawing because it uses real arithmetic (floating-point operations). | 3. Bresenham's Algorithm is faster than DDA Algorithm in line drawing because it involves only addition and subtraction in its calculation and uses only integer arithmetic.
4. DDA Algorithm is not as accurate and efficient as Bresenham's Line Algorithm. | 4. Bresenham's Line Algorithm is more accurate and efficient than DDA Algorithm.
5. DDA Algorithm can draw circles and curves, but not as accurately as Bresenham's Line Algorithm. | 5. Bresenham's Line Algorithm can draw circles and curves more accurately than DDA Algorithm.
CHAPTER 5
3D COMPUTER GRAPHICS
2D graphics can show two-dimensional objects such as bar charts, pie charts and graphs,
but more natural objects can be represented using 3D. Using 3D, we can see different
shapes of an object in different sections.
5.2 3D Geometry
In the right-handed system the thumb of the right hand points in the positive z-direction,
while in the left-handed system the thumb points in the negative z-direction. The following
figure shows the right-hand orientation of the cube. Fig. 5.1 shows an image of a
three-dimensional graphics scene.
Fig. 5.1 Illustration of Three-Dimensional Graphics
Point A x, y, z
Point B x, y, 0
Point C 0, y, 0
Point D 0, y, z
Producing realism in 3D: The three-dimensional objects are made using computer
graphics. The technique used for two Dimensional displays of three-Dimensional
objects is called projection. Several types of projection are available, i.e.,
1. Parallel Projection
2. Perspective Projection
3. Orthographic Projection
Parallel Projection: In this projection, a point on the screen is identified with a point on
the three-dimensional object by a line perpendicular to the display screen. Architects'
drawings, i.e., plan, front view, side view and elevation, are nothing but parallel
projections.
Perspective Projection: This projection has the property that it provides an idea of
depth: the farther the object is from the viewer, the smaller it appears. All lines in
perspective projection converge at a central point called the center of projection.
Orthographic Projection: It is the simplest kind of projection. In this, we take a top, bottom
or side view of the object by extending parallel lines from the object.
The techniques for generating different images of a solid object depend upon the
type of object. Two viewing techniques are available for viewing three-dimensional
objects.
5.4.1 Translation
x1=x+ Tx
y1=y+Ty
z1=z+ Tz
Example: A point has coordinates (5, 6, 7) in the x, y and z directions. The translation is
done by 3 units in the x-direction, 3 units in the y-direction and 2 units in the z-direction.
Shift the object and find the coordinates of the new position.
Solution:
Coordinates of the point are (5, 6, 7)
Translation vector in x direction = 3
Translation vector in y direction = 3
Translation vector in z direction = 2
Using x1 = x + Tx, y1 = y + Ty and z1 = z + Tz, the new position is
(5 + 3, 6 + 3, 7 + 2) = (8, 9, 9).
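As an illustrative sketch (not taken from the text), the same translation can be carried out with a 4x4 homogeneous matrix; the layout below assumes the column-vector convention:

// 3D translation using a 4x4 homogeneous matrix (column-vector convention).
function translationMatrix(tx, ty, tz) {
  return [
    [1, 0, 0, tx],
    [0, 1, 0, ty],
    [0, 0, 1, tz],
    [0, 0, 0, 1],
  ];
}

// Multiply a 4x4 matrix by a homogeneous point [x, y, z, 1].
function applyMatrix(m, p) {
  return m.map(row => row[0] * p[0] + row[1] * p[1] + row[2] * p[2] + row[3] * p[3]);
}

// Example from the text: translate (5, 6, 7) by Tx = 3, Ty = 3, Tz = 2.
console.log(applyMatrix(translationMatrix(3, 3, 2), [5, 6, 7, 1]));  // [8, 9, 9, 1]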
5.5 3D Scaling
Scaling is used to change the size of an object. The size can be increased or
decreased. Three scaling factors are required: Sx, Sy and Sz. Fig. 5.3 shows the various
forms of 3D scaling.
Note: If all scaling factors are equal (Sx = Sy = Sz), the scaling is called uniform. If scaling
is done with different scaling factors, it is called differential scaling.
In Fig. 5.4 (a) the point (a, b, c) is shown, and the object whose scaling is to be done is
shown in steps in figs (b), (c) and (d). Fig. 5.4 shows object and point scaling, and Fig. 5.6
shows the object shifts.
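A corresponding sketch for scaling (again an illustration, not the text's own figures) shows the diagonal form of the matrix and the effect of uniform versus differential factors:

// 3D scaling: multiply each coordinate by its scale factor.
// In homogeneous matrix form the factors sit on the diagonal:
//   [ sx 0  0  0 ]
//   [ 0  sy 0  0 ]
//   [ 0  0  sz 0 ]
//   [ 0  0  0  1 ]
function scalePoint(p, sx, sy, sz) {
  return [p[0] * sx, p[1] * sy, p[2] * sz];
}

console.log(scalePoint([1, 2, 3], 2, 2, 2));  // uniform scaling: [2, 4, 6]
console.log(scalePoint([1, 2, 3], 2, 1, 3));  // differential scaling: [2, 2, 9]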
5.6 Rotation
Following Fig. 5.7 show rotation of the object about the Y axis
When the object is rotated about an axis that is not parallel to any one of the coordinate
axes (x, y or z), additional transformations are required: the axis is first aligned with a
coordinate axis, the rotation is performed, and the object is then brought back to its
original position. The following steps are required:
Photo in Fig. 5.9 shows the Rotation about the Y-axis in Clockwise motion.
Fig. 5.10 shows the original position of the object and its position after rotation about
the x-axis.
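As an illustrative sketch (not from the text), rotation of a point about the y-axis by an angle theta follows directly from the standard rotation matrix; rotations about the x- and z-axes have the same pattern with the respective coordinate left unchanged:

// Rotate a point (x, y, z) about the y-axis by angle theta (in radians).
//   x' =  x*cos(theta) + z*sin(theta)
//   y' =  y
//   z' = -x*sin(theta) + z*cos(theta)
function rotateAboutY(p, theta) {
  const c = Math.cos(theta), s = Math.sin(theta);
  return [
    c * p[0] + s * p[2],
    p[1],                       // y is unchanged by a rotation about the y-axis
    -s * p[0] + c * p[2],
  ];
}

console.log(rotateAboutY([1, 0, 0], Math.PI / 2));  // approximately [0, 0, -1]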
Step1: Initial position of P' and P" is shown. Fig. 5.11 shows the Rotation of an object
about the arbitrary axis.
Step2: Translate object P' to origin. Fig. 5.12 shows the translation of an object about
arbitrary axis.
Step3: Rotate P" to z axis so that it aligns along the z-axis. Fig. 5.13 depicts the rotation
of an object about the arbitrary z-axis.
Fig. 5. 13 Illustrates The Rotation About Arbitrary Z-Axis
Step4: Rotate about around z- axis. The rotation of an object around the Z-axis is
illustrated in Fig. 5.14
Step5: Rotate axis to the original position. Fig. 5.15 shows the axis rotation upon the
original positions.
Fig. 5.15 Illustrates how to Rotate Arbitrary Axis to the Original Position
Step6: Translate axis to the original position. Illustration of the axis transformation is
displayed in the Fig. 5.16.
Translation matrix
Inverse Transformations
5.8 Reflection
It is also called a mirror image of an object. For reflection, a reflection axis or reflection
plane is selected. Three-dimensional reflections are similar to two-dimensional ones:
the reflection is 180° about the given axis. For reflection, a plane is selected (xy, xz or yz).
The following matrices show reflection with respect to these three planes. Fig. 5.17 shows
reflection relative to the xy plane.
5.8.1 Reflection relative to XY plane
5.8.2 Reflection relative to YZ plane
5.8.3 Reflection relative to XZ plane
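Since the reflection matrices themselves are not reproduced here, the following sketch (an illustration under the usual convention, not the text's figures) shows the sign flips involved: reflection relative to a plane negates the coordinate perpendicular to that plane.

// Reflections relative to the principal planes simply negate one coordinate.
function reflectXY(p) { return [ p[0],  p[1], -p[2]]; }  // relative to the xy plane: z -> -z
function reflectYZ(p) { return [-p[0],  p[1],  p[2]]; }  // relative to the yz plane: x -> -x
function reflectXZ(p) { return [ p[0], -p[1],  p[2]]; }  // relative to the xz plane: y -> -y

console.log(reflectXY([1, 2, 3]));  // [1, 2, -3]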
5.9 Shearing
It is a change in the shape of the object and is also called deformation. In 2D, the change
can be in the x-direction, the y-direction or both; if shear occurs in both directions, the
object is distorted. In 3D, shear can occur in three directions. Fig. 5.18 depicts shear in
the y direction.
CHAPTER 6
PROJECTION
6.1 Projection
In perspective projection, the farther away an object is from the viewer, the smaller it
appears. This property of the projection gives an idea of depth. Artists use perspective
projection for drawing three-dimensional scenes.
Vanishing Point: It is the point where all lines appear to meet. There can be one-point,
two-point and three-point perspectives.
One Point: There is only one vanishing point, as shown in fig (a).
Two Points: There are two vanishing points, one in the x-direction and the other in the
y-direction, as shown in fig (b).
Three Points: There are three vanishing points, one in the x-direction, one in the
y-direction and the third in the z-direction.
In perspective projection, the lines of projection do not remain parallel; they converge
at a single point called the center of projection. The projected image on the screen is
obtained from the points of intersection of the converging lines with the plane of the screen.
The image on the screen is seen as if the viewer's eye were located at the center of
projection; the lines of projection then correspond to the paths traveled by light rays
originating from the object.
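A minimal sketch of this idea (assuming the view plane sits at distance d from a center of projection at the origin, looking down the z-axis; the names are illustrative only):

// Perspective projection of a 3D point onto a view plane at z = d.
// Points farther from the center of projection (larger z) project to smaller coordinates.
function perspectiveProject(p, d) {
  const [x, y, z] = p;
  return [(x * d) / z, (y * d) / z];   // divide by depth
}

console.log(perspectiveProject([2, 2, 5], 1));   // [0.4, 0.4]
console.log(perspectiveProject([2, 2, 10], 1));  // [0.2, 0.2] - same object, farther away, appears smaller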
Perspective projection introduces several anomalies that affect the shape and appearance
of the object:
1. Perspective foreshortening: The size of an object appears smaller as its distance
from the center of projection increases.
2. Vanishing Point: All lines appear to meet at some point in the view plane.
3. Distortion of Lines: A line that extends from in front of the viewer to behind the viewer
appears distorted.
Fig. 6.3 exhibits images of the Anomalies in Perspective views in Projection
Foreshortening of the z-axis in fig (a) produces one vanishing point, P1. Foreshortening
the x and z-axis results in two vanishing points in fig (b). Adding a y-axis foreshortening
in fig (c) adds vanishing point along the negative y-axis.
6.6 Parallel Projection
Parallel projection is used to display a picture in its true shape and size. When the
projectors are perpendicular to the view plane, it is called orthographic projection. The
parallel projection is formed by extending parallel lines from each vertex of the object until
they intersect the plane of the screen; the point of intersection is the projection of the
vertex.
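For the orthographic case (projectors perpendicular to the view plane), the projection amounts to discarding the depth coordinate. A small sketch, with the view plane assumed to be the xy plane:

// Orthographic (parallel) projection onto the xy view plane: keep x and y, drop z.
function orthographicProject(p) {
  const [x, y, z] = p;
  return [x, y];                // parallel projectors preserve true shape and size in x and y
}

console.log(orthographicProject([2, 3, 7]));   // [2, 3]
console.log(orthographicProject([2, 3, 70]));  // [2, 3] - distance does not change the projected size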
Parallel projections are used by architects and engineers for creating working drawings
of an object, since a complete representation requires two or more views of the object
using different planes. Fig. 6.4 shows the directions in parallel projection, and Fig. 6.5
shows a block diagram of the types of parallel projection.
Fig. 6.4 Illustrations of the Directions in Parallel Projection
CHAPTER 7
A-FRAME
7.1 Introduction
A-Frame is a web framework for building virtual reality (VR) experiences. A-Frame is
based on top of HTML, making it simple to get started. But A-Frame is not just a 3D
scene graph or a markup language; the core is a powerful entity-component
framework that provides a declarative, extensible, and composable structure to
three.js.
A-Frame supports most VR headsets such as Vive, Rift, Windows Mixed Reality,
Daydream, GearVR, Cardboard, Oculus Go, and can even be used for augmented
reality. Although A-Frame supports the whole spectrum, A-Frame aims to define fully
immersive interactive VR experiences that go beyond basic 360° content, making full
use of positional tracking and controllers.
7.2 Getting Started
A-Frame can be developed from a plain HTML file without having to install anything.
A great way to try out A-Frame is to remix the starter example on Glitch, an online
code editor that instantly hosts and deploys for free. Alternatively, create an .html file
and include A-Frame in the <head>:
<html>
<head>
<script src="https://aframe.io/releases/1.2.0/aframe.min.js"></script>
</head>
<body>
<a-scene>
<a-sky color="#ECECEC"></a-sky>
</a-scene>
</body>
</html>
7.3 Features
VR Made Simple: Just drop in a <script> tag and <a-scene>. A-Frame will handle
3D boilerplate, VR setup, and default controls. Nothing to install, no build steps.
Cross-Platform VR: Build VR applications for Vive, Rift, Windows Mixed Reality,
Daydream, GearVR, and Cardboard with support for all respective controllers. Don’t
have a headset or controllers? No problem! A-Frame still works on standard desktop
and smartphones.
Performance: A-Frame is optimized from the ground up for WebVR. While A-Frame
uses the DOM, its elements don’t touch the browser layout engine. 3D object updates
are all done in memory with little garbage and overhead. The most interactive and
large scale WebVR applications have been done in A-Frame running smoothly at
90fps.
Components: Hit the ground running with A-Frame’s core components such as
geometries, materials, lights, animations, models, raycasters, shadows, positional
audio, text, and controls for most major headsets. Get even further from the hundreds
of community components including environment, state, particle systems, physics,
multiuser, oceans, teleportation, super hands, and augmented reality.
Proven and Scalable: A-Frame has been used by companies such as Google,
Disney, Samsung, Toyota, Ford, Chevrolet, Amnesty International, CERN, NPR, Al
Jazeera, The Washington Post, NASA. Companies such as Google, Microsoft, Oculus,
and Samsung have made contributions to A-Frame.
7.4 Entity
A-Frame represents an entity via the <a-entity> element. As defined in the entity-
component-system pattern, entities are placeholder objects to which we plug in
components to provide them appearance, behavior, and functionality. In A-Frame,
entities are inherently attached with the position, rotation, and scale components.
E.g., consider the entity below. By itself, it has no appearance, behavior, or
functionality. It does nothing:
<a-entity>
Retrieving an Entity
<a-entity id="mario"></a-entity>
var el = document.querySelector('#mario');
Once we have an entity, we have access to its properties and methods detailed
below.
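For example (a brief illustration based on the standard A-Frame entity API), component data can be read and modified on the retrieved element:

var el = document.querySelector('#mario');

// Read and write component data via setAttribute / getAttribute.
el.setAttribute('position', {x: 0, y: 1, z: -3});
console.log(el.getAttribute('position'));   // {x: 0, y: 1, z: -3}

// Access the underlying three.js object and the attached components.
console.log(el.object3D);
console.log(el.components);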
▪ Properties
▪ components
7.5 Component
In A-Frame, components modify entities, which are 3D objects in the scene. We mix
and compose components together to build complex objects. They let us
encapsulate three.js and JavaScript code into modules that we can use declaratively
from HTML.
Components are roughly analogous to CSS. Like how CSS rules modify the
appearance of elements, component properties modify the appearance, behavior,
and functionality of entities.
HTML attributes represent component names and the value of those attributes
represent component data.
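For instance (an illustrative snippet in the same A-Frame HTML style as the earlier examples), the geometry and material components below are attached declaratively; the attribute names are the components and the attribute values are their property data:

<a-entity geometry="primitive: box; depth: 1.5"
          material="color: red; metalness: 0.5"></a-entity>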
7.6 System
A system, registered via AFRAME.registerSystem, provides global scope and services to all
components of the same name; a component can access its system through this.system.
For example:

AFRAME.registerSystem('my-component', {
  schema: {},  // System schema. Parses into `this.data`.
  init: function () {
    // Called on scene initialization.
  },
});

AFRAME.registerComponent('my-component', {
  init: function () {
    console.log(this.system);
  }
});
7.7 Scene
A scene is represented by the <a-scene> element. The scene is the global root
object, and all entities are contained within the scene.
The scene inherits from the Entity class so it inherits all of its properties, its
methods, the ability to attach components, and the behavior to wait for all of
its child nodes (e.g., <a-assets> and <a-entity>) to load before kicking off the
render loop.
<a-scene> handles all of the three.js and WebVR/WebXR boilerplate for us:
Set up canvas, renderer, render loop
Default camera and lights
Set up webvr-polyfill, VREffect
Add UI to Enter VR that calls WebVR API
Configure WebXR devices through the webxr system
Example:
<a-scene>
<a-assets>
<img id="texture" src="texture.png">
</a-assets>
<a-box src="#texture"></a-box>
</a-scene>
There are many properties and their descriptions. Below are just a few enlisted
in the Table 7.1.
Table 7.1 Properties and Description Table
Name Description
1. behaviors Array of components with tick methods that will be run on
every frame
2. camera Active three.js camera.
3. canvas Reference to the canvas element.
4. isMobile Whether or not environment is detected to be mobile
5. object3D THREE.Scene object.
A-Frame has an asset management system that allows us to place our assets
in one place and to preload and cache assets for better performance. Note
the asset management system is purely for preloading assets. Assets that are
set on entities at runtime could be done via direct URLs to the assets.
Games and rich 3D experiences traditionally preload their assets, such as
models or textures, before rendering their scenes. This makes sure that assets
aren’t missing visually, and this is beneficial for performance to ensure scenes
don’t try to fetch assets while rendering.
We place assets within <a-assets>, and we place <a-assets> within <a-scene>.
Assets include:
• <a-asset-item> - Miscellaneous assets such as 3D models and materials
• <audio> - Sound files
• <img> - Image textures
• <video> - Video textures
The scene won’t render or initialize until the browser fetches (or errors out) all
the assets or the asset system reaches the timeout. E.g., we can define our
assets in <a-assets> and point to those assets from our entities using selectors:
<a-scene>
<!-- Asset management system. -->
<a-assets>
<a-asset-item id="horse-obj" src="horse.obj"></a-asset-item>
<a-asset-item id="horse-mtl" src="horse.mtl"></a-asset-item>
<a-mixin id="giant" scale="5 5 5"></a-mixin>
<audio id="neigh" src="neigh.mp3"></audio>
<img id="advertisement" src="ad.png">
<video id="kentucky-derby" src="derby.mp4"></video>
</a-assets>
<!-- Entities that point to the preloaded assets via selectors. -->
<a-plane src="#advertisement"></a-plane>
<a-sound src="#neigh"></a-sound>
<a-entity geometry="primitive: plane" material="src: #kentucky-derby"></a-entity>
<a-entity mixin="giant" obj-model="obj: #horse-obj; mtl: #horse-mtl"></a-entity>
</a-scene>
The scene and its entities will wait for every asset (up until the timeout) before
initializing and rendering.