Final Lecture Notes - 2022 Computer Graphics CE 273


FACULTY OF ENGINEERING

Computer Science and Engineering Department

COMPUTER GRAPHICS (CE 273)

Compiled by

FELIX LARBI ARYEH, PhD


UMaT, TARKWA
2022
TABLE OF CONTENTS

COURSE OBJECTIVES
EXPECTED OUTCOMES
COURSE PRESENTATION
POLICIES
SOFTWARE REQUIREMENT FOR A-FRAME PRACTICAL SECTION
REFERENCES AND RECOMMENDED TEXTBOOKS
COURSE ASSESSMENT
ATTENDANCE
OFFICE HOURS
CHAPTER 1
INTRODUCTION TO COMPUTER GRAPHICS
1.1 Introduction
1.2 Objectives
1.3 Definition of Computer Graphics
1.4 Why is Computer Graphics Used?
1.5 Applications of Computer Graphics
1.5.1 Education and Training
1.5.2 Use in Biology
1.5.3 Computer-Generated Maps
1.5.4 Architect
1.5.5 Presentation Graphics
1.5.6 Computer Art
1.5.7 Entertainment
1.5.8 Visualization
1.5.9 Educational Software
1.5.10 Printing Technology
1.6 Interactive and Passive Graphics
1.6.1 Non-Interactive or Passive Computer Graphics
1.6.2 Interactive Computer Graphics
1.7 Working of Interactive Computer Graphics
1.8 Properties of Video Monitor
CHAPTER 2
GRAPHICS SYSTEMS
2.1 Computer Graphic System
2.2 Basic Component - Interactive Graphic System
2.3 Conceptual Framework for Interactive Graphics
2.4 Display Devices
2.4.1 Cathode Ray Tube (CRT)
2.4.2 Random Scan Display
2.4.3 Raster Scan Display
2.4.4 Color CRT Monitors
2.4.5 Direct View Storage Tubes
2.4.6 Flat Panel Display
2.5 Display Processor
2.6 Look-Up Table
CHAPTER 3
INPUT-OUTPUT DEVICES
3.1 Input Devices
3.1.1 Keyboard
3.1.2 Mouse
3.1.3 Trackball
3.1.4 Spaceball
3.1.6 Light Pen
3.1.7 Digitizers
3.1.8 Touch Panels
3.1.9 Voice Systems (Voice Recognition)
3.1.10 Image Scanner
3.2 Output Devices
3.2.1 Printers
3.2.3 Daisy Wheel Printers
3.2.4 Drum Printers
3.2.5 Chain Printers
3.2.8 Drum Plotter
3.2.9 Flatbed Plotter
3.3 Graphics Software
CHAPTER 4
SCAN CONVERSION DEFINITION
4.1 Scan Conversion Definition
4.2 Pixel or Pel
4.3 Scan Converting a Point
4.4 Scan Converting a Straight Line
4.5 Properties of a Good Line Drawing Algorithm
4.6 Algorithms for Line Drawing
4.6.1 Direct Use of the Line Equation
4.6.2 Algorithm for Drawing a Line Using the Equation
4.7 DDA Algorithm
4.8 Bresenham's Line Algorithm
4.9 Differences between the DDA Algorithm and Bresenham's Line Algorithm
CHAPTER 5
3D COMPUTER GRAPHICS
5.1 Three-Dimensional Graphics
5.2 3D Geometry
5.3 Three-Dimensional Models
5.4 Three-Dimensional Transformations
5.4.1 Translation
5.4.2 Matrix for Translation
5.5 3D Scaling
5.5.1 Matrix for Scaling
5.6 Rotation
5.7 Rotation about an Arbitrary Axis
5.8 Inverse Transformations
5.8 Reflection
5.8.1 Reflection Relative to the XY Plane
5.8.2 Reflection Relative to the YZ Plane
5.9 Shearing
5.9.1 Matrix for Shear
CHAPTER 6
PROJECTION
6.1 Projection
6.2 Perspective Projection
6.3 Vanishing Point
6.4 Important Terms Related to Perspective
6.5 Anomalies in Perspective Projection
6.6 Parallel Projection
CHAPTER 7
A-FRAME
7.1 Introduction
7.2 Getting Started
7.2 Features
7.3 Software Requirement
7.4 Entity
7.5 Entity
7.5.1 Component HTML Form
7.5.2 Single-Property Component
7.5.2 Multi-Property Component
7.5.3 Register a Component
7.6 System
7.6.1 Registering a System
7.7 Scene
7.8 Asset Management System
COURSE OBJECTIVES

This course is designed to provide a comprehensive introduction to computer
graphics, leading to the ability to understand contemporary terminology, progress,
issues, and trends. It offers a thorough introduction to computer graphics techniques,
focusing on 3D modeling, image synthesis, and rendering. The interdisciplinary nature
of computer graphics is emphasized through a wide variety of examples and
applications.

EXPECTED OUTCOMES

The goal of this course is to equip students to produce 3D illustrations and perform
image processing. It is expected that at the end of this course students should be
able to understand:
a. Fundamentals of computer graphics algorithms;
b. Basics of real-time rendering and graphics hardware;
c. Basic WebGL (a web variant of OpenGL/DirectX); and
d. A-Frame (a WebGL framework) programming.

COURSE PRESENTATION

The course is presented through lectures, tutorials and activities supported with
handouts. The tutorials will take the form of problem solving and discussions and will
constitute an integral part of each lecture. In this way, the application of computing
theories and skills can be directly demonstrated. Students can best understand
and appreciate the subject by attending all lectures and laboratory work, by
practicing, reading the references and handouts, and by completing all assignments
on schedule.

POLICIES

Cheating means “submitting, without proper attribution, any computer code that is
directly traceable to the computer code written by another person.”
Or even better:
“Any form of cheating, including concealed notes during exams, copying or allowing others to
copy from an exam, students substituting for one another in exams, submission of another
person’s work for evaluation, preparing work for another person’s submission, unauthorized
collaboration on an assignment, submission of the same or substantially similar work for two
courses without the permission of the professors. Plagiarism is a form of Academic Misconduct
that involves taking either direct quotes or slightly altered, paraphrased material from a source
without proper citations and thereby failing to credit the original author. Cutting and pasting
from any source including the Internet, as well as purchasing papers, are forms of plagiarism.”

I give students a failing homework grade for any cheating.

SOFTWARE REQUIREMENT FOR A-FRAME PRACTICAL SECTION

a. Notepad++ or Brackets (text editors)
b. XAMPP (web server)
c. WebGL (A-Frame)
d. Web browsers: IE 9+, Google Chrome 10+, Opera 10+, Safari 5+, Mozilla Firefox

REFERENCES AND RECOMMENDED TEXTBOOKS

a. Hughes, J. F. (2014), Computer Graphics: Principles and Practice, Upper Saddle
River, New Jersey: Addison-Wesley.
b. Hurlbut, J. (2017), Building Virtual Reality with A-Frame, Packt Publishing Ltd,
1st Edition, 186 pp.
c. Marschner, S. and Shirley, P. (2016), Fundamentals of Computer Graphics, Boca
Raton: CRC Press, Taylor & Francis Group.
d. Akenine-Moller, T. and Haines, E. (2002), Real-Time Rendering, A. K. Peters.
e. Angel, E. (2005), Interactive Computer Graphics: A Top-Down Approach with
OpenGL, Addison-Wesley.
f. Farin, G. and Hansford, D. (2004), Practical Linear Algebra: A Geometry Toolbox,
A. K. Peters.
g. Mozingo, D. (2016), A-Frame: A WebVR Framework for Building Virtual Reality
Experiences, O'Reilly Media, Inc., 1st Edition, 100 pp.
COURSE ASSESSMENT

Factor                   Weight   Location   Date                      Time
Quizzes & presentations  20%      In class   Could be announced or NOT
Attendance               10%      In class   Random
Laboratory Exercises     10%      In class
Final Exam               60%      TBA        To Be Announced (TBA)     3 Hrs

Grading System

80-100%   70-79.9%   60-69.9%   50-59.9%   0-49.9%
A         B          C          D          FAIL

ATTENDANCE

UMaT rules and regulations state that attendance is MANDATORY for every student. A
total of FIVE (5) attendance checks shall be taken at random, contributing 10% of the
grade. The only acceptable excuse for absence is one authorized by the Dean of
Students on the prescribed form. However, a student can also ask my permission to be
absent from a particular class for a tangible reason. A student who misses all five
random attendance checks will NOT be allowed to take the final exams.

OFFICE HOURS

I will be available in my office every Thursday (8.00-10.00 hrs) to answer students'
questions and provide guidance on any issues related to the course. All electronic
assignments should be forwarded to the following address: flaryeh@umat.edu.gh.
Please Note the Following:
a. Students must endeavor to attend all lectures and lab work and complete all
their assignments and coursework on time.
b. Students must be seated and fully prepared for lectures at least 5 minutes
before the scheduled time.
c. Under no circumstances should a student be late more than 15 minutes after
the scheduled time.
d. NO student shall be admitted into the lecture room more than 15 minutes after
the start of lectures unless pre-approved by me.
e. All cell phones, iPods, MP3/MP4 players, PDAs, etc. MUST remain switched off
throughout the lecture period.
f. There shall be no eating or gum chewing in class.
g. Plagiarism shall NOT be accepted in this course, so be sure to do your
referencing properly.

Thank You
CHAPTER 1

INTRODUCTION TO COMPUTER GRAPHICS

1.1 Introduction

Displaying an image of any size on a computer screen is a difficult task, and
computer graphics simplifies it. Graphics on the computer are produced using
various algorithms and techniques. These notes describe how a rich visual
experience is provided to the user by explaining how all of this is processed by the
computer.

Computer graphics involves technology to access, transform, and present information
in a visual form. Its role is indispensable: in today's life, computer graphics has
become a common element in user interfaces, T.V. commercials and motion pictures.

Computer graphics is the creation of pictures with the help of a computer. The end
product of computer graphics is a picture; it may be a business graph, a drawing, or
an engineering schematic.

In computer graphics, two- or three-dimensional pictures can be created that are
used for research. Many hardware devices and algorithms have been developed
over time to improve the speed of picture generation. Computer graphics includes
the creation and storage of models and images of objects. These models are used in
various fields such as engineering and mathematics.

Today's computer graphics is entirely different from the earlier systems: it is
interactive, and the user can control the structure of an object through various input
devices.

1.2 Objectives

On completing this unit, you should be able to:

a. Explain the various application areas of computer graphics;
b. Understand the elements of a graphics system; and
c. Explain the graphics processing unit and its various forms.
1.3 Definition of Computer Graphics

Computer graphics is the use of computers to create and manipulate pictures on a
display device. It comprises software techniques to create, store, modify, and
represent pictures.

1.4 Why is Computer Graphics Used?

Suppose a shoe manufacturing company wants to show its shoe sales over five years.
A vast amount of information would have to be stored, demanding a lot of time and
memory, and the result would be hard for a common person to understand. In this
situation, graphics is a better alternative. Graphics tools include charts and graphs.
Using graphs, data can be represented in pictorial form, and a picture can be
understood easily with a single look.

Interactive computer graphics works using the concept of two-way communication
between the computer and the user. The computer receives signals from the input
device, and the picture is modified accordingly. The picture changes quickly when
we apply a command. Fig. 1.1 shows the image structure in computer graphics.

Fig. 1.1 Illustration of Image Structure in Computer Graphics

1.5 Applications of Computer Graphics

1.5.1 Education and Training

Computer-generated models of physical, financial and economic systems are often
used as educational aids. Models of physical systems, physiological systems,
population trends or equipment can help trainees to understand the operation of
the system.

For some training applications, special systems are designed; for example, the Flight
Simulator.

Flight Simulator
It helps in giving training to the pilots of airplanes. These pilots spend much of their
training not in a real aircraft but on the ground at the controls of a Flight Simulator.

Advantages

a. Fuel Saving
b. Safety
c. Ability to familiarize the trainee with a large number of the world's airports.

1.5.2 Use in Biology


Molecular biologists can display pictures of molecules and gain insight into their
structure with the help of computer graphics.

1.5.3 Computer-Generated Maps


Town planners and transportation engineers can use computer-generated maps
which display data useful to them in their planning work.

1.5.4 Architect
Architects can explore alternative solutions to design problems at an interactive
graphics terminal. In this way, they can test many more solutions than would be
possible without the computer.
1.5.5 Presentation Graphics
Examples of presentation graphics are bar charts, line graphs, pie charts and other
displays showing relationships between multiple parameters. Presentation graphics is
commonly used to summarize:
a. Financial Reports
b. Statistical Reports
c. Mathematical Reports
d. Scientific Reports
e. Economic Data for research reports
f. Managerial Reports
g. Consumer Information Bulletins
h. And other types of reports

1.5.6 Computer Art


Computer graphics is also used in the field of commercial art. It is used to generate
television and advertising commercials.

1.5.7 Entertainment
Computer Graphics are now commonly used in making motion pictures, music videos
and television shows.
1.5.8 Visualization
Visualization is used by scientists, engineers, medical personnel and business analysts
to study large amounts of information.

1.5.9 Educational Software


Computer graphics is used in the development of educational software for
computer-aided instruction.

1.5.10 Printing Technology


Computer Graphics is used for printing technology and textile design.

Examples of Computer Graphics Packages:


1. LOGO
2. COREL DRAW
3. AUTO CAD
4. 3D STUDIO
5. CORE
6. GKS (Graphics Kernel System)
7. PHIGS
8. CGM (Computer Graphics Metafile)
9. CGI (Computer Graphics Interface)

1.6 Interactive and Passive Graphics

1.6.1 Non-Interactive or Passive Computer Graphics:

In non-interactive computer graphics, the picture is produced on the monitor, and
the user does not have any control over the image, i.e., the user cannot make any
change to the rendered image. One example is the titles shown on T.V.

Non-interactive graphics involves only one-way communication between the
computer and the user: the user can see the produced image but cannot make any
change to it.

1.6.2 Interactive Computer Graphics:

In interactive computer graphics, the user has some control over the picture, i.e., the
user can make changes to the produced image. One example is the ping-pong
game.
Interactive computer graphics requires two-way communication between the
computer and the user. A user can see the image and make changes by sending
commands with an input device.

Advantages:
a. Higher Quality
b. More precise results or products
c. Greater Productivity
d. Lower analysis and design cost
e. Significantly enhances our ability to understand data and to perceive trends.

1.7 Working of Interactive Computer Graphics:

The modern graphics display is very simple in construction. It consists of three
components:
a. Frame buffer or digital memory;
b. A monitor like a home T.V. set without the tuning and receiving electronics; and
c. Display controller or video controller: it passes the contents of the frame
buffer to the monitor.
Fig. 1.2 illustrates the relationship between the Frame Buffer and the T.V Monitor.

Fig. 1.2 Illustration of the Work Within A Frame Buffer and T.V Monitor

Frame Buffer: A digital frame buffer is a large, contiguous piece of computer memory
used to hold or map the image displayed on the screen.
a. At a minimum, there is 1 memory bit for each pixel in the raster. This amount of
memory is called a bit plane.
b. A 1024 x 1024 raster requires 2^20 bits (2^10 = 1024; 2^20 = 1024 x 1024), i.e.,
1,048,576 memory bits in a single bit plane.
c. The picture is built up in the frame buffer one bit at a time.
d. Because a memory bit has only two states (binary 0 or 1), a single bit plane
yields a black-and-white (monochrome) display.
e. While the frame buffer is a digital device, the raster CRT is an analog device.
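The memory arithmetic in points (a) and (b) can be checked with a short sketch (an illustrative Python fragment; the function names are ours, not part of any graphics API):

```python
def bitplane_bits(width, height):
    # One bit per pixel: the memory needed for a single bit plane.
    return width * height

def framebuffer_bits(width, height, bitplanes):
    # Stacking n bit planes gives n bits per pixel (2**n intensity levels).
    return bitplane_bits(width, height) * bitplanes

# A 1024 x 1024 raster with one bit plane (monochrome):
print(bitplane_bits(1024, 1024))  # 1048576 bits, i.e. 2**20
```

Adding more bit planes multiplies the memory accordingly: eight bit planes over the same raster need 8 x 2^20 bits and allow 256 intensity levels per pixel.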

1.8 Properties of Video Monitor:

a. Persistence: Persistence is the duration of phosphorescence. Different kinds of
phosphors are available for use in a CRT. Besides color, a major difference
between phosphors is their persistence: how long they continue to emit light
after the electron beam is removed.
b. Resolution: Used to describe the number of pixels on the displayed image.
c. Aspect Ratio: The ratio of the width of the display to its height, measured in
units of length or number of pixels.

Aspect Ratio = width unit / height unit
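As a quick check of the formula above (an illustrative Python sketch; the helper name is ours), the ratio can be reduced to lowest terms so that, for example, a 1920 x 1080 pixel display reports the familiar 16:9:

```python
from math import gcd

def aspect_ratio(width, height):
    # Reduce width:height to lowest terms, e.g. 1920 x 1080 -> (16, 9).
    d = gcd(width, height)
    return (width // d, height // d)

print(aspect_ratio(1920, 1080))  # (16, 9)
```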


CHAPTER 2

GRAPHICS SYSTEMS

2.1 Computer Graphic System

There are two (2) main types of computer graphics

1. Non-Interactive Computer Graphics


a. In non-interactive computer graphics, the picture is produced on the
monitor, and the user does not have any control over the image, i.e.,
the user cannot make any change to the rendered image. One
example is the titles shown on T.V.
b. Non-interactive graphics involves only one-way communication
between the computer and the user: the user can see the produced
image but cannot make any change to it.
2. Interactive Computer Graphics
a. Interactive computer graphics involves two-way communication
between the computer and the user. Here the observer is given some
control over the image by providing him with an input device, for
example the video game controller of the ping-pong game or FIFA
2020. This helps him to signal his requests to the computer.
b. The computer, on receiving signals from the input device, can modify the
displayed picture appropriately. To the user it appears that the picture
is changing instantaneously in response to his commands. He can give
a series of commands, each one generating a graphical response from
the computer. In this way he maintains a conversation, or dialogue, with
the computer.
c. Interactive computer graphics affects our lives in a number of indirect
ways. For example, it helps to train the pilots of our airplanes. We can
create a flight simulator which may help the pilots to get trained not in
a real aircraft but on the ground at the controls of the flight simulator.
The flight simulator is a mockup of an aircraft flight deck, containing all
the usual controls and surrounded by screens onto which computer-
generated views of the terrain visible on take-off and landing are
projected.
d. Flight simulators have many advantages over real aircraft for
training purposes, including fuel savings, safety, and the ability to
familiarize the trainee with a large number of the world's airports.

2.2 Basic Component - Interactive Graphic System

The basic components are:

a. Input
b. Processing
c. Display / Output

How data collected by input devices is processed and shown as output is illustrated
in Fig. 2.1.

Fig. 2.1 Basic Component – Interactive Graphic System

2.3 Conceptual framework for Interactive Graphics

The graphics library/package is an intermediary between the application and the
display hardware (graphics system).
The application program maps all the application objects to views (images) of those
objects by calling/invoking the graphics library. The application model may contain
lots of non-graphical data (e.g., non-geometric object properties).

User interaction results in modification of the model and/or image.

This hardware and software framework is more than 4 decades old but is still in use.
Fig. 2.2 illustrates where the conceptual framework for interactive graphics starts
and ends:

Fig. 2.2 Conceptual Framework For Interactive Graphics

2.4 Display Devices

Display devices are also known as output devices. The most commonly used output
device in a graphics system is the video monitor.

Types of Display Devices

a. CRT
b. Random Scan
c. Raster Scan
d. Color CRT
e. DVST (Direct View Storage Tube)
f. Flat Panel Display
g. Plasma Panel Display
h. LCD (Liquid Crystal Display)
2.4.1 Cathode Ray Tube (CRT)

CRT stands for Cathode Ray Tube, a technology used in traditional computer
monitors and televisions. The image on a CRT display is created by firing electrons
from the back of the tube at a phosphor coating located towards the front of the
screen.

Once the electrons hit the phosphor, it lights up, and the result is projected onto the
screen. The color you view on the screen is produced by a blend of red, blue and
green light. Fig. 2.3 contains the named parts of the Cathode Ray Tube, invented by
Karl Ferdinand Braun in 1897.

Fig. 2.3 Illustrates the Named Parts of Cathode Ray Tube

Components of CRT:

Main Components of CRT are:

1. Electron Gun: The electron gun consists of a series of elements, primarily a
heating filament (heater) and a cathode. The electron gun creates a source of
electrons which are focused into a narrow beam directed at the face of the
CRT.
2. Control Electrode: It is used to turn the electron beam on and off.
3. Focusing system: It is used to create a clear picture by focusing the electrons
into a narrow beam.
4. Deflection Yoke: It is used to control the direction of the electron beam. It
creates an electric or magnetic field which will bend the electron beam as it
passes through the area. In a conventional CRT, the yoke is linked to a sweep
or scan generator. The deflection yoke which is connected to the sweep
generator creates a fluctuating electric or magnetic potential.
5. Phosphorus-coated screen: The inside front surface of every CRT is coated with
phosphors. Phosphors glow when a high-energy electron beam hits them.
Phosphorescence is the term used to characterize the light given off by a
phosphor after it has been exposed to an electron beam.
2.4.2 Random Scan Display

A Random Scan System uses an electron beam which operates like a pencil to create
a line image on the CRT screen. The picture is constructed out of a sequence of
straight-line segments. Each line segment is drawn on the screen by directing the
beam to move from one point on the screen to the next, where its x and y
coordinates define each point. After drawing the picture, the system cycles back to
the first line and redraws all the lines of the image 30 to 60 times each second. The
process of drawing an image on a Random Scan Display is shown in Fig. 2.4.

Fig. 2.4 Illustration of Random Scan Display in Action
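The refresh cycle described above can be sketched as a display list that is redrawn on every cycle (an illustrative Python sketch with a stand-in draw_line callback; a real vector display repeats this 30 to 60 times per second in hardware):

```python
# A display list: each entry is a straight-line segment (start, end),
# with each point given by its (x, y) coordinates.
display_list = [((0, 0), (50, 80)), ((50, 80), (100, 0)), ((100, 0), (0, 0))]

def refresh(segments, draw_line):
    # One refresh cycle: direct the beam along every segment in order.
    for start, end in segments:
        draw_line(start, end)

# Record where the "beam" is sent instead of driving real hardware.
trace = []
refresh(display_list, lambda a, b: trace.append((a, b)))
print(len(trace))  # 3 segments drawn per cycle
```

Note that the time one cycle takes grows with the number of segments, which is why random-scan refresh rates depend on picture complexity.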

Random Scan and Raster Scan Display


Random-scan monitors are also known as vector displays, stroke-writing displays or
calligraphic displays.

Advantages:

1. A CRT has the electron beam directed only to the parts of the screen where
an image is to be drawn.
2. Produce smooth line drawings.
3. High Resolution

Disadvantages:

1. Random-Scan monitors cannot display realistic shaded scenes.

2.4.3 Raster Scan Display

A Raster Scan Display is based on intensity control of pixels in the form of a
rectangular box called a Raster on the screen. Information about on and off pixels is
stored in the refresh buffer or Frame Buffer. Televisions in our homes are based on the
Raster Scan Method. The raster scan system can store information about each pixel
position, so it is suitable for realistic display of objects. Raster Scan provides a refresh
rate of 60 to 80 frames per second.

The Frame Buffer is also known as the Raster or bit map. In the Frame Buffer, the
positions are called picture elements or pixels. Beam retracing is of two types:
horizontal retrace and vertical retrace. When the beam reaches the bottom right
corner and returns to the top left corner of the screen, this is called vertical retrace.
The return of the beam from the right end of one scan line to the left end of the next
is called horizontal retrace, as shown in Fig. 2.5.
Fig. 2.5 Photograph of Raster Scan Display

Types of Scanning or travelling of beam in Raster Scan

1. Interlaced Scanning
2. Non-Interlaced Scanning

In non-interlaced (progressive) scanning, each horizontal line of the screen is traced
in order from top to bottom; at a refresh rate of 30 frames per second this produces
visible flicker. Interlaced scanning reduces this problem: first all odd-numbered lines
are traced by the electron beam, then, in the next cycle, the even-numbered lines.
Each half of the image is refreshed 60 times per second, so flicker is reduced without
increasing the line rate. Fig. 2.6 shows the difference between the interlaced and
non-interlaced types of scanning.

Fig. 2.6 Image of Interlaced and Non-Interlaced Scanning
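The two line orders can be sketched as follows (an illustrative Python helper of our own, with lines numbered from 1):

```python
def scan_order(num_lines, interlaced):
    # Order in which horizontal lines are traced over one full frame.
    lines = list(range(1, num_lines + 1))
    if not interlaced:
        return lines                            # progressive: 1, 2, 3, ...
    odd = [n for n in lines if n % 2 == 1]      # first field: odd lines
    even = [n for n in lines if n % 2 == 0]     # second field: even lines
    return odd + even

print(scan_order(6, interlaced=False))  # [1, 2, 3, 4, 5, 6]
print(scan_order(6, interlaced=True))   # [1, 3, 5, 2, 4, 6]
```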

Advantages:

1. Realistic images
2. Millions of different colors can be generated
3. Shaded scenes are possible.

Disadvantages:

1. Low Resolution
2. Expensive

Differences between Random and Raster Scan Displays:

The major differences between Random Scan and Raster Scan Displays are listed in
Table 2.1.

SN  Random Scan                                        Raster Scan
1   It has high resolution.                            Its resolution is low.
2   It is more expensive.                              It is less expensive.
3   Any modification, if needed, is easy.              Modification is tough.
4   Solid patterns are tough to fill.                  Solid patterns are easy to fill.
5   Refresh rate depends on the resolution.            Refresh rate does not depend on the picture.
6   Only the screen area holding the view is scanned.  The whole screen is scanned.
7   Beam penetration technology comes under it.        Shadow mask technology comes under it.
8   It does not use the interlacing method.            It uses interlacing.
9   It is restricted to line-drawing applications.     It is suitable for realistic display.

2.4.4 Color CRT Monitors:

The CRT Monitor display by using a combination of phosphors. The phosphors are
different colors. There are two popular approaches for producing color displays with
a CRT are:

1. Beam Penetration Method


2. Shadow-Mask Method

Beam Penetration Method:

The Beam-Penetration method has been used with random-scan monitors. In this
method, the CRT screen is coated with two layers of phosphor, red and green, and
the displayed color depends on how far the electron beam penetrates the phosphor
layers. This method produces only four colors: red, green, orange and yellow. A
beam of slow electrons excites only the outer red layer, so the screen shows red. A
beam of high-speed electrons excites the inner green layer, so the screen shows
green; intermediate beam speeds produce orange and yellow. Fig. 2.7 shows how
the electron beam makes the phosphor coating glow.

Fig. 2.7 Image of Beam Penetration Method
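The speed-to-color mapping can be caricatured in a few lines (an illustrative Python sketch; the numeric thresholds are purely illustrative, not physical values):

```python
# Colors reachable as beam speed rises: outer red layer first,
# mixtures in between, inner green layer last.
LAYERS = ["red", "orange", "yellow", "green"]

def beam_penetration_color(speed_fraction):
    # speed_fraction in [0, 1): 0 = slow electrons, near 1 = fast electrons.
    index = int(speed_fraction * len(LAYERS))
    return LAYERS[min(index, len(LAYERS) - 1)]

print(beam_penetration_color(0.0))  # red
print(beam_penetration_color(0.9))  # green
```

The key point the sketch captures is that only one variable (beam speed) selects the color, which is why the method is limited to four colors.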

Advantages:

1. Inexpensive

Disadvantages:

1. Only four colors are possible


2. The quality of pictures is not as good as with other methods.

Shadow-Mask Method:

▪ The Shadow Mask Method is commonly used in Raster-Scan Systems because it
produces a much wider range of colors than the beam-penetration method.
▪ It is used in the majority of color TV sets and monitors.

Construction: A shadow mask CRT has 3 phosphor color dots at each pixel position.

▪ One phosphor dot emits: red light


▪ Another emits: green light
▪ Third emits: blue light

This type of CRT has 3 electron guns, one for each color dot and a shadow mask grid
just behind the phosphor coated screen.
Shadow mask grid is pierced with small round holes in a triangular pattern.

The delta-delta shadow mask method commonly used in color CRT systems, and the
way the electron beams produce RGB phosphor dots, is shown in Fig. 2.8.

Fig. 2.8 Photograph of a Shadow Mask CRT

Working: Triad arrangement of red, green, and blue guns.


The deflection system of the CRT operates on all 3 electron beams simultaneously; the
3 electron beams are deflected and focused as a group onto the shadow mask,
which contains a sequence of holes aligned with the phosphor- dot patterns.

When the three beams pass through a hole in the shadow mask, they activate a dot
triangle, which appears as a small color spot on the screen.

The phosphor dots in the triangles are organized so that each electron beam can
activate only its corresponding color dot when it passes through the shadow mask.

Inline arrangement: Another configuration for the 3 electron guns is an inline
arrangement, in which the 3 electron guns and the corresponding red-green-blue
color dots on the screen are aligned along one scan line rather than in a triangular
pattern.

This inline arrangement of electron guns is easier to keep in alignment and is
commonly used in high-resolution color CRTs. Fig. 2.9 shows the triad and in-line
color arrangements of the RGB electron guns on a CRT color monitor.

Fig. 2.9 Illustration of Triad and In-Line RGB Colour Arrangement

Advantage:

1. Realistic images
2. Millions of different colors can be generated
3. Shaded scenes are possible

Disadvantage:

1. Relatively expensive compared with the monochrome CRT.


2. Relatively poor resolution
3. Convergence Problem

2.4.5 Direct View Storage Tubes:

DVST terminals also use the random scan approach to generate the image on the
CRT screen. The term "storage tube" refers to the ability of the screen to retain the
image which has been projected against it, thus avoiding the need to rewrite the
image constantly.

Function of guns: Two guns are used in DVST

1. Primary guns: It is used to store the picture pattern.


2. Flood gun or Secondary gun: It is used to maintain picture display.

Fig. 2.10, provides insight of how an image is generated on a CRT Screen in the Direct
View Storage Tube.

Fig. 2.10 Illustration of a Direct View of Storage Tubes


Advantage:

1. No refreshing is needed.
2. High Resolution
3. Cost is very low

Disadvantage:

1. It is not possible to erase a selected part of the picture.
2. It is not suitable for dynamic graphics applications.
3. Modifying any part of the picture requires redrawing the whole picture, which consumes time.

2.4.6 Flat Panel Display:

The flat-panel display refers to a class of video devices that have reduced volume,
weight and power requirements compared to a CRT.

Examples: small TV monitors, calculators, pocket video games, laptop computers, and
advertisement boards in elevators. Flat panel displays are of two types, as shown in
Fig. 2.11.

Fig. 2.11 Diagram of A Flat Panel Display Types

1. Emissive Display: Emissive displays are devices that convert electrical energy
into light. Examples are the plasma panel, the thin-film electroluminescent display and the LED
(Light Emitting Diode).
2. Non-Emissive Display: Non-emissive displays use optical effects to convert
sunlight or light from some other source into graphics patterns. An example is the LCD
(Liquid Crystal Display).

2.4.7 Plasma Panel Display:

Plasma panels are also called gas-discharge displays. A plasma panel consists of an array of small
lights that are fluorescent in nature. The essential components of the plasma-panel
display are:

1. Cathode: It consists of fine wires. It delivers negative voltage to the gas cells. The
voltage is supplied along the negative axis.
2. Anode: It also consists of fine wires. It delivers positive voltage, supplied
along the positive axis.
3. Fluorescent cells: These consist of small pockets of gas (neon); when voltage is
applied, the gas emits light.
4. Glass plates: These plates act as capacitors. Once voltage is applied, the
cell glows continuously.

The gas will glow when there is a significant voltage difference between the horizontal and
vertical wires. The voltage level is kept between 90 and 120 volts. A plasma panel does
not require refreshing. Erasing is done by reducing the voltage to 90 volts.

Each plasma cell has two states, so a cell is said to be stable. A displayable point in the
plasma panel is formed by the crossing of a horizontal and a vertical grid wire. The resolution
of the plasma panel can be up to 512 × 512 pixels. Fig. 2.12 shows the states of a cell in a
plasma panel display.
Fig 2.12 Illustration of the State of a Cell in a Plasma Display Panel

Advantage:

1. Large screen size is possible.
2. Less volume
3. Less weight
4. Flicker-free display

Disadvantage:

1. Poor resolution
2. The wiring requirements for the anode and the cathode are complex.
3. Its addressing is also complex.

2.4.8 LED (Light Emitting Diode):

In an LED display, a matrix of diodes is arranged to form the pixel positions, and the
picture definition is stored in a refresh buffer. Data is read from the refresh buffer and
converted to voltage levels that are applied to the diodes to produce the light
pattern in the display.

2.4.9 LCD (Liquid Crystal Display):

Liquid Crystal Displays are the devices that produce a picture by passing polarized
light from the surroundings or from an internal light source through a liquid-crystal
material that transmits the light.

An LCD places the liquid-crystal material between two glass plates; the plates are at right
angles to each other, with the liquid filled between them. One glass plate carries rows of
conductors arranged vertically; the other carries rows of conductors arranged
horizontally. A pixel position is determined by the intersection of a vertical and a
horizontal conductor. This position is an active part of the screen.

A liquid crystal display is temperature dependent, operating between zero and seventy degrees
Celsius. It is flat and requires very little power to operate. Fig. 2.13 depicts the
on-state and off-state of a liquid crystal display.
Fig. 2.13 Image of The On-State and Off-State of an LCD

Advantage:

1. Low power consumption
2. Small size
3. Low cost

Disadvantage:

1. LCDs are temperature-dependent (0-70°C).
2. LCDs do not emit light; as a result, the image has very little contrast.
3. LCDs have no color capability.
4. The resolution is not as good as that of a CRT.

2.5 Display Processor


The display processor is an interpreter or a piece of hardware that converts display-file
code into pictures. It is one of the four main parts of the display system. The design of
the display process is shown in Fig. 2.14.

Parts of Display Processor

1. Display File Memory
2. Display Processor
3. Display Generator
4. Display Console

Fig. 2.14 Illustration of The Display Process in A Display System

Display File Memory: It is used for the generation of the picture and for the identification
of graphic entities.

Display Controller

1. It handles interrupts.
2. It maintains timing.
3. It is used for the interpretation of instructions.

Display Generator

1. It is used for the generation of characters.
2. It is used for the generation of curves.

Display Console: It contains the CRT, light pen, keyboard and deflection system.

The raster-scan system is a combination of processing units. It consists of the
central processing unit (CPU) and a special processor called the display controller.
The display controller controls the operation of the display device; it is also called a video
controller.

Working: The video controller in the output circuitry generates the horizontal and
vertical drive signals so that the monitor can sweep its beam across the screen during
raster scans. Fig. 2.15 shows the architecture of a raster display system with a display
processor.

Fig. 2.15 Diagram of Architecture of a Raster Display System

As the figure shows, two registers (an X register and a Y register) are used to store the
coordinates of the screen pixels. Assume that the y values of adjacent scan lines
increase by 1 in the upward direction, from 0 at the bottom of the screen to
ymax at the top, and that along each scan line the screen pixel positions, or x values, are
incremented by 1 from 0 at the leftmost position to xmax at the rightmost position.

The origin is at the lower-left corner of the screen, as in a standard Cartesian
coordinate system. The x-y coordinates are displayed diagrammatically in Fig. 2.16.
Fig. 2.16 Diagram of the X-Y Coordinates

At the start of a Refresh Cycle:

The X register is set to 0 and the Y register is set to ymax. This (x, y) address is translated into a
memory address of the frame buffer where the color value for this pixel position is stored.

The controller receives this color value (a binary number) from the frame buffer, breaks it
up into three parts and sends each part to a separate Digital-to-Analog
Converter (DAC).

These voltages, in turn, control the intensities of the three electron beams, which are focused at the (x,
y) screen position by the horizontal and vertical drive signals.

This process is repeated for each pixel along the top scan line, each time
incrementing the X register by 1.

As pixels on the first scan line are generated, the X register is incremented up to
xmax.

Then the X register is reset to 0, and the Y register is decremented by 1 to access the next scan
line.

Pixels along each scan line are then processed, and the procedure is repeated for each
successive scan line until the pixels on the last scan line (y = 0) have been generated.

In a display system employing a color look-up table, the frame-buffer value is not directly
used to control the CRT beam intensity.

Instead, it is used as an index to find the three pixel-color values in the look-up table. This
lookup operation is done for each pixel in every display cycle.
Because the time available to display or refresh a single pixel on the screen is very short,
accessing the frame buffer every time to read each pixel's intensity value would
consume more time than is allowed. The process from the frame buffer to the monitor
is illustrated in Fig. 2.17 as the refresh cycle.

Fig. 2.17 Photograph of the Refresh Cycle

Multiple adjacent pixel values are therefore fetched from the frame buffer in a single access and
stored in a register.

After each allowable time gap, one pixel value is shifted out of the register to
control the beam intensity for that pixel.

The procedure is repeated with the next block of pixels, and so on, thus the whole
group of pixels will be processed.

Display Devices:

The most commonly used display device is the video monitor. The operation of most
video monitors is based on the CRT (Cathode Ray Tube). The following display devices are
used:

1. Refresh Cathode Ray Tube
2. Random Scan and Raster Scan
3. Color CRT Monitors
4. Direct View Storage Tubes
5. Flat Panel Display
6. Lookup Table
2.6 Look-Up Table:

Image representation is essentially the description of pixel colors. There are three
primary colors: R (red), G (green) and B (blue). Each primary color can take on a range of
intensity levels, and mixing them produces a variety of colors. Using direct coding, we may allocate 3
bits per pixel, with one bit for each primary color. The 3-bit representation allows
each primary to vary independently between two intensity levels: 0 (off) or 1 (on).
Hence each pixel can take on one of eight colors. Table 2.2 shows the bit
combinations of the RGB colors.

Table 2.2 The Look-up Table

Bit 1:r Bit 2:g Bit 3:b Color name


0 0 0 Black
0 0 1 Blue
0 1 0 Green
0 1 1 Cyan
1 0 0 Red
1 0 1 Magenta
1 1 0 Yellow
1 1 1 White
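The 3-bit direct-coding scheme in Table 2.2 can be expressed as a tiny lookup, treating the (r, g, b) bits as a 3-bit number. A minimal sketch; the function name is just for illustration.

```python
# One bit per primary: (r, g, b) bits index into the eight colors of Table 2.2.
NAMES = ["Black", "Blue", "Green", "Cyan", "Red", "Magenta", "Yellow", "White"]

def color_name(r, g, b):
    """Map one-bit r, g, b values (each 0 or 1) to the Table 2.2 color name."""
    return NAMES[r * 4 + g * 2 + b]  # treat (r, g, b) as a 3-bit binary number
```

For example, red and green on with blue off (1, 1, 0) yields "Yellow", matching the table.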

A widely accepted industry standard uses 3 bytes, or 24 bits, per pixel, with one byte
for each primary color. This way, each primary color can have 256 different
intensity levels, so a pixel can take on a color from 256 × 256 × 256, or 16.7 million,
possible choices. The 24-bit format is commonly referred to as the true color
representation.

The lookup-table approach reduces the storage requirement. In this approach pixel
values do not code colors directly; instead, they are addresses or indices into a
table of color values. The color of a particular pixel is determined by the color value
in the table entry that the pixel's value references. Fig. 2.18 shows a look-up table with
256 entries, with addresses 0 through 255. Each entry contains a 24-bit RGB
color value, and pixel values are now 1 byte. The color of a pixel whose value is i, where 0
≤ i ≤ 255, is determined by the color value in the table entry whose address is i. This reduces
the storage requirement of a 1000 × 1000 image to one million bytes, plus 768 bytes for
the color values in the look-up table. Fig. 2.18 illustrates the RGB look-up table.
Fig. 2.18 Tabular illustration of the RGB Lookup Table
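The indexing scheme and the storage arithmetic above can be sketched in Python. The table contents here are made-up examples; only the sizes (256 entries, 3 bytes per entry, 1 byte per pixel) come from the text.

```python
# A 256-entry color look-up table: each entry is a 24-bit (R, G, B) triple.
lut = [(0, 0, 0)] * 256
lut[7] = (255, 128, 0)  # example: entry 7 holds some orange shade

def pixel_color(pixel_value):
    """A 1-byte pixel value is an index into the table, not a color itself."""
    return lut[pixel_value]

# Storage comparison for a 1000 x 1000 image:
indexed = 1000 * 1000 * 1 + 256 * 3  # 1 byte/pixel plus the 768-byte table
direct = 1000 * 1000 * 3             # 3 bytes/pixel, no table
```

The indexed scheme needs 1,000,768 bytes versus 3,000,000 bytes for direct 24-bit coding, matching the figures in the text.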
CHAPTER 3

INPUT-OUTPUT DEVICES

3.1 Input Devices

Input devices are the hardware used to transfer input to the
computer. The data can be in the form of text, graphics or sound. Output
devices display data from the memory of the computer; output can be text, numeric
data, lines, polygons and other objects. The information processing cycle,
from the input devices through the processing units to the output devices, is shown
in Fig. 3.1.

Fig. 3.1 Diagrammatic illustration of the Data Processing Cycle

These Devices include:

1. Keyboard
2. Mouse
3. Trackball
4. Spaceball
5. Joystick
6. Light Pen
7. Digitizer
8. Touch Panels
9. Voice Recognition
10. Image Scanner

3.1.1 Keyboard:

The most commonly used input device is the keyboard. Data is entered by pressing
a set of keys, all of which are labeled. A standard keyboard with 101 keys uses the QWERTY
layout.
The keyboard has alphabetic as well as numeric keys. Some special keys are also
available.

1. Numeric Keys: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9
2. Alphabetic keys: a to z (lower case), A to Z (upper case)
3. Special Control keys: Ctrl, Shift, Alt
4. Special Symbol Keys: ; , " ? @ ~ ? :
5. Cursor Control Keys: ↑ → ← ↓
6. Function Keys: F1 F2 F3....F9.
7. Numeric Keyboard: It is on the right-hand side of the keyboard and used for
fast entry of numeric data.

Function of Keyboard:

1. Alphanumeric keyboards are used in CAD (Computer-Aided Drafting).
2. Keyboards are available with special features like screen-coordinate entry,
menu selection or graphics functions.
3. Special-purpose keyboards are available with buttons, dials, and switches.
Dials are used to enter scalar values and real numbers; buttons and
switches are used to enter predefined function values.

Advantage:

1. Suitable for entering numeric data.
2. Function keys are a fast and effective method of issuing commands, with fewer
errors.

Disadvantage:

1. Keyboard is not suitable for graphics input.

3.1.2 Mouse:

A mouse is a pointing device used to position the pointer on the screen. It is a
small palm-sized box with two or three buttons on top. Movement
of the mouse along the x-axis produces horizontal movement of the
cursor, and movement along the y-axis produces vertical movement of the
cursor on the screen. The mouse cannot be used to enter text; therefore, it is
used in conjunction with a keyboard. The x (horizontal) and y (vertical)
movements of the mouse are displayed in Fig. 3.2.

Fig. 3.2 Illustration of the Mouse Coordinates

Advantage:

1. Easy to use
2. Not very expensive
3.1.3 Trackball

It is a pointing device similar to a mouse, mainly used in notebook or laptop
computers instead of a mouse. It is a ball which is half inserted into the device; by moving
fingers over the ball, the pointer can be moved. This trackball pointing device is shown
in Fig. 3.3.

Fig. 3.3 Photograph of the Trackball

Advantage:
1. Trackball is stationary, so it does not require much space to use it.
2. Compact Size
3.1.4 Spaceball:

It is similar to a trackball, but it can move in six directions, whereas a trackball can move in
only two. The movement is recorded by strain gauges, which respond to applied
pressure; the ball can be pushed and pulled in various directions. The ball has
a diameter of around 7.5 cm and is mounted in the base using rollers. One-third of
the ball is inside the box; the rest is outside.

Applications:

1. It is used for three-dimensional positioning of objects.
2. It is used to select various functions in the field of virtual reality.
3. It is applicable in CAD applications.
4. Animation is also done using the spaceball.
5. It is used in the area of simulation and modeling.

3.1.5 Joystick:

A joystick is also a pointing device, used to change the cursor position on a
monitor screen. A joystick is a stick having a spherical ball at both its lower and upper
ends, as shown in the figure. The lower spherical ball moves in a socket. The joystick can be
moved in all four directions. The function of a joystick is similar to that of the mouse.
It is mainly used in Computer-Aided Design (CAD) and for playing computer games.
Fig. 3.4 displays an image of the joystick.

Fig. 3.4 Joystick


3.1.6 Light Pen

A light pen (similar to a pen) is a pointing device used to select a displayed
menu item or draw pictures on the monitor screen. It consists of a photocell and an
optical system placed in a small tube. When its tip is moved over the monitor screen
and the pen button is pressed, its photocell sensing element detects the screen location
and sends the corresponding signal to the CPU. An image of the light pen appears in
Fig. 3.5 below:

Fig. 3.5 Photo of A Light Pen

Uses:

1. Light pens can be used to input coordinate positions, given the necessary
arrangements.
2. Depending on the background color or intensity, a light pen can be used as a locator.
3. It is used as a standard pick device with many graphics systems.
4. It can be used as a stroke input device.
5. It can be used as a valuator.

3.1.7 Digitizers:

The digitizer is an operator input device which consists of a large, smooth board (similar
in appearance to a mechanical drawing board) and an electronic tracking
device, which can be moved over the surface to follow existing lines. The electronic
tracking device contains a switch for the user to record the desired x and y coordinate
positions. The coordinates can be entered into the computer memory, or stored on an
off-line storage medium such as magnetic tape. Fig. 3.6 shows the image of a digitizer.
Fig. 3.6 Photograph of a Digitizer

Advantages:

1. Drawings can easily be changed.
2. It provides the capability of interactive graphics.

Disadvantages:

1. Costly
2. Suitable only for applications which require high-resolution graphics.

3.1.8 Touch Panels:

A touch panel is a type of display screen that has a touch-sensitive transparent panel
covering the screen. A touch screen registers input when a finger or other object
comes in contact with the screen.

When the wave signals are interrupted by contact with the screen, that location
is recorded. Touch screens have long been used in military applications.

3.1.9 Voice Systems (Voice Recognition):

Voice Recognition is one of the newest, most complex input techniques used to
interact with the computer. The user inputs data by speaking into a microphone. The
simplest form of voice recognition is a one-word command spoken by one person.
Each command is isolated with pauses between the words.

Voice recognition is used in some graphics workstations as an input device to accept
voice commands. The voice-system input can be used to initiate graphics operations
or to enter data. These systems operate by matching an input against a predefined
dictionary of words and phrases.

Advantage:

1. More efficient device.
2. Easy to use
3. Unauthorized speakers can be identified

Disadvantages:

1. Very limited vocabulary
2. Voices of different operators can't be distinguished.

3.1.10 Image Scanner

It is an input device. Data or text written on paper is fed to the scanner, which
converts the written information into an electronic format that is stored
in the computer. The input documents can contain text, handwritten material, pictures,
etc.

By storing a document in the computer, it becomes safe for a longer period of
time; the document is permanently stored for the future. We can change the
document when we need to, and print it when needed.

Scanning can be of black-and-white or colored pictures. 2D or 3D rotations, scaling
and other operations can be applied to the stored picture.

Types of image Scanner:

1. Flat Bed Scanner: It resembles a photocopy machine. It has a glass plate on its
top, which is further covered by a lid. The document to be scanned is
kept on the glass plate. The light passes underneath the glass plate, moving
from left to right, and the scanning is done line by line. The process is
repeated until the complete document is scanned. A 4" × 6" document can be
scanned within 20-25 seconds. A photo of the named parts of a flatbed scanner is
shown in Fig. 3.7.
Fig. 3. 7 Image Scanner - Flat Bed Scanner

2. Hand Held Scanner: It has a number of LEDs (Light Emitting Diodes)
arranged in a small case. It is called a hand-held scanner because it can
be held in the hand while it performs scanning. For scanning, the scanner is moved
over the document from top to bottom with its light on, and it is dragged very
slowly over the document. If the dragging of the scanner over the document is not
steady, the conversion will not be correct. The front and side views of a
hand-held scanner are shown in Fig. 3.8.

Fig. 3. 8 Photo of A Hand-Held Scanner

3.2 Output Devices

Types of Printers

A printer is an electromechanical device which accepts data from a computer and
translates it into a form understood by users. Fig. 3.9 shows the major types of
output devices.
Fig. 3. 9 Photo of the Types of Output Devices

Following are Output Devices:

1. Printers
2. Plotters

3.2.1 Printers:

Printer is the most important output device, which is used to print data on paper.

Types of Printers:

There are many types of printers, classified on various criteria as shown in
Fig. 3.10, Classification of Printers:
Fig. 3.10 Diagrammatic Display of the Classifications of Printers

1. Impact Printers: The printers that print the characters by striking against the ribbon
and onto the papers are known as Impact Printers.

These Printers are of two types:

1. Character Printers
2. Line Printers

2. Non-Impact Printers: The printers that print the characters without striking against
the ribbon and onto the papers are called Non-Impact Printers. These printers print a
complete page at a time, therefore, also known as Page Printers.

Page Printers are of two types:

1. Laser Printers
2. Inkjet Printers

3.2.2 Dot Matrix Printers:

Dot matrix printers print output in the form of dots. The printer has a head which contains nine
pins, arranged one below the other. Each pin can be activated
independently; all or only some of the pins are activated at a time. When a pin
is not activated, its tip stays in the head; when it is activated, it
comes out of the print head. Characters are printed as a 5 × 7 matrix of dots.
Fig. 3.11 shows the image of a dot matrix printer.

Fig. 3.11 Image of a Dot Matrix Printer

Advantage:

1. Dot matrix printers print output as dots, so they can print any shape of
character. This allows the printer to print special characters, charts, graphs,
etc.
2. Dot matrix printers come under the category of impact printers: printing is
done when a hammer pin strikes the inked ribbon, and the impressions are
printed on paper. By placing multiple carbon copies, multiple copies of the
output can be produced.
3. It is suitable for printing company invoices.

3.2.3 Daisy Wheel Printers:

The head lies on a wheel, and the pins corresponding to characters are arranged like the petals of a
daisy; hence the name daisy wheel printer. A daisy wheel image is shown in Fig. 3.12.
Fig. 3.12 Daisy Wheel Printer

Advantage:

1. More reliable than DMPs
2. Better quality

Disadvantage:

1. Slower than DMPs

3.2.4 Drum Printers:

These are line printers, which print one line at a time. Such a printer consists of a
solid, cylindrical drum with characters embossed on it in the
form of vertical bands; the characters on each band form a circle. Each band consists of
a set of characters, and each line on the drum consists of 132 character positions. Because there are 96
bands, the total number of characters is 132 × 96 = 12,672.

Drum contains a number of hammers also.

3.2.5 Chain Printers:

These are also called line printers and print one line at a time. The
chain consists of links, each containing one character. The printer can use any
character-set size, i.e., 48, 64 or 96 characters, and it also contains a number of
hammers.

Advantages:

1. The chain or band, if damaged, can be changed easily.
2. It allows printing of different forms.
3. Different scripts can be printed using this printer.

Disadvantages:

1. It cannot print charts and graphs.
2. It cannot print characters of arbitrary shape.
3. Chain printers are impact printers; the hammers strike, so they are noisy.
Non-Impact Printers:

3.2.6 Inkjet Printers:

These printers use a special ink called electrostatic ink. The printer head has
special nozzles which drop ink onto the paper; the head contains up to 64 nozzles. The dropped
ink is deflected by an electrostatic plate fixed outside the nozzle,
and the deflected ink settles on the paper. Fig. 3.13 contains the image of an inkjet printer.

Fig. 3.13 Photograph of an Inkjet Printer

Advantages:

1. They produce higher-quality output compared to the dot matrix printer.
2. High-quality output can be produced using the 64-nozzle print head.
3. Inkjet printers can print characters in a variety of shapes.
4. They can print special characters.
5. The printer can print graphs and charts.

Disadvantages:

1. Inkjet printers are slower than dot matrix printers.
2. The cost of an inkjet printer is more than that of a dot matrix printer.

3.2.7 Laser Printers:

These are non-impact page printers. They use laser light to produce the dots
needed to form the characters to be printed on a page; hence the name laser
printers.

The output is generated in the following steps:

Step 1: The bits of data sent by the processing unit act as triggers to turn the laser beam
on and off.

Step 2: The output device has a drum which is cleaned and given a positive electric
charge. To print a page, the modulated laser beam from the laser scans
back and forth across the surface of the drum. The laser beam alters the electric
charge on just those parts of the drum surface it strikes, creating a difference in
electric charge between the exposed and unexposed areas of the drum surface.
An image of a laser printer is depicted in Fig. 3.14.

Fig. 3.14 Photograph of a Laser Printer

Step 3: The laser-exposed parts of the drum attract an ink powder known as toner.

Step 4: The attracted ink powder is transferred to the paper.

Step 5: The ink particles are permanently fixed to the paper using either a heat or
pressure technique.

Step 6: The drum rotates back to the cleaner, where a rubber blade cleans off the
excess ink and prepares the drum to print the next page.

Plotters

Plotters are a special type of output device, suitable for applications such as:

1. Architectural plans of buildings.
2. CAD applications like the design of mechanical components of aircraft.
3. Many engineering applications.

Fig. 3.15 shows the special type of a Plotter.

Fig. 3.15 Image of A Plotter

Advantage:

1. It can produce high-quality output on large sheets.
2. It is used to provide high-precision drawings.
3. It can produce graphics of various sizes.
4. The speed of producing output is high.
3.2.8 Drum Plotter:

It consists of a drum. The paper on which the design is made is kept on the drum,
which can rotate in both directions. The plotter comprises one or more pens and
penholders mounted perpendicular to the drum surface. The pens, kept in the holders,
can move from left to right as well as from right to left. The
graph-plotting program controls the movement of the pen and the drum. Fig. 3.16 displays
the drum plotter.

Fig. 3.16 Image of a Drum Plotter

3.2.9 Flatbed Plotter:

It is used to draw complex designs, graphs and charts. The flatbed plotter can be
kept on a table. The plotter consists of a pen and holder; there can be one or more
pens and pen-holding mechanisms, and the pen can draw characters of various sizes.
Each pen has ink of a different color, and the different colors help to produce
multicolor designs. The plotting area is also variable: it can vary from A4 to
21'*52'. Fig. 3.17 shows the image of a flatbed plotter.

Fig. 3.17 Photograph of A Flatbed Plotter

It is used to draw

1. Cars
2. Ships
3. Airplanes
4. Shoe and dress designing
5. Road and highway design

3.3 Graphics Software:

There are two types of Graphics Software.

1. General Purpose Packages: Basic functions in a general package include
those for generating picture components (straight lines, polygons, circles and
other figures), setting color and intensity values, selecting views, and applying
transformations. Examples of general-purpose packages are GL (Graphics
Library), GKS, PHIGS, PHIGS+, etc.

2. Special Purpose Packages: These packages are designed for non-programmers,
so that these users can use the graphics packages without
knowing the inner details. Examples of special-purpose packages are:
1. Painting programs
2. Packages used for business purposes
3. Packages used for medical systems
4. CAD packages
CHAPTER 4

SCAN CONVERSION DEFINITION

4.1 Scan Conversion Definition

It is the process of representing graphics objects as a collection of pixels. Graphics
objects are continuous, whereas the pixels used are discrete: each pixel can be either on or
off.

The circuitry of the computer's video display device is capable of converting
binary values (0, 1) into pixel-on and pixel-off information: 0 is represented by pixel
off, and 1 by pixel on. Using this ability, a graphics computer represents
pictures as sets of discrete dots.

Any model of graphics can be reproduced with a dense matrix of dots or points. Most
human beings think of graphics objects as points, lines, circles and ellipses. Many
algorithms have been developed for generating such graphical objects.

Advantage of developing algorithms for scan conversion

1. Algorithms can generate graphics objects at a faster rate.
2. Using algorithms, memory can be used efficiently.
3. Algorithms can develop higher-level graphical objects.

Examples of objects which can be scan converted

1. Point
2. Line
3. Sector
4. Arc
5. Ellipse
6. Rectangle
7. Polygon
8. Characters
9. Filled Regions
The process of converting is also called rasterization. The implementation of these algorithms
varies from one computer system to another: some algorithms are
implemented in software, some in hardware or firmware, and
some using various combinations of hardware, firmware, and software.

4.2 Pixel or Pel:

The term pixel is a short form of picture element. It is also called a point or dot. It is
the smallest picture unit accepted by display devices. A picture is constructed from
hundreds of such pixels. Pixels are generated using commands; lines, circles, arcs,
characters and curves are drawn with closely spaced pixels. To display a digit or letter, a
matrix of pixels is used.

The closer the dots or pixels are, the better the quality of the picture: the closer the
dots, the crisper the picture. A picture will not appear jagged and unclear if the pixels
are closely spaced. So the quality of the picture is directly proportional to the density
of pixels on the screen.

Pixels are also defined as the smallest addressable unit or element of the screen. Each
pixel can be assigned an address as shown in Fig. 4.1.

Fig. 4. 1 Illustration of A Pixel


Different graphics objects can be generated by setting different intensities
and colors of pixels. Each pixel has a coordinate value, represented
using a row and a column.

P (5, 5) is used to represent the pixel in the 5th row and the 5th column. Each pixel has
an intensity value which is stored in a part of computer memory called the frame
buffer. The frame buffer is also called a refresh buffer. This memory is a storage area for
pixel values, using which pictures are displayed; it is also called digital
memory. Inside the buffer, the image is stored as a pattern of binary digits (0 or 1), so
an array of 0s and 1s is used to represent the picture. In black-and-white monitors,
black pixels are represented using 1s and white pixels using 0s. In
systems having one bit per pixel, the frame buffer is called a bitmap; in systems
with multiple bits per pixel it is called a pixmap.
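A one-bit-per-pixel frame buffer (a bitmap) can be sketched as a small Python class. This is a minimal illustration of the idea, not a real display driver; the class and method names are made up for the example.

```python
# A minimal bitmap frame buffer for a monochrome display: one bit per pixel,
# where 1 means "pixel on" and 0 means "pixel off".
class Bitmap:
    def __init__(self, width, height):
        self.width, self.height = width, height
        self.bits = [[0] * width for _ in range(height)]  # all pixels off

    def set_pixel(self, x, y, on=1):
        """Turn the pixel at column x, row y on (1) or off (0)."""
        self.bits[y][x] = 1 if on else 0

    def get_pixel(self, x, y):
        return self.bits[y][x]
```

A pixmap would differ only in storing a multi-bit value (an intensity or a lookup-table index) per pixel instead of a single bit.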

4.3 Scan Converting a Point

A pixel on the graphics display does not represent a mathematical point. Instead,
it represents a region which theoretically can contain an infinite number of points. Scan-converting
a point involves illuminating the pixel that contains the point.

Example: the coordinate points shown in the figure would
both be represented by pixel (2, 1). In general, a point P (x, y) is represented by the
integer part of x and the integer part of y, that is, the pixel (INT(x), INT(y)). Fig. 4.2 illustrates
the scan conversion of a point.

Fig. 4.2 Illustration of Scan Converting a Point
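The rule above maps directly to code. A one-line sketch (assuming non-negative screen coordinates, where Python's int() truncation matches INT):

```python
def scan_convert_point(x, y):
    """A real-valued point maps to the pixel (INT(x), INT(y))."""
    return (int(x), int(y))  # truncation: correct for non-negative coordinates
```

For instance, the points (2.3, 1.9) and (2.9, 1.1) are both illuminated as pixel (2, 1), as in the example in the text.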

4.4 Scan Converting a Straight Line


A straight line may be defined by two endpoints and an equation. In the figure, the two
endpoints are described by (x1, y1) and (x2, y2). The equation of the line is used to
determine the x, y coordinates of all the points that lie between these two endpoints.
Scan-converting a straight line is shown in Fig. 4.3.

Fig. 4.3 Illustration of a Scan Converting A Straight Line

Using the equation of a straight line, y = mx + b, where m = (y2 − y1)/(x2 − x1) is the slope
and b is the y-intercept, we
can find values of y by incrementing x from x = x1 to x = x2. By scan-converting these
calculated (x, y) values, we represent the line as a sequence of pixels.

4.5 Properties of Good Line Drawing Algorithm:

1. Lines should appear straight: We must approximate the line by choosing addressable
points close to it. If we choose well, the line will appear straight; if not, we shall produce
crooked lines. Fig. 4.4 shows a poor line-generating algorithm.

Fig. 4.4 Image of a Poor Line Generating algorithm


Only lines parallel to, or at 45° to, the x- and y-axes pass exactly through addressable
points. Other lines cause a problem: a line segment that starts and finishes at
addressable points may pass through no other addressable points in between. Fig. 4.5
explains how a straight-line segment between 2 grid intersections is formed.

Fig. 4.5 Diagrammatic illustration of a straight-line segment between 2 grid intersections

2. Lines should terminate accurately: Unless lines are plotted accurately, they may
terminate at the wrong place. Fig. 4.6 shows a drawing of an Uneven Line.

Fig 4.6 Illustration of an Uneven Line

3. Line density should be independent of line length and angle: This can be done by
computing an approximate line-length estimate and using a line-generation
algorithm that keeps line density constant to within the accuracy of this estimate.

4. Lines should be drawn rapidly: This computation should be performed by special-purpose hardware.
4.6 Algorithm for line Drawing:

1. Direct use of line equation


2. DDA (Digital Differential Analyzer)
3. Bresenham's Algorithm

4.6.1 Direct use of line equation:

It is the simplest form of conversion. First of all, scan the points P1 and P2. P1 has
co-ordinates (x1, y1) and P2 has (x2, y2).

Then m = (y2 − y1)/(x2 − x1) and b = y1 − m·x1.

If |m| ≤ 1, then for each integer value of x compute y = mx + b and round to the nearest integer.

If |m| > 1, then for each integer value of y compute x = (y − b)/m and round to the nearest integer.

Example: A line with starting point (0, 0) and ending point (6, 18) is given. Calculate
the intermediate points and the slope of the line. Fig. 4.7 illustrates the graph.

Solution: The slope is m = (18 − 0)/(6 − 0) = 3, so starting from P1 (0, 0) the successive points are:

1. P2 (1,3)
2. P3 (2,6)
3. P4 (3,9)
4. P5 (4,12)
5. P6 (5,15)
6. P7 (6,18)

Fig. 4.7 Graph


4.6.2 Algorithm for drawing line using equation:
Step1: Start Algorithm
Step2: Declare variables x1,x2,y1,y2,dx,dy,m,b
Step3: Enter values of x1,y1,x2,y2.
The (x1,y1) are the co-ordinates of the starting point of the line.
The (x2,y2) are the co-ordinates of the ending point of the line.
Step4: Calculate dx = x2-x1
Step5: Calculate dy = y2-y1
Step6: Calculate m = dy/dx
Step7: Calculate b = y1-m*x1
Step8: Set (x, y) equal to the starting point, i.e., the point with the lowest x, and xend equal to the largest
value of x.
If dx < 0
then x = x2
y = y2
xend= x1
If dx > 0
then x = x1
y = y1
xend= x2
Step9: Check whether the complete line has been drawn if x=xend, stop
Step10: Plot a point at current (x, y) coordinates
Step11: Increment value of x, i.e., x = x+1
Step12: Compute next value of y from equation y = mx + b
Step13: Go to Step9.
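Steps 1 to 13 can be sketched in Python for the case |m| ≤ 1 (an illustrative sketch; the names are mine, and rounding may differ slightly from a hand computation):

```python
def line_direct(x1, y1, x2, y2):
    """Scan-convert a line using y = mx + b directly (assumes |m| <= 1)."""
    if x2 < x1:                      # Step 8: start from the point with the lowest x
        x1, y1, x2, y2 = x2, y2, x1, y1
    m = (y2 - y1) / (x2 - x1)        # Step 6: slope
    b = y1 - m * x1                  # Step 7: y-intercept
    points = []
    for x in range(x1, x2 + 1):      # Steps 9-13: step x, compute and round y
        points.append((x, round(m * x + b)))
    return points

print(line_direct(0, 0, 5, 2))
# [(0, 0), (1, 0), (2, 1), (3, 1), (4, 2), (5, 2)]
```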

4.7 DDA Algorithm

DDA stands for Digital Differential Analyzer. It is an incremental method of scan


conversion of line. In this method calculation is performed at each step but by using
results of previous steps.

Suppose at step i, the pixel is (xi, yi)

The equation of the line at step i:

yi = mxi + b ......................equation 1

The next value will be

yi+1 = mxi+1 + b .................equation 2

where m = ∆y/∆x
yi+1 - yi = ∆y .......................equation 3
xi+1 - xi = ∆x ......................equation 4
yi+1 = yi + ∆y
∆y = m∆x
yi+1 = yi + m∆x
∆x = ∆y/m
xi+1 = xi + ∆x
xi+1 = xi + ∆y/m

Case 1: When |m| < 1 (assume that x1 < x2)

x = x1, y = y1, set ∆x = 1
yi+1 = yi + m, x = x + 1
Until x = x2

Case 2: When |m| > 1 (assume that y1 < y2)

x = x1, y = y1, set ∆y = 1
xi+1 = xi + 1/m, y = y + 1
Until y = y2

Advantage:

1. It is faster than the direct use of the line equation.
2. This method does not require any multiplication.
3. It detects the change in the values of x and y, so plotting the same point
twice is not possible.
4. This method gives an overflow indication when a point is repositioned.
5. It is an easy method because each step involves just two additions.

Disadvantage:

1. It involves floating-point additions with rounding off at every step; the round-off
errors accumulate.
2. Rounding-off operations and floating-point operations consume a lot of time.
3. It is more suitable for generating a line in software, but less suited for
hardware implementation.

DDA Algorithm:
Step1: Start Algorithm
Step2: Declare x1,y1,x2,y2,dx,dy,x,y as integer variables.
Step3: Enter value of x1,y1,x2,y2.
Step4: Calculate dx = x2-x1
Step5: Calculate dy = y2-y1
Step6: If ABS (dx) > ABS (dy)
Then step = abs (dx)
Else step = abs (dy)
Step7: xinc=dx/step
yinc=dy/step
assign x = x1
assign y = y1
Step8: Set pixel (x, y)
Step9: x = x + xinc
y = y + yinc
Set pixels (Round (x), Round (y))
Step10: Repeat step 9 until x = x2
Step11: End Algorithm
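The steps above can be sketched in Python (an illustrative sketch; names are mine):

```python
def dda(x1, y1, x2, y2):
    """Digital Differential Analyzer line scan conversion."""
    dx, dy = x2 - x1, y2 - y1
    steps = max(abs(dx), abs(dy))            # Step 6
    xinc, yinc = dx / steps, dy / steps      # Step 7
    x, y = float(x1), float(y1)
    points = [(round(x), round(y))]          # Step 8: plot the starting pixel
    for _ in range(steps):                   # Steps 9-10: increment and plot
        x += xinc
        y += yinc
        points.append((round(x), round(y)))
    return points

# For example, the line from (2, 3) to (6, 15) needs
# max(|dx|, |dy|) + 1 = 13 points:
pts = dda(2, 3, 6, 15)
print(len(pts), pts[0], pts[-1])  # 13 (2, 3) (6, 15)
```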

Example: If a line is drawn from (2, 3) to (6, 15) with use of DDA. How many points will
be needed to generate such line?

Solution: P1 (2, 3), P2 (6, 15)

▪ x1 = 2
▪ y1 = 3
▪ x2 = 6
▪ y2 = 15
▪ dx = 6 - 2 = 4
▪ dy = 15 - 3 = 12
▪ m = dy/dx = 12/4 = 3

Since step = max(|dx|, |dy|) = 12, the line needs 13 points (the start point plus 12
increments). For calculating the next value of x, take x = x + xinc = x + 1/3 (with y = y + 1).


Fig 4.8 DDA Graph Points

4.8 Bresenham's Line Algorithm

This algorithm is used for scan converting a line. It was developed by Bresenham. It is
an efficient method because it involves only integer addition and subtraction
(multiplication appears only as multiplication by 2, which can be done with a shift).
These operations can be performed very rapidly, so lines can be generated quickly.

In this method, the next pixel selected is the one that has the least distance from the
true line.

The method works as follows:

Assume a starting pixel P1(x1, y1), then select subsequent pixels as we work our way to
the right, one pixel position at a time in the horizontal direction toward P2(x2, y2).

Once a pixel is chosen at any step, the next pixel is

1. Either the one to its right (lower bound for the line)

2. Or the one to its right and up (upper bound for the line)

The line is best approximated by those pixels that fall the least distance from the path
between P1',P2'. Fig. 4.9 Scan Converting A Line.
Fig 4. 9 Illustration of Scan Converting a line

We choose between the bottom pixel S and the top pixel T as follows.

If S is chosen

We have xi+1=xi+1 and yi+1=yi

If T is chosen

We have xi+1=xi+1 and yi+1=yi+1

The actual y coordinate of the line at x = xi + 1 is

y = m(xi + 1) + b

The distance from S to the actual line in y direction

s = y-yi

The distance from T to the actual line in y direction

t = (yi+1)-y

Now consider the difference between these 2 distance values

s-t

When (s-t) <0 ⟹ s < t


The closest pixel is S

When (s-t) ≥ 0 ⟹ s ≥ t

The closest pixel is T

This difference is

s-t = (y-yi)-[(yi+1)-y]

= 2y - 2yi -1

Substituting m = ∆y/∆x and introducing the decision variable

di = ∆x(s-t)

di = ∆x(2m(xi+1) + 2b - 2yi - 1)

= 2∆y·xi + 2∆y + 2∆x·b - 2∆x·yi - ∆x

di = 2∆y·xi - 2∆x·yi + c

Where c = 2∆y + ∆x(2b-1)

We can write the decision variable di+1 for the next step as

di+1 = 2∆y·xi+1 - 2∆x·yi+1 + c

di+1 - di = 2∆y·(xi+1 - xi) - 2∆x·(yi+1 - yi)

Since xi+1 = xi + 1, we have

di+1 = di + 2∆y - 2∆x·(yi+1 - yi)

Special Cases

If the chosen pixel is the top pixel T (i.e., di ≥ 0) ⟹ yi+1 = yi + 1

di+1 = di + 2∆y - 2∆x

If the chosen pixel is the bottom pixel S (i.e., di < 0) ⟹ yi+1 = yi

di+1 = di + 2∆y

Finally, we calculate d1

d1 = ∆x[2m(x1+1) + 2b - 2y1 - 1]

d1 = ∆x[2(mx1 + b - y1) + 2m - 1]

Since mx1 + b - y1 = 0 and m = ∆y/∆x, we have

d1 = 2∆y - ∆x

Advantage:

1. It involves only integer arithmetic, so it is simple.


2. It avoids the generation of duplicate points.
3. It can be implemented using hardware because it does not use
multiplication and division.
4. It is faster as compared to DDA (Digital Differential Analyzer) because it
does not involve floating point calculations like DDA Algorithm.

Disadvantage:

1. This algorithm is meant for basic line drawing only. Anti-aliasing is not part of
Bresenham's line algorithm, so to draw smooth lines a different algorithm is
needed.

Bresenham's Line Algorithm:

Step1: Start Algorithm

Step2: Declare variable x1,x2,y1,y2,d,i1,i2,dx,dy

Step3: Enter value of x1,y1,x2,y2

Where x1, y1 are the coordinates of the starting point and x2, y2 are the coordinates
of the ending point.

Step4: Calculate dx = x2-x1


Calculate dy = y2-y1

Calculate i1=2*dy

Calculate i2=2*(dy-dx)

Calculate d=i1-dx

Step5: Consider (x, y) as starting point and xend as maximum possible value of x.

If dx < 0
Then x = x2
y = y2
xend=x1
If dx > 0
Then x = x1
y = y1
xend=x2

Step6: Generate point at (x,y)coordinates.

Step7: Check if whole line is generated.

If x >= xend

Stop.

Step8: Calculate co-ordinates of the next pixel

If d < 0
Then d = d + i1
If d ≥ 0
Then d = d + i2
Increment y = y + 1

Step9: Increment x = x + 1

Step10: Draw a point of latest (x, y) coordinates

Step11: Go to step 7

Step12: End of Algorithm

Example: The starting and ending positions of the line are (1, 1) and (8, 5). Find the
intermediate points. Table 4.1 shows the formula used to obtain the d values.

Solution:

x1=1
y1=1
x2=8
y2=5
dx= x2-x1=8-1=7
dy=y2-y1=5-1=4
I1=2* ∆y=2*4=8
I2=2*(∆y-∆x)=2*(4-7)=-6
d = I1-∆x=8-7=1

Table 4.1 Computation of d = d + I1 or I2
x    y    d = d + I1 or I2
1    1    d + I2 = 1 + (-6) = -5
2    2    d + I1 = -5 + 8 = 3
3    2    d + I2 = 3 + (-6) = -3
4    3    d + I1 = -3 + 8 = 5
5    3    d + I2 = 5 + (-6) = -1
6    4    d + I1 = -1 + 8 = 7
7    4    d + I2 = 7 + (-6) = 1
8    5
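The values in the table above can be checked with a short Python sketch of the algorithm (assuming 0 ≤ m ≤ 1 and x1 < x2; names are mine):

```python
def bresenham(x1, y1, x2, y2):
    """Bresenham's line algorithm, assuming 0 <= slope <= 1 and x1 < x2."""
    dx, dy = x2 - x1, y2 - y1
    i1, i2 = 2 * dy, 2 * (dy - dx)   # the increments I1 and I2
    d = i1 - dx                      # initial decision variable d1 = 2*dy - dx
    points = [(x1, y1)]
    x, y = x1, y1
    while x < x2:
        if d < 0:                    # bottom pixel S: keep y
            d += i1
        else:                        # top pixel T: step y up
            d += i2
            y += 1
        x += 1
        points.append((x, y))
    return points

print(bresenham(1, 1, 8, 5))
# [(1, 1), (2, 2), (3, 2), (4, 3), (5, 3), (6, 4), (7, 4), (8, 5)]
```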

Fig. Illustrates the Graph plotting.

4.9 Differences between the DDA Algorithm and Bresenham's Line Algorithm

Table 4.2 summarizes the major differences between the two:

Table 4.2 DDA Algorithm vs Bresenham's Line Algorithm
1. The DDA Algorithm uses floating point, i.e., real arithmetic; Bresenham's Line
Algorithm uses fixed point, i.e., integer arithmetic.
2. The DDA Algorithm uses multiplication and division in its operation; Bresenham's
Line Algorithm uses only addition and subtraction.
3. The DDA Algorithm is slower than Bresenham's Line Algorithm in line drawing
because it uses real (floating-point) arithmetic; Bresenham's Algorithm is faster
because it involves only integer addition and subtraction.
4. The DDA Algorithm is not as accurate and efficient as Bresenham's Line
Algorithm; Bresenham's Line Algorithm is more accurate and efficient.
5. Both can draw circles and curves, but Bresenham's Line Algorithm does so more
accurately than the DDA Algorithm.
CHAPTER 5

3D COMPUTER GRAPHICS

5.1 Three-Dimensional Graphics

The three-dimensional transformations are extensions of the two-dimensional
transformations. In 2D, two coordinates are used, i.e., x and y, whereas in 3D three
co-ordinates, x, y, and z, are used.

For three-dimensional images and objects, three-dimensional transformations are
needed. These are translation, scaling, and rotation; they are also called the basic
transformations and are represented using matrices. More complex transformations
are also handled using matrices in 3D.

2D can show two-dimensional objects such as bar charts, pie charts, and graphs, but
more natural objects can be represented using 3D. Using 3D, we can see different
shapes of the object in different sections.

In 3D, a translation needs three factors, and a rotation is likewise a composition of
three rotations, each of which can be performed about any of the three Cartesian
axes. In 3D too, we can represent a sequence of transformations as a single matrix.

Computer graphics is used in CAD. CAD allows the manipulation of machine
components, which are three-dimensional. It also supports the study of automobile
bodies and aircraft parts. All these activities require realism, and for realism 3D is
required. Producing a realistic 3D scene on a 2D display is difficult because it requires
a third dimension, i.e., depth.

5.2 3D Geometry

A three-dimensional system has three axes: x, y, z. The orientation of a 3D coordinate
system is of two types: the right-handed system and the left-handed system.

In the right-handed system the thumb of the right hand points in the positive
z-direction; in the left-handed system the thumb points in the negative z-direction.
The following figure shows the right-hand orientation of the cube. Fig. 5.1 shows the
image of a three-dimensional graphical scene.
Fig. 5.1 Photo of a Three-Dimensional Graphics

Using the right-handed system, the co-ordinates of the corners A, B, C, D of the cube are:

Point A x, y, z

Point B x, y, 0

Point C 0, y, 0

Point D 0, y, z

Producing realism in 3D: Three-dimensional objects are made using computer
graphics. The technique used for the two-dimensional display of three-dimensional
objects is called projection. Several types of projection are available, i.e.,

1. Parallel Projection
2. Perspective Projection
3. Orthographic Projection

Parallel Projection: In this projection, a point on the screen is identified with a point in
the three-dimensional object by a line perpendicular to the display screen. Architects'
drawings, i.e., plan, front view, side view, and elevation, are nothing but lines of
parallel projection.

Perspective Projection: This projection has the property that it provides an idea of
depth: the farther an object is from the viewer, the smaller it appears. All lines in
perspective projection converge at a central point called the center of projection.
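The depth effect can be illustrated with a simple perspective divide (an illustrative sketch, not taken from the text; the viewing distance d and the function name are my own assumptions):

```python
def perspective_project(x, y, z, d=1.0):
    """Project (x, y, z) onto a view plane at distance d from the center of
    projection; points farther away (larger z) map closer to the center."""
    return (x * d / z, y * d / z)

# The same 2-unit offset appears half as large at twice the depth:
print(perspective_project(2.0, 0.0, 4.0))  # (0.5, 0.0)
print(perspective_project(2.0, 0.0, 8.0))  # (0.25, 0.0)
```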
Orthographic Projection: It is the simplest kind of projection. In this, we take the top,
bottom, or side view of the object by extending parallel lines from the object.

5.3 Three Dimensional Models

The techniques for generating different images of a solid object depend upon the
type of object. Two kinds of information are used for representing three-dimensional
objects:

1. Geometry: It is concerned with measurements. A measurement is the location of
a point with respect to the origin, or a dimension of an object.
2. Topological Information: It is used for the structure of a solid object. It is mainly
concerned with the formation of polygons with the help of points of objects or
the creation of the object with polygons.

5.4 Three Dimensional Transformations

Geometric transformations play a vital role in generating images of three-dimensional
objects. With the help of these transformations, the location of objects relative to
others can be easily expressed. Sometimes the viewpoint changes rapidly, or objects
move in relation to each other; for this, a number of transformations may be carried
out repeatedly.

5.4.1 Translation

It is the movement of an object from one position to another. Translation is done using
translation vectors; there are three in 3D instead of two, one each in the x, y, and z
directions. Translation in the x-direction is represented by Tx, in the y-direction by Ty,
and in the z-direction by Tz.

If a point P having co-ordinates (x, y, z) is translated, then its coordinates after
translation will be (x1, y1, z1), where Tx, Ty, Tz are the translation vectors in the x, y,
and z directions respectively:

x1=x+ Tx

y1=y+Ty
z1=z+ Tz

Three-dimensional transformations are performed by transforming each vertex of the
object. If an object has five corners, the translation is accomplished by translating all
five points to new locations. Fig. 5.2 illustrates three-dimensional transformation,
showing the translation of a point and the translation of a cube.

Fig. 5.2 Illustrates A Three-Dimensional Transformations


5.4.2 Matrix for translation

Matrix representation of point translation: the point shown in the figure is (x, y, z); it
becomes (x1, y1, z1) after translation, where Tx, Ty, Tz are the translation vectors.

Example: A point has coordinates (5, 6, 7) in the x, y, z directions. The translation is
done in the x-direction by 3, in the y-direction by 3, and in the z-direction by 2. Shift
the object and find the coordinates of the new position.

Solution:
Co-ordinate of the point are (5, 6, 7)
Translation vector in x direction = 3
Translation vector in y direction = 3
Translation vector in z direction = 2
Translation matrix is

Multiply the co-ordinates of the point with the translation matrix:

[5 6 7 1] × T = [5+3  6+3  7+2  1] = [8  9  9  1]
x becomes x1 = 8
y becomes y1 = 9
z becomes z1 = 9
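The worked example can be reproduced in Python using the homogeneous row-vector convention (an illustrative sketch; names are mine):

```python
def translate(point, tx, ty, tz):
    """Multiply the row vector [x y z 1] by the 4x4 translation matrix."""
    T = [[1, 0, 0, 0],
         [0, 1, 0, 0],
         [0, 0, 1, 0],
         [tx, ty, tz, 1]]
    row = list(point) + [1]
    # Return only the x, y, z components of the product
    return tuple(sum(row[i] * T[i][j] for i in range(4)) for j in range(3))

print(translate((5, 6, 7), 3, 3, 2))  # (8, 9, 9)
```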

5.5 3D Scaling

Scaling is used to change the size of an object. The size can be increased or
decreased. For scaling, three factors are required: Sx, Sy, and Sz. Fig. 5.3 shows the
various forms of 3D scaling.

Sx=Scaling factor in x- direction

Sy=Scaling factor in y-direction

Sz=Scaling factor in z-direction

Fig. 5.3 Photo of A 3D Scaling Forms


5.5.1 Matrix for Scaling

Scaling of the object relative to a fixed point: the following steps are performed when
scaling an object about a fixed point (a, b, c):

1. Translate fixed point to the origin


2. Scale the object relative to the origin
3. Translate object back to its original position.

Note: If all scaling factors are equal, Sx = Sy = Sz, the scaling is called uniform. If scaling
is done with different scaling factors, it is called differential scaling.
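The three steps (translate the fixed point to the origin, scale, translate back) compose to new = fixed + S·(old − fixed), sketched below (names are mine):

```python
def scale_about(point, factors, fixed):
    """Scale `point` by (Sx, Sy, Sz) relative to the fixed point (a, b, c):
    translate the fixed point to the origin, scale, translate back."""
    return tuple(f + s * (p - f) for p, s, f in zip(point, factors, fixed))

# Uniform scaling (Sx = Sy = Sz = 2) about the fixed point (1, 1, 1):
print(scale_about((3, 3, 3), (2, 2, 2), (1, 1, 1)))  # (5, 5, 5)
```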

In Fig. (a) the point (a, b, c) is shown along with the object to be scaled; the steps are
shown in figs (b), (c), and (d). Fig. 5.4 shows the object and its scaling point, and
Fig. 5.5 shows the object shifts.

Fig 5.4 Illustration of an Object and its Scaling Point


Fig. 5.5 Illustration of the various forms of Object shifts

5.6 Rotation

It is the movement of an object through an angle. The movement can be
anticlockwise or clockwise. 3D rotation is complex compared with 2D rotation. In 2D
we describe only the angle of rotation, but in 3D both the angle of rotation and the
axis of rotation are required. The axis can be x, y, or z. Fig. 5.6 illustrates rotation in the
x-y-z axis planes.

Following figures shows rotation about x, y, z- axis


Fig. 5.6 Diagrams of the Various Angle Rotation within the X-Y-Z plane/

Following Fig. 5.7 show rotation of the object about the Y axis

Fig. 5.7 Diagram of the Rotation about Y-axis


Fig. 5.8 show rotation of the object about the Z axis.
Fig. 5.8 Diagram of the Rotation about Z-axis

5.7 Rotation about Arbitrary Axis

When the object is rotated about an axis that is not parallel to any one of the
co-ordinate axes (x, y, or z), additional transformations are required: first the axis must
be aligned with a coordinate axis, and afterwards the object is brought back to its
original position. The following steps are required:

1. Translate the object to the origin.

2. Rotate the object so that its axis coincides with one of the coordinate axes.
3. Perform the rotation about that co-ordinate axis.
4. Apply the inverse rotation to bring the rotation axis back to its original orientation.
5. Apply the inverse translation to bring the rotation axis back to its original position.

Photo in Fig. 5.9 shows the Rotation about the Y-axis in Clockwise motion.

Fig. 5.9 Diagram illustrating the Rotation of an object about Y-axis


Matrix for representing three-dimensional rotations about the Z axis

Matrix for representing three-dimensional rotations about the X axis

Matrix for representing three-dimensional rotations about the Y axis
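Since the matrices themselves appear only in the figures, the standard right-handed rotation matrices about the three axes can be sketched in Python (angles in radians; function names are mine):

```python
import math

def rot_z(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def rot_x(a):
    c, s = math.cos(a), math.sin(a)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def rot_y(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, 0, s], [0, 1, 0], [-s, 0, c]]

def apply(M, p):
    """Multiply a 3x3 matrix by a column vector (x, y, z)."""
    return tuple(sum(M[i][j] * p[j] for j in range(3)) for i in range(3))

# Rotating (1, 0, 0) by 90 degrees about the z-axis gives (0, 1, 0):
x, y, z = apply(rot_z(math.pi / 2), (1, 0, 0))
print(round(x, 6), round(y, 6), round(z, 6))  # 0.0 1.0 0.0
```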

Fig. 5.10 shows the original position of the object and its position after rotation about
the x-axis.

Fig. 5.10 Illustration of The Rotation About the X- Axis


For such transformations, composite transformations are required. All the above steps
are applied on points P' and P". Each step is explained using a separate figure.

Step1: Initial position of P' and P" is shown. Fig. 5.11 shows the Rotation of an object
about the arbitrary axis.

Fig 5.11 Illustration of the Rotation About Arbitrary Axis

Step2: Translate object P' to origin. Fig. 5.12 shows the translation of an object about
arbitrary axis.

Fig. 5.12 Illustration of The Translation About Arbitrary Axis

Step3: Rotate P" to z axis so that it aligns along the z-axis. Fig. 5.13 depicts the rotation
of an object about the arbitrary z-axis.
Fig. 5. 13 Illustrates The Rotation About Arbitrary Z-Axis

Step4: Rotate the object around the z-axis. The rotation of an object around the Z-axis is
illustrated in Fig. 5.14

Fig. 5. 14 Illustrates the Rotation Around Arbitrary Z-Axis

Step5: Rotate axis to the original position. Fig. 5.15 shows the axis rotation upon the
original positions.

Fig. 5.15 Illustrates how to Rotate Arbitrary Axis to the Original Position
Step6: Translate axis to the original position. Illustration of the axis transformation is
displayed in the Fig. 5.16.

Fig. 5. 16 Illustrates how to translate Arbitrary Axis to Original Position

5.8 Inverse Transformations

These are also called opposite transformations. If T is a translation matrix, then the
inverse translation is represented by T⁻¹. The inverse matrix is obtained using the
opposite sign.

Example 1: Translation and its inverse matrix

Translation matrix

Inverse translation matrix

Example 2: Rotation and its inverse matrix

Rotation matrix

Inverse rotation matrix

5.9 Reflection

It is also called the mirror image of an object. For this, a reflection axis and a reflection
plane are selected. Three-dimensional reflections are similar to two-dimensional ones;
reflection is through 180° about the given axis. For reflection, a plane is selected
(xy, xz or yz). The following matrices show reflection with respect to each of these
three planes. Fig. 5.17 shows the reflection relative to the XY plane.
5.9.1 Reflection relative to XY plane

Fig 5.17 Illustration of Reflection reflective to the XY Plane

Reflection
5.9.2 Reflection relative to YZ plane

Reflection

5.9.3 Reflection relative to ZX plane

Reflection
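Each reflection matrix simply negates one coordinate, which can be sketched in Python (function names are mine):

```python
def reflect_xy(p):
    """Mirror image relative to the XY plane: (x, y, z) -> (x, y, -z)."""
    x, y, z = p
    return (x, y, -z)

def reflect_yz(p):
    """Relative to the YZ plane: negate x."""
    x, y, z = p
    return (-x, y, z)

def reflect_zx(p):
    """Relative to the ZX plane: negate y."""
    x, y, z = p
    return (x, -y, z)

print(reflect_xy((1, 2, 3)))  # (1, 2, -3)
```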

5.10 Shearing

It is a change in the shape of the object, also called deformation. In 2D the change
can be in the x-direction, the y-direction, or both; if shear occurs in both directions,
the object is distorted. In 3D, shear can occur in three directions. Fig. 5.18 depicts
shear in the y direction.

5.10.1 Matrix for shear


Fig. 5.18 Illustration of the Shear in Y direction
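A shear offsets one coordinate in proportion to the others; a minimal Python sketch of a shear in the y-direction (the factor names shx and shz are my own):

```python
def shear_y(p, shx=0.0, shz=0.0):
    """Shear in the y-direction: y is offset in proportion to x and z."""
    x, y, z = p
    return (x, y + shx * x + shz * z, z)

print(shear_y((1, 2, 3), shx=0.5))  # (1, 2.5, 3)
```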
CHAPTER 6

PROJECTION

6.1 Projection

It is the process of converting a 3D object into a 2D representation. It is also defined
as the mapping or transformation of the object onto the projection plane or view
plane. The view plane is the display surface. Fig. 6.1 shows a block diagram of the
types of projection.

Fig. 6.1 Types of Projection

6.2 Perspective Projection

In perspective projection, the farther an object is from the viewer, the smaller it
appears. This property of the projection gives an idea of depth. Artists use perspective
projection for drawing three-dimensional scenes.

The two main characteristics of perspective are vanishing points and perspective
foreshortening. Due to foreshortening, objects and lengths appear smaller as their
distance from the center of projection increases.

6.3 Vanishing Point

It is the point where all lines will appear to meet. There can be one point, two point,
and three-point perspectives.

One Point: There is only one vanishing point as shown in fig (a)

Two Points: There are two vanishing points, one in the x-direction and the other in the
y-direction, as shown in fig (b).

Three Points: There are three vanishing points: one in the x-direction, the second in
the y-direction, and the third in the z-direction.

In perspective projection, the lines of projection do not remain parallel; they converge
at a single point called the center of projection. The projected image on the screen
is obtained from the points of intersection of the converging lines with the plane of
the screen. The image on the screen is seen as if the viewer's eye were located at the
center of projection, and the lines of projection correspond to the paths traveled by
light beams originating from the object.

6.4 Important terms related to perspective

1. View plane: It is the area of the world coordinate system onto which the scene is
projected, i.e., the viewing plane.
2. Center of Projection: It is the location of the eye on which the projected light rays
converge.
3. Projectors: Also called projection vectors, these are rays that start from the
object in the scene and are used to create an image of the object on the
viewing or view plane. Fig. 6.2 portrays the perspective projection views.
Fig. 6.2 Drawing of the Perspective Views of Projection

6.5 Anomalies in Perspective Projection

It introduces several anomalies due to these object shape and appearance gets
affected.
1. Perspective foreshortening: The size of the object will be small of its distance
from the center of projection increases.
2. Vanishing Point: All lines appear to meet at some point in the view plane.
3. Distortion of Lines: A range lies in front of the viewer to back of viewer is
appearing to six rollers.
Fig. 6.3 exhibits images of the Anomalies in Perspective views in Projection

Fig. 6.3 Drawing of The Anomalies in Perspective Projection

Foreshortening of the z-axis in fig (a) produces one vanishing point, P1. Foreshortening
the x- and z-axes results in two vanishing points in fig (b). Adding y-axis foreshortening
in fig (c) adds a vanishing point along the negative y-axis.
6.6 Parallel Projection

Parallel projection is used to display a picture in its true shape and size. When the
projectors are perpendicular to the view plane, it is called orthographic projection.
The parallel projection is formed by extending parallel lines from each vertex of the
object until they intersect the plane of the screen. The point of intersection is the
projection of the vertex.

Parallel projections are used by architects and engineers for creating working
drawings of an object; complete representations require two or more views of the
object using different planes. Fig. 6.4 shows the directions in parallel projection, and
Fig. 6.5 shows a block diagram of the types of parallel projection.
Fig. 6.4 Illustrations of the Directions in Parallel Projection

Fig. 6.5 Block Diagram of the Types of Parallel Projection


1. Isometric Projection: All projectors make equal angles with the principal axes;
generally the angle is 30°.
2. Dimetric: Projectors make equal angles with respect to two principal axes.
3. Trimetric: The direction of projection makes unequal angles with the three
principal axes.
4. Cavalier: All lines perpendicular to the projection plane are projected with no
change in length.
5. Cabinet: All lines perpendicular to the projection plane are projected to one
half of their length. This gives a more realistic appearance of the object.
Fig. 6.6 shows the process in an orthographic projection view, and Fig. 6.7 illustrates
the various dimensional views and angles in orthographic projections.
Fig. 6.6 Illustration of the Orthographic Projection
Fig. 6.7 Illustrations of the Projection Angles and points
CHAPTER 7

A-FRAME

7.1 Introduction

A-Frame is a web framework for building virtual reality (VR) experiences. A-Frame is
based on top of HTML, making it simple to get started. But A-Frame is not just a 3D
scene graph or a markup language; the core is a powerful entity-component
framework that provides a declarative, extensible, and composable structure to
three.js.

Fig. 7.1 Image of the A-Frame Web Image

Originally conceived within Mozilla and now maintained by the co-creators of
A-Frame within Supermedium, A-Frame was developed to be an easy yet powerful
way to develop VR content. As an independent open-source project, A-Frame has
grown into one of the largest VR communities.

A-Frame supports most VR headsets such as Vive, Rift, Windows Mixed Reality,
Daydream, GearVR, Cardboard, Oculus Go, and can even be used for augmented
reality. Although A-Frame supports the whole spectrum, A-Frame aims to define fully
immersive interactive VR experiences that go beyond basic 360° content, making full
use of positional tracking and controllers.
7.2 Getting Started

A-Frame can be developed from a plain HTML file without having to install anything.
A great way to try out A-Frame is to remix the starter example on Glitch, an online
code editor that instantly hosts and deploys for free. Alternatively, create an .html file
and include A-Frame in the <head>:

<html>

<head>

<script src="https://aframe.io/releases/1.2.0/aframe.min.js"></script>

</head>

<body>

<a-scene>

<a-box position="-1 0.5 -3" rotation="0 45 0" color="#4CC3D9"></a-box>

<a-sphere position="0 1.25 -5" radius="1.25" color="#EF2D5E"></a-sphere>

<a-cylinder position="1 0.75 -3" radius="0.5" height="1.5" color="#FFC65D"></a-cylinder>

<a-plane position="0 0 -4" rotation="-90 0 0" width="4" height="4" color="#7BC8A4"></a-plane>

<a-sky color="#ECECEC"></a-sky>

</a-scene>

</body>

</html>

7.3 Features

VR Made Simple: Just drop in a <script> tag and <a-scene>. A-Frame will handle
3D boilerplate, VR setup, and default controls. Nothing to install, no build steps.

Declarative HTML: HTML is easy to read, understand, and copy-and-paste. Being


based on top of HTML, A-Frame is accessible to everyone: web developers, VR
enthusiasts, artists, designers, educators, makers, kids.

Entity-Component Architecture: A-Frame is a powerful three.js framework,


providing a declarative, composable, reusable entity-component structure. HTML is
just the tip of the iceberg; developers have unlimited access to JavaScript, DOM APIs,
three.js, WebVR, and WebGL.

Cross-Platform VR: Build VR applications for Vive, Rift, Windows Mixed Reality,
Daydream, GearVR, and Cardboard with support for all respective controllers. Don’t
have a headset or controllers? No problem! A-Frame still works on standard desktop
and smartphones.
Performance: A-Frame is optimized from the ground up for WebVR. While A-Frame
uses the DOM, its elements don’t touch the browser layout engine. 3D object updates
are all done in memory with little garbage and overhead. The most interactive and
large scale WebVR applications have been done in A-Frame running smoothly at
90fps.

Visual Inspector: A-Frame provides a handy built-in visual 3D inspector. Open up


any A-Frame scene, hit <ctrl> + <alt> + i, and fly around to peek under the hood!

Components: Hit the ground running with A-Frame’s core components such as
geometries, materials, lights, animations, models, raycasters, shadows, positional
audio, text, and controls for most major headsets. Get even further from the hundreds
of community components including environment, state, particle systems, physics,
multiuser, oceans, teleportation, super hands, and augmented reality.

Proven and Scalable: A-Frame has been used by companies such as Google,
Disney, Samsung, Toyota, Ford, Chevrolet, Amnesty International, CERN, NPR, Al
Jazeera, The Washington Post, NASA. Companies such as Google, Microsoft, Oculus,
and Samsung have made contributions to A-Frame.

7.4 Software Requirements

▪ AMPPS (web server): AMPPS is a LAMP/MAMP/WAMP stack of Apache, MySQL,
PHP, Perl & Python.
▪ Apache
▪ MySQL
▪ PHP
▪ Perl & Python
▪ Notepad++ or Brackets IDE
▪ Web browsers: IE 9, Google Chrome 10+, Opera 10+, Safari 5+, Mozilla Firefox

7.5 Entity

A-Frame represents an entity via the <a-entity> element. As defined in the entity-
component-system pattern, entities are placeholder objects to which we plug in
components to provide them appearance, behavior, and functionality. In A-Frame,
entities are inherently attached with the position, rotation, and scale components.
E.g., consider the entity below. By itself, it has no appearance, behavior, or
functionality. It does nothing:

<a-entity>

We can attach components to it to make it render something or do something. To


give it shape and appearance, we can attach the geometry and material
components:

<a-entity geometry="primitive: box" material="color: red">

Or to make it emit light, we can further attach the light component:


<a-entity geometry="primitive: box" material="color: red"
light="type: point; intensity: 2.0">

Retrieving an Entity

We can simply retrieve an entity using DOM APIs.

<a-entity id="mario"></a-entity>
var el = document.querySelector('#mario');

Once we have an entity, we have access to its properties and methods detailed
below.
▪ Properties
▪ components

<a-entity>.components is an object of the components attached to the entity. This
gives us access to the entity's components, including each component's data, state,
and methods.

For example, if we wanted to grab an entity's three.js camera object or material
object, we could reach into its components:

var camera = document.querySelector('a-entity[camera]').components.camera.camera;
var material = document.querySelector('a-entity[material]').components.material.material;

Or if a component exposes an API, we can call its methods.
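For instance, A-Frame's built-in sound component exposes playback methods on its instance. As a sketch (assuming the scene contains an entity with a sound component attached):

```javascript
// Grab an entity that has the sound component attached
// (assumes such an entity exists in the current scene).
var el = document.querySelector('a-entity[sound]');

// Call a method the sound component exposes as part of its public API.
el.components.sound.pauseSound();
```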


7.5 Component

In the entity-component-system pattern, a component is a reusable and modular
chunk of data that we plug into an entity to add appearance, behavior, and/or
functionality.

In A-Frame, components modify entities which are 3D objects in the scene. We mix
and compose components together to build complex objects. They let us
encapsulate three.js and JavaScript code into modules that we can use declaratively
from HTML.

As an abstract analogy, if we define a smartphone as an entity, we might use
components to give it appearance (color, shape), to define its behavior (vibrate
when called, shut down on low battery), or to add functionality (camera, screen).

Components are roughly analogous to CSS. Like how CSS rules modify the
appearance of elements, component properties modify the appearance, behavior,
and functionality of entities.

7.5.1 Component HTML Form

A component holds a bucket of data in the form of one or more component
properties. Components use this data to modify entities. For an engine
component, we might define properties such as horsepower or cylinders.

HTML attributes represent component names and the value of those attributes
represent component data.
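To make this concrete, a hypothetical author-defined engine component (not a built-in A-Frame component) could appear in HTML as:

```html
<!-- `engine` is the component name; the attribute value is its data. -->
<a-entity engine="horsepower: 250; cylinders: 6"></a-entity>
```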

7.5.2 Single-Property Component

If a component is a single-property component, meaning its data consists of a
single value, then in HTML the component value looks like a normal HTML attribute:

<!-- `position` is the name of the position component. -->


<!-- `1 2 3` is the data of the position component. -->
<a-entity position="1 2 3"></a-entity>
7.5.3 Multi-Property Component

If a component is a multi-property component, meaning its data consists of
multiple properties and values, then in HTML the component value resembles
inline CSS styles:

<!-- `light` is the name of the light component. -->


<!-- The `type` property of the light is set to `point`. -->
<!-- The `color` property of the light is set to `crimson`. -->
<a-entity light="type: point; color: crimson"></a-entity>

7.5.4 Register a Component

AFRAME.registerComponent(name, definition)

Registers an A-Frame component. We must register components before we use
them anywhere in <a-scene>; in an HTML file, the component's script should
therefore be included before <a-scene>.
{string} name - Component name. The component's public API as represented
through an HTML attribute name.
{Object} definition - Component definition. Contains schema and lifecycle
handler methods.

// Registering component in foo-component.js
AFRAME.registerComponent('foo', {
schema: {},
init: function () {},
update: function () {},
tick: function () {},
remove: function () {},
pause: function () {},
play: function () {}
});
<!-- Usage of `foo` component. -->
<html>
<head>
<script src="aframe.min.js"></script>
<script src="foo-component.js"></script>
</head>
<body>
<a-scene>
<a-entity foo></a-entity>
</a-scene>
</body>
</html>

7.6 System

A system, in the entity-component-system pattern, provides global scope,
services, and management to classes of components. It provides public APIs
(methods and properties) for classes of components. A system can be
accessed through the scene element and can help components interface
with the global scene.
For example, the camera system manages all entities with the camera
component, controlling which camera is the active camera.

7.6.1 Registering a System

A system is registered similarly to a component. If the system name matches a
component name, then the component will have a reference to the system as
this.system:

AFRAME.registerSystem('my-component', {
schema: {}, // System schema. Parses into `this.data`.

init: function () {
// Called on scene initialization.
},

// Other handlers and methods.


});

AFRAME.registerComponent('my-component', {
init: function () {
console.log(this.system);
}
});
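Registered systems can also be looked up through the scene element's systems map. A minimal sketch, assuming an <a-scene> exists and the 'my-component' system has been registered:

```javascript
// Look up a registered system from the scene element.
var sceneEl = document.querySelector('a-scene');
var system = sceneEl.systems['my-component'];
```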

7.7 Scene

A scene is represented by the <a-scene> element. The scene is the global root
object, and all entities are contained within the scene.
The scene inherits from the Entity class so it inherits all of its properties, its
methods, the ability to attach components, and the behavior to wait for all of
its child nodes (e.g., <a-assets> and <a-entity>) to load before kicking off the
render loop.
<a-scene> handles all of the three.js and WebVR/WebXR boilerplate for us:
• Set up the canvas, renderer, and render loop
• Default camera and lights
• Set up webvr-polyfill, VREffect
• Add UI to Enter VR that calls the WebVR API
• Configure WebXR devices through the webxr system
Example:

<a-scene>
<a-assets>
<img id="texture" src="texture.png">
</a-assets>

<a-box src="#texture"></a-box>
</a-scene>
The scene has many properties; Table 7.1 lists a few with their descriptions.
Table 7.1 Scene Properties and Descriptions
Name        Description
behaviors   Array of components with tick methods that will be run on every frame
camera      Active three.js camera
canvas      Reference to the canvas element
isMobile    Whether or not the environment is detected to be mobile
object3D    THREE.Scene object

7.8 Asset Management System

A-Frame has an asset management system that allows us to place our assets
in one place and to preload and cache them for better performance. Note
that the asset management system is purely for preloading; assets set on
entities at runtime can simply point to their URLs directly.
Games and rich 3D experiences traditionally preload their assets, such as
models or textures, before rendering their scenes. This makes sure that assets
aren’t missing visually, and this is beneficial for performance to ensure scenes
don’t try to fetch assets while rendering.
We place assets within <a-assets>, and we place <a-assets> within <a-scene>.
Assets include:
• <a-asset-item> - Miscellaneous assets such as 3D models and materials
• <audio> - Sound files
• <img> - Image textures
• <video> - Video textures
The scene won’t render or initialize until the browser fetches (or errors out) all
the assets or the asset system reaches the timeout. E.g., we can define our
assets in <a-assets> and point to those assets from our entities using selectors:

<a-scene>
<!-- Asset management system. -->
<a-assets>
<a-asset-item id="horse-obj" src="horse.obj"></a-asset-item>
<a-asset-item id="horse-mtl" src="horse.mtl"></a-asset-item>
<a-mixin id="giant" scale="5 5 5"></a-mixin>
<audio id="neigh" src="neigh.mp3"></audio>
<img id="advertisement" src="ad.png">
<video id="kentucky-derby" src="derby.mp4"></video>
</a-assets>

<!-- Scene. -->
<a-plane src="#advertisement"></a-plane>
<a-sound src="#neigh"></a-sound>
<a-entity geometry="primitive: plane" material="src: #kentucky-derby"></a-entity>
<a-entity mixin="giant" obj-model="obj: #horse-obj; mtl: #horse-mtl"></a-entity>
</a-scene>

The scene and its entities will wait for every asset (up until the timeout) before
initializing and rendering.
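How long the scene waits can be adjusted with the timeout attribute on <a-assets>, specified in milliseconds:

```html
<!-- Wait up to 5 seconds for assets before initializing the scene. -->
<a-assets timeout="5000">
  <img id="texture" src="texture.png">
</a-assets>
```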
