Computer Graphics - Chapter 4 - 10


Chapter Four: Geometry and Line Generation

COMPUTER GRAPHICS
COURSE NUMBER: COSC3072
PREREQUISITE: COMPUTER PROGRAMMING (COSC 1012)

Compiled by: Kidane W.


Introduction

 A commonly used approach in computer graphics for the representation of (complex) objects is the modelling of their surfaces using basic geometric objects.
 Point plotting is done by converting a single coordinate position furnished by an application program into appropriate operations for the output device in use.
 Line drawing is done by calculating intermediate positions along the line path between two specified endpoint positions.
 The output device is then directed to fill in those positions between the endpoints with some color.
 For some devices, such as pen plotters or random-scan displays, a straight line can be drawn smoothly from one endpoint to the other.
 Digital devices display a straight line segment by plotting discrete points between the two endpoints.
 Discrete coordinate positions along the line path are calculated from the equation of the line.
 For a raster video display, the line intensity is loaded into the frame buffer at the corresponding pixel positions.
 Reading from the frame buffer, the video controller then plots the screen pixels.
 Screen locations are referenced with integer values, so plotted positions may only approximate actual line positions between two specified endpoints.
 For example, a line position of (12.36, 23.87) would be converted to pixel position (12, 24).
 This rounding of coordinate values to integers causes lines to be displayed with a stair-step appearance. The stair-step shape is noticeable in low-resolution systems, and we can improve its appearance somewhat by displaying lines on high-resolution systems. More effective techniques for smoothing raster lines are based on adjusting pixel intensities along the line paths.
 Pixel positions are referenced according to scan-line number and column number, as illustrated in the following figure.
(Figure: the stair-step effect produced when a line is generated as a series of pixel positions.)
Line Drawing Algorithms

 A line drawing algorithm is a graphical algorithm used to represent a line segment on discrete graphical media, i.e., printers and pixel-based displays.
 A line is defined by two endpoints; the point is the fundamental element of a line.
 Line drawing is fundamental to computer graphics.
 We must have fast and efficient line drawing functions.

Line Algorithm

 We can define a straight line with the help of the following equation: y = mx + c, where (x, y) is a point on the line, m is the slope of the line, and c is the y-intercept.
 The following algorithms are used for drawing a line:
1. DDA (Digital Differential Analyzer) Line Drawing Algorithm
2. Bresenham's Line Drawing Algorithm
DDA (Digital Differential Analyzer) Line Drawing Algorithm

 The Digital Differential Analyzer (DDA) Line Drawing Algorithm is a simple and efficient algorithm used to draw lines in computer graphics.
 It incrementally plots points along the line between two endpoints, calculating intermediate pixel positions based on the slope of the line.
 Key Concepts:
 Start Point: (x0, y0)
 End Point: (x1, y1)
 Slope: As we know, the general equation of a straight line is y = mx + c. The slope m between two points (x1, y1) and (x2, y2) is:
m = (y2 − y1) / (x2 − x1)
Now, considering one point (xk, yk) and the next point (xk+1, yk+1), the slope is:
m = (yk+1 − yk) / (xk+1 − xk)

Example: A line has a starting point (1, 7) and ending point (11, 17). Apply the Digital Differential Analyzer algorithm to plot the line.
Solution: We have two coordinates,
Starting Point = (x1, y1) = (1, 7)
Ending Point = (x2, y2) = (11, 17)

Step 1: First, we calculate Δx, Δy and m.
Δx = x2 − x1 = 11 − 1 = 10
Δy = y2 − y1 = 17 − 7 = 10
m = Δy/Δx = 10/10 = 1

Step 2: Now, we calculate the number of steps.
Δx = Δy = 10, so the number of steps = 10

Step 3: Calculate the increments, Xinc = Δx/steps = 1 and Yinc = Δy/steps = 1, then repeatedly add them to the current position, plotting each point, until we reach the endpoint of the line.

Step 4: Stop
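The steps above translate directly into code. Below is a minimal C sketch of the DDA algorithm, assuming a hypothetical setPixel(x, y) helper (not from the slides) that plots one pixel:

#include <math.h>
#include <stdlib.h>

/* Hypothetical pixel-plotting helper, e.g. wrapping glVertex2i in GL_POINTS mode. */
void setPixel(int x, int y);

void ddaLine(int x1, int y1, int x2, int y2)
{
    int dx = x2 - x1;
    int dy = y2 - y1;
    /* The number of steps is the larger of |dx| and |dy| (endpoints assumed distinct) */
    int steps = abs(dx) > abs(dy) ? abs(dx) : abs(dy);

    float xInc = dx / (float)steps;   /* increment per step along x */
    float yInc = dy / (float)steps;   /* increment per step along y */

    float x = (float)x1, y = (float)y1;
    for (int i = 0; i <= steps; i++) {
        setPixel((int)roundf(x), (int)roundf(y));   /* round to the nearest pixel */
        x += xInc;
        y += yInc;
    }
}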
Exercise: Apply the DDA algorithm for:
Start Point: (x0, y0) = (2, 3)
End Point: (x1, y1) = (10, 8)
Bresenham’s Line Drawing Algorithm

 This algorithm was introduced by "Jack Elton Bresenham" in 1962.
 It is a powerful, useful, and accurate method.
 We use incremental integer calculations to draw a line.
 The integer calculations involve only addition, subtraction, and multiplication by 2 (a bit shift).
 Unlike the DDA algorithm, which uses floating-point arithmetic, Bresenham's algorithm uses only integer arithmetic, making it much faster and suitable for real-time graphics applications.

Step 1: Start.
Step 2: Consider the starting point (x1, y1) and ending point (x2, y2).
Step 3: Calculate Δx, Δy and m:
Δx = x2 − x1
Δy = y2 − y1
m = Δy/Δx
Step 4: Calculate the initial decision parameter p0 with the following formula:
p0 = 2Δy − Δx
Step 5: The current coordinates of the line are (xk, yk), and the next coordinates are (xk+1, yk+1). There are two cases for the decision parameter pk:
Case 1: If pk < 0, then
pk+1 = pk + 2Δy
xk+1 = xk + 1
yk+1 = yk
Case 2: If pk ≥ 0, then
pk+1 = pk + 2(Δy − Δx)
xk+1 = xk + 1
yk+1 = yk + 1
Step 6: Repeat Step 5 until we reach the ending point of the line; the total number of iterations is Δx − 1.
Step 7: Stop.
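A minimal C sketch of these steps for the case 0 ≤ m ≤ 1 with x1 < x2, again assuming a hypothetical setPixel(x, y) helper:

void bresenhamLine(int x1, int y1, int x2, int y2)
{
    int dx = x2 - x1;
    int dy = y2 - y1;
    int p = 2 * dy - dx;   /* initial decision parameter p0 */
    int x = x1, y = y1;

    setPixel(x, y);
    while (x < x2) {
        x++;
        if (p < 0) {
            p += 2 * dy;          /* Case 1: keep the same y */
        } else {
            y++;
            p += 2 * (dy - dx);   /* Case 2: step diagonally */
        }
        setPixel(x, y);
    }
}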
Example: Bresenham's Line Drawing from point (2, 3) to (10, 8).
Bresenham’s Line Drawing Algorithm (Advantages and Disadvantages)

Advantages
 It is simple to implement because it uses only integers.
 It is quick and incremental.
 It is fast to apply because it avoids the floating-point arithmetic used by the Digital Differential Analyzer (DDA) algorithm.
 The pointing accuracy is higher than that of the DDA algorithm.
Disadvantages
 The Bresenham line drawing algorithm only helps to draw basic lines.
 The resulting line is not smooth (it shows the stair-step effect).
Line Thickness and Line Style

 In computer graphics, line thickness and line style algorithms are essential for enhancing the visual quality and representation of lines.
 These algorithms determine how lines are drawn with varying widths and patterns (such as dashed or dotted lines) on raster displays.
 The thickness of a line is defined by the number of pixels plotted around its central axis.

Midpoint Line Thickening Algorithm

 This algorithm generalizes Bresenham's line drawing algorithm by plotting multiple adjacent lines or pixels to create thickness.
 It works by displacing pixels orthogonally (at a 90-degree angle) from the line's original path to fill in extra thickness, as sketched after this list.

1. Draw the Core Line: Use Bresenham's or any other line algorithm to plot the central line.
2. For each point on the central line, plot additional pixels above and below, or to the left and right.
3. For shallow lines, the displacement is vertical (above/below); for steep lines, the displacement is horizontal (left/right).
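A minimal sketch of step 2 for a shallow line, assuming a hypothetical setPixel(x, y) helper and an odd thickness:

/* Plot a vertical span of pixels centered on (x, y) to thicken a shallow line. */
void plotThickPoint(int x, int y, int thickness)
{
    for (int k = -thickness / 2; k <= thickness / 2; k++)
        setPixel(x, y + k);   /* hypothetical pixel-plotting helper */
}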
Parallel Line Algorithm

 Instead of thickening the line by plotting extra pixels around a core line, this algorithm draws multiple parallel lines to simulate thickness.

1. Compute the Perpendicular Direction: Determine the direction perpendicular to the original line using the slope.
2. Draw Parallel Lines: Shift the line by incremental distances (depending on thickness) along the perpendicular direction and draw additional lines.

Line Style Algorithms

 Line style algorithms modify the appearance of lines by applying different patterns such as dashed, dotted, or combinations of both.

Dashed Line Algorithm
1. Determine Dash Length and Gap: Decide on the length of each dash and the gap between dashes.
2. Draw the Line in Segments: Use a line drawing algorithm to draw a segment (dash), then skip a corresponding gap before drawing the next segment.

function DashedLine(x1, y1, x2, y2, dashLength, gapLength):
    total = lineLength(x1, y1, x2, y2)
    currentLength = 0
    while currentLength < total:
        drawLineSegment(x1, y1, x2, y2, currentLength, dashLength)   // draw one dash starting at currentLength
        currentLength += dashLength + gapLength                      // skip the gap before the next dash
Dotted Line Algorithm

 A dotted line consists of individual points (or very short line segments) spaced at regular intervals.
1. Set Dot Spacing: Define the distance between dots (e.g., every few pixels).
2. Plot Dots: Plot individual points or very short line segments at regular intervals along the line.

The dotSpacing parameter defines how frequently the points are plotted: a dot is drawn every (dotSpacing + 1) pixels, so if dotSpacing = 3, every fourth pixel along the line is plotted.

function DottedLine(x1, y1, x2, y2, dotSpacing):
    pixelIndex = 0
    for each pixel (x, y) along the line:
        if pixelIndex % (dotSpacing + 1) == 0:   // plot only every (dotSpacing + 1)-th pixel
            plot(x, y)
        pixelIndex += 1
Plotting General Curves

 Plotting a general curve involves representing a mathematical function or equation visually on a coordinate plane. The function can take various forms, including polynomials, trigonometric functions, exponential functions, and more.
1. Define the Function.
2. Identify the range of x-values (domain) for which the function is defined.
3. Calculate the corresponding range of y-values (range).
4. Generate Points.
5. Connect Points.

Example (Plotting a Parabola):

1. Function: y = x^2
2. Domain: all real numbers
3. Range: y ≥ 0
4. Select a set of x-values, such as: -3, -2, -1, 0, 1, 2, 3.
5. Calculate the corresponding y-values: 9, 4, 1, 0, 1, 4, 9.
6. Connect the points using a smooth curve.
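As an illustration, a short OpenGL sketch (assuming an existing GLUT window with a suitable orthographic projection) that samples y = x^2 at small steps and connects the samples with GL_LINE_STRIP:

#include <GL/glut.h>

/* Sample y = x*x over [-3, 3] and connect the samples with a line strip. */
void drawParabola(void)
{
    glBegin(GL_LINE_STRIP);
    for (float x = -3.0f; x <= 3.0f; x += 0.05f)
        glVertex2f(x, x * x);   /* each sample becomes a vertex on the curve */
    glEnd();
}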
Example 2: Polyline

Circle Drawing Algorithms

 A circle is a closed two-dimensional figure consisting of all points in the plane that are equidistant from a given point called the "centre".
 The circle formula in the plane is given as:
(x − xc)² + (y − yc)² = r²
 where (xc, yc) is the centre of the circle and r is the radius.
 Alternatively, in polar coordinates we can write:
x = xc + r cos θ
y = yc + r sin θ
Circle Formulas

 Circumference: C = πd = 2πr
 Area of a circle: A = πr² (the area is the space occupied by the circle)

Circle Drawing Algorithms

 For example, suppose we want to draw a circle with (xc, yc) = (5, 5) and r = 10. We start with θ = 0° and compute x and y as:
x = 5 + 10 cos 0° = 15
y = 5 + 10 sin 0° = 5
 Therefore we plot (15, 5). Next, we increase θ to 5°:
x = 5 + 10 cos 5° = 14.96
y = 5 + 10 sin 5° = 5.87
 Therefore we plot (15, 6).
 This process continues until we have plotted the entire circle (i.e., θ = 360°).
Bresenham Circle Generating Algorithm

 Drawing a circle on the screen is a little more complex than drawing a line.
 There are two popular algorithms for generating a circle: Bresenham's Algorithm and the Midpoint Circle Algorithm.
 These algorithms are based on the idea of determining the subsequent points required to draw the circle.
 The algorithm uses the 8-way symmetry of the circle to reduce the number of points calculated.

Step 1 − Initialization: Get the coordinates of the center of the circle and the radius, and store them in x, y, and R respectively.
1. Set x = 0, y = R (starting at the top of the circle).
2. Set the decision parameter d = 3 − 2R.
3. Plot the first point and its symmetric points in all 8 octants.
Step 2 − Iterate Over Points:
1. For each point, check the value of d (the decision parameter).
2. If d < 0, the next point is horizontally to the right: (x+1, y). Update the decision parameter: d = d + 4x + 6.
3. If d ≥ 0, the next point is diagonally down and to the right: (x+1, y−1). Update the decision parameter: d = d + 4(x − y) + 10.
4. Continue plotting symmetric points in all 8 octants.
Step 3 − Termination: Repeat until x > y; the algorithm stops when x exceeds y, as the circle is then complete.
function BresenhamCircle(x_centre, y_centre, radius):
    x = 0
    y = radius
    d = 3 - 2 * radius                      // initial decision parameter

    // Plot the first set of points in all octants
    PlotCirclePoints(x_centre, y_centre, x, y)

    // Loop while x <= y
    while x <= y:
        x = x + 1
        // Check the decision parameter and update accordingly
        if d < 0:
            d = d + 4 * x + 6               // move to (x + 1, y)
        else:
            y = y - 1
            d = d + 4 * (x - y) + 10        // move to (x + 1, y - 1)
        // Plot points for all 8 octants
        PlotCirclePoints(x_centre, y_centre, x, y)

function PlotCirclePoints(x_centre, y_centre, x, y):
    // Using symmetry to plot in 8 octants
    plot(x_centre + x, y_centre + y)
    plot(x_centre - x, y_centre + y)
    plot(x_centre + x, y_centre - y)
    plot(x_centre - x, y_centre - y)
    plot(x_centre + y, y_centre + x)
    plot(x_centre - y, y_centre + x)
    plot(x_centre + y, y_centre - x)
    plot(x_centre - y, y_centre - x)
Midpoint Algorithm

 The midpoint algorithm takes advantage of the symmetry property of circles to produce a more efficient algorithm for drawing circles. It works in a similar way to Bresenham's line-drawing algorithm, in that it formulates a decision variable that can be computed using integer operations only.
 The Midpoint Circle Drawing Algorithm is an efficient algorithm used to plot the points of a circle in computer graphics.

1. Initialization: The algorithm starts at the top of the circle (i.e., (0, r)) and iteratively steps along the perimeter of the circle.
2. Decision Parameter: determines whether the next point moves horizontally or diagonally:

# Update the decision parameter
if p < 0:
    p_next = p + 2 * x + 1
else:
    p_next = p + 2 * x + 1 - 2 * y

3. Since circles are symmetric, the algorithm only calculates points for one-eighth of the circle and reflects these points across all octants.
4. The process continues until the point reaches the x > y diagonal.
Midpoint Circle Drawing Algorithm

void circleMidpoint(int xCenter, int yCenter, int radius)
{
    int x = 0;
    int y = radius;
    int f = 1 - radius;   /* initial decision parameter */

    circlePlotPoints(xCenter, yCenter, x, y);
    while (x < y) {
        x++;
        if (f < 0)
            f += 2 * x + 1;
        else {
            y--;
            f += 2 * (x - y) + 1;
        }
        circlePlotPoints(xCenter, yCenter, x, y);
    }
}

void circlePlotPoints(int xCenter, int yCenter, int x, int y)
{
    setPixel(xCenter + x, yCenter + y);
    setPixel(xCenter - x, yCenter + y);
    setPixel(xCenter + x, yCenter - y);
    setPixel(xCenter - x, yCenter - y);
    setPixel(xCenter + y, yCenter + x);
    setPixel(xCenter - y, yCenter + x);
    setPixel(xCenter + y, yCenter - x);
    setPixel(xCenter - y, yCenter - x);
}
For example, given a circle of radius r = 10, centered at the origin, the steps are:

 First, compute the initial decision variable and plot the starting point:
Plot (x0, y0) = (0, r) = (0, 10)
p0 = 1 − r = −9
 Iteration 0: p0 < 0, so
Plot (x1, y1) = (x0 + 1, y0) = (1, 10)
p1 = p0 + 2x1 + 1 = −9 + 3 = −6

OpenGL Circle Example
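A minimal sketch of drawing a circle in OpenGL using the midpoint loop above, assuming a GLUT window and a 2D orthographic projection are already set up (the plotting is done with GL_POINTS):

#include <GL/glut.h>

/* Plot the 8 symmetric points of (x, y) about the centre as GL_POINTS vertices. */
static void plotOctants(int xc, int yc, int x, int y)
{
    glVertex2i(xc + x, yc + y); glVertex2i(xc - x, yc + y);
    glVertex2i(xc + x, yc - y); glVertex2i(xc - x, yc - y);
    glVertex2i(xc + y, yc + x); glVertex2i(xc - y, yc + x);
    glVertex2i(xc + y, yc - x); glVertex2i(xc - y, yc - x);
}

void drawCircle(int xc, int yc, int r)
{
    int x = 0, y = r;
    int f = 1 - r;                  /* initial decision parameter */

    glBegin(GL_POINTS);
    plotOctants(xc, yc, x, y);
    while (x < y) {
        x++;
        if (f < 0)
            f += 2 * x + 1;
        else {
            y--;
            f += 2 * (x - y) + 1;
        }
        plotOctants(xc, yc, x, y);
    }
    glEnd();
}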
Polygon Plotting

 A polygon algorithm generally refers to methods used to draw, analyze, or manipulate polygons. This includes basic algorithms for plotting polygons, calculating their properties (e.g., area, perimeter), determining whether a point lies inside a polygon, and performing geometric operations.

Algorithm

Input: A set of vertices [(x1, y1), (x2, y2), ..., (xn, yn)] defining the polygon.
1. Start with the first vertex (x1, y1).
2. For i = 1 to n−1: draw a line from (xi, yi) to (xi+1, yi+1).
3. Connect the last vertex (xn, yn) back to the first vertex (x1, y1) to close the polygon.
4. Optionally fill the polygon if needed.
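In OpenGL, the closing edge in step 3 comes for free with GL_LINE_LOOP. A minimal sketch, assuming an existing window and projection:

/* Draw the outline of a polygon given n vertices; GL_LINE_LOOP closes it automatically. */
void drawPolygonOutline(const float vx[], const float vy[], int n)
{
    glBegin(GL_LINE_LOOP);
    for (int i = 0; i < n; i++)
        glVertex2f(vx[i], vy[i]);
    glEnd();
}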
Filling

 Filling a polygon means determining and coloring all the pixels inside a polygon. One of the most common algorithms used for this is the Scan-line Fill Algorithm, which is efficient and widely used in computer graphics.
 The Scan-line Fill Algorithm works by drawing horizontal lines (scanlines) across the polygon and determining the range of pixels that are inside the polygon on each scanline.
 It uses the edges of the polygon to compute where the scanlines intersect the polygon's boundaries, and then fills in between those intersections.

Algorithm

Input: A polygon defined by vertices [(x1, y1), (x2, y2), ..., (xn, yn)].
1. Sort the vertices of the polygon by their y-coordinates.
2. Initialize the Edge Table and fill it with the polygon's edges.
3. For each scanline from the bottom y-coordinate to the top y-coordinate:
   Find the intersection points of the scanline with the polygon edges.
   Sort the intersection points by their x-coordinates.
   Fill the pixels between each pair of intersection points.
4. Repeat for all scanlines to fill the polygon.
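A simplified C sketch of steps 3-4 (intersection gathering, sorting, and pair-filling), assuming a hypothetical setPixel(x, y) helper and at most 64 intersections per scanline:

#include <math.h>

void setPixel(int x, int y);   /* hypothetical pixel-plotting helper */

void scanlineFill(const float vx[], const float vy[], int n, int yMin, int yMax)
{
    for (int y = yMin; y <= yMax; y++) {
        float xs[64];
        int count = 0;

        /* Find where this scanline crosses each polygon edge
           (half-open test so shared vertices are counted once) */
        for (int i = 0; i < n; i++) {
            int j = (i + 1) % n;
            float y1 = vy[i], y2 = vy[j];
            if ((y >= y1 && y < y2) || (y >= y2 && y < y1)) {
                float t = (y - y1) / (y2 - y1);
                xs[count++] = vx[i] + t * (vx[j] - vx[i]);
            }
        }

        /* Sort the intersections by x (simple insertion sort) */
        for (int a = 1; a < count; a++)
            for (int b = a; b > 0 && xs[b] < xs[b - 1]; b--) {
                float tmp = xs[b]; xs[b] = xs[b - 1]; xs[b - 1] = tmp;
            }

        /* Fill between each pair of intersections */
        for (int p = 0; p + 1 < count; p += 2)
            for (int x = (int)ceilf(xs[p]); x <= (int)floorf(xs[p + 1]); x++)
                setPixel(x, y);
    }
}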
Polygon Examples
Text and Characters in Computer Graphics

 Text and characters play a significant role in computer graphics, especially in user interfaces, digital art, games, and other visual media.
 They provide essential information and enhance the visual experience.

Text Representation

 Bitmap Fonts:
 Bitmap fonts are stored as an array of pixels for each character. This means that the characters are rendered directly from a pixel grid instead of vector graphics, making them useful for applications that require fixed-size fonts or low-resolution displays.
 Advantages: Simple and efficient for displaying text at fixed sizes.
 Disadvantages: Lack scalability; resizing can lead to pixelation.
Text Representation

 Vector Fonts:
 Vector fonts are defined by mathematical equations and curves (e.g., Bézier curves).
 Advantages: Scalable to any size without loss of quality; smooth rendering.
 Examples: TrueType, OpenType.

 Unicode:
 A character encoding standard that represents characters from most of the world's writing systems.
 Advantages: Supports multiple languages and symbols, enabling internationalization of software.
Text Rendering Techniques

 Rasterization Process: Converting vector text into a bitmap image suitable for display on a raster device.
 Techniques:
 Font Smoothing: Techniques like anti-aliasing to improve the appearance of text edges.
 Text Shading: Adding shadows or gradients to enhance readability.
 Texture Mapping: Applying a bitmap font as a texture to a surface in a 3D scene.

OpenGL Text Rendering
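A minimal sketch of bitmap text rendering with GLUT; glutBitmapCharacter draws one character at the current raster position (window setup assumed):

#include <GL/glut.h>
#include <string.h>

/* Draw a string at (x, y) using a GLUT bitmap font. */
void renderText(float x, float y, const char *text)
{
    glRasterPos2f(x, y);   /* set the start position for the bitmap characters */
    for (size_t i = 0; i < strlen(text); i++)
        glutBitmapCharacter(GLUT_BITMAP_HELVETICA_18, text[i]);
}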
End of Chapter 4

Chapter Five: Geometrical Transformations

COMPUTER GRAPHICS
COURSE NUMBER: COSC3072
PREREQUISITE: COMPUTER PROGRAMMING (COSC 1012)

Compiled by: Kidane W.


Chapter Contents
Introduction
Mathematical Foundations
Transformation
Transformation in Practice (OpenGL)
Introduction

 Transformation means changing some graphics into something else by applying rules.
 Transformations are operations that alter the position, size, and orientation of geometric objects in a coordinate system.
 Transformations play an important role in computer graphics to reposition the graphics on the screen and change their size or orientation.
 Importance: Essential for modeling, animation, and rendering in computer graphics.
 Applications: Video games (moving characters), simulations (object interactions), CAD (designing).

Homogeneous Coordinates

 The Cartesian coordinate system is a coordinate system that uses two or more perpendicular axes to specify the location of a point in space. The point where the x-axis and y-axis intersect is called the origin, denoted (0, 0).
 To perform a sequence of transformations, such as translation followed by rotation and scaling, we need to follow a sequential process:
1. Translate the coordinates,
2. Rotate the translated coordinates, and then
3. Scale the rotated coordinates to complete the composite transformation.
 To shorten this process, we use a 3×3 transformation matrix instead of a 2×2 transformation matrix. To convert a 2×2 matrix to a 3×3 matrix, we add an extra dummy coordinate W.
Homogeneous Coordinates

 A point (x, y) in Cartesian coordinates is represented as (x, y) → (wx, wy, w), where w ≠ 0. If w = 1, the point is represented as (x, y, 1).
 To represent points and transformations using homogeneous coordinates, you extend the traditional Cartesian coordinate system by adding an additional dimension.
 In 2D, a point in Cartesian coordinates (x, y) is represented in homogeneous coordinates as (x, y) → (x, y, w), where:
 x and y are the original Cartesian coordinates;
 w is a non-zero scalar value that allows for the representation of points at infinity when set to 0. Commonly, w is set to 1 for finite points.
 Conversion between Cartesian and homogeneous coordinates: to convert from homogeneous coordinates back to Cartesian coordinates, divide the other components by the last component w.
 Example (finite point): the point (2, 3) in Cartesian coordinates can be represented as (2, 3, 1).
Advantages of Using Homogeneous Coordinates
 Unified Representation: All transformations, including translation,
scaling, rotation, and shearing, can be represented uniformly using
matrix multiplications.
 Single Matrix Multiplication: You can apply a series of transformations
with a single matrix multiplication, simplifying the computational
process.
 Perspective Projection: Easily handle perspective projections,
enabling depth representation and vanishing points.
 Efficiency: More efficient computations in graphics rendering,
especially for real-time applications.
 Simpler Interpolations: Allows for smooth transitions and easier
implementation of key-frame animations.
Geometrical Transformations

 Geometrical transformations are fundamental to computer graphics as they allow the manipulation of objects in a scene, such as moving, scaling, rotating, or skewing objects.
 Now, let's dive into matrices. Think of matrices as a grid of numbers arranged in rows and columns.
 A transformation matrix is a special type of matrix used to perform operations such as shifting, rotating, or scaling objects in a coordinate system.
Linear Algebra – Vectors and Matrices

 Understanding vectors and matrices is fundamental for representing and processing image data.
 Vectors: a vector is a one-dimensional array of values. Vectors can represent various types of data, including pixel intensities, image coordinates, feature descriptors, and more.
 Matrices are two-dimensional arrays of values, consisting of rows and columns.
 Operations like addition, subtraction, multiplication, and transformation are essential for image manipulation.

Matrix

 Perhaps the most universal tools of graphics programs are the matrices that change or transform points and vectors.
 The identity matrix contains all zeros, except for ones along the diagonal.
 Any point or matrix multiplied by the identity matrix is unchanged.
Matrix

 Matrix multiplication is a bit trickier. You multiply rows of the first matrix by columns of the second matrix, and then add the results. For matrix multiplication to work, the number of columns in the first matrix must match the number of rows in the second matrix.

 Addition/Subtraction: Add/subtract corresponding elements.
 Scalar Multiplication: Multiply every element by a scalar (a number).
 Matrix Multiplication: Multiply rows by columns and sum the products.
 Transpose: Flip rows and columns.
 Identity Matrix: Acts like "1" in matrix multiplication.
 Inverse: Undoes the effect of a matrix.

Matrix * Matrix

A = | a  b |    B = | x  y |
    | c  d |        | z  w |

A * B = | ax + bz   ay + bw |
        | cx + dz   cy + dw |

Does A*B = B*A? No.
What does the identity do? A·I = A:
| a  b | · | 1  0 | = | a  b |
| c  d |   | 0  1 |   | c  d |

To multiply two matrices, the number of columns in the first matrix must equal the number of rows in the second matrix. If matrix A has dimensions m×n (m rows and n columns) and matrix B has dimensions n×p (n rows and p columns), the resulting matrix C will have dimensions m×p.
Identity Matrix

 An identity matrix is a special type of square matrix that plays a similar role to the number 1 in arithmetic.
 An identity matrix is a square matrix (having the same number of rows and columns), in which all the elements of the main diagonal (from the top-left to the bottom-right) are 1, and all other elements are 0.
 Multiplicative Identity: When any matrix A of size n×n is multiplied by the identity matrix of the same size In, the matrix A remains unchanged.

Transformation Matrices

 Transformation matrices are typically used with homogeneous coordinates, which enable uniform treatment of all transformations, including translation, which wouldn't otherwise be possible using traditional Cartesian coordinates.
 In a 4×4 transformation matrix:
• The upper-left 3×3 part represents rotation and scaling.
• The right-most column represents translation.
• The bottom row handles perspective transformations (usually 0, 0, 0, 1 for standard transformations).

Types of Transformations

1. Translation: Moving an object without altering its shape.
2. Rotation: Rotating an object around a pivot point.
3. Scaling: Changing the size of an object while maintaining its proportions.
4. Reflection: Flipping an object over a specified axis.
5. Shearing: Distorting an object along a specific direction.
2D Translation:
| x' |   | 1  0  tx |   | x |
| y' | = | 0  1  ty | · | y |,    P' = T(tx, ty) · P
| 1  |   | 0  0  1  |   | 1 |

2D Rotation:
| x' |   | cos θ  −sin θ  0 |   | x |
| y' | = | sin θ   cos θ  0 | · | y |,    P' = R(θ) · P
| 1  |   |   0       0    1 |   | 1 |

2D Scaling:
| x' |   | Sx  0   0 |   | x |
| y' | = | 0   Sy  0 | · | y |,    P' = S(Sx, Sy) · P
| 1  |   | 0   0   1 |   | 1 |
Why Matrix Multiplication?

 While it may seem that adding transformation values could be a simpler approach, using matrix multiplication for transformations offers significant advantages, particularly in computer graphics and linear algebra.
 Here are the key reasons why we use matrices for transformations instead of simple addition:
1. Combining Transformations
2. Order Matters
3. Unified Representation
4. Single Calculation
Translation

 Translation involves shifting an object from one location to another without changing its shape, size, or orientation.
 In normal Cartesian coordinates, translation (moving an object by a certain amount) can't be easily represented by matrix multiplication.
 However, in homogeneous coordinates, it becomes straightforward. For example, if we want to translate a point (x, y) by (tx, ty):

x' = x + tx,    y' = y + ty

P = | x |    P' = | x' |    T = | tx |
    | y |         | y' |        | ty |

P' = P + T

 The translation matrix is used to move items from one location to another.
Scaling

 Scaling alters the size of an object while maintaining its proportions. It can be uniform (same scale factor for both axes) or non-uniform (different scale factors for each axis).

x' = x · sx,    y' = y · sy

| x' |   | sx  0  |   | x |
| y' | = | 0   sy | · | y |

P' = S · P

2D Rotation

 Rotation by an angle θ about a pivot (rotation) point (xr, yr):

x' = xr + (x − xr) cos θ − (y − yr) sin θ
y' = yr + (x − xr) sin θ + (y − yr) cos θ

P' = Pr + R · (P − Pr),  where R = | cos θ  −sin θ |
                                   | sin θ   cos θ |
Rotations in 3D

 View looking down the x-axis: the x coordinate is unchanged by rotation about the x-axis.
 Likewise, the z coordinate is unchanged by rotation about the z-axis.
Reflection

 Reflection is a vital geometric transformation in both 2D and 3D space, allowing for the mirroring of points over specific lines or planes.
 The reflection of a point (x, y) over the x-axis results in the point (x, −y).
 The reflection of a point (x, y) over the y-axis results in the point (−x, y).
 The reflection of a point (x, y) over the line y = x results in the point (y, x).
(Figure: original image on the left; image after horizontal reflection on the right.)

Reflection in 3D

 Using homogeneous coordinates in 3D, a point (x, y, z) is represented as (x, y, z, 1).
1. Reflection over the XY-plane transforms (x, y, z) to (x, y, −z).
2. Reflection over the XZ-plane transforms (x, y, z) to (x, −y, z).
3. Reflection over the YZ-plane transforms (x, y, z) to (−x, y, z).

Shearing

 Shearing is a transformation that shifts each point of an object by a distance proportional to its distance from a specific axis.
 The result is a slanted version of the object, while preserving its area and the parallelism of lines.
 In 2D, shearing transforms points by shifting them along the x-axis or y-axis, depending on the shear factor.
Transformation in Practice: OpenGL

 glTranslatef(tx, ty, tz);   // tx, ty, and tz are translation amounts
 glScalef(sx, sy, sz);       // sx, sy, and sz are scaling factors
 glRotatef(angle, x, y, z);  // e.g., glRotatef(45.0, 0.0, 0.0, 1.0); rotates 45° about the z-axis
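A sketch of how these calls combine in a display callback (OpenGL multiplies each call onto the current matrix, so the last call issued is applied to the vertices first; drawObject is a hypothetical routine):

void display(void)
{
    glClear(GL_COLOR_BUFFER_BIT);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();

    glTranslatef(1.0f, 0.5f, 0.0f);      /* finally, move the result to (1.0, 0.5) */
    glRotatef(45.0f, 0.0f, 0.0f, 1.0f);  /* then rotate 45 degrees about the z-axis */
    glScalef(2.0f, 2.0f, 1.0f);          /* the object is scaled first */
    drawObject();                        /* hypothetical drawing routine */

    glFlush();
}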
gluLookAt(eyeX, eyeY, eyeZ, centerX, centerY, centerZ, upX, upY, upZ)

Parameters:
• eyeX, eyeY, eyeZ: The position of the camera or the viewer in 3D space (the eye point).
• centerX, centerY, centerZ: The point in 3D space that the camera is looking at (the target point or center).
• upX, upY, upZ: The direction that defines "up" for the camera. This is typically the positive Y-axis (0, 1, 0) but can vary depending on the desired orientation.
Transforms in Professional Software
● Godot: https://godotengine.org/
● Unity: https://unity.com/
● Unreal: https://www.unrealengine.com/en-US
● Blender: https://www.blender.org/
● GLM: https://github.com/g-truc/glm
Classroom Exercise

1. Given a point P(3, 4), translate it by tx = 5 and ty = −2. What is the new position of the point, and what is its representation in homogeneous coordinates (HC)?
2. Scale a point P(2, 3) by factors sx = 2 and sy = 3. What is the new position of the point and its representation in HC?
3. Rotate the point P(1, 0) by 90° counterclockwise about the origin. What is the new position of the point?
4. Apply an x-shear with a shear factor of 1.5 to the point P(3, 2). What is the new position of the point?
5. Reflect the point P(2, −3) about the x-axis. What is the new position of the point?
Chapter Six: State Management and Drawing Geometric Objects
COMPUTER GRAPHICS
COURSE NUMBER: COSC3072
PREREQUISITE: COMPUTER PROGRAMMING (COSC 1012)

Compiled by: Kidane W.


Outline

 Clear the window to an arbitrary color
 Force any pending drawing to complete
 Draw geometric primitive points, lines, and polygons
 Turn states on and off and query state variables
 Control the display of primitives
 Specify normal vectors
 Use vertex arrays
 Save and restore several state variables at once.
Introduction

 Although you can draw complex and interesting pictures using OpenGL, they're all constructed from a small number of primitive graphical items. This shouldn't be too surprising - look at what Leonardo da Vinci accomplished with just pencils and paintbrushes.
 At the highest level of abstraction, there are three basic drawing operations: clearing the window, drawing a geometric object, and drawing a raster object.
 In computer graphics, state management and drawing geometric objects are foundational concepts, especially when working with OpenGL.
 OpenGL is a state machine, meaning its operations are determined by its current "state" settings.
 OpenGL can be thought of as a switchboard with numerous switches (states) that control how graphics are processed. Each state can have multiple configurations, leading to a vast number of possible states.
Types of States:

1. Functional States: These include states that enable or disable certain features (e.g., blending, depth testing). They are generally more expensive to change and should be managed carefully to avoid performance hits.
2. Value States: These are states that hold specific values (e.g., color, texture parameters) and can be changed more frequently without significant performance penalties.

One effective way to manage states is through the use of state blocks. A state block encapsulates a set of states that can be applied before rendering an object.
Introduction

 In the previous sections, you saw an example of a state variable, the current RGBA color, and how it can be associated with a primitive. OpenGL maintains many states and state variables.
 An object may be rendered with lighting, texturing, hidden surface removal, fog, or some other states affecting its appearance.
 By default, most of these states are initially inactive. These states may be costly to activate; for example, turning on texture mapping will almost certainly slow down the speed of rendering a primitive. However, the quality of the image will improve and look more realistic, due to the enhanced graphics capabilities.
 To turn many of these states on and off, use these two simple commands:
❑ void glEnable(GLenum cap);
❑ void glDisable(GLenum cap);
 You can also check whether a state is currently enabled or disabled:
❑ GLboolean glIsEnabled(GLenum capability);
Returns GL_TRUE or GL_FALSE, depending upon whether the queried capability is currently activated.
 glEnable() turns on a capability, and glDisable() turns it off. More than 60 enumerated values can be passed as parameters to glEnable() or glDisable().
 Some examples are GL_BLEND (which controls blending of RGBA values), GL_DEPTH_TEST (which controls depth comparisons and updates to the depth buffer), GL_FOG (which controls fog), GL_LINE_STIPPLE (patterned lines), and GL_LIGHTING (controlling lighting effects).
 The states you have just seen have two settings: on and off.
 However, most OpenGL routines set values for more complicated state variables.
 For example, the routine glColor3f() sets three values, which are part of the GL_CURRENT_COLOR state.
OpenGL State Management

 OpenGL state management involves setting parameters that control how graphics are rendered. These parameters include:
 Transformations,
 Colors,
 Lighting,
 Textures, and
 Buffers.
 State management in OpenGL is crucial for controlling how graphics are rendered. OpenGL is a state machine, meaning that the behavior of the rendering pipeline is determined by its state.
 OpenGL maintains an internal state that influences how objects are drawn and displayed on the screen.

Clearing the Window

 Drawing on a computer screen is different from drawing on paper in that the paper starts out white, and all you have to do is draw the picture. On a computer, the memory holding the picture is usually filled with the last picture you drew, so you typically need to clear it to some background color before you start to draw the new scene.
Commands to Clear the Window

1. Set the Clear Color: Use glClearColor to specify the color that will fill the window when you clear it. The color is set with four components: red, green, blue, and alpha (transparency), each ranging from 0.0 to 1.0.
2. Clear the Color Buffer: Call glClear with GL_COLOR_BUFFER_BIT to clear the color buffer using the color set in glClearColor.

// Set the clear color to light blue (R: 0.53, G: 0.81, B: 0.92, Alpha: 1.0)
glClearColor(0.53f, 0.81f, 0.92f, 1.0f);

// Clear the color buffer - clears the screen to the specified color.
glClear(GL_COLOR_BUFFER_BIT);

// Swap buffers if using double buffering (for smooth rendering)
glutSwapBuffers(); // Only needed if using GLUT; otherwise use an equivalent.
QA?

 As an example, these lines of code clear an RGBA mode window to black:

glClearColor(0.0, 0.0, 0.0, 0.0);
glClear(GL_COLOR_BUFFER_BIT);

 The single parameter to glClear() indicates which buffers are to be cleared.
 At this point, you might be wondering why we keep talking about clearing the window - why not just draw a rectangle of the appropriate color that's large enough to cover the entire window?
 On many machines, the graphics hardware consists of multiple buffers in addition to the buffer containing the colors of the pixels that are displayed. A command to clear a window can be much more efficient than a general-purpose drawing command.
Viewport State

 Function: glViewport(x, y, width, height)
 Description: Defines the drawable area of the window. It maps normalized device coordinates to window coordinates.

glViewport(0, 0, windowWidth, windowHeight);

glViewport()

 glViewport() adjusts the pixel rectangle for drawing to be the entire new window.
 The next three routines adjust the coordinate system for drawing so that the lower-left corner is (0, 0) and the upper-right corner is (w, h):

void reshape(int w, int h)
{
    glViewport(0, 0, (GLsizei) w, (GLsizei) h);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluOrtho2D(0.0, (GLdouble) w, 0.0, (GLdouble) h);
}

glutReshapeFunc(reshape);
Clear Depth

glClearColor(0.0, 0.0, 0.0, 0.0);
glClearDepth(1.0);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

 In this case, the call to glClearColor() is the same as before, the glClearDepth() command specifies the value to which every pixel of the depth buffer is to be set, and the parameter to the glClear() command now consists of the bitwise OR of all the buffers to be cleared.
 By default, glClearDepth is set to 1.0, which is the maximum depth value in the normalized depth range [0, 1], with 0 representing the near clipping plane and 1 representing the far clipping plane.
 Function: glEnable(GL_DEPTH_TEST). Description: Enables depth testing, which ensures that closer objects obscure farther ones.
 Hardware that doesn't support simultaneous clears performs them sequentially.
Specify Colour

 Coloring, lighting, and shading are all large topics with entire chapters or large sections devoted to them.
 With OpenGL, the description of the shape of an object being drawn is independent of the description of its color.
 To set a color, use the command glColor3f().
 The following kind of sequence draws objects A and B in red, and object C in blue; the command on the fourth line that sets the current color to green is wasted.
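A sketch of that sequence (the drawObject calls are hypothetical placeholders):

glColor3f(1.0, 0.0, 0.0);   /* set current color to red */
drawObjectA();              /* drawn in red */
drawObjectB();              /* still red - the color state persists */
glColor3f(0.0, 1.0, 0.0);   /* green - wasted: overwritten before any drawing */
glColor3f(0.0, 0.0, 1.0);   /* set current color to blue */
drawObjectC();              /* drawn in blue */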
Force Any Pending Drawing to Complete

 To ensure that all pending drawing commands in OpenGL complete, you can use the following functions:
❑ glFlush: This function forces the execution of any pending OpenGL commands in the pipeline. It ensures that all commands will complete "as soon as possible" without waiting for the graphics hardware to finish.
❑ glFinish: This function waits until all OpenGL commands have been fully executed, including those in the graphics hardware. It is a stricter operation than glFlush, ensuring that every operation has completed before moving forward.
 glFlush is commonly used when rendering to single-buffered contexts where immediate drawing updates are required.
 Use glFinish when you need all commands to complete before proceeding, especially in time-sensitive applications where you cannot afford partial renders or incomplete frames.
Draw Geometric Primitives: Points, Lines, and Polygons

 In OpenGL, you can draw various geometric primitives such as points, lines, and polygons using different primitive types.

glPointSize(5.0f);            // Set point size
glBegin(GL_POINTS);           // Start specifying points
glVertex2f(-0.5f, -0.5f);     // Specify point at (-0.5, -0.5)
glVertex2f(0.5f, 0.5f);       // Specify point at (0.5, 0.5)
glEnd();

With smooth (anti-aliased) points:

glPointSize(5.0f);                        // Set point size
glEnable(GL_POINT_SMOOTH);                // Enable smooth points
glHint(GL_POINT_SMOOTH_HINT, GL_NICEST);  // Set best quality for point smoothing
glBegin(GL_POINTS);
glVertex2f(0.0f, 0.0f);
glVertex2f(0.5f, 0.5f);
glEnd();

 The geometry is specified by vertices.
 There are ten primitive types.
 glVertexAttrib* functions are used to specify vertex attributes for vertex shaders in modern OpenGL.
 Edge flag: A Boolean value (GL_TRUE or GL_FALSE). When GL_TRUE, the edge is included. When GL_FALSE, the edge is excluded.
Control Line Primitives

 Line Width: Set line thickness using glLineWidth.
 Line Smoothing: Enable anti-aliasing with GL_LINE_SMOOTH.

glLineWidth(2.0f);                        // Set line width
glEnable(GL_LINE_SMOOTH);                 // Enable line smoothing
glHint(GL_LINE_SMOOTH_HINT, GL_NICEST);

glBegin(GL_LINES);
glVertex2f(-0.5f, -0.5f);
glVertex2f(0.5f, 0.5f);
glEnd();

 When GL_LINE_SMOOTH is enabled, OpenGL uses anti-aliasing techniques to blend the colors of the line edges with the background.
 This makes lines appear smoother, which is especially useful for diagonal or curved lines. It's particularly beneficial in applications like CAD, computer graphics education, or any visual context where precision and smoothness matter.

glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

 These commands enable blending, which is necessary for smoothing lines in OpenGL.
 Blending allows for partially transparent pixels, helping create a smoother appearance along the line edges.
Polygon Front Facing

 In a completely enclosed surface constructed from opaque polygons with a consistent orientation, none of the back-facing polygons are ever visible.

void glFrontFace(GLenum mode);

 Controls how front-facing polygons are determined.
 By default, mode is GL_CCW, which corresponds to a counterclockwise orientation of the ordered vertices of a projected polygon in window coordinates.
 If mode is GL_CW, faces with a clockwise orientation are considered front-facing.

Culling

void glCullFace(GLenum mode);

 Indicates which polygons should be discarded (culled) before they're converted to screen coordinates.
 The mode is either GL_FRONT, GL_BACK, or GL_FRONT_AND_BACK to indicate front-facing, back-facing, or all polygons.
 To take effect, culling must be enabled using glEnable() with GL_CULL_FACE; it can be disabled with glDisable() and the same argument.

Polygon Stippling

void glPolygonStipple(const GLubyte *mask);

 Defines the current stipple pattern for filled polygons.
 The argument mask is a pointer to a 32 × 32 bitmap that's interpreted as a mask of 0s and 1s. Where a 1 appears, the corresponding pixel in the polygon is drawn, and where a 0 appears, nothing is drawn.
Polygon Mode

void glPolygonMode(GLenum face, GLenum mode);

 Controls the drawing mode for a polygon's front and back faces.
 The parameter face specifies which polygons will be affected and can be:
• GL_FRONT_AND_BACK,
• GL_FRONT, or
• GL_BACK.
 Mode can be GL_POINT, GL_LINE, or GL_FILL to indicate whether the polygon should be drawn as points, outlined, or filled. By default, both the front and back faces are drawn filled.
 Fill Mode (GL_FILL): Renders the polygons as solid shapes. This is the default mode and is used for standard rendering.
 Wireframe Mode (GL_LINE): Shows only the edges of polygons, useful for viewing the structure of models or for debugging.
 Point Mode (GL_POINT): Shows only the vertices of polygons as points. This can be helpful for identifying vertices and checking for errors in vertex placement.
 Important Considerations:
 Performance: Wireframe and point modes can improve rendering performance, particularly when testing or visualizing complex models.
 Shader Compatibility: If you are using shaders, ensure the shader handles line and point modes properly, as some shader programs may assume solid fills by default.
Control the Display of Polygons

 Polygon Mode: Control how polygons are drawn using glPolygonMode.
 GL_FILL: Solid-filled polygons (default).
 GL_LINE: Wireframe mode for polygons.
 GL_POINT: Only polygon vertices are drawn as points.

glPolygonMode(GL_FRONT_AND_BACK, GL_LINE);   // Draw polygons in wireframe mode
glBegin(GL_TRIANGLES);
glVertex3f(-0.5f, -0.5f, 0.0f);
glVertex3f(0.5f, -0.5f, 0.0f);
glVertex3f(0.0f, 0.5f, 0.0f);
glEnd();

 Since rectangles are so common in graphics applications, OpenGL provides a filled-rectangle drawing primitive, glRect*():
void glRect*(TYPE x1, TYPE y1, TYPE x2, TYPE y2);
 You can draw a rectangle as a polygon, as described in "OpenGL Geometric Drawing Primitives" on page 47, but your particular implementation of OpenGL might have optimized glRect*() for rectangles.
Steps to Implement 2D Texture Mapping

1. Load the texture image and create an OpenGL texture.
2. Enable and configure texturing settings.
3. Bind the texture and map it to a polygon (e.g., a quad).
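A minimal sketch of these three steps in legacy OpenGL, assuming imageData is an RGB pixel array already loaded into memory (width × height, 8 bits per channel are assumptions here):

GLuint texId;

/* 1. Create an OpenGL texture from the loaded image data */
glGenTextures(1, &texId);
glBindTexture(GL_TEXTURE_2D, texId);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0,
             GL_RGB, GL_UNSIGNED_BYTE, imageData);

/* 2. Configure filtering and enable texturing */
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glEnable(GL_TEXTURE_2D);

/* 3. Map the texture onto a quad with texture coordinates */
glBegin(GL_QUADS);
glTexCoord2f(0.0f, 0.0f); glVertex2f(-0.5f, -0.5f);
glTexCoord2f(1.0f, 0.0f); glVertex2f( 0.5f, -0.5f);
glTexCoord2f(1.0f, 1.0f); glVertex2f( 0.5f,  0.5f);
glTexCoord2f(0.0f, 1.0f); glVertex2f(-0.5f,  0.5f);
glEnd();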
Normal Vectors

 A normal vector (or normal, for short) is a vector that points in a direction that's perpendicular to a surface.
 In graphics and computational geometry, a normal vector is essential because it defines how a surface interacts with light and is key for calculating lighting and shading on 3D objects.
 With OpenGL, you can specify a normal for each polygon or for each vertex.
 Vertices of the same polygon might:
1. share the same normal (for a flat surface), or
2. have different normals (for a curved surface).

Properties of Normal Vectors

 In OpenGL and computer graphics, normal vectors are essential for accurately simulating lighting and shading on surfaces. A normal vector is perpendicular to a surface or a vertex and is crucial in determining how light interacts with that surface. By defining the direction each surface faces, normal vectors help to create realistic lighting effects, such as highlights, shadows, and reflections.
 Direction: The normal vector points perpendicularly away from the surface it represents.
 Unit Length: Typically, normal vectors are normalized to unit length (length of 1), as this simplifies lighting calculations and ensures consistent lighting results.
 Perpendicularity: For polygons like triangles and quads, the normal vector is perpendicular to the plane formed by the vertices of the polygon.

// Define a normal for a face pointing in the positive Z direction
glNormal3f(0.0f, 0.0f, 1.0f);
Normal Vector Computations

 Step 1: Define the Vertices.
To calculate the normal vector for a triangle, we need the three vertices of the triangle, which we denote V0, V1, and V2. Let's define the vertices in 3D space:
• V0 = (x0, y0, z0)
• V1 = (x1, y1, z1)
• V2 = (x2, y2, z2)

 Step 2: Create Edge Vectors.
From these vertices, we can create two edge vectors:
Edge vector u = V1 − V0 = (x1 − x0, y1 − y0, z1 − z0)
Edge vector v = V2 − V0 = (x2 − x0, y2 − y0, z2 − z0)

 Step 3: Calculate the Cross Product.
The normal vector N can be found by taking the cross product of the edge vectors u and v: N = u × v

 Step 4: Normalize the Normal Vector.
The resulting normal vector N should be normalized to have unit length. The normalization process involves dividing the vector by its length:
Calculate the length (magnitude) of the vector: |N| = √(Nx² + Ny² + Nz²)
Normalize N: N̂ = N / |N|
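A small C sketch of steps 2-4 (cross product followed by normalization); the Vec3 type is introduced here for illustration, and a non-degenerate triangle is assumed:

#include <math.h>

typedef struct { float x, y, z; } Vec3;

/* Compute the unit normal of the triangle (v0, v1, v2). */
Vec3 triangleNormal(Vec3 v0, Vec3 v1, Vec3 v2)
{
    /* Step 2: edge vectors u = v1 - v0 and v = v2 - v0 */
    Vec3 u = { v1.x - v0.x, v1.y - v0.y, v1.z - v0.z };
    Vec3 v = { v2.x - v0.x, v2.y - v0.y, v2.z - v0.z };

    /* Step 3: cross product n = u x v */
    Vec3 n = {
        u.y * v.z - u.z * v.y,
        u.z * v.x - u.x * v.z,
        u.x * v.y - u.y * v.x
    };

    /* Step 4: normalize to unit length */
    float len = sqrtf(n.x * n.x + n.y * n.y + n.z * n.z);
    n.x /= len; n.y /= len; n.z /= len;
    return n;
}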
Vertex Arrays

 Drawing a 20-sided polygon requires at least 22 function calls: one call to glBegin(), one call for each of the vertices, and a final call to glEnd().
 OpenGL has vertex array routines that allow you to specify a lot of vertex-related data with just a few arrays and to access that data with equally few function calls.
 Using vertex array routines, all 20 vertices in a 20-sided polygon can be put into one array and drawn with one function call.

Vertex Array Workflow

1. Enable Vertex Arrays: Specify what types of data (vertices, colors, normals) you want to use.
2. Specify the Array Data: Define the array data (e.g., positions, colors) in memory.
3. Draw Arrays: Issue a draw call to render the vertices using the data from the arrays.
4. Disable Vertex Arrays: Optionally, disable the vertex arrays to return to immediate mode or another configuration.

Enable Client State

 void glEnableClientState(GLenum array)
 Specifies the array to enable. The symbolic constants
❑ GL_VERTEX_ARRAY, GL_COLOR_ARRAY,
❑ GL_SECONDARY_COLOR_ARRAY, GL_INDEX_ARRAY,
❑ GL_NORMAL_ARRAY, GL_FOG_COORD_ARRAY,
❑ GL_TEXTURE_COORD_ARRAY, and
❑ GL_EDGE_FLAG_ARRAY are acceptable parameters.
Specifying Data for the Arrays

 void glVertexPointer(GLint size, GLenum type, GLsizei stride, const GLvoid *pointer);
 Specifies where spatial coordinate data can be accessed.
1. size is the number of coordinates per vertex, which must be 2, 3, or 4.
2. type specifies the data type (GL_SHORT, GL_INT, GL_FLOAT, or GL_DOUBLE) of each coordinate in the array.
3. stride is the byte offset between consecutive vertices. If stride is 0, the vertices are understood to be tightly packed in the array.
4. pointer is the memory address of the first coordinate of the first vertex in the array.
glDrawArrays

void glDrawArrays(GLenum mode, GLint first, GLsizei count);

 Renders primitives (e.g., triangles) using data from the enabled arrays.
1. mode: Specifies the type of primitive to render (e.g., GL_TRIANGLES).
2. first: Starting index in the enabled arrays.
3. count: Number of vertices to process.
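Putting the workflow together, a minimal sketch that draws one triangle from a vertex array (the array contents are illustrative):

GLfloat vertices[] = {
    -0.5f, -0.5f,    /* vertex 0 */
     0.5f, -0.5f,    /* vertex 1 */
     0.0f,  0.5f     /* vertex 2 */
};

glEnableClientState(GL_VERTEX_ARRAY);        /* 1. enable the vertex array */
glVertexPointer(2, GL_FLOAT, 0, vertices);   /* 2. size 2, tightly packed floats */
glDrawArrays(GL_TRIANGLES, 0, 3);            /* 3. draw 3 vertices as one triangle */
glDisableClientState(GL_VERTEX_ARRAY);       /* 4. disable when done */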
To access the other seven arrays, there are seven similar routines:

 void glColorPointer(GLint size, GLenum type, GLsizei stride, const GLvoid *pointer);
 void glSecondaryColorPointer(GLint size, GLenum type, GLsizei stride, const GLvoid *pointer);
 void glIndexPointer(GLenum type, GLsizei stride, const GLvoid *pointer);
 void glNormalPointer(GLenum type, GLsizei stride, const GLvoid *pointer);
 void glFogCoordPointer(GLenum type, GLsizei stride, const GLvoid *pointer);
 void glTexCoordPointer(GLint size, GLenum type, GLsizei stride, const GLvoid *pointer);
 void glEdgeFlagPointer(GLsizei stride, const GLvoid *pointer);

Notes:
 In OpenGL, the edge flag is used to define which edges of a polygon should be considered boundaries for features like wireframe rendering or some custom rendering effects.
 The idea behind fog is to create a sense of depth by making distant objects appear less clear.
Turn States On and Off and Query State Variables

 OpenGL offers functions to enable or disable specific states, like lighting, depth testing, and blending.
 The primary functions for this are glEnable and glDisable.

glEnable(GL_DEPTH_TEST);    // Enable depth testing (for 3D rendering)
glDisable(GL_DEPTH_TEST);   // Disable depth testing

if (glIsEnabled(GL_DEPTH_TEST)) {
    printf("Depth test is enabled.\n");
} else {
    printf("Depth test is disabled.\n");
}

Querying state values:

GLint viewport[4];
glGetIntegerv(GL_VIEWPORT, viewport);
printf("Viewport position and size: x = %d, y = %d, width = %d, height = %d\n",
       viewport[0], viewport[1], viewport[2], viewport[3]);

GLfloat clearColor[4];
glGetFloatv(GL_COLOR_CLEAR_VALUE, clearColor);
printf("Clear color: R = %f, G = %f, B = %f, A = %f\n",
       clearColor[0], clearColor[1], clearColor[2], clearColor[3]);

if (!glIsEnabled(GL_DEPTH_TEST)) {
    glEnable(GL_DEPTH_TEST);
    printf("Depth test was disabled. Now enabled.\n");
}

 GL_DEPTH_TEST: Enables depth testing, which ensures closer objects obscure those behind them.
Store and Restore States

 This state management provides flexibility in OpenGL, enabling different visual effects and rendering configurations without permanently altering the rendering state.
 Using glPushAttrib and glPopAttrib for State Management:
 OpenGL also offers a mechanism to save and restore state using attribute stacks, which can save multiple states at once.
 glPushAttrib stores a set of states, and glPopAttrib restores them.
 GL_COLOR_BUFFER_BIT: Saves the color buffer state.
 GL_DEPTH_BUFFER_BIT: Saves the depth buffer state.
 GL_ENABLE_BIT: Saves all enabled or disabled capabilities.
 GL_LIGHTING_BIT: Saves lighting state, including enabled lights and material properties.
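A minimal sketch of the save/draw/restore pattern (drawTransparentObject is a hypothetical routine):

glPushAttrib(GL_ENABLE_BIT | GL_COLOR_BUFFER_BIT);   /* save current enable + color state */

glEnable(GL_BLEND);                                  /* temporary state for this object */
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
drawTransparentObject();                             /* hypothetical drawing routine */

glPopAttrib();                                       /* restore the saved state */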
Performance Considerations

 Minimize State Changes:
 Frequent changes to states can lead to performance degradation.
 It's beneficial to sort objects by their state requirements before rendering to reduce the number of state changes.
 Batch Rendering:
 Grouping objects that share the same states can significantly improve rendering performance.
 This is often done by sorting objects based on their state blocks during the rendering process.
Chapter 7/8/9: Representing 3D Objects, Lighting, Color Models & Image Formats
COMPUTER GRAPHICS
COURSE NUMBER: COSC3072
PREREQUISITE: COMPUTER PROGRAMMING (COSC 1012)

Compiled by: Kidane W.


Introduction

 3D object representation is a way to digitally describe a physical object's geometry, texture, and sometimes even physical properties, for use in computer graphics, simulations, and virtual reality.
 Several methods can be used to represent 3D objects, each with specific benefits and applications.

Modelling Using Polygons

 Modeling using polygons is a foundational technique in computer graphics to create 3D shapes and surfaces.
 Polygons, especially triangles and quads, are used to approximate complex shapes by connecting vertices to form the surfaces of objects.

Key Concepts in Polygonal Modeling

1. Vertices: Points in 3D space, defined by their coordinates (x, y, z), that serve as the corners of polygons.
2. Edges: Lines connecting two vertices.
3. Polygons (Faces): Closed shapes formed by connecting multiple edges, typically triangles or quads.
4. Mesh: A collection of vertices, edges, and polygons that represent the surface of a 3D object.
High poly 3D model?

 High poly models have a large number of polygons, typically resulting in a highly detailed and smooth appearance.
 High poly 3D models are hyper-detailed and ultra-realistic 3D replicas of real-world objects.
 Often used in film, animation, and high-quality renders where visual fidelity is prioritized and performance constraints are less critical.

Low poly 3D model?

 Low poly models use fewer polygons, creating a simpler and often less detailed look.
 A low poly 3D model is a 3D asset with a comparatively lower poly count than high poly models; it is easy to edit, load, and view because it is lightweight.
 Commonly used in video games, virtual reality, mobile applications, and web-based 3D environments where performance and quick load times are critical.
Representation

 The object is stored using three tables: a vertex table, an edge table, and a polygon (surface) table.

Polygon Meshes

 Advantages
❑ They can be used to model almost any object.
❑ They are easy to represent as a collection of vertices.
❑ They are easy to transform.
❑ They are easy to draw on a computer screen.
 Disadvantages
❑ Curved surfaces can only be approximately described.
❑ It is difficult to simulate some types of objects like hair or liquid.
Modelling Techniques

 Box Modeling:
 Start with a simple box (or cube) and refine its shape by adding detail and manipulating its vertices, edges, and faces.
 Use Case: Ideal for creating complex shapes from a basic form, commonly used in character modeling.

 Edge Loop Modeling:
 Uses loops of edges around key areas (like eyes or mouths on a character) to help with deformation in animation.
 Helps maintain clean topology for smooth, natural movement in animations.
 Process:
1. Edge loops are added and placed strategically to define contours and maintain structure.
2. By focusing on loops, you can control the mesh density in specific areas, making it easier to model detailed features, such as eyes, mouth, and joints.
3. This technique is often used alongside edge loops and edge rings to control the topology of the model.

 Subdivision Surface Modeling:
 Increases the number of polygons by subdividing faces, creating a smoother surface.
 Averages vertex positions for a more natural, organic shape.
 Popular in character modeling and other organic shapes.
GLUT 3D Models

 GLUT (OpenGL Utility Toolkit) provides a set of predefined geometric shapes that you can render in either solid or wireframe forms.
 Two main categories:
❑ Wireframe Models
❑ Solid Models
 Basic Shapes:
❑ Cube: glutWireCube(), glutSolidCube()
❑ Cone: glutWireCone(), glutSolidCone()
❑ Sphere, Torus, Tetrahedron
 More advanced shapes:
❑ Octahedron (8 faces), Dodecahedron (12), Icosahedron (20)
❑ Teapot (symbolic)

Predefined Geometric Shapes

 Teapot - Solid: glutSolidTeapot(GLdouble size); Wireframe: glutWireTeapot(GLdouble size)
 Cube - Solid: glutSolidCube(GLdouble size); Wireframe: glutWireCube(GLdouble size)
 Sphere - Solid: glutSolidSphere(GLdouble radius, GLint slices, GLint stacks); Wireframe: glutWireSphere(GLdouble radius, GLint slices, GLint stacks)
 Cone - Solid: glutSolidCone(GLdouble base_radius, GLdouble height, GLint slices, GLint stacks); Wireframe: glutWireCone(GLdouble base, GLdouble height, GLint slices, GLint stacks)
 Cylinder - Solid: glutSolidCylinder(GLdouble radius, GLdouble height, GLint slices, GLint stacks); Wireframe: glutWireCylinder(GLdouble radius, GLdouble height, GLint slices, GLint stacks)
 Dodecahedron - Solid: glutSolidDodecahedron(); Wireframe: glutWireDodecahedron(). A dodecahedron is a polyhedron with 12 flat faces, each a regular pentagon.
 Octahedron - Solid: glutSolidOctahedron(); Wireframe: glutWireOctahedron(). An octahedron has 8 triangular faces.
 Tetrahedron - Solid: glutSolidTetrahedron(); Wireframe: glutWireTetrahedron(). A tetrahedron has 4 triangular faces.
 Icosahedron - Solid: glutSolidIcosahedron(); Wireframe: glutWireIcosahedron(). An icosahedron has 20 triangular faces.
#include <GL/glut.h>

void display() {
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glLoadIdentity();

    // Draw a solid teapot
    glColor3f(0.0, 0.5, 0.0);
    glutSolidTeapot(1.0);

    // Move and draw a wireframe cube
    glPushMatrix();
    glTranslatef(2.0, 0.0, 0.0);
    glColor3f(1.0, 0.0, 0.0);
    glutWireCube(1.5);
    glPopMatrix();

    glutSwapBuffers();
}

int main(int argc, char** argv) {
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH);
    glutInitWindowSize(800, 600);
    glutCreateWindow("GLUT Solid and Wireframe Objects");
    glEnable(GL_DEPTH_TEST);
    glutDisplayFunc(display);
    glutMainLoop();
    return 0;
}
3D modeling tools along with their links 154
Blender: A comprehensive open-source 3D creation suite
supporting modeling, rigging, animation, simulation,
rendering, compositing, and motion tracking:
https://www.blender.org/download/
Tinkercad: A user-friendly, browser-based 3D design and
modeling tool ideal for beginners:
https://www.tinkercad.com/
Coordinate System 155
COORDINATE SYSTEMS IN OPENGL
 In OpenGL, switching between different coordinate systems typically involves manipulating the:
❑ model,
❑ view, and
❑ projection matrices
to define different spaces.
1. Object (Local) Coordinates: Coordinates relative to an object’s local origin.
2. World Coordinates: Object coordinates transformed by modeling matrices to place objects in the scene.
3. View (Camera/Eye) Coordinates: World coordinates transformed by the view matrix to position the camera’s point of view.
4. Clip Coordinates: View coordinates transformed by the projection matrix, defining the 3D volume seen by the camera.
5. Normalized Device Coordinates (NDC): Clip coordinates transformed into a cube ranging from -1 to 1 on all axes.
6. Screen Coordinates: NDC coordinates mapped to the 2D screen space.
156
COORDINATE SYSTEM / DESCRIPTION / TRANSFORMATION / EXAMPLE

Local (Model) Space: Describes the object's coordinates relative to its origin, before any transformations. Transformation: Model Transformation. Example: glVertex3f(-0.5f, 0.5f, 0.0f);

World Space: Positions the object in a larger scene, transforming from local space to its place in the world. Transformation: Translation, Rotation, Scaling. Example: glTranslatef(2.0f, 0.0f, 0.0f);

View (Camera) Space: Adjusts the scene to the camera’s perspective, simulating the viewpoint of the viewer. Transformation: gluLookAt or equivalent rotation. Example: gluLookAt(0.0, 0.0, 3.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0);

Clip Space: Transforms coordinates into the canonical viewing volume for visibility determination (range [-1, 1]). Transformation: Projection Transformation. Example: glOrtho(-1.0, 1.0, -1.0, 1.0, 1.0, -1.0);

Normalized Device Coordinates (NDC): Scales clip space to fit the device screen, mapping coordinates to [-1, 1] in each axis. Transformation: Perspective Division. Example: handled internally by OpenGL.

Screen Space: Maps NDC to pixel positions on the screen. This is what is actually rendered. Transformation: Viewport Transformation. Example: glViewport(0, 0, width, height);
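Spelling out the last two rows as formulas (standard pipeline arithmetic, for reference): perspective division maps clip coordinates to NDC, and the viewport transform then maps NDC to pixels for a viewport at (x0, y0) of size width x height:

x_ndc = x_clip / w_clip,  y_ndc = y_clip / w_clip
x_screen = x0 + (x_ndc + 1) / 2 * width
y_screen = y0 + (y_ndc + 1) / 2 * height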
Projection Types: Perspective and Orthographic 157
Perspective Projection
❑ Mimics human vision, where objects appear smaller as they get farther
from the viewer.
❑ Mathematics: Objects are scaled based on distance from the viewpoint.
❑ Usage: Applied in realistic scenes (e.g., video games, simulations).
❑ gluPerspective( fieldOfViewAngle, aspect, near, far );
Typical field-of-view values are in the range 30 to 60 degrees.
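For a non-square window, the aspect argument should track the window's width-to-height ratio. A typical GLUT reshape callback (our own illustration under that assumption, registered with glutReshapeFunc(reshape)) looks like this:

void reshape(int w, int h) {
    glViewport(0, 0, w, h); // map NDC to the resized window
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPerspective(45.0, (double)w / (h ? h : 1), 1.0, 100.0); // same FOV, updated aspect
    glMatrixMode(GL_MODELVIEW);
}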
158
Projection Types: Orthographic 159
 Orthographic Projection
❑ Objects maintain their actual sizes regardless of distance, appearing parallel to the viewer.
❑ Usage: Useful in technical drawings and CAD, where size consistency is needed.
❑ glOrtho( xmin, xmax, ymin, ymax, near, far );
Parameters:
• left: minimum x we see
• right: maximum x we see
• bottom: minimum y we see
• top: maximum y we see
• -near: minimum z we see. Yes, this is -1 times near, so a negative input means positive z.
• -far: maximum z we see. Also negated.
Both camera types involve two essential matrices: 160
1. View Matrix:
 Represents the camera's position, orientation, and target.
 It transforms the world to the camera's local space.
2. Projection Matrix:
 Defines the type of camera (orthographic or perspective) and how objects are projected onto the screen.

GL_MODELVIEW: Handles transformations that affect model and view (camera) coordinates.
GL_PROJECTION: Defines the projection matrix, transitioning from view space to clip space (either perspective or orthographic).
163
void setupCamera()
{
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPerspective(45.0, 1.0, 1.0, 100.0);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    gluLookAt(0.0, 1.5, 5.0, 0.0, 1.0, 0.0, 0.0, 1.0, 0.0);
}
glMatrixMode(GL_PROJECTION); // Sets the current matrix mode to the projection matrix.
// The projection matrix controls how objects are projected onto the screen (perspective or orthographic).

glLoadIdentity(); // Resets the projection matrix to an identity matrix. This clears any previous
// transformations, so the following projection transformations start fresh.

gluPerspective: This function defines a perspective projection matrix. In this context, it sets the viewing volume parameters:
• 45.0: Field of view angle in the y-direction (vertical FOV) in degrees. Here, the total vertical viewing angle is 45 degrees (22.5 degrees above and below the center of the view).
• 1.0: Aspect ratio (width/height) of the viewing volume. Here, 1.0 sets a square aspect ratio.
• 1.0: Near clipping plane. Objects closer than this distance from the camera are not visible.
• 100.0: Far clipping plane. Objects farther than this distance from the camera are also not visible.

glMatrixMode(GL_MODELVIEW); // Switches to the model-view matrix, which controls object
// transformations (translations, rotations, scaling) and camera positioning.

gluLookAt(
    0.0, 1.5, 5.0, // Camera Position
    0.0, 1.0, 0.0, // Look-at Point (Target)
    0.0, 1.0, 0.0  // Up Vector
);
The up vector (x, y, z) = (0.0, 1.0, 0.0) means that the positive y-axis is considered "up" for the camera. This is used to orient the camera properly in 3D space.
In computer graphics, cameras
are the windows through which
we view 3D scenes.
164
Non-Polygonal Representations 165
 Non-polygonal representations in OpenGL are techniques used to create shapes or visuals that aren't based on polygons (like triangles or quads).
These techniques are especially useful for
rendering objects that need to appear smooth
or natural, like clouds, fire, or liquids, without the
hard edges that polygons produce.
Read More 166
 Particle Systems: Used for effects like fire, smoke, and water droplets. Each "particle" is rendered as a point or small texture that can be blended with others for a smooth, dynamic effect.
 Volumetric Rendering: Useful for representing fog, clouds, and other effects with depth. This technique often involves ray marching or slicing a 3D volume and blending layers to achieve a smooth, volumetric appearance.
 Implicit Surfaces (Metaballs): Used to create blobs or organic shapes that merge smoothly. Metaballs are mathematically defined so that when they are close together, they blend into each other.
 Billboarding: A technique where a 2D texture always faces the camera, which can be used for effects like vegetation or smoke without requiring complex 3D geometry.
Introduction to Local Illumination Models 167
Local illumination models are fundamental in
rendering as they determine how light interacts
with the surface of an object to produce realistic
effects.
Unlike global illumination, which considers
multiple light bounces, local illumination only
considers light that directly reaches the object
from a light source.
https://math.hws.edu/eck/cs424/graphicsbook2018/demos/c3/transform-equivalence-3d.html
Types of Light Sources 168
In computer graphics, light sources define how light is emitted
in a scene. Key types include:

Ambient Light
Represents indirect light scattered from the environment.
Creates a base level of brightness and does not have a specific direction.
Point Light
Emits light equally in all directions from a single point (e.g., light bulbs).
Light intensity decreases with distance.
Directional Light
Light rays are parallel and consistent, simulating a distant light source (e.g., sunlight).
Position is effectively "at infinity," so intensity does not change with distance.
Spotlight
Emits light within a cone shape, with intensity fading toward the edge of the cone.
Useful for focused lighting (e.g., stage lights).
void glLightfv(GLenum light, GLenum pname, const GLfloat *params); 169
The glLightfv function in OpenGL is used to define
various properties of a light source.
Each light in OpenGL can have parameters for
position, direction, ambient intensity, diffuse
intensity, specular intensity, and more.
light: Specifies which light source you're setting properties for. OpenGL supports multiple lights, labeled GL_LIGHT0 to GL_LIGHT7.
pname: Specifies the property of the light source to set (e.g., position, ambient, diffuse, specular, etc.).
params: An array containing the values for the specified property. The number of elements required depends on pname.
Parameter Name: GL_POSITION 170
 GL_POSITION – Specifies the position or direction of the light:
If the last value in the params array (often called
the w component) is 1.0, the light is a point light
(i.e., it has a specific position in space).
If the w component is 0.0, the light is directional
(i.e., it comes from an infinite distance in a
particular direction).
GLfloat position[ ] = {1.0f, 2.0f, 3.0f, 1.0f}; // Point light
glLightfv(GL_LIGHT0, GL_POSITION, position);
171
Light Material 172
Parameter - GL_AMBIENT 173
 GL_AMBIENT – Sets the ambient color of the light:
Ambient light is a general light that affects the
entire scene equally, creating a base level of
brightness.
Takes an array of four values {R, G, B, A},
representing the red, green, blue, and alpha
(opacity) components of the color.
GLfloat ambient[ ] = {0.2f, 0.2f, 0.2f, 1.0f}; // Slightly dim, gray ambient light
glLightfv(GL_LIGHT0, GL_AMBIENT, ambient);
Parameter - GL_DIFFUSE 174
 GL_DIFFUSE – Sets the diffuse color of the light:
Diffuse light represents direct light shining on an
object, typically scattered in many directions
upon hitting a surface.
Takes an array of four values {R, G, B, A}, defining
the diffuse color and intensity.
GLfloat diffuse[] = {0.8f, 0.8f, 0.8f, 1.0f}; // Bright, white diffuse light
glLightfv(GL_LIGHT0, GL_DIFFUSE, diffuse);
Parameter - GL_SPECULAR 175
 GL_SPECULAR – Sets the specular color of the light:
Specular light creates shiny highlights on an
object, appearing strongest in surfaces that
directly reflect light towards the viewer.
Takes an array of four values {R, G, B, A}, defining
the color and intensity of the specular light.
GLfloat specular[] = {1.0f, 1.0f, 1.0f, 1.0f}; // Bright white specular highlight
glLightfv(GL_LIGHT0, GL_SPECULAR, specular);
Parameter - GL_SPOT_DIRECTION 176
 GL_SPOT_DIRECTION – Specifies the direction of a spotlight:
Only relevant for spotlight light sources.
Takes an array of three values {x, y, z}, defining
the direction in which the spotlight shines.
GLfloat spot_direction[] = {0.0f, -1.0f, 0.0f}; // Downward spotlight
glLightfv(GL_LIGHT0, GL_SPOT_DIRECTION, spot_direction);
Parameter - GL_SPOT_CUTOFF 177
 GL_SPOT_CUTOFF – Defines the angle of the spotlight’s cone:
Only relevant for spotlights.
Specifies the maximum angle, in degrees, from
the center of the spotlight’s direction, where the
light intensity starts to decrease.
glLightf(GL_LIGHT0, GL_SPOT_CUTOFF, 45.0f); // 45-degree spotlight
Parameter - GL_CONSTANT_ATTENUATION, GL_LINEAR_ATTENUATION, and GL_QUADRATIC_ATTENUATION 178
 Controls how light intensity decreases over distance:
❑Constant attenuation keeps intensity constant over
distance.
❑Linear attenuation decreases intensity linearly with
distance.
❑Quadratic attenuation decreases intensity with the
square of the distance.
glLightf(GL_LIGHT0, GL_CONSTANT_ATTENUATION, 1.0f);
glLightf(GL_LIGHT0, GL_LINEAR_ATTENUATION, 0.0f);
glLightf(GL_LIGHT0, GL_QUADRATIC_ATTENUATION, 0.05f);
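For reference, OpenGL's fixed-function pipeline combines these three coefficients into a single attenuation factor applied to the light's contribution:

attenuation = 1 / (kc + kl * d + kq * d^2)

where d is the distance from the light to the vertex, and kc, kl, kq are the constant, linear, and quadratic coefficients set above. With the example values, intensity falls off with the square of the distance.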
179
// Initialize light properties
GLfloat light_position[] = {1.0f, 1.0f, 1.0f, 0.0f}; // Directional light (w = 0.0)
GLfloat ambient[] = {0.2f, 0.2f, 0.2f, 1.0f}; // Ambient color
GLfloat diffuse[] = {0.8f, 0.8f, 0.8f, 1.0f}; // Diffuse color
GLfloat specular[] = {1.0f, 1.0f, 1.0f, 1.0f}; // Specular color

// Apply properties to GL_LIGHT0
glLightfv(GL_LIGHT0, GL_POSITION, light_position);
glLightfv(GL_LIGHT0, GL_AMBIENT, ambient);
glLightfv(GL_LIGHT0, GL_DIFFUSE, diffuse);
glLightfv(GL_LIGHT0, GL_SPECULAR, specular);

// Enable the light
glEnable(GL_LIGHTING); // Enable lighting in the scene
glEnable(GL_LIGHT0); // Enable GL_LIGHT0
Reflectance Models 180
 Reflectance models are essential in computer graphics for simulating how light interacts with surfaces, giving objects a sense of realism and depth.
These models determine how light reflects off
surfaces, creating effects like brightness, color,
and shading. Two widely-used reflectance
models are:
1. The Lambertian (Diffuse) Reflectance Model and
2. The Phong Reflectance Model (which includes both specular
and diffuse components).
Lambertian Reflectance (Diffuse Lighting) 181
 The Lambertian Reflectance Model (or diffuse reflection) models how light interacts with rough or matte surfaces, such as unpolished wood or concrete.
 When light hits these surfaces, it scatters uniformly in all directions, resulting in an even and soft shading effect.
 Real-World Analogy: Imagine light shining on a chalkboard. The light scatters in all directions, and the chalkboard appears equally bright from any angle.
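For reference, Lambert's law behind this model can be written as

I_diffuse = k_d * I_light * max(0, N · L)

where N is the unit surface normal, L is the unit vector toward the light, k_d is the material's diffuse reflectance, and I_light is the light's diffuse intensity. Because the term does not involve the viewer's position, the surface looks equally bright from every angle.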
182
OpenGL handles diffuse reflection with the glMaterialfv and glLightfv functions to set the diffuse properties of both the surface and the light source.

GLfloat mat_diffuse[] = {1.0, 0.5, 0.31, 1.0}; // Orange color for the material
GLfloat light_diffuse[] = {1.0, 1.0, 1.0, 1.0}; // White diffuse light

glMaterialfv(GL_FRONT, GL_DIFFUSE, mat_diffuse);
glLightfv(GL_LIGHT0, GL_DIFFUSE, light_diffuse);
glEnable(GL_LIGHT0);
Phong Reflection Model 183
 The Phong Reflectance Model simulates shinier, polished surfaces.
 Specular reflection models the way light reflects off smooth,
shiny surfaces, creating a bright highlight known as a
specular highlight.
 This is dependent on the viewer's position because the
reflection is focused in a particular direction.
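For reference, the specular term of the Phong model can be written as

I_specular = k_s * I_light * max(0, R · V)^n

where R is the light direction reflected about the surface normal, V is the unit vector toward the viewer, k_s is the material's specular reflectance, and n is the shininess exponent (set with GL_SHININESS below); a larger n produces a smaller, sharper highlight.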
184
OpenGL allows setting up both diffuse and specular properties for a material and light source to simulate the Phong model:

GLfloat mat_specular[] = {1.0, 1.0, 1.0, 1.0}; // White specular highlight
GLfloat mat_shininess[] = {50.0}; // Shininess coefficient
GLfloat light_specular[] = {1.0, 1.0, 1.0, 1.0};

glMaterialfv(GL_FRONT, GL_SPECULAR, mat_specular);
glMaterialfv(GL_FRONT, GL_SHININESS, mat_shininess);
glLightfv(GL_LIGHT0, GL_SPECULAR, light_specular);
Gouraud and Phong Interpolation (Shading Models) 185
GOURAUD SHADING (VERTEX SHADING)
 Calculates lighting at each vertex, then interpolates colors across the surface of the polygon.
 Performance: Fast, as it computes lighting only at vertices.
 Drawback: Less accurate; can lead to visible artifacts in areas with high specular highlights.

PHONG SHADING (PIXEL SHADING)
 Interpolates normals across the polygon surface, calculating lighting at each pixel (fragment).
 Performance: More computationally intensive.
 Advantage: Produces smoother, more realistic lighting, especially for specular highlights.

Gouraud is the default shading model in OpenGL. For Phong shading, use GLSL fragment shaders. ==> READ MORE
186
187
188
Steps for Implementing Lighting and Shading in OpenGL:

1. Enable Lighting and Light Source:

glEnable(GL_LIGHTING);
glEnable(GL_LIGHT0); // Activate a light source

2. Set Up Material Properties:

GLfloat mat_ambient[] = {0.2f, 0.2f, 0.2f, 1.0f};
GLfloat mat_diffuse[] = {0.8f, 0.0f, 0.0f, 1.0f};
GLfloat mat_specular[] = {1.0f, 1.0f, 1.0f, 1.0f};

glMaterialfv(GL_FRONT, GL_AMBIENT, mat_ambient);
glMaterialfv(GL_FRONT, GL_DIFFUSE, mat_diffuse);
glMaterialfv(GL_FRONT, GL_SPECULAR, mat_specular);
glMaterialf(GL_FRONT, GL_SHININESS, 50.0f); // Specular exponent

3. Add Light Parameters:

GLfloat light_position[] = {1.0f, 1.0f, 1.0f, 0.0f};
glLightfv(GL_LIGHT0, GL_POSITION, light_position);

Complete example:

#include <GL/glut.h>

void initLighting() {
    GLfloat light_position[] = { 1.0, 1.0, 1.0, 0.0 };
    GLfloat light_ambient[] = { 0.2, 0.2, 0.2, 1.0 };
    GLfloat light_diffuse[] = { 0.8, 0.8, 0.8, 1.0 };
    GLfloat light_specular[] = { 1.0, 1.0, 1.0, 1.0 };

    glLightfv(GL_LIGHT0, GL_POSITION, light_position);
    glLightfv(GL_LIGHT0, GL_AMBIENT, light_ambient);
    glLightfv(GL_LIGHT0, GL_DIFFUSE, light_diffuse);
    glLightfv(GL_LIGHT0, GL_SPECULAR, light_specular);

    glEnable(GL_LIGHTING);
    glEnable(GL_LIGHT0);
    glEnable(GL_COLOR_MATERIAL);
}

void display() {
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glLoadIdentity();
    glTranslatef(0.0, 0.0, -5.0);

    // Draw a cube
    glBegin(GL_QUADS);
    // Specify each face with normal vectors and vertices
    glEnd();

    glutSwapBuffers();
}

int main(int argc, char** argv) {
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH);
    glutCreateWindow("Lighting and Shading");
    glEnable(GL_DEPTH_TEST);
    initLighting();
    glutDisplayFunc(display);
    glutMainLoop();
    return 0;
}
189
190
Facts About Colour 191
 The human eye can see about 10 million colours.
 Sir Isaac Newton invented the colour wheel (which shows the relationship between primary, secondary, and tertiary colours).
In the absence of light, everything appears
black.
Colours can change your mood
Colour Theory 193
 In color theory, colors are categorized into:
❑ Primary,
❑ Secondary (mixed from two primary colors), and
❑ Tertiary (a primary mixed with a neighboring secondary color)
based on how they are created and their relationships on the color wheel.
194
Color schemes and typical uses:
• Complementary (two colors 180 degrees apart on the wheel): logos, where attention is the goal
• Monochromatic: website & app design
• Triadic: education
• Analogous: interior design
What is a Colour Model? 195
 A color model is a system that helps us define and describe colors through numerical values.
 There are many types of color models that use different mathematical systems to represent colors, although most color models typically use a combination of three or four values or color components.
 Additive (light-based): RGB, used for digital screens.
 Subtractive: CMY, used for printing and traditional painting.
RGB is an additive color model 196
used primarily in digital displays.
 It creates colors by combining the primary
colors Red, Green, and Blue in various
intensities.
 How It Works: In RGB, color is created by
adding light.
 Each color channel (Red, Green, Blue) has a value
typically ranging from 0 to 255. By adjusting the
intensity of each channel, a wide range of colors
can be produced.
 Applications: RGB is the standard for computer
monitors, TVs, cameras, and digital displays since these
devices emit light.
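In OpenGL the same idea is expressed with floating-point channel values in [0.0, 1.0] rather than [0, 255]; for example:

glColor3f(1.0f, 0.5f, 0.0f); // orange: full red, half green, no blue
glColor3f(1.0f, 1.0f, 1.0f); // all three channels at full intensity add up to white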
HSL (Hue, Saturation, Lightness) 197
 HSL is an intuitive way to represent colors, making it popular for color pickers and digital design tools.
 It breaks down color into Hue, Saturation, and Lightness.
 Applications: HSL is frequently used in design software where users need precise control over color selection and adjustments.
❑ Hue: The color type, measured in degrees on a 360° circle (e.g., 0°
for red, 120° for green, 240° for blue).
❑ Saturation:The intensity or purity of the color. 0% saturation means
no color (gray), and 100% means full color.
❑ Lightness:The brightness of the color. 0% lightness is black, and
100% is white.
HSL - HUE 198
 Hue is calculated in degrees on the color wheel; it refers to a wheel that goes from red, to yellow, to lime, to aqua, to blue, to magenta, and finally back to red.
 For this reason, 0° on the hue color wheel is red and 360° is red again. Hue always refers to the base color.
HSL - Saturation 199
 Saturation is how pure the hue is. Full saturation means that the pure base hue is used. Saturation is calculated as a percentage value between 0% and 100%.
 0% saturation removes the hue entirely, leaving a shade of gray.
Lightness / Brightness 200
 Lightness (or brightness) is the amount of white or black mixed in with the color.
 It's also calculated as a percentage value between 0% and 100%.
 0% lightness will always be black, regardless of hue or saturation.

In Photoshop, you can see how the HSL values are calculated. In the main color picker, if you move the color point from left to right, the Saturation value gets updated; if you move the color point from top to bottom, it's the Lightness (Brightness) value that gets updated. Finally, if you select a different base color on the vertical color strip to the right, the Hue gets updated.
201
202
CMYK 203
 CMYK is a subtractive color model used in printing.
 Unlike RGB, which is based on light, CMYK works by subtracting light from white paper using ink.
 Colors are created by mixing the four primary colors Cyan, Magenta, Yellow, and Black (referred to as K for "Key" color, typically black).
 The subtractive primaries combine the additive ones: Cyan = Green + Blue, Magenta = Red + Blue, Yellow = Red + Green.
 Cyan and Yellow produce green; Magenta and Yellow produce red; Cyan and Magenta produce blue.
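A minimal sketch of the common naive CMYK-to-RGB conversion, assuming all components lie in [0.0, 1.0] (the helper name cmykToRgb is our own):

// Hypothetical helper: convert CMYK in [0,1] to RGB in [0,1].
void cmykToRgb(float c, float m, float y, float k,
               float* r, float* g, float* b) {
    *r = (1.0f - c) * (1.0f - k); // more cyan ink reflects less red light
    *g = (1.0f - m) * (1.0f - k); // more magenta reflects less green
    *b = (1.0f - y) * (1.0f - k); // more yellow reflects less blue
}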
204
CIE (Commission Internationale de l'Éclairage) 205
The CIE color model, developed by the
Commission Internationale de l'Éclairage, is
based on human color perception and aims to
standardize color representation in a way that
matches how the human eye perceives color.
❑ CIEXYZ: The fundamental color space in the CIE model, where X,
Y, and Z are abstract components not directly representing colors
but derived from human vision.
❑ CIE Lab: A perceptually uniform color space where L represents
lightness, and a and b represent color-opponent dimensions
(green-red and blue-yellow axes).
206
Choosing the Right Color Model 207
1. Use RGB for any display-based content, such as websites, apps, and digital graphics.
2. Use HSL for design applications where you need precise control over color tones and shades.
3. Use CMYK for print materials to ensure colors print accurately.
4. Use CIE for applications requiring color accuracy across devices, like color calibration and industrial design.

OpenGL primarily operates in the RGB color space, so:
1. For RGB colors, use glColor3f or glColor4f.
2. For HSL or CMYK colors, convert them to RGB using mathematical transformations, as in the sketch below.
3. For CIE XYZ or CIE Lab colors, use calibrated conversions to RGB if high fidelity is required.
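As an example of such a transformation, a minimal sketch of the standard HSL-to-RGB conversion (hue in degrees, saturation and lightness in [0, 1]; the helper name hslToRgb is our own):

#include <math.h>

// Hypothetical helper: convert HSL (h in [0,360), s and l in [0,1]) to RGB in [0,1].
void hslToRgb(float h, float s, float l, float* r, float* g, float* b) {
    float c = (1.0f - fabsf(2.0f * l - 1.0f)) * s;        // chroma
    float hp = h / 60.0f;                                 // sector of the color wheel
    float x = c * (1.0f - fabsf(fmodf(hp, 2.0f) - 1.0f)); // second-largest component
    float r1 = 0.0f, g1 = 0.0f, b1 = 0.0f;
    if      (hp < 1.0f) { r1 = c; g1 = x; }
    else if (hp < 2.0f) { r1 = x; g1 = c; }
    else if (hp < 3.0f) { g1 = c; b1 = x; }
    else if (hp < 4.0f) { g1 = x; b1 = c; }
    else if (hp < 5.0f) { r1 = x; b1 = c; }
    else                { r1 = c; b1 = x; }
    float m = l - c / 2.0f;                               // shift to match lightness
    *r = r1 + m; *g = g1 + m; *b = b1 + m;
}

The result can be passed straight to glColor3f(*r, *g, *b).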
208
Image Format 209

An image format is a standardized way to


encode and store visual data in a file.
Each format has different methods for handling
data compression, color depth, transparency,
and metadata.
Choosing the right image format is important for
optimizing quality, file size, and compatibility with
different platforms or applications.
Image Formats - GIF 210
 GIF (Graphics Interchange Format)
Compression: Uses lossless compression (reduces file size without losing detail). However, it's limited to 256 colors in an 8-bit color palette.
Transparency: Supports binary transparency, allowing only one color to be fully transparent, not partially transparent.
Animation: One of the unique features of GIF is its support for animation.
Image Formats - JPG / JPEG 211
Joint Photographic Experts Group (JPEG)
 Uses lossy compression that discards some image data to significantly reduce file size.
❑JPEGcompression can be adjusted to balance quality
and size, but details are lost, especially at high
compression levels.
❑ColorDepth: Supports 24-bit color (16 million colors),
making it suitable for photographs and realistic images.
❑Transparency: Does not support transparency, which
limits its use for images that need to overlay other
elements.
Image Formats - PNG 212
 Uses lossless compression with a higher compression ratio than GIF, preserving original quality without discarding image data.
Color Depth: Supports 8-bit, 24-bit, and 48-bit
color depth, making it versatile for different
quality needs.
Transparency: Supports alpha transparency,
which allows for variable transparency levels
(partial transparency) – useful for overlay images.
Image formats compared (description; best used for; OpenGL support):

GIF: Limited to 256 colors; supports animation and transparency. Best for simple animations and icons with limited colors. OpenGL: convert to a texture for rendering (e.g., convert to PNG or JPEG for better quality).

JPEG: Compressed format with lossy quality; supports millions of colors. Best for photographs and high-color images with smooth gradients. OpenGL: use as a texture directly, but beware of artifacts.

PNG: Lossless compression; supports transparency (alpha channel). Best for high-quality images with transparency, icons, and UI elements. OpenGL: ideal for textures, particularly for elements with transparency.

BMP: Uncompressed, large file size; supports high color depth. Best for bitmaps for internal graphics processing; rarely used in production. OpenGL: load as a texture, though less efficient than PNG or JPEG.

TIFF: Supports lossless compression and transparency; used in high-quality imaging. Best for medical imaging and high-res images in professional editing. OpenGL: not natively supported; needs pre-processing or conversion for textures.

SVG: Vector format, resolution-independent; supports animations. Best for scalable graphics like icons, logos, and animations. OpenGL: not natively supported; use libraries (e.g., NanoSVG) or convert to raster.
Texture Mapping 214
 Texture mapping is a computer graphics technique used to apply a 2D image (called a texture) to a 3D model.
 This process enhances the realism and detail of 3D objects by overlaying textures such as colors, patterns, or complex surface details, rather than relying solely on geometric shapes.
Texture Mapping 215
Use Cases
 Games: Applying textures to make objects appear as stone, metal, or wood without adding extra polygons.
 Simulations and VR: Enhancing realism for objects like terrain, skies, and characters.
 Architectural Visualization: Adding textures like brick, wood, and metal to make structures more lifelike.
Benefits of Texture Mapping
 Realism: Adds complex details without extra geometry.
 Efficiency: Reduces the number of polygons needed by using detailed images.
 Flexibility: Easily change textures to modify an object's appearance without changing its structure.
Texture Mapping 216
1. Load Image Data: stbi_load reads image data and stores its dimensions.
2. Generate Texture ID: glGenTextures creates a unique texture identifier.
3. Bind Texture: glBindTexture makes the texture active.
4. Generate Mipmaps: gluBuild2DMipmaps creates scaled-down versions of the
texture.
5. Set Wrapping Options: glTexParameteri for GL_TEXTURE_WRAP_S and
GL_TEXTURE_WRAP_T defines how the texture should behave when wrapping.
6. Set Filtering Options: glTexParameteri for GL_TEXTURE_MIN_FILTER and
GL_TEXTURE_MAG_FILTER sets filtering for scaling.
7. Free Image Data: stbi_image_free releases memory allocated for the image.
Texture Mapping 217
 GLUT for window handling.
 stb_image.h to load texture images.

#include <GL/glut.h> // GLUT and OpenGL functions
#include <iostream> // needed for std::cerr in loadTexture below
#define STB_IMAGE_IMPLEMENTATION
#include "stb_image.h"

Define Global Variables:

GLuint textureID; // Global texture ID
218
void loadTexture(const char* filename) {
    // Variables to receive the image's width, height, and number of color channels.
    int width, height, nrChannels;
    // stbi_load reads the file and fills in the image's properties.
    unsigned char* data = stbi_load(filename, &width, &height, &nrChannels, 0);
    if (data) {
        glGenTextures(1, &textureID); // This ID will be used to refer to this texture in OpenGL.
        glBindTexture(GL_TEXTURE_2D, textureID); // Bind the texture as a 2D texture.

        // Create scaled-down versions of the texture
        // (target, internal format, width, height, pixel format, pixel type, pointer to data).
        gluBuild2DMipmaps(GL_TEXTURE_2D, GL_RGB, width, height, GL_RGB, GL_UNSIGNED_BYTE, data);

        // Set texture wrapping (S = horizontal, T = vertical) and filtering options.
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR); // smooth scaling down
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR); // smooth scaling up

        stbi_image_free(data); // Free image data
    }
    else {
        std::cerr << "Failed to load texture" << std::endl;
    }
}

The loadTexture function uses stbi_load to load the image. We set the wrapping and filtering parameters to control how the texture appears when scaled or repeated.
219
Imagine you have a texture of a simple brick pattern,
and you want to apply this texture to a large surface,
like a wall. The texture might be a small image that
represents one brick. If you apply this image to a
larger area (like tiling it across the surface), you would
typically want the texture to repeat seamlessly.
 If you do not set GL_TEXTURE_WRAP_S to GL_REPEAT, OpenGL will clamp the texture coordinates to stay within the 0.0 to 1.0 range. If a texture coordinate exceeds this range (e.g., 1.2), OpenGL will just use the color of the edge pixels of the texture, causing the texture not to repeat but rather stay static at the edges.
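With GL_REPEAT set, texture coordinates above 1.0 tile the image. A hypothetical quad (not from the slides) that repeats the brick texture four times in each direction:

glBindTexture(GL_TEXTURE_2D, textureID);
glBegin(GL_QUADS);
glTexCoord2f(0.0f, 0.0f); glVertex2f(-1.0f, -1.0f);
glTexCoord2f(4.0f, 0.0f); glVertex2f( 1.0f, -1.0f);
glTexCoord2f(4.0f, 4.0f); glVertex2f( 1.0f,  1.0f);
glTexCoord2f(0.0f, 4.0f); glVertex2f(-1.0f,  1.0f);
glEnd();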
void glTexParameteri(GLenum target, GLenum pname, GLint param); 220
221
void initGL() {
    glEnable(GL_TEXTURE_2D); // Enable 2D texture mapping
    loadTexture("path_to_image.jpg"); // Load your texture here

    glClearColor(0.2f, 0.3f, 0.3f, 1.0f); // Set background color

    glMatrixMode(GL_PROJECTION); // Set up orthographic projection
    glLoadIdentity();
    gluOrtho2D(-1.0, 1.0, -1.0, 1.0);
}
222
void display() {
    glClear(GL_COLOR_BUFFER_BIT); // Clear screen

    glBindTexture(GL_TEXTURE_2D, textureID); // Bind the loaded texture

    // Draw a textured quad
    glBegin(GL_QUADS);
    glTexCoord2f(0.0f, 0.0f); glVertex2f(-0.5f, -0.5f); // Bottom-left corner
    glTexCoord2f(1.0f, 0.0f); glVertex2f( 0.5f, -0.5f); // Bottom-right corner
    glTexCoord2f(1.0f, 1.0f); glVertex2f( 0.5f,  0.5f); // Top-right corner
    glTexCoord2f(0.0f, 1.0f); glVertex2f(-0.5f,  0.5f); // Top-left corner
    glEnd();

    glutSwapBuffers(); // Swap the front and back buffers
}
223
int main(int argc, char** argv) {
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB); // Enable double buffering and RGB color mode
    glutInitWindowSize(800, 600); // Set window size
    glutCreateWindow("GLUT Texture Mapping Example");

    initGL(); // Initialize OpenGL settings

    glutDisplayFunc(display); // Set display callback

    glutMainLoop(); // Enter the main event loop

    return 0;
}
224
Chapter 10: Application Modeling and Rendering

COMPUTER GRAPHICS
COURSE NUMBER: COSC3072
PREREQUISITE: COMPUTER PROGRAMMING (COSC 1012)

Compiled by: Kidane W.
Modeling (Representation) and Graphics (Rendering) 225
Immediate Mode Versus Retained Mode – Rendering 226
Storage Strategies 227
Matrix Stack 228
Display List 229
OpenGL and Java3D 230