Chapter 2 Digital Image Fundamentals (Final)
2.1 Basic concept of image
2.2 Digital image representation
2.3 Digital image acquisition process
2.4 Image sampling and quantization
2.5 Representation of different image types
2.6 Mathematical tools used in digital image processing
Simple Image Model
Images are denoted by two-dimensional functions of the form f(x, y). The value or amplitude of f at spatial coordinates (x, y) is a positive scalar quantity.
f(x, y) must be nonzero and finite; that is,
0 < f(x, y) < ∞
The function f(x, y) may be characterized by two components:
1. The amount of source illumination incident on the scene
being viewed,
2. The amount of illumination reflected by the objects in the
scene. Appropriately, these are called the illumination and
reflectance components and are denoted by i(x,y) and r(x,
y), respectively.
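In the standard model these two components combine multiplicatively, f(x, y) = i(x, y) r(x, y). A minimal sketch, using hypothetical uniform values chosen only for illustration:

```python
import numpy as np

# Illumination-reflectance model: f(x, y) = i(x, y) * r(x, y).
# The numbers are hypothetical, chosen only to illustrate the model.
i = np.full((4, 4), 900.0)   # illumination incident on the scene
r = np.full((4, 4), 0.65)    # reflectance (e.g. stainless steel ~0.65)
f = i * r                    # resulting image intensity

# f must be positive and finite: 0 < f(x, y) < infinity
assert np.all((f > 0) & np.isfinite(f))
print(f[0, 0])  # 585.0
```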
What is an Image?
Reflectance r(x, y) Values
Typical values of the reflectance r(x, y) are:
0.01 for black velvet
0.65 for stainless steel
0.80 for flat-white wall paint
0.90 for silver-plated metal
0.93 for snow.
What is an Image…
• Illumination is the amount of light falling on the object; this is a property of the light source.
• Reflectance is the light reflected back from the object, and it remains between 0 and 1.
Digital Image…
– A digital image is an image f(x, y) that has been "discretized" both in spatial coordinates and in brightness.
– A 2D matrix whose rows and columns identify a unique point in the image.
– The corresponding matrix element value identifies the gray level at that point.
Dimensions of a Digital Image
image dimensions 1080×720
For example:
1080×720 = 777,600 is the total number of pixels in an image of dimensions 1080×720.
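The total pixel count is just width times height; a minimal check:

```python
# Total pixel count of an image = width x height.
width, height = 1080, 720
total_pixels = width * height
print(total_pixels)  # 777600
```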
Cell/Pixel of a Digital Image
• Each cell of an image grid represents a pixel.
• But what is the size of a pixel?
• Pixel size can be determined by the picture's PPI (pixels per inch), which is a measure of how many pixels are present in an inch of space.
• PPI determines the smoothness or quality of an image.
• PPI can be referred to as the density of pixels in an inch of space.
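The PPI relationship is a simple ratio of pixel count to physical size; a sketch (the print width below is a made-up value):

```python
# PPI relates pixel count to physical size: ppi = pixels / inches.
width_px = 1080          # image width in pixels
print_width_in = 4.5     # hypothetical print width in inches
ppi = width_px / print_width_in
print(ppi)  # 240.0
```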
Breaking a Pixel Down
• By analyzing a pixel of an image you can conclude what type of picture it is.
• Is it a colored image, grayscale, or binary (only black and white) image?
• If a pixel has a single value in the range 0–255, it is a grayscale image; the value represents how dark or light the pixel is.
• If a pixel has multiple values, we talk about channels.
Channels in a Digital Image
• A channel depicts a single saturated color, for example red, green, or blue.
• A 3-channel RGB image will have pixels with 3 numeric values, such as [125, 254, 185], where each value represents how much of each channel is mixed to generate the actual color of the pixel.
• What is the depth of an image?
• The depth of an image is determined by how many different colors can be produced by mixing channels.
• For example:
• A 1-bit image can have two numeric values, "0" or "1", and hence can only show black or white pixels.
• An 8-bit RGB image will have a 3-bit red channel, a 3-bit green channel, and a 2-bit blue channel.
• A 24-bit image will have an 8-bit red channel, an 8-bit green channel, and an 8-bit blue channel, which in turn gives 2^24 = 16,777,216 color combinations.
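The depth-to-color-count relationship is simply 2 raised to the bit depth:

```python
# Number of distinct colors representable at a given bit depth: 2 ** bits.
def color_count(bits_per_pixel):
    return 2 ** bits_per_pixel

print(color_count(1))   # 2 (binary: black or white)
print(color_count(8))   # 256 (e.g. a 3-3-2 RGB packing)
print(color_count(24))  # 16777216 (8 bits each for R, G, B)
```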
Digital image Representation
Digital image acquisition process
• Before image processing can commence (begin), an image must be captured by a camera and converted into a manageable entity. This process is known as image acquisition.
• The image acquisition process consists of three steps:
– energy reflected from the object of interest,
– an optical system which focuses the energy,
– and finally a sensor which measures the amount of energy.
Fig. 2.1 Overview of the typical image acquisition process, with the sun as light source, a tree as object, and a digital camera to capture the image. An analog camera would use film, whereas the digital camera uses a sensor.
Energy
• In order to capture an image, a camera requires some sort of measurable energy. The energy of interest in this context is light or, more generally, electromagnetic waves.
• An electromagnetic (EM) wave can be described as a massless entity, a photon, whose electric and magnetic fields vary sinusoidally, hence the name wave.
Image sampling and quantization
Digitization:
• The process of converting analog images into digital ones.
– Sampling
• Digitization of spatial coordinates.
– Quantization
• Digitization of amplitude values.
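Quantization can be sketched as mapping continuous amplitudes onto 2^k discrete levels (assuming, for illustration only, amplitudes normalized to [0, 1)):

```python
import numpy as np

# Quantization sketch: map amplitudes in [0, 1) onto L = 2**k integer levels.
def quantize(signal, k):
    levels = 2 ** k
    return np.clip((signal * levels).astype(int), 0, levels - 1)

analog = np.array([0.0, 0.24, 0.5, 0.99])  # sampled analog amplitudes
print(quantize(analog, 2))  # [0 0 2 3] -- only 4 gray levels survive
```

With k = 8 the same signal spreads over 256 levels, which is why 8-bit quantization looks smooth to the eye.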
Sampling:
Digitization of the spatial coordinates (x, y) is referred to as image sampling.
How many samples are required to extract enough information from the analog image?
Digital Image Approximation:
Some Basic Relationships Between Pixels
• Neighbors of a pixel
– N4(p): 4-neighbors of p
  (x+1, y), (x−1, y), (x, y+1), (x, y−1)
– ND(p): four diagonal neighbors of p
  (x+1, y+1), (x+1, y−1), (x−1, y+1), (x−1, y−1)
– N8(p): 8-neighbors of p
  N8(p) = N4(p) ∪ ND(p)
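The neighbor sets follow directly from the definitions (coordinates are not bounds-checked here; a real implementation would clip to the image borders):

```python
# 4-, diagonal, and 8-neighborhoods of a pixel p = (x, y).
def n4(x, y):
    return {(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)}

def nd(x, y):
    return {(x + 1, y + 1), (x + 1, y - 1), (x - 1, y + 1), (x - 1, y - 1)}

def n8(x, y):
    # N8(p) is the union of N4(p) and ND(p).
    return n4(x, y) | nd(x, y)

print(sorted(n4(1, 1)))  # [(0, 1), (1, 0), (1, 2), (2, 1)]
```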
• Adjacency
– V: the set of gray-level values used to define adjacency
– 4-adjacency: two pixels p and q with values from V are 4-adjacent if q is in the set N4(p)
– 8-adjacency: two pixels p and q with values from V are 8-adjacent if q is in the set N8(p)
Subset adjacency
S1 and S2 are adjacent if some pixel in S1 is adjacent to some pixel in S2.
Path
A path from pixel p with coordinates (x, y) to pixel q with coordinates (s, t) is a sequence of distinct pixels with coordinates
(x0, y0), (x1, y1), …, (xn, yn)
where (x0, y0) = (x, y), (xn, yn) = (s, t), and pixels (xi, yi) and (xi−1, yi−1) are adjacent for 1 ≤ i ≤ n.
• Region
– We call R a region of the image if R is a
connected set
• Boundary
– The boundary of a region R is the set of pixels
in the region that have one or more
neighbors that are not in R
• Edge
– Pixels with derivative values that exceed a
preset threshold
• Distance measures
– Euclidean distance
  De(p, q) = [(x − s)² + (y − t)²]^(1/2)
– City-block distance
  D4(p, q) = |x − s| + |y − t|
– Chessboard distance
  D8(p, q) = max(|x − s|, |y − t|)
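The three distance measures translate directly to code:

```python
import math

# Distance measures between pixels p = (x, y) and q = (s, t).
def euclidean(p, q):           # De
    return math.hypot(p[0] - q[0], p[1] - q[1])

def city_block(p, q):          # D4
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def chessboard(p, q):          # D8
    return max(abs(p[0] - q[0]), abs(p[1] - q[1]))

p, q = (0, 0), (3, 4)
print(euclidean(p, q), city_block(p, q), chessboard(p, q))  # 5.0 7 4
```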
Mathematical Tool for Image Processing
Array versus Matrix Operations:
• Images are viewed as matrices, but in this series on DIP we use array operations. There is a difference between matrix and array operations: in an array operation, the operation is carried out pixel by pixel on the image.
Array vs. Matrix operations
Let us consider two 2×2 images as follows:
A = [a11 a12; a21 a22] and B = [b11 b12; b21 b22]
The array (element-wise) product multiplies corresponding pixels, giving [a11·b11 a12·b12; a21·b21 a22·b22], whereas the matrix product follows the usual row-by-column rule.
Addition
• The addition of two images f and g of the same size results in a new image s of the same size whose pixels are equal to the sum of the corresponding pixels in the original images: s(x, y) = f(x, y) + g(x, y).
Image Enhancement
Fig. 17 The image on the right is the sum between the two
images on the left.
Adding a constant to the image makes the image brighter, i.e. s(x, y) = f(x, y) + constant.
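A sketch of both additions on tiny 8-bit arrays; note that sums above 255 must be clipped (saturated), a detail the formulas leave implicit:

```python
import numpy as np

# Image addition s(x, y) = f(x, y) + g(x, y), saturated to the 8-bit range.
def add_images(f, g):
    return np.clip(f.astype(int) + g.astype(int), 0, 255).astype(np.uint8)

f = np.array([[100, 200], [250, 30]], dtype=np.uint8)
g = np.array([[50, 100], [20, 10]], dtype=np.uint8)
print(add_images(f, g))                    # 250 + 20 saturates at 255
print(add_images(f, np.full_like(f, 40)))  # adding a constant brightens f
```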
Figure 4: (a) Image 1 (b) Image 2 (c) Image1+Image2 (d) Image1+constant
Subtraction Operation:
• The subtraction between two images is s(x, y) = f(x, y) − g(x, y), where f(x, y) is image 1 and g(x, y) is image 2.
• Subtracting a constant from the original image makes it darker.
Figure 5: (a) Image1 (b) Image2 (c) Image1−Image2 (d) Image1−constant
The subtraction of two images is used for example to detect
changes (Fig. 18).
Multiplication Operation:
• Multiplying an image by a constant greater than 1 increases its contrast.
Figure 6: (a) image 1, i.e. the original image (b) multiplying image 1 by 1.25, which increases the contrast of the image
4. Division Operation
• In the division operation h(x, y) = f(x, y)/g(x, y), where f(x, y) and g(x, y) are two images and h(x, y) is the new image formed. We can also divide by a constant, i.e. h(x, y) = f(x, y)/constant.
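Subtraction and division can be sketched the same way; negative differences are clipped to 0, and the division guards against zero-valued pixels (that guard is my addition, not from the slides):

```python
import numpy as np

# s(x, y) = f(x, y) - g(x, y), clipped at 0 for 8-bit images.
def subtract(f, g):
    return np.clip(f.astype(int) - g.astype(int), 0, 255).astype(np.uint8)

# h(x, y) = f(x, y) / g(x, y), with zero pixels in g replaced by 1.
def divide(f, g):
    return f.astype(float) / np.maximum(g.astype(float), 1.0)

f = np.array([[120, 60]], dtype=np.uint8)
g = np.array([[100, 80]], dtype=np.uint8)
print(subtract(f, g))  # 60 - 80 clips to 0
print(divide(f, g))    # 120/100 = 1.2, 60/80 = 0.75
```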
Fig. 19 The right image is the division of the left image by the
right image.
Basic Logic Operations
Operation | Definition (per pixel, on binary values)
NOT       | s = 1 if f = 0, else 0 (complement)
OR        | s = 1 if f = 1 or g = 1
AND       | s = 1 if f = 1 and g = 1
XOR       | s = 1 if exactly one of f, g is 1
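On binary images stored as 0/255 uint8 arrays, these operations map directly onto NumPy's bitwise functions:

```python
import numpy as np

# Logic operations applied pixel-wise to two binary (0/255) images.
a = np.array([[0, 255], [255, 255]], dtype=np.uint8)
b = np.array([[0, 0], [255, 0]], dtype=np.uint8)

not_a   = np.bitwise_not(a)     # NOT: 0 <-> 255
a_or_b  = np.bitwise_or(a, b)   # OR:  255 where either is 255
a_and_b = np.bitwise_and(a, b)  # AND: 255 where both are 255
a_xor_b = np.bitwise_xor(a, b)  # XOR: 255 where exactly one is 255
print(a_and_b.tolist())  # [[0, 0], [255, 0]]
```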
• The truth table of OR is:
  A B | A OR B
  0 0 | 0
  0 1 | 1
  1 0 | 1
  1 1 | 1
• The truth table of AND is:
  A B | A AND B
  0 0 | 0
  0 1 | 0
  1 0 | 0
  1 1 | 1
Figure 8: (a) image1 (b) image2 (c) image1 AND image2 (d) image1 OR image2
Spatial Operations
• Spatial operations are performed directly on the pixels of a given image.
• There are three main types of spatial operations:
1. Single-pixel operations: the changes are made on each pixel independently, such as generating a negative image (inverting intensities), log, inverse-log, and gamma corrections.
2. Neighborhood operations: the new intensity value for each pixel depends on the intensity values of its neighbors, with specific modifications, such as averaging (blurring). Spatial filtering is a common example of this type of operation.
3. Geometric spatial transformations: the image coordinates are first changed, then pixels are mapped. For example: scaling, rotating, translating, and shearing.
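A minimal sketch of a single-pixel operation, the image negative (s = (L − 1) − f, so s = 255 − f for 8-bit images):

```python
import numpy as np

# Single-pixel operation: negative of an 8-bit image, s = 255 - f.
def negative(f):
    return (255 - f.astype(int)).astype(np.uint8)

f = np.array([[0, 100, 255]], dtype=np.uint8)
print(negative(f).tolist())  # [[255, 155, 0]]
```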
• Affine transformation is a linear mapping method
that preserves points, straight lines, and planes.
• Sets of parallel lines remain parallel after an affine
transformation.
• The affine transformation technique is typically used
to correct for geometric distortions or deformations
that occur with non-ideal camera angles.
• Affine Transformation helps to modify the geometric
structure of the image, preserving parallelism of
lines but not the lengths and angles.
• The Affine Transformation relies on matrices to
handle rotation, shear, translation and scaling.
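These matrices can be sketched in homogeneous coordinates, where a point (x, y) is written as the column vector [x, y, 1]^T and transforms compose by matrix multiplication:

```python
import numpy as np

# Affine transformations as 3x3 matrices in homogeneous coordinates.
def translation(tx, ty):
    return np.array([[1, 0, tx], [0, 1, ty], [0, 0, 1]], dtype=float)

def scaling(sx, sy):
    return np.array([[sx, 0, 0], [0, sy, 0], [0, 0, 1]], dtype=float)

def rotation(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]], dtype=float)

def shear(shx, shy):
    return np.array([[1, shx, 0], [shy, 1, 0], [0, 0, 1]], dtype=float)

p = np.array([2.0, 3.0, 1.0])             # the point (2, 3)
print(translation(5, -1) @ p)             # [7. 2. 1.]
print((scaling(2, 3) @ rotation(0)) @ p)  # transforms compose by matrix product
```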
• Affine transformations
Translation
• A translation is a function that moves every point by a constant distance in a specified direction.
• It is specified by tx and ty, which provide the direction and the distance.
Rotation
Scaling
• Scaling is a linear transformation that enlarges or shrinks objects by a scale factor that is the same in all directions.
• We can specify the values of sx and sy to enlarge or shrink our images.
• It is basically zooming in or zooming out of the image.
• Shear