Chapter 2: Digital Image Fundamentals


Digital image fundamentals

1
2.1 Basic concept of image
2.2 Digital image Representation
2.3 Digital image acquisition process
2.4 Image sampling and quantization
2.5 Representation of different image types
2.6 Mathematical Tools used in Digital Image Processing

2
Simple Image Model
• Images are denoted by two-dimensional functions of the
  form f(x, y). The value or amplitude of f at spatial
  coordinates (x, y) is a positive scalar quantity.
• f(x, y) must be nonzero and finite; that is,
      0 < f(x, y) < ∞
The function f(x, y) may be characterized by two components:
1. The amount of source illumination incident on the scene
   being viewed,
2. The amount of illumination reflected by the objects in the
   scene.
These are called the illumination and reflectance components,
denoted by i(x, y) and r(x, y), respectively.

3
What is an Image?

• The two components combine as a product to form f(x, y):

      f(x, y) = i(x, y) r(x, y), where
      0 < i(x, y) < ∞ and 0 < r(x, y) < 1.

The following average numerical figures illustrate some typical
ranges of i(x, y) for visible light:
• On a clear day, the sun may produce in excess of 90,000
  lm/m2 of illumination on the surface of the Earth.
• On a cloudy day this decreases to less than 10,000 lm/m2.
• On a clear evening, a full moon yields about 0.1 lm/m2
  of illumination.
• The typical illumination level in a commercial office is
  about 1000 lm/m2.

4
Reflectance r(x, y) Values
Typical values of reflectance r(x, y) are:
 0.01 for black velvet
 0.65 for stainless steel
 0.80 for flat-white wall paint
 0.90 for silver-plated metal
 0.93 for snow.
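The product model f(x, y) = i(x, y) r(x, y) can be sketched with the typical figures above. The pairing of office lighting with these particular surfaces is an illustrative assumption, not something from the slides:

```python
# Sketch of f(x,y) = i(x,y) * r(x,y) using the slide's typical values.
# The illumination/surface pairings below are illustrative assumptions.
illumination = 1000.0                 # commercial office, ~1000 lm/m^2
reflectances = {"black velvet": 0.01, "stainless steel": 0.65,
                "flat-white paint": 0.80, "snow": 0.93}
# Observed brightness for each surface under this illumination:
f = {name: illumination * r for name, r in reflectances.items()}
print(f["snow"])           # 930.0
print(f["black velvet"])   # 10.0
```

The same multiplication happens at every pixel of a real image; here each "surface" stands in for one pixel.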

5
What is an Image…
• Illumination is the amount of light falling on the
  object; it is a property of the light source.
• Reflectance is the light reflected back from the object,
  and it always lies between 0 and 1.

Reflectance = 0 (totally absorbing objects)

Reflectance = 1 (totally reflecting objects)

6
Digital Image…
– A digital image is an image f(x, y) that has been
  discretized both in spatial coordinates and in brightness.
– A 2D matrix whose rows and columns identify a
  unique point in the image.
– The corresponding matrix element value identifies
  the gray level at that point.
– The elements of such a digital array are called
  image elements, picture elements, pixels, or pels.
– The size of the digital image varies with the
  application.
7
Digital Image:

8
Dimensions of a Digital Image
– As noted, a digital image is a grid, so it has a definite
  width and height.
– The width and height state how many pixels are
  spread across the grid.
– For example, an image of dimensions 1080×720 has
  1080 pixels in width and 720 pixels in height.
– Since it is a grid, the total number of pixels in an image of
  width X and height Y is the product X × Y.

9
Image dimensions 1080×720
For example:
1080 × 720 = 777,600 is the total number of pixels in
an image of dimensions 1080×720.
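Multiplying out the dimensions is a one-liner; the megapixel figure below is just the same count divided by one million:

```python
# Total pixels = width * height for the 1080x720 example.
width, height = 1080, 720
total = width * height
print(total)        # 777600
megapixels = total / 1_000_000
print(megapixels)   # 0.7776
```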

10
Cell / Pixel of a Digital Image
• Each cell of an image grid represents a pixel.
• But what is the size of a pixel?
• Pixel size can be determined from the picture's PPI (pixels
  per inch), which is a measure of how many pixels are
  present in an inch of space.
• PPI determines the smoothness or quality of an
  image.
• PPI can be thought of as the density of pixels in an
  inch of space.

11
Breaking Down a Pixel
• By analyzing a pixel of an image you can conclude what type
  of picture it is.
• Is it a color image, a grayscale image, or a binary (only black and
  white) image?
• What is the depth of the image?
• How many channels are there in the image?
• Each pixel is a single numeric value or an array of numeric
  values.
• For example, if a pixel has a single value with
  range 0–1, it is a black-and-white (binary) image, where 0
  represents black and 1 represents white.
12
Black-and-white image
(Thresholding)

13
• If the range is 0–255, it is a grayscale image; the value in
  0–255 represents how dark or light the pixel is.

Gray scale image

14
• If a pixel has multiple values, we talk about
  channels.
Channels in a Digital Image
• A channel depicts a single saturated color. For
Example- Red or Green or Blue

15
• A 3-channel RGB image will have pixels with 3
  numeric values, such as [125, 254, 185], where each
  value represents how much of each channel is mixed to
  generate the actual color of the pixel.

16
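A pixel like [125, 254, 185] can be unpacked channel by channel. One common way to collapse the three channels to a single gray value is a weighted mix; the ITU-R BT.601 luma weights used below are a standard choice, not something stated in the slides:

```python
# A 3-channel RGB pixel: three values giving the mix of red, green, blue.
pixel = [125, 254, 185]          # [R, G, B], each in 0..255
red, green, blue = pixel
# One example of mixing channels into a single gray intensity,
# using the standard ITU-R BT.601 luma weights:
gray = round(0.299 * red + 0.587 * green + 0.114 * blue)
print(gray)   # 208
```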
• What is the depth of an image?
• The depth of an image is determined by how many
  different colors can be produced by mixing channels.
• For example,
• A 1-bit image can have two numeric values, "0" or "1",
  and hence can only show black or white pixels.
• An 8-bit RGB image will have a 3-bit red channel,
  a 3-bit green channel, and a 2-bit blue channel.
• A 24-bit image will have an 8-bit red channel, an 8-bit blue
  channel, and an 8-bit green channel, which in turn gives
  2^24 = 16,777,216 color combinations.
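The color counts above all come from the same rule, 2 raised to the bit depth:

```python
# Number of representable colors for a given bit depth: 2 ** bits.
def num_colors(bits_per_pixel):
    return 2 ** bits_per_pixel

print(num_colors(1))    # 2        (binary image)
print(num_colors(8))    # 256      (e.g. grayscale, or 3-3-2 bit RGB)
print(num_colors(24))   # 16777216 (8 bits each for R, G, B)
```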

17
Digital image Representation

18
Digital image acquisition process
• Before image processing can commence, an image
  must be captured by a camera and converted into a
  manageable entity. This process is known as image
  acquisition.

27
• The image acquisition process consists of three
  steps:
  – energy reflected from the object of interest,
  – an optical system which focuses the energy, and
  – finally a sensor which measures the amount of
    energy.

28
Fig. 2.1 Overview of the typical image acquisition
process, with the sun as light source, a tree as object,
and a digital camera to capture the image. An analog
camera would use film where the digital camera uses a
sensor.

29
Energy
• In order to capture an image, a camera requires
  some sort of measurable energy. The energy of
  interest in this context is light or, more generally,
  electromagnetic waves.
• An electromagnetic (EM) wave can be described
  as a massless entity, a photon, whose electric and
  magnetic fields vary sinusoidally, hence the
  name wave.

30
Image sampling and quantization

• To become suitable for digital processing, an image function
  f(x, y) must be digitized both spatially and in amplitude.
• To create a digital image, we need to convert
  continuous data into digital form. This is done in two steps:
  – Sampling
  – Quantization

Digitization:
• The process of converting an analog image into a digital one.
• It consists of two steps:
  – Sampling
    • Digitization of the spatial coordinates.
  – Quantization
    • Digitization of the amplitude values.
31
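The two steps can be sketched on a 1-D "analog" signal: sampling picks values at discrete positions, quantization rounds each amplitude to one of 2^bits levels. The function names and the sine test signal are illustrative assumptions:

```python
import math

# Sampling: take the signal's value at n equally spaced positions.
def sample(f, n_samples, length):
    step = length / n_samples
    return [f(k * step) for k in range(n_samples)]

# Quantization: map each amplitude in [lo, hi] to one of 2**bits
# integer gray levels in [0, 2**bits - 1].
def quantize(values, bits, lo=0.0, hi=1.0):
    levels = 2 ** bits
    return [int((v - lo) / (hi - lo) * (levels - 1) + 0.5) for v in values]

def analog(x):                        # continuous signal in [0, 1]
    return 0.5 + 0.5 * math.sin(x)

samples = sample(analog, 8, 2 * math.pi)   # spatial digitization
digital = quantize(samples, bits=3)        # amplitude digitization
print(digital)                             # eight gray levels in 0..7
```

Increasing `bits` gives finer gray levels; increasing `n_samples` gives finer spatial resolution, exactly the two axes of digitization named above.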
Sampling:
• Digitization of the spatial coordinates (x, y) is referred to as image sampling.

• How many samples are required to extract enough information from the analog
  image?

• The decision is made using the well-known sampling theorem.

• The digitization process also requires a decision on the number of discrete
  gray levels allowed for each pixel.

• The result of sampling and quantization is a matrix of real numbers.

42
Digital Image Approximation:

• Suppose that a continuous image f(x, y) is approximated by equally spaced samples
  to form an N×N array, such that:

  f(x, y) ≈  f(0,0)     f(0,1)     f(0,2)    ...  f(0,N-1)
             f(1,0)     f(1,1)     f(1,2)    ...  f(1,N-1)
             f(2,0)     f(2,1)     f(2,2)    ...  f(2,N-1)
               .          .          .              .
             f(N-1,0)   f(N-1,1)   f(N-1,2)  ...  f(N-1,N-1)

43
Some Basic Relationships Between Pixels
• Neighbors of a pixel
  – 4-neighbors of p, N4(p):
      (x+1, y), (x−1, y), (x, y+1), (x, y−1)
  – Four diagonal neighbors of p, ND(p):
      (x+1, y+1), (x+1, y−1), (x−1, y+1), (x−1, y−1)
  – 8-neighbors of p, N8(p):
      N4(p) ∪ ND(p)

45
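The neighbor sets have a direct translation to code; this minimal sketch ignores image borders, where some neighbors fall outside the grid:

```python
# 4-neighbors of the pixel at (x, y): up, down, left, right.
def n4(x, y):
    return {(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)}

# Four diagonal neighbors.
def nd(x, y):
    return {(x + 1, y + 1), (x + 1, y - 1), (x - 1, y + 1), (x - 1, y - 1)}

# 8-neighbors = union of the two sets above.
def n8(x, y):
    return n4(x, y) | nd(x, y)

print(sorted(n8(1, 1)))   # all eight pixels surrounding (1, 1)
```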
• Adjacency
  – V: the set of gray-level values used to define
    adjacency
  – 4-adjacency: two pixels p and q with values from
    V are 4-adjacent if q is in the set N4(p)
  – 8-adjacency: two pixels p and q with values from
    V are 8-adjacent if q is in the set N8(p)

46
• Subset adjacency
  – S1 and S2 are adjacent if some pixel in
    S1 is adjacent to some pixel in S2
• Path
  – A path from pixel p with coordinates (x, y) to
    pixel q with coordinates (s, t) is a
    sequence of distinct pixels with
    coordinates
      (x0, y0), (x1, y1), ..., (xn, yn)
    where (x0, y0) = (x, y), (xn, yn) = (s, t),
    and pixels (xi, yi) and (xi−1, yi−1) are
    adjacent
47
• Region
– We call R a region of the image if R is a
connected set
• Boundary
– The boundary of a region R is the set of pixels
in the region that have one or more
neighbors that are not in R
• Edge
– Pixels with derivative values that exceed a
preset threshold

48
• Distance measures, for p = (x, y) and q = (s, t):
  – Euclidean distance
      De(p, q) = [(x − s)² + (y − t)²]^(1/2)
  – City-block distance
      D4(p, q) = |x − s| + |y − t|
  – Chessboard distance
      D8(p, q) = max(|x − s|, |y − t|)

49
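The three distance measures above, written out for two pixel coordinates:

```python
import math

# p = (x, y), q = (s, t), as in the definitions above.
def d_euclid(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def d_city_block(p, q):                      # D4
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def d_chessboard(p, q):                      # D8
    return max(abs(p[0] - q[0]), abs(p[1] - q[1]))

p, q = (0, 0), (3, 4)
print(d_euclid(p, q))       # 5.0
print(d_city_block(p, q))   # 7
print(d_chessboard(p, q))   # 4
```

Note the ordering De ≤ D4 and D8 ≤ De for any pair of pixels, which follows directly from the formulas.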
Mathematical Tools for Image Processing

• Array vs Matrix operations


• Linear vs Non-Linear operations
• Arithmetic operations
• Set and Logic operations
• Spatial operations

50
Mathematical Tools for Image Processing
Array versus Matrix Operations:
• Images are viewed as matrices, but many image
  processing operations are array operations. The
  difference is that an array operation is carried
  out pixel by pixel, while a matrix operation follows
  the rules of matrix algebra.
51
Array vs. Matrix operations
Let us consider two 2×2 images:

  a11 a12        b11 b12
  a21 a22  and   b21 b22

The array (elementwise) product is given by

  a11 a12   b11 b12     a11b11  a12b12
  a21 a22   b21 b22  =  a21b21  a22b22

The matrix product is given by

  a11 a12   b11 b12     a11b11+a12b21  a11b12+a12b22
  a21 a22   b21 b22  =  a21b11+a22b21  a21b12+a22b22
52
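The two products can be written out in plain Python for the 2×2 case, making the pixel-by-pixel versus row-by-column difference explicit:

```python
# Array (elementwise) product: each output pixel is a[i][j] * b[i][j].
def array_product(a, b):
    return [[a[i][j] * b[i][j] for j in range(2)] for i in range(2)]

# Matrix product: row of a times column of b, summed over k.
def matrix_product(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

a = [[1, 2], [3, 4]]
b = [[5, 6], [7, 8]]
print(array_product(a, b))    # [[5, 12], [21, 32]]
print(matrix_product(a, b))   # [[19, 22], [43, 50]]
```

The same distinction appears in array libraries such as NumPy, where `*` is the elementwise product and `@` is the matrix product.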
Addition
• The addition of two images of the same
  size results in a new image of the same size
  whose pixels are equal to the sum of the pixels in
  the original images.

Fig. 17 gives an example of image addition to produce an artistic
effect. Addition can also be used to denoise a series of images.

58
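A minimal sketch of pixelwise addition on tiny 8-bit "images" (plain lists standing in for image matrices); sums are clipped to the 0–255 range, a common convention for 8-bit arithmetic:

```python
# Pixelwise sum of two same-size images, saturating at 255.
def add_images(f, g):
    return [[min(f[i][j] + g[i][j], 255) for j in range(len(f[0]))]
            for i in range(len(f))]

# Adding a constant brightens the image (clipped to [0, 255]).
def add_constant(f, c):
    return [[min(max(f[i][j] + c, 0), 255) for j in range(len(f[0]))]
            for i in range(len(f))]

f = [[10, 200], [100, 250]]
g = [[20, 100], [50, 10]]
print(add_images(f, g))      # [[30, 255], [150, 255]]  (saturated at 255)
print(add_constant(f, 40))   # [[50, 240], [140, 255]]
```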
Image Enhancement

• An enhancement algorithm is one that yields a
  better-quality image for the purposes of some
  particular application, which can be done by
  either suppressing the noise or increasing the
  image contrast.

59
Fig. 17 The image on the right is the sum of the two
images on the left.

60
Adding a constant to an image makes
it brighter, i.e. s(x, y) = f(x, y) + constant

61
Figure 4: a) Image 1 b)Image 2 c) Image1+Image2 d)
Image1+constant

62
Subtraction Operation:
• The subtraction of two images is
  s(x, y) = f(x, y) − g(x, y), where f(x, y) is image 1 and
  g(x, y) is image 2.
• Subtracting a constant from the original
  image makes it darker.

63
Figure 5: (a) image1 (b)Image2 ( c) image1-image2 (d)
image1-constant

64
The subtraction of two images is used, for example, to detect
changes (Fig. 18).

Fig. 18 The image on the right is the difference between the
two images on the left. Note that the difference image can
contain negative values, so it is typically rescaled for display.
65
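Change detection by subtraction can be sketched with the absolute difference, which sidesteps the negative values a raw difference can produce:

```python
# |f - g| per pixel: nonzero wherever the two images differ.
def abs_diff(f, g):
    return [[abs(f[i][j] - g[i][j]) for j in range(len(f[0]))]
            for i in range(len(f))]

before = [[50, 50], [50, 50]]
after  = [[50, 200], [50, 50]]    # one pixel changed between the frames
print(abs_diff(before, after))    # [[0, 150], [0, 0]]
```

Only the changed pixel is nonzero, which is exactly what makes subtraction useful for detecting changes between two frames of the same scene.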
3. Multiplication Operation:

• In the equation h(x, y) = f(x, y) * g(x, y), h(x, y) is
  the new image formed, f(x, y) is image 1, and
  g(x, y) is image 2. We can also multiply an image by a
  constant: h(x, y) = f(x, y) * constant.
  The multiplication operation is used in shading
  correction.

66
Figure 6: (a) image 1, i.e. the original image (b) multiplying
image 1 by 1.25, which increases the contrast of the image

67
4. Division Operation
• In the division operation h(x, y) = f(x, y) / g(x, y),
  f(x, y) and g(x, y) are two images and h(x, y) is the
  new image formed. We can also divide by a
  constant, i.e. h(x, y) = f(x, y) / constant.

• The division of two images is used to correct
  non-homogeneous illumination. Fig. 19
  illustrates the removal of shadow.

68
Fig. 19 The image on the right is the result of dividing one of
the images on the left by the other.

69
Basic Logic Operations

Operation   Definition
NOT         inverts each pixel value (1 → 0, 0 → 1)
OR          1 if either input pixel is 1
AND         1 only if both input pixels are 1
XOR         1 if the two input pixels differ

74
• The truth table of OR is:

  A B | A OR B
  0 0 |   0
  0 1 |   1
  1 0 |   1
  1 1 |   1

75
The truth table of AND is:

  A B | A AND B
  0 0 |    0
  0 1 |    0
  1 0 |    0
  1 1 |    1

76
77
Figure8: (a) image1 (b) image2 ( c)
image1 AND image2 (d) image1 OR
image2

78
Spatial Operations
• Spatial operations are performed directly on the pixels of a
  given image.
• There are three main types of spatial operations:
1. Single-pixel operations: changes are made to
   each pixel independently, such as generating the negative image
   (inverting intensities), log, inverse-log, and gamma corrections.
2. Neighborhood operations: the new intensity value
   for each pixel depends on the intensity values of its neighbors,
   with specific modifications, such as averaging (blurring).
   Spatial filtering is a common example of this type of operation.
3. Geometric spatial transformations: the image
   coordinates are first changed, then pixels are mapped. For
   example: scaling, rotating, translating, and shearing.

79
• Affine transformation is a linear mapping method
that preserves points, straight lines, and planes.
• Sets of parallel lines remain parallel after an affine
transformation.
• The affine transformation technique is typically used
to correct for geometric distortions or deformations
that occur with non-ideal camera angles.
• Affine transformation helps modify the geometric
  structure of the image, preserving parallelism of
  lines but not necessarily lengths and angles.
• The Affine Transformation relies on matrices to
handle rotation, shear, translation and scaling.
80
• Affine transformations

81
Translation
• A translation is a function that moves every
  point a constant distance in a specified
  direction.
• It is specified by offsets tx and ty, which give
  the direction and the distance.

82
Rotation
• Rotation is a circular transformation around a
  point or an axis.
• We can specify the angle of rotation to rotate
  our image around a point or an axis.

83
Scaling
• Scaling is a linear transformation that enlarges
  or shrinks objects by a scale factor that is the
  same in all directions.
• We can specify the values of sx and sy to
  enlarge or shrink our images.
• It is basically zooming the image in or out.

84
• Shear

• Shear is sometimes also referred to as
  transvection.
• A transvection is a function that shifts every
  point a constant distance in a basis
  direction (x or y).

85
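The four transforms above are commonly written as 3×3 matrices in homogeneous coordinates, so they compose by matrix multiplication. A minimal pure-Python sketch (the helper names are illustrative):

```python
import math

# 3x3 homogeneous matrices for translation, rotation, scaling, shear.
def translate(tx, ty):
    return [[1, 0, tx], [0, 1, ty], [0, 0, 1]]

def rotate(theta):
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def scale(sx, sy):
    return [[sx, 0, 0], [0, sy, 0], [0, 0, 1]]

def shear_x(k):          # shear along the x basis direction
    return [[1, k, 0], [0, 1, 0], [0, 0, 1]]

def apply(m, x, y):
    """Map the point (x, y) through the affine matrix m."""
    return (m[0][0] * x + m[0][1] * y + m[0][2],
            m[1][0] * x + m[1][1] * y + m[1][2])

print(apply(translate(5, -2), 1, 1))   # (6, -1)
print(apply(scale(2, 3), 1, 1))        # (2, 3)
print(apply(shear_x(0.5), 2, 4))       # (4.0, 4): x' = x + 0.5*y
```

In practice an image-processing library applies such a matrix to every pixel coordinate and resamples; the matrices themselves are what encode rotation, shear, translation, and scaling.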