
21EC732 MODULE 1

What Is Digital Image Processing?


• An image may be defined as a two-dimensional function f(x, y), where x and y are spatial (plane)
coordinates, and the amplitude of f at any pair of coordinates (x, y) is called the intensity or gray level
of the image at that point.
• When x, y, and the intensity values of f are all finite, discrete quantities, we call the image a digital
image.
• Digital images are composed of picture elements called pixels. Pixel is the smallest element of the
image
• The value of f(x,y) at any point gives the pixel value at that point of an image. It represents brightness
level at that point. Each pixel has a particular value and location.
• Digital image processing refers to processing digital images by means of a digital computer. The input
to the computer system is digital image, the system processes that image using effective algorithms
in order to get an image or to extract any useful information from it.
• A digital image f(x,y) contains M rows and N columns which is given by

f(x,y) = [ f(0,0)      f(0,1)      f(0,2)      ...   f(0,N-1)
           f(1,0)      f(1,1)      f(1,2)      ...   f(1,N-1)
           f(2,0)      f(2,1)      f(2,2)      ...   f(2,N-1)
             .            .           .                  .
           f(M-1,0)    f(M-1,1)    f(M-1,2)    ...   f(M-1,N-1) ]

• In 8-bit representation, pixel intensity values range from 0 (black) to 255 (white).

Figure: a digital image represented as a rectangular grid of pixels.
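As a small illustration of this matrix view, the sketch below (Python with NumPy; the array name and sizes are chosen only for the example, not taken from the text) builds an M × N 8-bit grayscale image and reads one pixel value.

```python
import numpy as np

# A digital image f(x, y) as an M x N array of 8-bit intensity values.
M, N = 4, 6                                      # rows, columns (example sizes)
f = np.random.randint(0, 256, size=(M, N), dtype=np.uint8)

print(f.shape)            # (4, 6) -> M rows, N columns
print(f[2, 3])            # pixel value (gray level) at row x = 2, column y = 3
print(f.min(), f.max())   # values lie between 0 (black) and 255 (white)
```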

Fundamental Steps in Digital Image Processing.
• The fundamental steps used in digital image processing are shown in the figure below. The diagram
does not imply that every process is applied to an image. Rather, the intention is to convey an idea
of all the methodologies that can be applied to images for different purposes and possibly with
different objectives.

FIGURE 1.23 Fundamental steps in digital image processing. The chapter(s) indicated in the boxes is where the material
described in the box is discussed.

• Image acquisition is the first process in Fig. It is the process of capturing real world images in digital
form and storing them.

• Image enhancement is the process of manipulating an image so that the result is more suitable than
the original for a specific application. It is the process of improving the visual quality of images that
are corrupted by noise, poor illumination, coarse quantization, etc.
Various enhancement techniques are contrast enhancement, edge enhancement, noise filtering, sharpening
and pseudo-colouring; enhancement can be done in the spatial domain and in the frequency domain.
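A minimal sketch of one spatial-domain enhancement technique named above, contrast stretching, is given below (Python with NumPy; the image data and the function name are illustrative assumptions, not part of the text).

```python
import numpy as np

def contrast_stretch(img: np.ndarray) -> np.ndarray:
    """Linearly stretch the intensity range of an 8-bit image to [0, 255]."""
    lo, hi = img.min(), img.max()
    if hi == lo:                      # flat image: nothing to stretch
        return img.copy()
    stretched = (img.astype(np.float64) - lo) * 255.0 / (hi - lo)
    return stretched.astype(np.uint8)

# Example: a low-contrast image whose values span only 100..150
low_contrast = np.random.randint(100, 151, size=(64, 64), dtype=np.uint8)
enhanced = contrast_stretch(low_contrast)
print(low_contrast.min(), low_contrast.max())   # approximately 100 150
print(enhanced.min(), enhanced.max())           # 0 255
```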

• Image restoration is an area that also deals with improving the appearance of an image. However,
unlike enhancement, which is subjective, image restoration is objective, in the sense that restoration
techniques tend to be based on mathematical or probabilistic models of image degradation.

• Color image processing is an area that has been gaining in importance because of the significant
increase in the use of digital images over the Internet. The RGB (red, green, blue) color model is used
for color monitors, video cameras, etc. The CMY (cyan, magenta, yellow) and CMYK (cyan, magenta,
yellow, black) color models are used for color printing. The HSI (hue, saturation, intensity) model is
used for image analysis.

• Wavelets are the foundation for representing images in various degrees of resolution. Wavelets are
used in multiresolution analysis, for sub-band coding in signal processing, for quadrature mirror
filtering of speech signals, and for pyramidal image processing.

• Compression deals with techniques for reducing the storage required to save an image, or the
bandwidth required to transmit it. Image compression is familiar (perhaps inadvertently) to most
users of computers in the form of image file extensions, such as the jpg file extension used in the
JPEG (Joint Photographic Experts Group) image compression standard.
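As a small practical illustration of lossy compression (a sketch assuming the Pillow library is available; the file names and quality settings are arbitrary examples), the same image can be written with different JPEG quality levels to trade file size against fidelity.

```python
import numpy as np
from PIL import Image

# Create a synthetic 8-bit grayscale image and save it with JPEG compression.
data = np.random.randint(0, 256, size=(256, 256), dtype=np.uint8)
img = Image.fromarray(data)

img.save("example_raw.png")              # lossless reference copy
img.save("example_q30.jpg", quality=30)  # heavier JPEG compression, smaller file
img.save("example_q90.jpg", quality=90)  # lighter compression, larger file
```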

• Morphological processing deals with tools for extracting image components that are useful in the
representation and description of shape.

• Segmentation procedures partition an image into its constituent parts or objects. In general,
autonomous segmentation is one of the most difficult tasks in digital image processing. A rugged
segmentation procedure brings the process a long way toward successful solution of imaging
problems that require objects to be identified individually. On the other hand, weak or erratic
segmentation algorithms almost always guarantee eventual failure. In general, the more accurate
the segmentation, the more likely recognition is to succeed.
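To make the idea of partitioning an image into objects concrete, here is a sketch of the simplest possible segmentation, global thresholding (Python with NumPy; the threshold value and test image are invented for illustration and are not a method prescribed by the text).

```python
import numpy as np

def threshold_segment(img: np.ndarray, t: int = 128) -> np.ndarray:
    """Partition an 8-bit image into object (1) and background (0) pixels."""
    return (img > t).astype(np.uint8)

# Example: a bright "object" on a dark background
img = np.full((8, 8), 40, dtype=np.uint8)
img[2:6, 2:6] = 200                      # bright square = object
mask = threshold_segment(img, t=128)
print(mask)                              # 1 where the object was detected
```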

• Representation and description almost always follow the output of a segmentation stage, which
usually is raw pixel data, constituting either the boundary of a region (i.e., the set of pixels separating
one image region from another) or all the points in the region itself. In either case, converting the
data to a form suitable for computer processing is necessary. The first decision that must be made is
whether the data should be represented as a boundary or as a complete region.
Description, also called feature selection, deals with extracting attributes that result in some
quantitative information of interest or are basic for differentiating one class of objects from another.

• Recognition is the process that assigns a label (e.g., “vehicle”) to an object based on its descriptors.
Learning and classification are the two phases of object recognition: in the learning phase a model is
built from the features of known objects, and in the classification phase the information provided by an
object's features is used to assign it a label.

• The knowledge base controls the interaction between the modules by storing intermediate results and
making them available to subsequent processing steps.

Components of an Image Processing System.



• Image sensors can vary from simple cameras to multispectral scanners. CCDs or CMOS devices are
used as sensors to convert light energy into electrical energy.
• Specialized image processing hardware consists of a digitizer and hardware to perform operations
such as arithmetic/logic operations on images at very high speed (real time).
A digitizer takes sensor inputs and produces digital output composed of discrete intensity levels at
discrete positions.
Specialized image processing hardware is used for noise reduction and other processing of digital
images at very high speed.
• The computer used in an image processing system can range from a personal computer to a
supercomputer. In dedicated applications, custom computers are sometimes used to achieve a required
level of performance.
• Image processing software consists of specialized modules to perform specific tasks. These tasks can
be enhancement, edge/corner detection, boundary/region extraction, etc.
• The mass storage facility has to be large for image processing applications, since a single image can
require megabytes of storage space.
Storage can be short-term storage, online storage or archival (mass) storage.
• Image displays in use today are mainly colour TV monitors. Monitors are driven by the outputs of the
image and graphics display cards that are an integral part of the computer system.
A display device produces a visual form of the data stored in the computer.
• Hard-copy devices are printers ranging from line printers and dot-matrix printers to laser printers.
• Networking is almost a default function in any computer system in use today. A lot of information and
many images need to be shared between different people.


Applications of image processing


1. Medicine
• CT, X-ray imaging and magnetic resonance imaging (MRI) are used in medical imaging; filtering,
segmentation and pattern recognition techniques are used for identifying various abnormalities in the
human body.
• Image processing plays a vital role in medicine because the ratio of the number of doctors to the
number of patients is very low.
2. Industrial automation
• Examples are automatic inspection systems, non-destructive testing (NDT), automatic
assembly, process control, etc.
• The objective of industrial inspection is to find damaged or incorrectly manufactured products
automatically before packaging.
3. Remote sensing
• Some applications of remote sensing are surveys of natural forests, ground water, minerals, etc., and
estimation related to agriculture, hydrology, urban planning, environment, pollution, pattern
study, etc.
4. Office automation
• Optical character recognition (OCR) is used in banks for cheque number identification and in
universities for checking answer scripts of multiple-choice questions; document processing and
logo/icon recognition also help in office automation.
5. Criminology
• Fingerprint recognition, face recognition and matching, iris recognition, etc. are automatic
algorithms that help investigating agencies find criminals.
6. Security
• The use of surveillance and monitoring has increased in our daily life. All public places have
surveillance cameras to identify possible threats; face recognition, vehicle tracking and intruder
alarms are a few automated algorithms used for security.

Elements of Visual Perception


Structure of the Human Eye

• Figure shows a simplified horizontal cross section of the human eye.


• The eye perceives an object/scene through the light rays it emits or reflects. The eye sends electrical
signals to the brain through the optic nerve, and the brain interprets the object.
• The eye is nearly a sphere, with an average diameter of approximately 20 mm.
• Three membranes enclose the eye:
i. The cornea and sclera (outer cover)
ii. The choroid
iii. The retina
• The cornea is a tough, transparent tissue that covers the anterior surface of the eye. Continuous
with the cornea, the sclera is an opaque membrane that encloses the remainder of the optic
globe.
• The choroid lies directly below the sclera. This membrane contains a network of blood vessels
that serve as the major source of nutrition to the eye. At its anterior extreme, the choroid is
divided into the ciliary body and the iris.
• The iris forms a round aperture that can vary in size and determines the amount of light entering the
eye. In the dark the iris opens widely and lets in most of the light; in bright daylight it contracts to
limit the amount of light.
• The lens is made up of concentric layers of fibrous cells. Lens can vary its shape to focus the object
on to the retina.
• The innermost membrane of the eye is the retina, which lines the inside of the wall’s entire
posterior portion. When the eye is properly focused, light from an object outside the eye is
imaged on the retina.
• There are two classes of receptors: cones and rods. Rods serve to give a general, overall picture of the
field of view; they are not involved in colour vision and are sensitive to low levels of illumination.
Rod vision is called scotopic or dim-light vision.
• The cones in each eye number between 6 and 7 million. They are located primarily in the central
portion of the retina, called the fovea, and are highly sensitive to colour. Cone vision is called
photopic or bright-light vision.

• Figure shows the density of rods and cones for a cross section of the eye. The distribution of
receptors is radially symmetric about the fovea. Receptor density is measured in degrees from
the fovea (that is, in degrees off axis, as measured by the angle formed by the visual axis and a
line passing through the center of the lens and intersecting the retina). In Fig , cones are most
dense in the center of the retina (fovea).

• Rods increase in density from the center out to approximately 20° off axis and then decrease in
density out to the extreme periphery of the retina. The absence of receptors in the area where the
optic nerve leaves the eye results in the blind spot.

Image Formation in the Eye


• In an ordinary photographic camera, the lens has a fixed focal length, and focusing at various
distances is achieved by varying the distance between the lens and the imaging plane, where the
film (or imaging sensor) is located.
• In the human eye, the converse is true; the distance between the lens and the imaging region
(the retina) is fixed, and the focal length needed to achieve proper focus is obtained by varying
the shape of the lens. The fibers in the ciliary body accomplish this, flattening or thickening the
lens for distant or near objects.

• The geometry in Fig illustrates how to obtain the dimensions of an image formed on the retina.
C is the optical center of the lens.

• Suppose that a person is looking at a tree 15 m high at a distance of 100 m. Letting h denote
the height of that object in the retinal image, the geometry of Fig yields

15/100 = h/17
h = 2.55 mm
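The same similar-triangles relation can be checked numerically; the short sketch below (plain Python, using the 17 mm lens-to-retina distance from the figure) reproduces the 2.55 mm result and works for other object sizes and distances as well.

```python
# Retinal image height by similar triangles:
# object_height / object_distance = h / lens_to_retina_distance
def retinal_image_height_mm(object_height_m, object_distance_m, eye_depth_mm=17.0):
    return eye_depth_mm * object_height_m / object_distance_m

print(retinal_image_height_mm(15.0, 100.0))   # 2.55 (mm), as in the example
```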
• The retinal image is focused primarily on the region of the fovea. Perception then takes place by
the relative excitation of light receptors, which transform radiant energy into electrical impulses
that ultimately are decoded by the brain.

Brightness Adaptation and Discrimination


• Because digital images are displayed as a discrete set of intensities, the eye’s ability to
discriminate between different intensity levels is an important consideration in presenting
image processing results.
• The range of light intensity levels to which the human visual system can adapt is enormous, on the
order of 10^10, from the scotopic threshold to the glare limit.
• Experimental evidence indicates that subjective brightness (intensity as perceived by the human
visual system) is a logarithmic function of the light intensity incident on the eye.


• Figure , a plot of light intensity versus subjective brightness, illustrates this characteristic. The
long solid curve represents the range of intensities to which the visual system can adapt.
• In photopic vision alone, the range is about 10^6. The transition from scotopic to photopic vision is
gradual over the approximate range from 0.001 to 0.1 millilambert (-3 to -1 in the log scale).
• Brightness adaptation is the ability of human eye to adapt to a wide range of incident light
energy.
• For any given set of conditions, the current sensitivity level of the visual system is called the
brightness adaptation level, which may correspond to brightness Ba.
• The short intersecting curve represents the range of subjective brightness that the eye can
perceive when adapted to this level. This range is rather restricted, having a level Ba at and below
which all stimuli are perceived as indistinguishable blacks.
• The upper portion of the curve is not actually restricted but, if extended too far, loses its meaning
because much higher intensities would simply raise the adaptation level higher than Ba.
• The human ability to detect the brightness of a spot does not depend only on the luminance of the
spot, but more on the difference between the luminance of the spot and that of the background.
• For example, consider a classic experiment in which a subject looks at a flat, uniformly illuminated
area large enough to occupy the entire field of view. This area typically is a diffuser, such as opaque
glass, that is illuminated from behind by a light source whose intensity I can be varied.
• To this field is added an increment of illumination, ΔI, in the form of a short-duration flash that
appears as a circle in the center of the uniformly illuminated field, as shown in the figure.

Fig : Basic experimental setup used to characterize brightness discrimination

• If ΔI is not bright enough, the subject says “no,” indicating no perceivable change. As ΔI gets
stronger, the subject may give a positive response of “yes,” indicating a perceived change. Finally,
when ΔI is strong enough, the subject will give a response of “yes” all the time. The quantity
ΔIc/I, where ΔIc is the increment of illumination discriminable 50% of the time with background
illumination I, is called the Weber ratio.
• A small value of ΔIc/I means that a small percentage change in intensity is discriminable.
This represents “good” brightness discrimination. Conversely, a large value of ΔIc/I means
that a large percentage change in intensity is required. This represents “poor” brightness
discrimination.
• A plot of ΔIc/I as a function of log I has the general shape shown in Fig. This curve shows
that brightness discrimination is poor (the Weber ratio is large) at low levels of illumination, and
it improves significantly (the Weber ratio decreases) as background illumination increases.

Fig : Typical Weber ratio as a function of intensity.
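To make the definition concrete, the sketch below evaluates the Weber ratio ΔIc/I for a few hypothetical background intensities and just-discriminable increments (plain Python; the numeric values and the 0.1 cutoff used for labelling are invented for illustration, only the formula comes from the text).

```python
# Weber ratio = delta_I_c / I, where delta_I_c is the smallest increment
# discriminable 50% of the time against background intensity I.
measurements = [
    (0.01, 0.005),   # (background I, just-discriminable increment) - low light
    (1.0, 0.05),
    (100.0, 2.0),    # bright background
]

for I, delta_I_c in measurements:
    weber = delta_I_c / I
    quality = "poor" if weber > 0.1 else "good"
    print(f"I = {I:>7}: Weber ratio = {weber:.3f} -> {quality} discrimination")
```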

• Two phenomena clearly demonstrate that perceived brightness is not a simple function of
intensity. The first is based on the fact that the visual system tends to undershoot or overshoot
around the boundary of regions of different intensities. Figure 2.7(a) shows a striking example
of this phenomenon. Although the intensity of the stripes is constant, we actually perceive a
brightness pattern that is strongly scalloped near the boundaries [Fig. 2.7(c)]. These seemingly
scalloped bands are called Mach bands
• The second phenomenon, called simultaneous contrast, is related to the fact that a region’s
perceived brightness does not depend simply on its intensity but largely depends on the
background. All the center squares have exactly the same intensity. However, they appear to the
eye to become darker as the background gets brighter.

FIG 2.7 Illustration of the Mach band effect. Perceived intensity is not a simple function of actual intensity.


FIGURE 2.8 Examples of simultaneous contrast. All the inner squares have the same
intensity, but they appear progressively darker as the background becomes lighter.

• Other examples of human perception phenomena are optical illusions, in which the eye fills in
nonexisting information or wrongly perceives geometrical properties of objects. Figure 2.9
shows some examples.
• In Fig. 2.9(a), the outline of a square is seen clearly, despite the fact that no lines defining such
a figure are part of the image.
• The same effect, this time with a circle, can be seen in Fig. 2.9(b); note how just a few lines are
sufficient to give the illusion of a complete circle.
• The two horizontal line segments in Fig. 2.9(c) are of the same length, but one appears shorter
than the other.
• Finally, all lines in Fig. 2.9(d) that are oriented at 45° are equidistant and parallel.


Image Sensing and Acquisition

• The object to be imaged has to be well illuminated by a light source, and the reflected energy is
captured by a camera. The illumination source emits EM energy onto the object, and the reflected
energy is converted into electrical signals by image sensors. The most commonly used sensors are
charge-coupled device (CCD) sensors and CMOS image sensors. Depending on the image to be captured
and the cost involved, image acquisition systems are categorized into three types.
1. Image Acquisition Using a Single Sensor
2. Image Acquisition Using Sensor Strips
3. Image Acquisition Using Sensor Arrays

1. Image Acquisition Using a Single Sensor

FIGURE 2.12 (a) Single imaging sensor.


• Figure 2.12(a) shows the components of a single sensor. Perhaps the most familiar sensor of this type
is the photodiode, which is constructed of silicon materials and whose output voltage waveform is
proportional to light. The use of a filter in front of a sensor improves selectivity.
• In order to generate a 2-D image using a single sensor, there has to be relative displacements in both
the x- and y-directions between the sensor and the area to be imaged.

• Figure 2.13 shows an arrangement used in high-precision scanning, where a film negative is mounted
onto a drum whose mechanical rotation provides displacement in one dimension.

2. Image Acquisition Using Sensor Strips

• A geometry that is used much more frequently than single sensors consists of an in-line arrangement
of sensors in the form of a sensor strip, as Fig shows.
• The strip provides imaging elements in one direction. Motion perpendicular to the strip provides
imaging in the other direction, as shown in Fig. 2.14(a). This is the type of arrangement used in most
flatbed scanners.
• Sensing devices with 4000 or more in-line sensors are possible. In-line sensors are used routinely in
airborne imaging applications.
• Sensor strips mounted in a ring configuration are used in medical and industrial imaging to obtain
cross-sectional (“slice”) images of 3-D objects, as Fig. 2.14(b) shows.
• A rotating X-ray source provides illumination and the sensors opposite the source collect the X-ray
energy that passes through the object.

3. Image Acquisition Using Sensor Arrays



• CCD/CMOS cameras are used as sensors in an array, which can contain 4000 x 4000 elements. Because
the sensor array is two-dimensional, a complete image can be captured without any movement. This
arrangement is very simple and does not need any mechanical motion, which can be a source of noise,
but these cameras can be expensive.
• The principal manner in which array sensors are used is shown in Fig. 2.15. This figure shows the
energy from an illumination source being reflected from a scene element. The first function
performed by the imaging system in Fig. 2.15(c) is to collect the incoming energy and focus it onto an
image plane. If the illumination is light, the front end of the imaging system is an optical lens that
projects the viewed scene onto the lens focal plane, as Fig. 2.15(d) shows.
• The sensor array, which is coincident with the focal plane, produces outputs proportional to the
integral of the light received at each sensor. Digital and analog circuitry sweep these outputs and
convert them to an analog signal, which is then digitized by another section of the imaging system.
The output is a digital image, as shown diagrammatically in Fig. 2.15(e).

A Simple Image Formation Model


• An image is denoted by a 2D function f(x,y). The amplitude or value of f is the intensity of the image at
the spatial coordinate (x,y). Since an image is generated by a sensor, its value cannot be negative or
infinite.

Thus 0 < f(x,y) < ∞
• f(x,y) has two components:
1. i(x,y) - amount of illumination incident.
2. r(x,y) - amount of illumination reflected.
The two functions combine as a product to form f(x,y):
f(x,y) = i(x,y) . r(x,y)
where 0 < i(x,y) < ∞ and 0 < r(x,y) < 1

• Reflectance is bounded by 0 (total absorption) and 1 (total reflectance). The nature of i(x,y) is
determined by the illumination source, and r(x,y) is determined by the characteristics of the imaged
objects.
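A minimal sketch of this product model (Python with NumPy; the illumination level and reflectance values are invented for illustration):

```python
import numpy as np

# Image formation: f(x, y) = i(x, y) * r(x, y)
M, N = 4, 4
i = np.full((M, N), 90.0)                        # illumination, 0 < i < infinity
r = np.random.uniform(0.05, 0.95, size=(M, N))   # reflectance, bounded by (0, 1)

f = i * r                                        # intensity reaching the sensor
print(f.round(1))
print(f.min() > 0, np.isfinite(f).all())         # values are positive and finite
```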

Image Sampling and Quantization


• The output of most sensors is a continuous voltage waveform whose amplitude and spatial behavior
are related to the physical phenomenon being sensed. To create a digital image, we need to convert
the continuous sensed data into digital form. This involves two processes: sampling and quantization.

f(s,t) (continuous image) → sampling → quantization → f(x,y) (digital image)

• The basic idea behind sampling and quantization is illustrated in Fig. 2.16. Figure 2.16(a) shows a
continuous image f that we want to convert to digital form.
• An image may be continuous with respect to the x- and y-coordinates, and also in amplitude. To
convert it to digital form, we have to sample the function in both coordinates and in amplitude.
Digitizing the coordinate values is called sampling. Digitizing the amplitude values is called
quantization.
• The one-dimensional function in Fig. 2.16(b) is a plot of amplitude (intensity level) values of the
continuous image along the line segment AB in Fig. 2.16(a). The random variations are due to image
noise. To sample this function, we take equally spaced samples along line AB, as shown in Fig. 2.16(c).
• The spatial location of each sample is indicated by a vertical tick mark in the bottom part of the
figure. The samples are shown as small white squares superimposed on the function. The set of these
discrete locations gives the sampled function.


• In order to form a digital function, the intensity values also must be converted (quantized) into
discrete quantities. The right side of Fig. 2.16(c) shows the intensity scale divided into eight discrete
intervals, ranging from black to white. The vertical tick marks indicate the specific value assigned to
each of the eight intensity intervals.
• The continuous intensity levels are quantized by assigning one of the eight values to each sample. The
assignment is made depending on the vertical proximity of a sample to a vertical tick mark.
• The digital samples resulting from both sampling and quantization are shown in Fig. 2.16(d). Starting
at the top of the image and carrying out this procedure line by line produces a two-dimensional digital
image. It is implied in Fig. 2.16 that, in addition to the number of discrete levels used, the accuracy
achieved in quantization is highly dependent on the noise content of the sampled signal.
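The sketch below imitates this procedure in one dimension: a noisy intensity profile along a scan line is sampled at equally spaced points, and each sample is then quantized to the nearest of eight discrete levels (Python with NumPy; the profile itself is simulated, not data from the figure).

```python
import numpy as np

# A "continuous" intensity profile along a scan line (simulated densely, with noise).
t = np.linspace(0.0, 1.0, 1000)
profile = 120 + 80 * np.sin(2 * np.pi * t) + 10 * np.random.randn(t.size)

# Sampling: keep equally spaced samples along the line.
n_samples = 32
samples = profile[:: profile.size // n_samples]

# Quantization: map each sample to the nearest of 8 discrete levels in [0, 255].
levels = np.linspace(0, 255, 8)
quantized = levels[np.argmin(np.abs(samples[:, None] - levels[None, :]), axis=1)]

print(quantized[:8])   # each value is one of the 8 allowed intensity levels
```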

Representing Digital Images


• The result of sampling and quantization is a matrix of real numbers. Let us say that an image f(x, y) is
sampled so that the resulting digital image has M rows and N columns.
• The values of the coordinates (x, y) now become discrete quantities. The figure shows the coordinate
convention used.

Figure: coordinate convention used to represent digital images. The origin (0, 0) is at the top-left corner;
x increases downward along the rows (0, 1, ..., M-1) and y increases to the right along the columns
(0, 1, ..., N-1).

• This notation allows us to write the complete MxN digital image in the compact matrix form shown at
the beginning of this module, with rows indexed by x = 0, 1, ..., M-1 and columns by y = 0, 1, ..., N-1.

• Each element of this matrix is called an image element, picture element, or pixel. The digitization
process requires decisions about the values of M (number of rows), N (number of columns) and L
(number of discrete gray levels).
• We assume that the discrete gray levels are equally spaced and that they are integers in the interval
[0, L-1], with L = 2^k.
The number of bits required to store a digital image is b = M x N x k.
For a binary image of size 512 x 512 (k = 1): b = 512 x 512 x 1 = 262,144 bits.
For a gray-scale image of size 512 x 512 (k = 8): b = 512 x 512 x 8 = 2,097,152 bits = 262,144 bytes.
For a colour image of size 512 x 512 (k = 8 x 3 = 24): b = 512 x 512 x 24 = 6,291,456 bits = 786,432 bytes.
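The same storage formula can be wrapped in a small helper to verify the three cases (plain Python; only the relation b = M x N x k is taken from the text).

```python
def image_storage_bits(M: int, N: int, k: int) -> int:
    """Bits needed for an M x N image with k bits per pixel (L = 2**k gray levels)."""
    return M * N * k

for name, k in [("binary", 1), ("8-bit grayscale", 8), ("24-bit colour", 8 * 3)]:
    bits = image_storage_bits(512, 512, k)
    print(f"{name:>16}: {bits:>9} bits = {bits // 8:>7} bytes")
```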

Spatial and Intensity Resolution

• Spatial resolution is a measure of the smallest discernible detail in an image. Quantitatively, spatial
resolution can be stated in a number of ways, with line pairs per unit distance and dots (pixels) per
unit distance being among the most common measures.
• Spatial resolution can also be expressed as the total number of pixels in an image. A 1.3-megapixel
camera has 1,310,720 pixels, generating an image with 1024 rows and 1280 columns. As the number of
pixels in an image increases, the pixel size gets smaller, which requires a high-quality lens for focusing.

Some Basic Relationships between Pixels


• An image is denoted by f(x,y). When referring in this section to a particular pixel, we use lowercase
letters, such as p and q.

Adjacency, Connectivity, Regions, and Boundaries


• Let V be the set of intensity values used to define adjacency. In a binary image, V={1} if we are
referring to adjacency of pixels with value 1. In a gray-scale image, the idea is the same, but set V
typically contains more elements. For example, in the adjacency of pixels with a range of possible
intensity values 0 to 255, set V could be any subset of these 256 values. We consider three types of
adjacency:
(a) 4-adjacency. Two pixels p and q with values from V are 4-adjacent if q is in the set N4(p)
(b) 8-adjacency. Two pixels p and q with values from V are 8-adjacent if q is in the set N8(p)
(c) m-adjacency (mixed adjacency). Two pixels p and q with values from V are m-adjacent if
(i) q is in N4(p) or
(ii) q is in ND(p) and the set N4(p) ∩ N4(q) has no pixels whose values are from V.
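A small sketch of the 4- and 8-neighbour tests behind these definitions (plain Python; the coordinates, the tiny image, and the set V are example inputs, and m-adjacency is omitted for brevity):

```python
def n4(p):
    """4-neighbours of pixel p = (x, y): up, down, left, right."""
    x, y = p
    return {(x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)}

def n8(p):
    """8-neighbours: N4(p) plus the four diagonal neighbours ND(p)."""
    x, y = p
    diag = {(x - 1, y - 1), (x - 1, y + 1), (x + 1, y - 1), (x + 1, y + 1)}
    return n4(p) | diag

def adjacent(img, p, q, V, kind="4"):
    """True if pixels p and q (with values in V) are 4- or 8-adjacent."""
    if img[p] not in V or img[q] not in V:
        return False
    return q in (n4(p) if kind == "4" else n8(p))

# Example on a tiny binary image, V = {1}
img = {(0, 0): 1, (0, 1): 1, (1, 1): 1, (1, 0): 0}
print(adjacent(img, (0, 0), (0, 1), V={1}, kind="4"))   # True  (share an edge)
print(adjacent(img, (0, 0), (1, 1), V={1}, kind="4"))   # False (diagonal only)
print(adjacent(img, (0, 0), (1, 1), V={1}, kind="8"))   # True
```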

• A (digital) path (or curve) from pixel p with coordinates (x, y) to pixel q with coordinates (s, t) is a
sequence of distinct pixels with coordinates
(x0, y0), (x1, y1), ..., (xn, yn)
where (x0, y0) = (x, y), (xn, yn) = (s, t), and pixels (xi, yi) and (xi-1, yi-1) are adjacent for 1 ≤ i ≤ n.
In this case, n is the length of the path. If (x0, y0) = (xn, yn), the path is a closed path.


Distance Measures
