Mod 1 Ip
An image is defined as a two-dimensional function F(x, y), where x and y are spatial coordinates, and the amplitude of F at any pair of coordinates (x, y) is called the intensity of the image at that point. When x, y, and the amplitude values of F are all finite, discrete quantities, we call the image a digital image.
In other words, an image can be represented by a two-dimensional array arranged in rows and columns.
A digital image is composed of a finite number of elements, each of which has a particular value at a particular location. These elements are referred to as picture elements, image elements, or pixels; "pixel" is the term most widely used to denote the elements of a digital image.
Types of images
1. BINARY IMAGE – A binary image, as its name suggests, contains only two pixel values, 0 and 1, where 0 refers to black and 1 refers to white. This image is also known as a monochrome image.
2. BLACK AND WHITE IMAGE – An image which consists of only black and white pixels is called a black and white image.
3. 8-BIT COLOR FORMAT – This is the most common image format. It has 256 different shades and is commonly known as a grayscale image. In this format, 0 stands for black, 255 stands for white, and 127 stands for a middle gray.
4. 16-BIT COLOR FORMAT – This is a colour image format with 65,536 different colours, also known as the high colour format. In this format the distribution of bits is not the same as in a grayscale image: a 16-bit pixel is divided into three components, Red, Green and Blue, the familiar RGB format.
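A rough illustration of how these formats differ at the pixel level (a minimal Python/NumPy sketch, not part of the original notes; the array values are hypothetical):

import numpy as np

# Hypothetical 2x2 images, for illustration only.
binary = np.array([[0, 1],
                   [1, 0]], dtype=np.uint8)    # binary: 0 = black, 1 = white

gray = np.array([[0, 127],
                 [200, 255]], dtype=np.uint8)  # 8-bit grayscale: 0 (black) .. 255 (white)

rgb = np.zeros((2, 2, 3), dtype=np.uint8)      # colour image: one channel each for R, G, B
rgb[0, 0] = (255, 0, 0)                        # a pure red pixel

print(gray.shape, rgb.shape)                   # (2, 2) (2, 2, 3)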
Image as a Matrix
As we know, images are represented in rows and columns; a digital image of M rows and N columns is written as:

    f(x, y) = [ f(0, 0)      f(0, 1)      ...  f(0, N-1)
                f(1, 0)      f(1, 1)      ...  f(1, N-1)
                ...          ...          ...  ...
                f(M-1, 0)    f(M-1, 1)    ...  f(M-1, N-1) ]

The right side of this equation is a digital image by definition. Every element of this matrix is called an image element, picture element, or pixel.
DIGITAL IMAGE REPRESENTATION IN MATLAB:
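In MATLAB, a digital image is stored exactly this way: imread returns an M-by-N matrix of class uint8 (values 0 to 255) for a grayscale image and an M-by-N-by-3 array for an RGB image, and imshow displays it.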
COMPONENTS
An image processing system is the combination of the different elements involved in digital image processing. Digital image processing is the processing of an image by means of a digital computer; it uses different computer algorithms to perform image processing on digital images.
It consists of the following components:
• Image Sensors:
Image sensors sense the intensity, amplitude, coordinates and other features of the image and pass the result to the image processing hardware. This component includes the problem domain.
• Computer:
The computer used in an image processing system is a general-purpose computer of the kind we use in daily life.
• Mass Storage:
Mass storage stores the pixel data of images during processing.
• Image Display:
It includes the monitor or display screen that displays the processed images.
• Network:
The network connects all the above elements of the image processing system.
PHASES OF IMAGE PROCESSING:
1. ACQUISITION – It could be as simple as being given an image that is already in digital form. The main work involves:
a) Scaling
b) Colour conversion (RGB to gray or vice versa)
Both operations are illustrated in the sketch after this list.
2. IMAGE ENHANCEMENT – It is among the simplest and most appealing areas of image processing. It is also used to bring out hidden detail in an image, and it is subjective.
3. IMAGE RESTORATION – It also deals with improving the appearance of an image, but it is objective: restoration is based on mathematical or probabilistic models of image degradation.
4. COLOR IMAGE PROCESSING – It deals with pseudocolour and full-colour image processing; colour models applicable to digital image processing are introduced here.
5. WAVELETS AND MULTI-RESOLUTION PROCESSING – It is the foundation for representing images at various degrees of resolution.
6. IMAGE COMPRESSION – It involves developing functions that reduce the amount of data required to store or transmit an image. It mainly deals with image size or resolution.
7. MORPHOLOGICAL PROCESSING – It deals with tools for extracting image components that are useful in the representation and description of shape.
8. SEGMENTATION PROCEDURE – It includes partitioning an image into its constituent parts or objects. Autonomous segmentation is one of the most difficult tasks in image processing.
9. REPRESENTATION & DESCRIPTION – It follows the output of the segmentation stage; choosing a representation is only part of the solution for transforming raw data into processed data.
10. OBJECT DETECTION AND RECOGNITION – It is the process that assigns a label to an object based on its descriptors.
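The acquisition-stage operations from step 1, plus a simple thresholding segmentation for step 8, can be sketched with OpenCV in Python (a minimal sketch; the file names are hypothetical):

import cv2

img = cv2.imread("input.jpg")                  # acquisition: load an image (hypothetical path), BGR order

small = cv2.resize(img, None, fx=0.5, fy=0.5)  # scaling: shrink to half size
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)   # colour conversion: BGR -> grayscale

# A very simple segmentation: global thresholding into foreground/background.
_, mask = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY)

cv2.imwrite("gray.png", gray)                  # hypothetical output names
cv2.imwrite("mask.png", mask)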
To convert a continuous image f(x, y) into digital form, we have to sample the function in both co-
ordinates and amplitude.
Sampling
Since an analogue image is continuous not just in its coordinates (the x axis) but also in its amplitude (the y axis), the part of digitization that deals with the coordinates is known as sampling. Sampling is done on the independent variable; in the case of the equation y = sin(x), it is done on the x variable.
A continuous signal typically contains random variations caused by noise. In sampling we reduce the effect of this noise by taking samples: the more samples we take, the better the quality of the resulting image and the more the noise is suppressed, and vice versa. However, sampling along the x axis alone does not convert the signal to digital form; you must also digitize the y axis, which is known as quantization.
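A minimal NumPy sketch of sampling y = sin(x): the independent variable x is sampled at a finite number of points, and a denser sampling follows the continuous curve more closely (the sample counts are arbitrary choices):

import numpy as np

x_coarse = np.linspace(0, 2 * np.pi, 8)   # 8 samples of the independent variable
x_fine = np.linspace(0, 2 * np.pi, 64)    # 64 samples: much closer to the continuous curve

y_coarse = np.sin(x_coarse)               # amplitudes are still continuous-valued here
y_fine = np.sin(x_fine)

print(len(y_coarse), len(y_fine))         # 8 64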
QUANTIZATION
Here we assign discrete levels to the values generated by the sampling process. Although the samples have been taken, they still span a continuous range of gray-level values vertically. In quantization, these vertically ranging values are mapped onto a fixed number of discrete levels or partitions, for example 5 levels ranging from 0 (black) to 4 (white). The number of levels can vary according to the type of image you want.
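Continuing the sketch, quantization maps the continuous sampled amplitudes onto a fixed number of levels (5 here, matching the example above; the sample values are hypothetical):

import numpy as np

samples = np.array([0.03, 0.22, 0.48, 0.71, 0.97])  # hypothetical sampled amplitudes in [0, 1]
levels = 5
quantized = np.floor(samples * levels).astype(int)  # assign each value to one of 5 partitions
quantized = np.clip(quantized, 0, levels - 1)       # keep 1.0 inside the top level (4 = white)

print(quantized)                                    # [0 1 2 3 4]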
Difference between Image Sampling and Quantization:
Sampling: determines the spatial resolution of the digitized image.
Quantization: determines the number of grey levels in the digitized image.
In human visual perception, the eyes act as the sensor or camera, neurons act as the connecting
cable and the brain acts as the processor.
1. Structure of the Eye
1. The shape of the eye is nearly spherical, with a radius of approximately 11 mm. The outermost layer, called the sclera, is a 1 mm thick opaque membrane and merges into the transparent cornea.
2. At the rear, the optic nerve penetrates the sclera on the nasal side. The choroid, the membrane that lies directly below the sclera, contains a network of blood vessels which provide nutrition to the eye.
3. The choroid, being heavily pigmented, reduces the backscatter of light within the optical globe. Its parts are the ciliary body and the iris diaphragm.
4. The central opening of the iris, a nearly circular aperture, constitutes the pupil. It contracts and expands to control the amount of light entering the eye. The innermost membrane is the retina. The retinal surface contains a mosaic of photoreceptor cells called rods and cones.
5. The number of cones in each eye is between 6 and 7 million, and the number of rods ranges from 75 to 150 million. Cones are primarily located at the centre of the retina, called the fovea, and are sensitive to colour.
6. The cones are also responsible for acute vision and cone vision is
known as photopic or bright-light vision.
7. Rods give an overall appearance of the scene but are not involved in colour vision. They are sensitive to low levels of illumination, so rod vision is known as scotopic or dim-light vision.
2. Image formation
(Figure: graphical representation of the eye looking at a tree; point C is the optical centre of the lens.)
1. The principal difference between the lens of the eye and an ordinary optical lens is that the former is flexible.
2. In the figure above, the radius of curvature of the anterior surface of the lens is greater than the radius of its posterior surface.
3. The shape of the lens is controlled by the tension in the fibres of the ciliary body:
a. To focus on distant objects, the controlling muscles cause the lens to become relatively flattened. Similarly, these muscles allow the lens to become thicker in order to focus on objects near the eye.
b. The distance between the focal centre of the lens and the retina varies from approximately 17 mm down to about 14 mm as the refractive power of the lens increases from its minimum to its maximum.
c. When the eye focuses on an object farther away than about 3 m, the lens exhibits its lowest refractive power, and when the eye focuses on a nearby object the lens is most strongly refractive.
3. Brightness adaptation and discrimination
1. Brightness is a psycho-visual concept and is described as the sensation of light intensity.
2. The contrast is the difference in perceived brightness.
3. Detection of a bright spot depends not only on the brightness, size
(in space), and duration (in time) but also on the contrast between
the spot and the background.
4. The spot can be detected only when the contrast is greater than a threshold that depends on the average brightness of the surroundings. This dependence is known as brightness adaptation.
5. The range of intensity levels to which the human visual system can adapt is enormous; the highest level (the glare limit) is approximately 10^10 times the lowest one (the scotopic threshold).
6. The experimental evidence shows that the brightness perceived by
the human visual system is a logarithmic function of intensity
incident on the eye.
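As a rough worked form of this statement (an assumption following the Weber-Fechner law, not stated in the original notes), perceived brightness B can be modelled as B ≈ k · log(I) + c, where I is the intensity incident on the eye and k, c are constants; doubling I then adds a fixed increment to B rather than doubling it.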
COLOR MODELS
The colour spaces in image processing aim to facilitate the specifications of colours in some standard
way. Different types of colour spaces are used in multiple fields like in hardware, in multiple
applications of creating animation, etc. The colour model aims to facilitate the specifications
of colours in some standard way.
Different types of colour models are used in multiple fields like in hardware, in multiple applications
of creating animation, etc.
• RGB
• CMYK
• HSV
• YIQ
RGB: The RGB colour model is the most common colour model used in digital image processing and OpenCV. A colour image consists of 3 channels, one channel for each colour: Red, Green and Blue are the main components of this model. All other colours are produced by proportional mixtures of these three colours. 0 represents black, and as the value increases the colour intensity increases.
Properties:
• This is an additive colour model: colours are added to black.
Colour combination:
Green(255) + Red(255) = Yellow
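A minimal NumPy sketch of this additive combination (pixel values are hypothetical):

import numpy as np

red = np.array([255, 0, 0], dtype=np.uint16)    # uint16 avoids overflow during the addition
green = np.array([0, 255, 0], dtype=np.uint16)

yellow = np.clip(red + green, 0, 255).astype(np.uint8)
print(yellow)                                   # [255 255   0] -> yellow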
CMYK: The CMYK colour model is widely used in printers. It stands for Cyan, Magenta, Yellow and Key (black). It is a subtractive colour model: 0 represents no ink (the lightest value) and 1 represents the full primary colour. In this model, the point (1, 1, 1) represents black and (0, 0, 0) represents white. Because the model is subtractive, values are subtracted from 1 to move from the least intense to the most intense colour:
CMY = 1 - RGB (with RGB values normalised to [0, 1])
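A minimal sketch of this relation in NumPy, assuming RGB values normalised to [0, 1]; the K (black) extraction shown is one common convention, not the only one:

import numpy as np

rgb = np.array([0.2, 0.5, 0.8])        # hypothetical normalised RGB pixel

cmy = 1.0 - rgb                        # CMY = 1 - RGB
k = cmy.min()                          # black (key) component: one common convention
if k < 1.0:
    c, m, y = (cmy - k) / (1.0 - k)    # redistribute the remaining colour after removing black
else:
    c = m = y = 0.0                    # a pure black pixel has no colour component

print(cmy, k, (c, m, y))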
HSV: The image consists of three channels: Hue, Saturation and Value. This colour model does not use the primary colours directly; it describes colour the way humans perceive it. The HSV colour space is often represented as a cone.
Hue is the colour component. Since the HSV model is represented as a cone, hue represents different colours in different angle ranges:
Yellow colour falls between 61 and 120 degrees in the HSV cone.
Green colour falls between 121 and 180 degrees in the HSV cone.
Cyan colour falls between 181 and 240 degrees in the HSV cone.
Blue colour falls between 241 and 300 degrees in the HSV cone.
Magenta colour falls between 301 and 360 degrees in the HSV cone.
Saturation, as the name suggests, describes the purity of the colour as a percentage. This value often lies in the 0 to 1 range, with 0 being grey and 1 being the pure primary colour; low saturation means the colour is close to grey.
Value represents the intensity (brightness) of the chosen colour. Its value lies between 0 and 100 per cent: 0 is black and 100 is the brightest, fully revealing the colour.
HSV model is used in histogram equalization and converting grayscale images to RGB colour images.
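With OpenCV the conversion is a single call (a minimal sketch; note that OpenCV assumes BGR channel order and stores 8-bit hue in 0-179, i.e. degrees halved, rather than 0-360):

import cv2
import numpy as np

bgr = np.uint8([[[0, 255, 255]]])              # a hypothetical 1x1 pure-yellow pixel (BGR order)
hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
print(hsv)                                     # [[[ 30 255 255]]]: hue 30 (= 60 degrees), full S and V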
YIQ: YIQ is the colour model most widely used in television broadcasting. Y stands for the luminance part, and I and Q for the chrominance part. In black and white television, only the luminance (Y) part was broadcast; the Y value is similar to the grayscale value. The colour information is represented by the I and Q parts.
YIQ model is used in the conversion of grayscale images to RGB colour images.
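A minimal NumPy sketch of the RGB-to-YIQ conversion using the commonly cited NTSC coefficients (rounded; treat the exact constants as approximate), with RGB normalised to [0, 1]:

import numpy as np

M = np.array([[0.299,  0.587,  0.114],   # Y: luminance (same weights as grayscale conversion)
              [0.596, -0.274, -0.322],   # I: chrominance
              [0.211, -0.523,  0.312]])  # Q: chrominance

rgb = np.array([1.0, 0.5, 0.0])          # hypothetical orange pixel
yiq = M @ rgb
print(yiq)                               # Y carries the grayscale part; I and Q carry the colour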