Ip1 2024
An image may be defined as a two-dimensional light intensity function f(x, y). The amplitude of f at any
pair of coordinates (x, y) is called the intensity or brightness of the image at that point.
An image may be characterised by two components: illumination and reflectance.
Illumination is the amount of source light incident on the scene being viewed, and reflectance is the
amount of light reflected by the objects in the scene.
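In the standard textbook formulation these two components combine multiplicatively:
f(x, y) = i(x, y) · r(x, y),   where 0 < i(x, y) < ∞ (illumination) and 0 < r(x, y) < 1 (reflectance).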
Fig. shows the Digital Image Acquisition System.
The imaging system consists of a sensor used to capture the image and a digitizer used to convert the analog
signal into digital form.
The analysis and manipulation of a digital image, especially in order to improve its quality, is known as image
processing. Fig. shows the various stages of image processing.
Image acquisition is the process of converting an analogue image into digital form. It happens in a camera
or scanner.
Image enhancement is the process of adjusting a digital image or enhancing certain features of the image,
so that the result is more suitable than the original image for a specific application or further
analysis.
Image enhancement techniques are very much problem oriented: the best technique for enhancing X-ray
images may not be the best for enhancing microscopic images.
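As an illustration, a minimal contrast-stretching sketch, one common enhancement technique, is shown below; the function and image values are purely illustrative, not a prescribed method.

import numpy as np

def contrast_stretch(img, out_min=0, out_max=255):
    """Linearly stretch pixel intensities to the full [out_min, out_max] range."""
    img = img.astype(np.float64)
    lo, hi = img.min(), img.max()
    if hi == lo:                      # flat image: nothing to stretch
        return np.full_like(img, out_min, dtype=np.uint8)
    stretched = (img - lo) / (hi - lo) * (out_max - out_min) + out_min
    return stretched.astype(np.uint8)

# Example: a dark, low-contrast 3x3 image
dark = np.array([[40, 42, 45],
                 [41, 43, 47],
                 [44, 46, 50]], dtype=np.uint8)
print(contrast_stretch(dark))   # pixel values now span the full 0..255 range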
Image restoration is the process of reconstructing or recovering an image that has been degraded, using a
mathematical or probabilistic model of the image degradation.
Image enhancement is a subjective process, which means we improve an image so that it looks subjectively
better, whereas restoration is an objective process: here we apply the inverse of the degradation process in
order to recover the original image.
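A minimal sketch of this idea is inverse filtering, assuming the blur kernel is known exactly and noise is negligible; the names and the eps guard are illustrative only.

import numpy as np

def inverse_filter(degraded, kernel, eps=1e-3):
    """Recover an image degraded by a known blur kernel via frequency-domain division.
    eps guards against division by near-zero frequency components."""
    H = np.fft.fft2(kernel, s=degraded.shape)      # transfer function of the degradation
    G = np.fft.fft2(degraded)                      # spectrum of the degraded image
    F_hat = G / np.where(np.abs(H) < eps, eps, H)  # inverse process: F = G / H
    return np.real(np.fft.ifft2(F_hat))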
Color image processing is an area that has been gaining importance because of the significant increase
in the use of digital images over the Internet. Moreover, color is the basis for extracting features of
interest in an image.
Wavelets are the foundation for representing images at various degrees of resolution. The pyramidal
representation of images can be subdivided successively into smaller regions for further analysis in
various applications.
Compression techniques deal with reducing the storage required to save an image or the bandwidth
required to transmit it. Image compression algorithms (run-length encoding, arithmetic coding, Huffman
coding, Flate/Deflate algorithms) rewrite the image file in a way that takes up less storage space.
The JPEG file format is the most widely used image compression method. It can usually compress files by a ratio
of 10:1 with a minimal reduction in image quality.
Lossy compression methods: JPEG, WebP, HEIF (High Efficiency Image Format)
Lossless compression methods: PNG, GIF, BMP
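Run-length encoding, mentioned above, is the simplest of these ideas; a minimal sketch for one row of pixel values (illustrative only) is:

def rle_encode(row):
    """Encode a sequence of pixel values as (value, run_length) pairs."""
    runs = []
    for value in row:
        if runs and runs[-1][0] == value:
            runs[-1][1] += 1
        else:
            runs.append([value, 1])
    return [(v, n) for v, n in runs]

def rle_decode(runs):
    """Expand (value, run_length) pairs back into the original sequence."""
    return [v for v, n in runs for _ in range(n)]

row = [255, 255, 255, 0, 0, 255, 255, 255, 255]
encoded = rle_encode(row)          # [(255, 3), (0, 2), (255, 4)]
assert rle_decode(encoded) == row  # lossless: decoding recovers the row exactly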
Morphological processing deals with tools for extracting image components that are useful in the
representation and description of region shape.
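For instance, binary erosion, one of the basic morphological operators, keeps a foreground pixel only if the structuring element fits entirely inside the foreground around it. A minimal, unoptimized sketch with a square structuring element (names illustrative):

import numpy as np

def erode(binary, se_size=3):
    """Binary erosion with a square structuring element of side se_size."""
    pad = se_size // 2
    padded = np.pad(binary, pad, mode='constant', constant_values=0)
    out = np.zeros_like(binary)
    for i in range(binary.shape[0]):
        for j in range(binary.shape[1]):
            # Pixel survives only if every pixel under the structuring element is 1
            out[i, j] = padded[i:i + se_size, j:j + se_size].all()
    return out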
Image segmentation is a fundamental technique in DIP and computer vision. It involves partitioning a
digital image into multiple segments (regions or objects) to simplify and analyze the image by separating it
into meaningful components, which makes image processing more efficient by focusing on specific
regions of interest.
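The simplest form of segmentation is global thresholding: every pixel brighter than a threshold T is assigned to one segment and everything else to another. A minimal sketch (the threshold and image values are arbitrary, illustrative choices):

import numpy as np

def threshold_segment(img, T=128):
    """Partition an image into foreground (1) and background (0) by a global threshold."""
    return (img > T).astype(np.uint8)

img = np.array([[ 10, 200,  30],
                [180, 220,  40],
                [ 20, 190,  35]], dtype=np.uint8)
print(threshold_segment(img))   # 1 marks pixels brighter than T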
Image representation refers to the methods and techniques used to digitally encode and interpret images.
It involves converting visual information, such as a photograph or video frame, into a format that a
computer can process and manipulate.
Object recognition is a computer vision technique for identifying objects in images or videos. It is a key
output of deep learning and machine learning algorithms.
The process of receiving and analyzing visual information by digital computers is called digital image
processing.
An image may be described as a two-dimensional function f(p, q), where p and q are spatial
coordinates.
The amplitude of f at any pair of coordinates (p, q) is called the intensity or gray level of the image at
that point.
When spatial coordinates and amplitude values are all finite, discrete quantities, the image is called a
digital image.
The image is composed of a finite number of elements, each of which has a particular location and
value. These elements are referred to as picture elements; pixel is the term most widely used to denote the
elements of a digital image.
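In practice a digital image is just such a finite grid of numbers; a small sketch using NumPy (the pixel values below are purely illustrative):

import numpy as np

# A 4x4 8-bit grayscale image: each element is one pixel (picture element)
image = np.array([[  0,  64, 128, 255],
                  [ 32,  96, 160, 224],
                  [ 16,  80, 144, 208],
                  [  8,  72, 136, 200]], dtype=np.uint8)

print(image.shape)     # (4, 4): spatial coordinates run over rows and columns
print(image[2, 3])     # intensity (gray level) of the pixel at row 2, column 3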
The goal of image processing is to emulate human vision, including some analysis, and to perform some
mechanical operation (e.g., robot motion).
Figure 1.2 shows a typical block diagram of an image processing system. It consists of the computer
system, image acquisition hardware, image processing software, storage devices, transmitters and display devices.
Digital image processing has many advantages over analog image processing. It allows a much wider
range of algorithms to be applied to the input data, and can avoid problems such as the build-up of noise and
signal distortion during processing.
Fig 1.2 Typical Image Processing System.
The image processing system starts with image acquisition. Two components are required to acquire a digital image.
The first is a sensor, a physical device that is sensitive to the energy radiated by the object to be imaged. The
second, called a digitizer, is used to convert the output of the sensor into digital form.
For example, in a digital camera the sensor produces an electrical output proportional to light intensity.
During the process of image acquisition noise is introduced, and the digitizer converts the sensor output into digital data.
Image sensing and acquisition refer to the process of capturing visual information from the real world and
converting it into a digital format that can be processed and stored electronically. This process is fundamental in
various fields such as photography, computer vision, medical imaging, remote sensing, and more.
1. Image Sensing: This involves the physical act of capturing light from a scene using sensors or detectors.
Sensors can vary depending on the application but often include:
o Charge-Coupled Devices (CCDs): Commonly used in digital cameras and scientific imaging
devices.
o Complementary Metal-Oxide-Semiconductor (CMOS) Sensors: Found in many consumer
electronics due to their lower power consumption and integration capabilities.
o Infrared (IR) Sensors: Used for capturing infrared light, useful in applications like night vision and
thermal imaging.
2. Image Acquisition: Once light is captured by the sensor, it needs to be converted into digital data. This
involves:
o Analog-to-Digital Conversion (ADC): The analog signal generated by the sensor (voltage levels
corresponding to light intensity) is converted into digital values (pixels) that can be processed by
a computer or stored in memory.
o Color Filtering: In color imaging, sensors may use filters (typically red, green, and blue) to capture
different wavelengths of light and reconstruct color information.
3. Processing and Storage: The digital image data obtained from sensors can undergo various processing
steps such as:
o Image Enhancement: Adjusting brightness, contrast, and color balance to improve the visual
quality of the image.
o Compression: Reducing the size of image files for efficient storage and transmission.
o Analysis: Using algorithms for tasks like object detection, pattern recognition, or measuring
characteristics in medical imaging.
4. Applications: Image sensing and acquisition have widespread applications in:
o Photography: Consumer cameras, professional photography, and artistic expression.
o Medical Imaging: X-rays, MRIs, CT scans, etc., for diagnosis and treatment planning.
o Surveillance: Security cameras for monitoring and analysis.
o Remote Sensing: Satellite and aerial imagery for environmental monitoring, agriculture, and
urban planning.
o Machine Vision: Industrial applications such as quality control and automated inspection.
Sampling and Quantization
The output of most sensors is a continuous voltage waveform whose amplitude and spatial behavior are
related to the physical phenomenon being sensed. To create a digital image, we need to convert the continuous
sensed data into digital form. This involves two processes: sampling and quantization.
The basic idea behind sampling and quantization is illustrated in Fig. 1.3. Fig. 1.3(a) shows a continuous image, f(x, y), that
we want to convert to digital form. An image may be continuous with respect to the x- and y-coordinates, and
also in amplitude. To convert it to digital form, we have to sample the function in both coordinates and in
amplitude. Digitizing the coordinate values is called “sampling”. Digitizing the amplitude values is called
“quantization”.
In order to form a digital function, the gray-level values also must be converted (quantized) into discrete
quantities. The right side of Fig. 1.3 (c) shows the gray-level scale divided into eight discrete levels, ranging from
black to white. The vertical tick marks indicate the specific value assigned to each of the eight gray levels. The
continuous gray levels are quantized simply by assigning one of the eight discrete gray levels to each sample. The
assignment is made depending on the vertical proximity of a sample to a vertical tick mark. The digital samples
resulting from both sampling and quantization are shown in Fig. 1.3 (d). Starting at the top of the image and
carrying out this procedure line by line produces a two-dimensional digital image.
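A minimal sketch of this procedure, sampling a continuous intensity profile along one scan line and then quantizing each sample onto eight discrete gray levels as in Fig. 1.3 (the profile and values are illustrative only):

import numpy as np

def quantize(samples, levels=8, max_value=1.0):
    """Map continuous intensities in [0, max_value] to the nearest of `levels` gray levels."""
    step = max_value / (levels - 1)
    return np.round(samples / step).astype(int) * step

# Sampling: evaluate a continuous intensity profile at discrete positions
x = np.linspace(0, 1, 10)                  # 10 spatial samples along a scan line
f = 0.5 + 0.5 * np.sin(2 * np.pi * x)      # continuous intensity in [0, 1]

print(quantize(f, levels=8))               # each sample snapped to one of 8 gray levels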
When a sensing array is used for image acquisition, there is no motion, and the number of sensors in the array
establishes the limits of sampling in both directions. Figure 1.4 illustrates this concept. Figure 1.4 (a) shows a
continuous image projected onto the plane of an array sensor. Figure 1.4 (b) shows the image after sampling and
quantization. Clearly, the quality of a digital image is determined to a large degree by the number of samples and
discrete gray levels used in sampling and quantization.
Fig.1.4 (a) Continuous image projected onto a sensor array (b) Result of image sampling and quantization.
Connectivity:
Connectivity between pixels is a fundamental concept that simplifies the definition of numerous digital
image concepts, such as regions and boundaries. To establish if two pixels are connected, it must be determined
if they are neighbors and if their gray levels satisfy a specified criterion of similarity (say, if their gray levels are
equal). For instance, in a binary image with values 0 and 1, two pixels may be 4-neighbors, but they are said to be
connected only if they have the same value.
Let V be the set of gray-level values used to define adjacency. In a binary image, V={1} if we are
referring to adjacency of pixels with value 1. In a grayscale image, the idea is the same, but set V typically
contains more elements. For example, in the adjacency of pixels with a range of possible gray-level values 0 to
255, set V could be any subset of these 256 values. We consider three types of adjacency:
(a) 4-adjacency. Two pixels p and q with values from V are 4-adjacent if q is in the set N4 (p).
(b) 8-adjacency. Two pixels p and q with values from V are 8-adjacent if q is in the set N8 (p).
(c) m-adjacency (mixed adjacency). Two pixels p and q with values from V are m-adjacent
if i) q is in N4 (p), or ii) q is in ND (p) and the set N4 (p) ∩ N4 (q) has no pixels whose values are from V.
Mixed adjacency is a modification of 8-adjacency. It is introduced to eliminate the ambiguities that often
arise when 8-adjacency is used.
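A minimal sketch of these adjacency tests between two pixel coordinates p and q follows; the helper names are illustrative, and in_v stands for a caller-supplied check that a pixel's value belongs to the set V.

def n4(p):
    """4-neighbors of pixel p = (row, col): up, down, left, right."""
    r, c = p
    return {(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)}

def nd(p):
    """Diagonal neighbors of pixel p."""
    r, c = p
    return {(r - 1, c - 1), (r - 1, c + 1), (r + 1, c - 1), (r + 1, c + 1)}

def n8(p):
    """8-neighbors: the union of the 4-neighbors and the diagonal neighbors."""
    return n4(p) | nd(p)

def m_adjacent(p, q, in_v):
    """m-adjacency: q is a 4-neighbor of p, or a diagonal neighbor of p with
    no pixel from V in the intersection of their 4-neighborhoods."""
    if not (in_v(p) and in_v(q)):
        return False
    if q in n4(p):
        return True
    return q in nd(p) and not any(in_v(t) for t in (n4(p) & n4(q)))

p, q = (2, 2), (3, 3)
print(q in n4(p))   # False: q is not a 4-neighbor of p
print(q in n8(p))   # True:  q is an 8-neighbor (diagonal) of p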
Let R be a subset of pixels in an image. We call R a region of the image if R is a connected set. The boundary
(also called border or contour) of a region R is the set of pixels in the region that have one or more neighbors
that are not in R. If R happens to be an entire image (which we recall is a rectangular set of pixels), then its
boundary is defined as the set of pixels in the first and last rows and columns of the image. This extra
definition is required because an image has no neighbors beyond its border. Normally, when we refer to a
region, we are referring to a subset of an image.
************************