LT - (P) - 1.3 Image Processing

Digital image processing involves manipulating digital images using computers. There are three types of image processing: low-level, intermediate-level, and high-level. Low-level processing includes image enhancement techniques like contrast adjustment and noise removal. Intermediate-level processing transforms images into other domains like the discrete cosine transform. High-level processing analyzes image content through tasks like segmentation, feature extraction, and classification.


Image Processing

[Figure: block diagram of a typical image processing system]

Digital Image: A sampled and quantized version of a 2D function that has been acquired by
optical or other means, sampled on an equally spaced rectangular grid, and quantized into
equal intervals of amplitude.
The task of digital image processing involves the handling, transmission, enhancement and analysis
of digital images with the aid of digital computers. This calls for the manipulation of 2-D signals.
There are generally three types of processing that are applied to an image: low-level,
intermediate-level and high-level processing.
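
As a rough illustration of "sampled and quantized", the following Python sketch evaluates a continuous 2D function on an equally spaced rectangular grid and maps its amplitude to 256 equal intervals. The particular function, grid size and bit depth are illustrative assumptions, not part of the material above.

# A minimal sketch of acquiring a digital image: sample a continuous 2D
# function f(x, y) on an equally spaced rectangular grid, then quantize the
# amplitude into a fixed number of equal intervals (here 8 bits).
import numpy as np

def acquire_digital_image(f, width=256, height=256, levels=256):
    # Sample f on an equally spaced rectangular grid over [0, 1] x [0, 1]
    x = np.linspace(0.0, 1.0, width)
    y = np.linspace(0.0, 1.0, height)
    X, Y = np.meshgrid(x, y)
    samples = f(X, Y)

    # Quantize the amplitude into `levels` equal intervals
    lo, hi = samples.min(), samples.max()
    quantized = np.round((samples - lo) / (hi - lo) * (levels - 1))
    return quantized.astype(np.uint8)

# Example continuous "scene": a smooth radial intensity pattern (an assumption)
image = acquire_digital_image(lambda X, Y: np.cos(10 * np.hypot(X - 0.5, Y - 0.5)))
print(image.shape, image.dtype, image.min(), image.max())  # (256, 256) uint8 0 255
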
Areas of Digital Image Processing
1. Image Representation and Modelling
An image can be represented either in the spatial domain or the transform domain. An
important consideration in image representation is the fidelity or intelligibility criterion for
measuring the quality of an image. Such measures include contrast (gray-level difference
within an image), spatial frequencies, color and the sharpness of edge information.

Images represented in the spatial domain directly indicate the type and the physical nature
of the imaging sensors; e.g. luminance of objects in a scene for pictures taken by a camera,
absorption characteristics of body tissue for X-ray images, radar cross-section of a target
for radar imaging, temperature profile of a region for infrared imaging, and gravitational field
in an area for geophysical imaging.
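
A small sketch of the kind of fidelity measures mentioned above: Michelson contrast as a gray-level difference measure, and mean gradient magnitude as a proxy for the sharpness of edge information. The specific formulas are illustrative choices, not prescribed by the text.

# Simple, illustrative image-quality measures computed in the spatial domain.
import numpy as np

def contrast(image):
    # Michelson contrast: gray-level spread relative to the overall level
    img = image.astype(np.float64)
    return (img.max() - img.min()) / (img.max() + img.min() + 1e-12)

def edge_sharpness(image):
    # Mean gradient magnitude as a rough measure of edge sharpness
    gy, gx = np.gradient(image.astype(np.float64))
    return np.mean(np.hypot(gx, gy))

img = np.random.default_rng(0).integers(0, 256, size=(128, 128)).astype(np.uint8)
print(contrast(img), edge_sharpness(img))
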

2. Image Enhancement (low-level):


No imaging system gives images of perfect quality. In image enhancement the aim is to
manipulate an image in order to improve its quality. This requires an intelligent human
viewer to recognize and extract useful information from the image. Since human subjective
judgement varies from viewer to viewer, it is difficult to define a single objective criterion
for what counts as an improvement.

Examples of image enhancement (low-level processing) are listed below, followed by a short contrast-stretching sketch:


1. Contrast & gray scale improvement
2. Spatial frequency enhancement
3. Pseudo coloring
4. Noise removal
5. Edge sharpening
6. Magnification and Zooming
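
The following sketch illustrates the first item, contrast and gray-scale improvement, by a simple linear contrast stretch; the percentile clipping values are assumed for illustration only.

# Linear contrast stretching of an 8-bit grayscale image (a low-level operation).
import numpy as np

def stretch_contrast(image, low_pct=2, high_pct=98):
    img = image.astype(np.float64)
    lo, hi = np.percentile(img, [low_pct, high_pct])
    # Map the gray-level range [lo, hi] onto the full [0, 255] range
    stretched = np.clip((img - lo) / (hi - lo + 1e-12), 0.0, 1.0)
    return (stretched * 255).astype(np.uint8)

# Example: a dull image whose gray levels occupy only [100, 150]
dull = np.random.default_rng(1).integers(100, 151, size=(64, 64)).astype(np.uint8)
print(dull.min(), dull.max())          # roughly 100 150
enhanced = stretch_contrast(dull)
print(enhanced.min(), enhanced.max())  # roughly 0 255
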

3. Image Restoration (low-level):


As in image enhancement, the ultimate goal of image restoration is to improve the quality
of an image in some sense. Image restoration involves recovering or estimating an image that
has been degraded by some deterministic and/or stochastic phenomenon.
Restoration techniques aim at modelling the degradation and then applying an appropriate
scheme in order to recover the original image. Some of the typical methods are listed below,
followed by a short inverse-filtering sketch:
1. Image estimation and noise smoothing
2. Deblurring
3. Inverse filtering
4. 2D Wiener and Kalman filters
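
A minimal sketch of inverse filtering (method 3 above), assuming the blur kernel is known: the degraded image is divided by the kernel's frequency response, guarding against near-zero components. The kernel, image and threshold are illustrative assumptions; with noise present, the 2D Wiener filter listed above is the more robust choice.

# Inverse filtering in the frequency domain for a known blur kernel.
import numpy as np

def inverse_filter(blurred, kernel, eps=1e-8):
    H = np.fft.fft2(kernel, s=blurred.shape)   # frequency response of the blur
    G = np.fft.fft2(blurred)
    H_safe = np.where(np.abs(H) < eps, eps, H)  # avoid dividing by ~zero
    return np.real(np.fft.ifft2(G / H_safe))

# Synthetic experiment: blur a test image with a 5x5 box kernel, then restore it
rng = np.random.default_rng(2)
original = rng.random((64, 64))
kernel = np.ones((5, 5)) / 25.0
blurred = np.real(np.fft.ifft2(np.fft.fft2(original) * np.fft.fft2(kernel, s=original.shape)))
restored = inverse_filter(blurred, kernel)
print(np.max(np.abs(restored - original)))  # effectively zero in this noise-free case
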
Image reconstruction can also be viewed as a special class of restoration in which two- or higher-
dimensional objects are reconstructed from several projections. Applications: CT scanners
(medical), astronomy, radar imaging. Typical methods are listed below; a short reconstruction
sketch follows the list.
Typical methods are:
1. Radon Transform
2. Projection theorem
3. Reconstruction algorithms
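
A brief sketch of reconstruction from projections using the Radon transform and filtered back-projection, assuming scikit-image is available; the phantom and the number of projection angles are illustrative choices.

# CT-style reconstruction: forward-project an object, then reconstruct it.
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, rescale

phantom = rescale(shepp_logan_phantom(), 0.5)           # 2D object to "scan"
angles = np.linspace(0.0, 180.0, 180, endpoint=False)   # projection angles in degrees

sinogram = radon(phantom, theta=angles)                 # projections (Radon transform)
reconstruction = iradon(sinogram, theta=angles)         # filtered back-projection

error = np.sqrt(np.mean((reconstruction - phantom) ** 2))
print(sinogram.shape, reconstruction.shape, f"RMS error: {error:.4f}")
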

4. Image Transforms (Intermediate-Level):


Image transformation involves mapping digital images to the transform domain using a
unitary image transform such as the 2D DFT, the 2D Discrete Cosine Transform (DCT) or the
2D Discrete Wavelet Transform (DWT).
In the transform domain certain useful characteristics of the images, which typically cannot be
ascertained in the spatial domain, are revealed. Image transformation performs both feature
extraction and dimensionality reduction, which are crucial for various applications. These
operations are considered intermediate-level since images are mapped to reduced-dimensional
feature vectors.
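The following sketch, assuming SciPy is available, applies a unitary 2D DCT to a smooth test image and keeps only the largest 10% of coefficients, illustrating the energy compaction that makes transform-domain feature extraction and dimensionality reduction possible. The test image and the 10% figure are illustrative assumptions.

# 2D DCT energy compaction on a smooth synthetic image.
import numpy as np
from scipy.fft import dctn, idctn

rng = np.random.default_rng(3)
x = np.linspace(0, 1, 64)
image = np.outer(np.sin(2 * np.pi * x), np.cos(2 * np.pi * x)) + 0.05 * rng.random((64, 64))

coeffs = dctn(image, norm="ortho")            # unitary 2D DCT
energy = coeffs ** 2
kept = energy >= np.percentile(energy, 90)    # keep only the largest 10% of coefficients
approx = idctn(np.where(kept, coeffs, 0.0), norm="ortho")

print("fraction of energy kept:", energy[kept].sum() / energy.sum())
print("reconstruction RMS error:", np.sqrt(np.mean((approx - image) ** 2)))
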
5. Image Data Compression and Coding:
In many applications one needs to transmit or store images in digital form, and the number of
bits required is tremendous. Straight digitization requires 8 bits per picture element (pixel), so
a single 512 × 512 grayscale image already occupies 512 × 512 × 8 ≈ 2.1 million bits (about
256 kB), and video multiplies this by the frame rate.
Storage and/or transmission of such a huge amount of data requires large capacity and/or
bandwidth, which would be expensive or impractical.
Using intraframe coding techniques such as DPCM (predictive coding) or transform coding, this
can be reduced to 1-2 bits/pixel while preserving image quality.

Using frame-to-frame coding (transmitting only the differences between successive frames),
further reduction is possible. Motion-compensated coding detects and estimates motion
parameters from video image sequences, and motion-compensated frame differences are
transmitted. A small predictive-coding sketch follows the list of schemes below.
Some of the typical schemes are:
1. Pixel-by-pixel coding
2. Predictive coding
3. Transform coding
4. Hybrid coding
5. Frame-to-frame coding
6. Vector quantization
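
A compact sketch of predictive (DPCM-style) coding from the list above: each pixel is predicted by its left neighbour and only the prediction error is retained, and the entropy of the errors gives a rough estimate of the achievable bits/pixel. The previous-pixel predictor and the synthetic test image are the simplest possible assumptions.

# Row-wise previous-pixel DPCM: the residuals have much lower entropy than the pixels.
import numpy as np

def entropy_bits_per_symbol(values):
    _, counts = np.unique(values, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

rng = np.random.default_rng(4)
# A correlated synthetic image: smooth ramp plus mild noise, 8 bits/pixel
image = (np.tile(np.arange(256), (256, 1)) + rng.integers(-3, 4, (256, 256))) % 256

errors = np.diff(image.astype(np.int16), axis=1)  # prediction errors (residuals)

print("raw entropy      :", entropy_bits_per_symbol(image), "bits/pixel")
print("residual entropy :", entropy_bits_per_symbol(errors), "bits/pixel")
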

6. Image Analysis and Computer Vision (High-level):


Image analysis and computer vision involve (a) segmentation, (b) feature extraction and (c)
classification/recognition. Segmentation techniques are used to isolate the desired object from
the scene so that its features can be measured easily and accurately, e.g. separation of targets
from the background. The most useful features are then extracted from the segmented objects
(targets). Quantitative evaluation of these features allows classification and description of the
object.
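
A minimal sketch of the (a)-(b)-(c) pipeline described above on a synthetic scene: threshold-based segmentation, extraction of a simple area feature from each segmented object, and a toy size-based classification rule. The threshold and the rule are illustrative assumptions, not a prescribed method.

# Segmentation -> feature extraction -> classification on a synthetic scene.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(5)
scene = rng.normal(50, 10, (128, 128))   # dark background
scene[30:60, 30:60] += 120               # bright square "target"
scene[90:100, 90:100] += 120             # smaller bright "target"

# (a) Segmentation: separate targets from background by thresholding
mask = scene > 110
labels, num_objects = ndimage.label(mask)

# (b) Feature extraction: area (pixel count) of each segmented object
areas = ndimage.sum(mask, labels, index=np.arange(1, num_objects + 1))

# (c) Classification: a toy rule based on the area feature
for obj_id, area in enumerate(areas, start=1):
    kind = "large target" if area > 500 else "small target"
    print(f"object {obj_id}: area={int(area)} -> {kind}")
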
