Midsem BKM Merged PDF


7/29/2019

GNR607 Principles of Satellite Image Processing

Prof. B. Krishna Mohan


CSRE, IIT Bombay
bkmohan@csre.iitb.ac.in
Lecture 1 29 July 2019 9.30 – 10.25 AM

Lecture – 1 Course Overview


• Introductory course
• Assumes no background in signal / image
processing
• Desired – linear algebra, probability and
random variables, vectors and vector
spaces, basic concepts in linear systems
• Mathematical preliminaries will be
reviewed

GNR607
29-07-2019 Lecture 1 Slide 2
Prof. B. Krishna Mohan


Course Team – TAs


M.Tech. II Year students

Nidhi Kapoor

Aditya Chondke

Aishwarya Gujrathi

Sanoj Prasad


Expectation from the Course


• Basic knowledge of satellite image
processing
• Satellite image handling including Region
of Interest selection, referencing to a
ground reference system, improving visual
quality of images, retrieving useful indices,
extracting quantitative information by
classification, assessing accuracy of
operations, monitoring changes
• Writing programs to process satellite
images


Assessment Pattern
• Quizzes – 15%
• Mid-semester Examination – 20%
• Programming Assignment – 15%
• Laboratory – 10%
• Semester-end Examination – 40%


Text and Reference Material


1. Richards, J.A. and Jia, X., Remote Sensing Digital Image Analysis, 4th ed., Springer-Verlag, 2006
2. Gonzalez, R.C. and Woods, R.E., Digital Image Processing, 3rd ed., Pearson Education, 2008
3. Jensen, J.R., Introductory Digital Image Processing: A Remote Sensing Perspective, 4th ed., Pearson Education, 2016
4. http://www.imageprocessingplace.com
5. http://geoinfo.amu.edu.pl/wpk/rst/rst/Front/overview.html
6. http://www.ccrs.nrcan.gc.ca
7. https://webapps.itc.utwente.nl/sensor/default.aspx?view=allsensors

E-Resources
1. https://www.itc.nl/library/papers_2009/general/principlesremotesensing.pdf
2. https://www.itc.nl/library/papers_2009/general/principlesgis.pdf
3. https://www.nrcan.gc.ca/sites/www.nrcan.gc.ca/files/earthsciences/pdf/resource/tutor/fundam/pdf/fundamentals_e.pdf
4. http://giswin.geo.tsukuba.ac.jp/sis/tutorial/koko/remotesensing/FundamentalRemoteSensing.pdf (booklet)
5. http://www.gdmc.nl/oosterom/PoRSHyperlinked.pdf
6. http://www.intechopen.com/books/land-applications-of-radar-remote-sensing/estimation-of-cultivated-areas-using-multi-temporal-sar-data
7. https://www.nrsc.gov.in/Knowledge_EBooks


Resources Specific to GNR607


1. https://www.cdeep.iitb.ac.in/previous_courses.php
(IIT Bombay's archive of video-recorded courses, available through LDAP userid and password. Contains the video recording of the Autumn 2014 version of this course.)


Instructor’s Profile
Prof. B. Krishna Mohan - Instructor
bkmohan@csre.iitb.ac.in
•PhD in Electrical Engineering from IIT Bombay 1991
•Professor and previous Head of CSRE
•Over 100 publications in journals and conferences
•7 PhDs completed; currently supervisor of 4 students
•4 M.Tech. dissertations currently being supervised
•Over 40 Research and consultancy projects dealing with
satellite image analysis, digital mapping, and others
•PhD thesis examiner for several universities
•Manuscript reviewer for several international and national
journals
•Member of Board of Studies at many Universities
•Two awards and many grants for international visits


Mode of Interaction
•Class lectures according to slot 2 schedule
•Friday afternoon lab
•Lecture slides will be available on
http://moodle.iitb.ac.in
•Use your login and password to get access to
lecture materials

ALL DOUBTS/QUERIES/COMMENTS MAY BE POSTED ON MOODLE FOR COMMON BENEFIT; RESPONSES ALSO ON MOODLE


Laboratory Sessions
•Friday afternoon lab
•CSRE has procured 30+ licenses of the state-of-the-art satellite image processing software ERDAS Imagine and 25+ licenses of ENVI
•Students will spend two extra hours on Friday afternoons carrying out image processing and analysis using ERDAS Imagine, concurrent with the theory covered during that week


Overview of the Course


1. Introduction to remote sensing and image processing
2. Mathematical preliminaries
3. Image display and corrections
4. Image enhancement and filtering
5. Mathematical morphology
6. Image transforms – Fourier, color, principal component, Hough
7. Texture analysis methods
8. Feature selection and classification
9. Accuracy analysis techniques
10. Change detection approaches


Remote Sensing
• Motivation
– Images of natural resources such as
forests, waterbodies, oceans, soils and
hills/mountains are collected by
spaceborne sensors
– The governing principle is remote sensing
– Understanding the choice of sensors, their operation, and the role of the atmosphere is important for interpreting the digital image and evolving its processing strategy


Sample Image


Enlarged View of Previous Image


Remote Sensing
Definition

Remote sensing is the art and science of making measurements about an object or the environment without being in physical contact with it


Image Processing
• Motivation
– Most remote sensing data collected in
digital form
– Digital image processing essential to
analyze and extract information
– Some image processing operations
common across domains, some unique to
remote sensing
– An important distinguishing factor - huge
data volume, no video, only still images
from space


Image Corrections
• Motivation
– Various types of distortions are
introduced into the images due to
atmosphere, satellite motion, earth
rotation and curvature
– Distortions are modeled and
corrected prior to the use of the
images
– Essential for practical use of images,
for preserving shape and area of
objects on Earth


Image Enhancement
• Motivation
– Good contrast and brightness
essential for visually appreciating the
content
– Image display is modified to improve
or enhance the visual quality
– Often this operation is performed in
real time, and users often get a set of
options for different types of image
enhancement


Neighborhood Operations
• Motivation
– Groups of points are considered
together for processing
– Necessary to suppress sensor or
atmosphere induced noise in data
– Useful to sharpen the image
– Required to extract boundaries of
objects as well as lines from images


Mathematical Morphology
• Motivation
– Morphology is the study of form or shape
– Mathematical morphology deals with set
theoretic and other mathematical
operations to deal with shapes or forms in
images
– Useful to perform structure based image
analysis
– Powerful tool to highlight object features
like smoothness, roughness and so on


Image Transforms
• Motivation
– Image transformations facilitate
certain types of processing
operations
– Can be better for visualizing the
color, frequency and other
information in a transformed domain
– Information extraction sometimes is
easier through transformations


Texture Analysis Techniques


• Motivation
– Texture – perceptual attribute of human
vision
– Texture can be used to distinguish
between objects in image
– Derived feature, to supplement satellite
collected data. Very handy when objects
are of the same color or shape or size.
– Perceived by humans through vision,
touch, taste, and auditory senses


Feature Selection Methods


• Motivation
– Features are descriptors for each object
or each pixel in the image based on
which it can be classified as water /
vegetation / cloud / road / building …
– Selection and evaluation of features
essential for success of digital image
classification
– Rich feature set facilitates
sophisticated image analysis processes


Image Classification Methods


• Motivation
– To assign each point in the image to
a category or class of our interest
– Guided by statistical procedures that
allow estimation of probability of
pixel to belong to each class; highest
class probability suggests pixel
should be assigned to that class


Accuracy Assessment Techniques

• Motivation
– Analysis of satellite images can lead
to practically useful outputs
– Before deploying these outputs,
accuracy assessment is essential
– Numerical estimates and issues
involved – conservative estimate or
optimistic estimate?


Change Detection
• Motivation
– Changes on the ground need to be
detected and categorized as what
changed, and from what to what
– Essential in military, urban and rural
planning, afforestation /
deforestation, crop monitoring …


See the changes: image recorded on Feb. 25, 2002 vs. image recorded on Feb. 25, 2015 (annotation on the image: "I live in this bldg!")


To be continued …


GNR607 Principles of Satellite Image Processing

Prof. B. Krishna Mohan


CSRE, IIT Bombay
bkmohan@csre.iitb.ac.in
Lecture 2 30 July 2019 10.35 – 11.30 AM

Lecture – 2 Contents
• Introduction to Digital Image
Processing
• Generation of a digital image
• Sampling and quantization
• Image Processing System
• Image Understanding Methodology
• Applications


What is a digital image?


• A digital image is a representation of
the real world, discretized in space,
with energy reflected / emitted /
transmitted by the objects in the image
quantized to a finite number of levels


Real World to Digital World

(Figure: a camera converts a real-world scene into a digital image; each cell of the image is a pixel)


Digitization
• Digitization involves three steps:
– Sampling
– Quantization
– Coding
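As a rough sketch, the three steps can be traced on a toy 1-D signal (the signal, sample count, and bit depth below are illustrative choices, not from the lecture):

```python
import math

def analog_signal(t):
    """Stand-in for the continuous incoming energy."""
    return 0.5 + 0.5 * math.sin(2 * math.pi * t)

# 1. Sampling: evaluate the signal at discrete instants.
n_samples = 8
samples = [analog_signal(i / n_samples) for i in range(n_samples)]

# 2. Quantization: map each sample (0..1) to one of 2**bits levels.
bits = 4
levels = 2 ** bits
quantized = [min(int(s * levels), levels - 1) for s in samples]

# 3. Coding: represent each level as a fixed-width binary word.
coded = [format(q, f"0{bits}b") for q in quantized]
```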


Sampling
• View area divided into cells
• Each cell is a picture element (pixel)
• The image is then a matrix of M rows and N columns
• M = Length of view area / Length of cell
• N = Width of view area / Width of cell
• The smaller the cell size, the better the ability to distinguish between closely spaced objects
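The M and N formulas above can be sketched directly (the 58 km view area and 5.8 m cell below are illustrative values):

```python
def image_dimensions(view_length_m, view_width_m, cell_size_m):
    """Rows (M) and columns (N) for a square grid cell."""
    rows = round(view_length_m / cell_size_m)   # M
    cols = round(view_width_m / cell_size_m)    # N
    return rows, cols

# A 5.8 m cell over a 58 km x 58 km view area:
M, N = image_dimensions(58_000, 58_000, 5.8)   # 10000 x 10000 pixels
```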


Sampling
• In remotely sensed images the sampling is
essentially ground sampling – i.e., on the
ground a virtual grid is placed and the
energy reflected / transmitted / emitted
from each grid cell is collected by the
sensors and stored as a pixel value
• The grid cell corresponds to a pre-defined
area on the ground; e.g., 5.8m x 5.8m as
with ISRO’s Resourcesat or 50cm x 50cm
as in case of WorldView Satellite


Sampling
• The smaller the grid cell area, the better the details visible in the image
• The grid cell corresponds to a pre-defined area on the ground; e.g., 5.8m x 5.8m
• This is similar to dpi settings in desktop image scanners: higher dpi means a smaller dot and more pixels or cells in the image


Spatial Resolution


Image Detail and Sampling Size

23.5m x 23.5m resolution – vast areas seen without local details
5.8m x 5.8m resolution – an entire settlement block seen without individual buildings
61cm x 61cm resolution – details of individual buildings


Example using Desktop Scanner

75dpi scan vs. 150dpi scan vs. 300dpi scan – increased clarity and reduced distortion with finer sampling


Impact of Pixel Size


• Pixel size corresponds to the Instantaneous Field of View (IFOV) of the sensing system
• The smaller the IFOV, the better the ability to resolve closely spaced objects (RESOLUTION)
• Price to pay – larger data volume
• The noise sensitivity of the sensor determines the maximum possible resolution
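The data-volume price can be made concrete: uncompressed size grows with the inverse square of the pixel size (a sketch with illustrative numbers):

```python
def raw_image_bytes(area_m, pixel_m, bands, bits_per_sample):
    """Uncompressed size of a square image covering area_m x area_m."""
    pixels_per_side = round(area_m / pixel_m)
    return pixels_per_side ** 2 * bands * bits_per_sample // 8

# Same 10 km x 10 km area, 4 bands, 8-bit samples:
coarse = raw_image_bytes(10_000, 10.0, 4, 8)   # 10 m pixels
fine = raw_image_bytes(10_000, 2.5, 4, 8)      # 2.5 m pixels
# Quartering the pixel size multiplies the data volume by 16:
assert fine == 16 * coarse
```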


Point to Remember!
• An IFOV of 10 metres x 10 metres does not mean that objects smaller than this size will not be visible
• If a smaller object has very high or very low reflectance relative to its background, it will be visible despite being smaller than the pixel's IFOV
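One way to see this: a pixel records roughly the area-weighted mean of the reflectances inside its IFOV, so a small but very bright object still shifts the pixel value (a simplified linear-mixing sketch; all numbers are made up):

```python
def mixed_pixel_value(background_reflectance, object_reflectance,
                      object_area_fraction):
    """Area-weighted (linear mixture) reflectance of one pixel."""
    return (object_area_fraction * object_reflectance
            + (1 - object_area_fraction) * background_reflectance)

# A 2 m x 2 m bright roof (reflectance 0.9) inside a 10 m x 10 m
# pixel (area fraction 4/100) over dark soil (reflectance 0.1):
value = mixed_pixel_value(0.1, 0.9, 4 / 100)
# The pixel reads noticeably brighter than pure background (0.1),
# so the sub-pixel object remains detectable.
```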


Quantization
• Reflected / transmitted / emitted energy
from the object is converted into an
electrical signal
• The electrical signal converted to a
digital signal by an analog-to-digital
converter (ADC).
• Digital signal takes a range of values
according to the specification of the
ADC


Quantization

(Figure: the same image in 24-bit color and in 8-bit color)


Gray Shades and Levels


(Gray scale from 0 = black to 255 = white)


Analog to Digital Converter


• 8-bit ADC → 2^8 = 256 distinct values, represented in binary as 00000000 – 11111111, 0 to 255 in decimal, or 00 to FF in hex
• 11-bit ADC → 2^11 = 2048 values, 0 to 2047
• The number of levels indicates the number of distinct, individually differentiable levels of received energy
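The level counts follow directly from the bit width; a small sketch of the arithmetic:

```python
def adc_range(bits):
    """Number of levels and min/max codes of an n-bit ADC."""
    levels = 2 ** bits
    return levels, 0, levels - 1

levels8, lo8, hi8 = adc_range(8)      # 256 levels, codes 0..255
levels11, lo11, hi11 = adc_range(11)  # 2048 levels, codes 0..2047

# The 8-bit maximum in binary and hexadecimal:
assert format(hi8, "08b") == "11111111"
assert format(hi8, "02X") == "FF"
```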


Impact of Quantization Levels

64 levels (6 bit) – more shades visible
4 levels (2 bit) – severe contouring effect
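The contouring effect can be reproduced by requantizing 8-bit gray values to fewer levels (a sketch using a smooth gray ramp in place of an image):

```python
def requantize(value, out_bits, in_bits=8):
    """Map an in_bits gray value onto 2**out_bits levels, then
    scale back to the original range for display."""
    in_levels = 2 ** in_bits
    out_levels = 2 ** out_bits
    level = value * out_levels // in_levels      # 0 .. out_levels-1
    return level * (in_levels - 1) // (out_levels - 1)

ramp = list(range(256))                    # smooth 8-bit gray ramp
coarse = [requantize(v, 2) for v in ramp]  # only 4 output shades
# With 2 bits the ramp collapses into 4 flat bands:
assert sorted(set(coarse)) == [0, 85, 170, 255]
```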


Pixel Size Importance

(Figure: 160 x 150 pixels vs. 80 x 75 pixels, both 24-bit)


Pixel Size Importance

(Figure: 160 x 150 pixels in 24-bit color vs. 8-bit color)


Motivation for Digital Image Processing
• Why Digital Image Processing for Remote
Sensing?
– Nature of data (inherently digital)
– Flexibility offered by computers
– Reducing the bias of human analysts
– Standardizing routine operations
– Rapid handling of large volumes of data


Wavelengths Used for Imaging

• Gamma Rays
• X-Rays
• Visible/Infrared
• Microwaves
• Radio waves
• Ultrasound waves
• Seismic waves

Components of an Image
Processing System
• Image Sensors
• Image Display
• Image Storage
• Computer
• Image Processing software
• Special Purpose graphics hardware
• Image printers/plotters


Schematic Diagram


Steps in Digital Image Processing

Image Acquisition → Image Corrections → Image Enhancement → Image Segmentation → Feature Selection → Image Classification → Final Interpretation


Limitations of Computer Based Image Interpretation
• Lack of access to human intuition
• Ambiguities


What is the foreground?

Vase? Or …
People?


Illusions
• Which is the mouth?


The Bunny/Duck illusion.


Illusions
• Who drew the triangle?



Some Applications


Quality Improvement



Hubble Telescope



Industrial Quality Inspection


Law Enforcement


Gamma-Ray Imaging


X-Ray Imaging


UV Imaging


Visible – Infrared Imaging


Visible – Infrared Imaging


Thermal Imaging

http://gsp.humboldt.edu/olm_2015/Courses/GSP_216_Online/lesson8-1/interpreting-imagery.html


Microwave Imaging


Imaging in RF Region

Mini-RF CPR (colorized and overlaid on backscatter image) image of the Mouchez crater (78.38°N, 26.8°W) region. The radar bright regions with abundant wavelength-scale scatterers are indicated with arrows in both the CPR and WAC (Fig. 3) images.

Imaging in the Radio Band

Seismic Imaging


Ultrasound Imaging

Advances in Imaging
• 2-D Still images, only location on ground,
no height or depth
• 3-D Still images, with location on ground and height/depth
• 4-D Image location, height/depth and
temporal variations
• 6-D Image location, height/depth, 3-D
motion of field of view
• Add color/thermal/spectral dimensions…


Summary

• Images are used in a wide range of applications
• Most of the electromagnetic spectrum is suitable for imaging in one application or another


To be continued …

GNR607 Principles of Satellite Image Processing 19/7/2016

GNR607 Principles of Satellite Remote Sensing

Prof. B. Krishna Mohan


CSRE, IIT Bombay
bkmohan@csre.iitb.ac.in
Lecture 3 01 August 2019 11.35 – 12.30 PM

Lecture – 3 Contents
• Introduction to Remote Sensing
• Stages in Remote Sensing
• Concept and types of Resolution
• Indian and International Space Programs

29 July 2019 B. Krishna Mohan Lecture 3 Slide 2



What is remote sensing?


Remote sensing is the art and science of
making measurements about an object
or the environment without being in
physical contact with it


Importance of Remote Sensing


Remote Sensing provides vital data for
many critical applications
• Resources management
• Environmental monitoring
• Defence
• Urban / rural development and planning
• Crop yield forecasting
• Hazard zonation and disaster mitigation


Remote Sensing Platforms


• Earth orbiting geostationary satellites
• Polar orbiting remote sensing satellites
• Low earth orbiting satellites
• Airborne systems
• Drones/UAVs


Stages in Remote Sensing


• Electromagnetic energy reflected / emitted
by earth surface features
• Energy received by the remote sensors
• Energy converted to electrical signal
• Electrical signal converted to DIGITAL form
• Digital signal transmitted to ground
• Ground station organizes data on
CDs/DVDs
• Data distributed to users
• Users analyze data and produce
information products


Electromagnetic Spectrum


Visible and Reflective Infrared


• Reflectance measurements in different wavelengths
– ratio of reflected to incident energy
– Ranges 0% to 100%
– Highly wavelength dependent

• Basic Premise of RS
– Each object on the earth surface has a unique reflectance pattern as a function of wavelength
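Reflectance, the fraction of incident energy that is reflected expressed as a percentage, can be sketched per wavelength band (the band names and energy values below are illustrative):

```python
def reflectance_percent(reflected, incident):
    """Reflected / incident energy, expressed as 0-100%."""
    return 100.0 * reflected / incident

# Illustrative measurements for green vegetation, per band:
incident = {"green": 10.0, "red": 10.0, "nir": 10.0}
reflected = {"green": 1.5, "red": 0.8, "nir": 5.5}
spectrum = {band: reflectance_percent(reflected[band], incident[band])
            for band in incident}
# The distinct pattern across wavelengths is the object's signature.
```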


Vegetation Reflectance Spectrum


High reflectance
region

Low reflectance
region


Reflectance Spectra of Earth Objects

• Different objects respond differently!


Atmospheric Windows
• The atmosphere interferes with the radiation
passing through it
• It is essential to block the harmful UV rays in
solar radiation from reaching the earth
• Should not block the radiation in wavelengths used for earth observation
• Choice of wavelengths should ensure
– Clear response recorded from Earth surface
features
– Minimal interference from atmospheric
constituents


Radiation Propagation through the Atmosphere


Atmospheric Characteristics
• Wavelength Bands
– Visible: 0.4 – 0.7 microns
– Short Wave Infrared (SWIR): 0.7 – 2 microns
– Mid Wave Infrared (MWIR): 3 – 5 microns
– Long Wave Infrared (LWIR): 8 – 12 microns
– Millimeter Wave (MMW): 3200 – 8600 microns


Atmospheric Windows

(Figure: atmospheric transmission (%) vs. wavelength (microns), showing the visible, near infrared, and far infrared windows)


Role of Atmosphere
• Wavelengths less affected by atmosphere
are chosen to design the sensors to
operate in:
• Visible 400 nm – 700 nm
• Near infrared – 700 nm – 2500 nm with a
few gaps
• Thermal infrared – 8 microns – 15 microns
• Microwave – 1 cm – 30 cm (approx.)

• Other wavelengths are blocked by


atmosphere


Specifications of Remote Sensors


• Technology used – Solid state / Electro-
mechanical
• Resolution
– IFOV of sensing element
– Number of wavelengths in which data are
recorded
– Number of levels in which data values are
quantized
– Frequency of data collection over a given area


Sensor Technology
• Sensors are broadly of two types:
• Electromechanical – scanning is performed by an
oscillating mirror deflecting upwelling radiation from
earth onto wavelength sensitive photodetectors.
Maintaining constant angular velocity of the mirror is
a problem
• Solid state – sensor consists of a linear array of
detectors, equal in number to the number of pixels in
a row of the image. Much more stable compared to
electromechanical scanning


Electromechanical Technology

Oscillating
mirror system

Detector
system

A pixel
Direction of
flight


Solid State Technology

Detector Array

Lens

Linear
Array of
pixels
Direction of
flight


Concept of Resolution
• Four types of resolution in remote sensing:
• Spatial resolution
• Spectral resolution
• Radiometric resolution
• Temporal resolution


Spatial Resolution
• Ability of the sensor to observe closely
spaced features on the ground
• Function of the instantaneous field of view of
the sensor
• Large IFOV  Coarse spatial resolution –
pixel covers more area on ground
• Small IFOV  Fine spatial resolution –
pixel covers less area on ground
• A sensor with pixel area 5x5 metres has a
higher spatial resolution than a sensor
with pixel area 10x10 metres

Effect of Spatial Resolution


• When resolution is very high we
perceive individual objects such as
buildings or roads
• When resolution is medium, we
perceive very large objects as
individual features, and areas as
textured regions
• When resolution is coarse, we perceive
color or tone variations, and large area
based features.


Very High Spatial Resolution

0.6m x 0.6m


Another Very High Spatial Resolution Image

1m x 1m


IIT Campus image from Ikonos Satellite



Medium Resolution Image


Portion of
Resourcesat
Satellite data
Pixel size:
5.8m x 5.8m


Low Resolution Image

23.25m x 23.25m


Effect of High Spatial Resolution


• High resolution images are information rich
– Spatial information
– Multispectral information
– Textural information
• Image can be viewed as a collection of objects
with spatial relationships – adjacent, north of,
south of, …


Spectral Resolution
• Ability of the sensor to distinguish differences in
reflectance of ground features in different
wavelengths
• Characterized by many sensors, each operating
in a narrow wavelength band
• Essential to discriminate between sub-classes of
a broad class such as vegetation
• Helpful in detecting objects under camouflage
• Essential in identifying state of objects such as
waterbodies, vegetation, road surface material,
elements in top soil of a mineralized area, …


High Spectral Resolution Response of Vegetation

(Figure: response vs. wavelength, from a large number of contiguous sensors of narrow bandwidth, ~ a few nanometers)


Coarse Spectral Resolution

• Most satellites provide multispectral images with very few spectral bands

(Figure: response vs. wavelength for a small number of sensors of large bandwidth)


Reflectance Spectra

(Figure: unique reflectance spectra of objects as a function of wavelength)


Temporal Resolution
• This depends on the return time of the satellite
• Return time is a function of the altitude at which the satellite is launched
• The higher the altitude, the larger the orbit circumference and the longer it takes to orbit the earth
• For frequent coverage, such as coverage of areas of military conflict, areas affected by natural disasters, or areas of massive human gatherings, images should be acquired asynchronously
• Steerable sensor systems make this feasible today


Wavelengths used for Imaging


• Gamma rays
• X-rays
• Visible/Infrared
• Microwaves
• Radio waves
• Ultrasound waves
• Seismic waves
(in order of increasing wavelength and decreasing frequency)


High Temporal Resolution Coverage


• Lower altitude satellites revisit the same area on earth more frequently
• Normal revisit time is approximately 16-25 days for different satellites
• Some satellites are launched in pairs with a time gap, e.g., IRS 1C / 1D
• Temporal resolution then doubles: revisit time decreases by 50%
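The halving is just even phasing of the pair (a sketch; the 24-day figure is an illustrative value within the 16-25 day range above):

```python
def revisit_days(single_satellite_days, n_satellites):
    """Evenly phased satellites divide the revisit interval."""
    return single_satellite_days / n_satellites

# An IRS 1C / 1D style pair: a 24-day revisit drops to 12 days.
paired = revisit_days(24, 2)
```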


Steerable sensor systems for high temporal resolution coverage
• Some sensors have steerable control
mechanism
• This enables revisit over any area whenever
desired
• Useful in applications like disaster mitigation,
military reconnaissance
• Steerable sensors also provide multiple views
of the terrain for stereo modeling


Steerable Sensor Systems


International Space Programs


USA, Russia, France, India, Japan, Taiwan, China/Brazil, Nigeria, Canada, European Space Agency, South Korea, Thailand

http://www.itc.nl/research/products/sensordb/searchsat.aspx


Useful Links
• www.isro.gov.in
• www.nrsc.gov.in
• www.digitalglobe.com
• http://global.jaxa.jp
• http://glovis.usgs.gov
• http://www.itc.nl/research/products/sensordb/searchsat.aspx
• http://www.geo-airbusds.com/en/
• https://directory.eoportal.org/web/eoportal/satellite-missions/k/kompsat-5
• http://www.geo-airbusds.com/en/160-formosat-2
• http://bhuvan.nrsc.gov.in
• https://vedas.sac.gov.in/vedas/


To be continued …
8/14/2019

GNR607 Principles of Satellite Image Processing
Instructor: Prof. B. Krishna Mohan
CSRE, IIT Bombay
bkmohan@csre.iitb.ac.in
Slot 2
Lecture 5 Image Display and Data Formats
August 06, 2019 10.35 AM – 11.30 AM

IIT Bombay Slide 1



Contents of the Lecture


Display of Remotely Sensed Images
• False color composites
• Natural color composites
• Gray scale images
Interleaving Formats for Multiband Images
• BIL
• BSQ
• BIP

GNR607 Lecture 04 B. Krishna Mohan



Image Display

(Figure: image data on disk feeds the Red, Green, and Blue guns of the image display system)



Concept of a Color Composite
• In order to generate a color display of a
satellite image on the monitor, we need to
choose
– Data to represent in red color
– Data to represent in green color
– Data to represent in blue color
• Such a display is known as a color
composite


False Color Composite
• A False Color Composite (FCC) is formed when the data assigned to red / green / blue color on the display is collected outside the visible region
• A standard FCC comprises (wavelength of data → display color):
– Near Infrared wavelength → Red
– Red wavelength → Green
– Green wavelength → Blue
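The standard FCC mapping can be sketched on tiny stand-in bands (pure Python; the pixel values are illustrative):

```python
def false_color_composite(nir, red, green):
    """Map NIR -> red gun, red -> green gun, green -> blue gun."""
    rows, cols = len(nir), len(nir[0])
    return [[(nir[i][j], red[i][j], green[i][j])
             for j in range(cols)] for i in range(rows)]

nir   = [[200, 210], [190, 220]]   # vegetation is bright in NIR
red   = [[ 40,  50], [ 45,  60]]
green = [[ 60,  70], [ 65,  80]]
fcc = false_color_composite(nir, red, green)
# Vegetation appears red in the FCC: high first (red-gun) value.
assert fcc[0][0] == (200, 40, 60)
```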


Example of FCC




Natural Color Composite
• A Natural Color Composite (NCC) is formed when the data assigned to red/green/blue is collected in the same wavelengths
• For instance (wavelength of data → display color):
– Red wavelength → Red
– Green wavelength → Green
– Blue wavelength → Blue

IIT Bombay Slide 7

Natural Color Composite



IIT Bombay Slide 7a


IIT Bombay Slide 8


Black & White Image
• A black/white image is one that has no
color but only white, black and shades of
gray.
• The smallest value at a pixel is 0 (black)
• The largest value is 2^L - 1 (white), where L is the
number of bits per pixel
• Intermediate values represent shades of
gray, from black increasing towards white
• For L=8, black = 0, white = 255

IIT Bombay Slide 9

Gray Scale

black dark gray light gray white


IIT Bombay Slide 10


Gray Scale Images
• Examples of gray scale images in R.S.
– An image of a single band of a multisp. image
– An image from radar sensor (SAR image)
– An image from panchromatic sensor
• How does this happen on a display
monitor?
When Red, Green and Blue display guns are fed
the same signal, the resulting display on the
screen will be black&white


IIT Bombay Slide 11

Panchromatic Image
MUMBAI
Data: IRS-1C, PAN
Consists of
1024x1024 pixels.


IIT Bombay Slide 11a

Panchromatic Image
Bangalore
Data: SPOT, PLA
Consists of 1024x1024
pixels.


IIT Bombay Slide 12

Common Data Structures to


Store Multiband Data


IIT Bombay Slide 13

Common Data Structures to


Store Multiband Data
• BIL – band interleaved by line
• BSQ – band sequential
• BIP – band interleaved by pixel


IIT Bombay Slide 14


Image Acquisition
[Diagram: optics project a ground strip, of width equal to
one pixel, into Band 1, Band 2 and Band 3 detectors; the
strip is scanned along the direction of satellite motion]


IIT Bombay Slide 15


BIL
• Band interleaved by line storage format
– MxN Image; K Bands; One row on ground
B11 B12 … B1N
B21 B22 … B2N

Bk1 Bk2 … BkN

• A single file on disk or CD contains M·K rows, each
having N columns; every K rows in the file
correspond to ONE ROW ON THE GROUND
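The record arithmetic implied above can be sketched in Python (hypothetical helper names; 1 byte per pixel assumed unless stated):

```python
def bil_offset(row, band, n_cols, n_bands, bytes_per_pixel=1):
    # In BIL order every group of K consecutive file records holds
    # the K band records of one ground row: record index = row*K + band
    return (row * n_bands + band) * n_cols * bytes_per_pixel

def read_band_bil(data, band, n_rows, n_cols, n_bands):
    # Gather one full band from raw BIL bytes, one ground row at a time
    return [list(data[bil_offset(r, band, n_cols, n_bands):
                      bil_offset(r, band, n_cols, n_bands) + n_cols])
            for r in range(n_rows)]
```

For a 2-row, 3-column, 2-band image, band 2 of ground row 1 starts at byte (0·2 + 1)·3 = 3.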


IIT Bombay Slide 16


BIL FILE STRUCTURE (image size: M rows, N columns, K bands)
Band 1 Row 1
…
Band K Row 1
Band 1 Row 2
…
Band K Row 2
…
Band 1 Row M
…
Band K Row M


IIT Bombay Slide 17


BIL
• BIL is a popular format for storing
multispectral images, and supported by most
remote sensing software (ERDAS, PCI, …)
• Well suited when multiband data analysis is
required
• Lot of data I/O involved when access to a
single band image is needed on sequential
access systems. Moderate overhead on
random access systems


IIT Bombay Slide 18


BSQ
• Band sequential method involves storing
one full single band image after another
B11 B12 … B1N
B21 B22 … B2N

BM1 BM2 … BMN

• The image for the second band, …, up to


Band K follow

IIT Bombay Slide 19


BSQ FILE STRUCTURE (image size: M rows, N columns, K bands)
Band 1: Row 1 … Row M
Band 2: Row 1 … Row M
…
Band K: Row 1 … Row M


IIT Bombay Slide 20


BSQ
• Ideally suited when the multiband image is
processed one band at a time, such as
image enhancement, neighbourhood
filtering, etc.
• More overheads when all band values are
required at each pixel
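The overhead is visible in the offset arithmetic: in BSQ the K values of one pixel lie a full band-image apart. A sketch (hypothetical names):

```python
def bsq_offset(row, band, n_rows, n_cols, bytes_per_pixel=1):
    # The whole of band k precedes band k+1: record index = band*M + row
    return (band * n_rows + row) * n_cols * bytes_per_pixel

def pixel_vector_bsq(data, row, col, n_rows, n_cols, n_bands):
    # K reads, each a full band-image apart -- costly on sequential media
    return [data[bsq_offset(row, b, n_rows, n_cols) + col]
            for b in range(n_bands)]
```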


IIT Bombay Slide 21


BIP
• Band interleaved by pixel
– Commonly used for storing color images, with
red, green and blue values alternating
• RGBRGBRGB…
– Not used in present times to store satellite
images
– Used in the early stages of Landsat data
distribution


IIT Bombay Slide 22


BIP Structure
First Row:
Band 1 Row 1 Pixel 1, Band 2 Row 1 Pixel 1, …, Band K Row 1 Pixel 1,
Band 1 Row 1 Pixel 2, …, Band K Row 1 Pixel 2, …, Band K Row 1 Pixel N

Second Row:
Band 1 Row 2 Pixel 1, …, Band K Row 2 Pixel 1, …, Band K Row 2 Pixel N

Mth Row:
Band 1 Row M Pixel 1, …, Band K Row M Pixel 1, …, Band K Row M Pixel N
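Since the K band values of a pixel are adjacent in BIP, one pixel vector is a single contiguous read. A sketch (hypothetical names):

```python
def bip_offset(row, col, band, n_cols, n_bands, bytes_per_pixel=1):
    # Pixel-major order: offset = ((row*N + col)*K + band) * bytes
    return ((row * n_cols + col) * n_bands + band) * bytes_per_pixel

def pixel_vector_bip(data, row, col, n_cols, n_bands):
    # All K band values of one pixel sit next to each other
    start = bip_offset(row, col, 0, n_cols, n_bands)
    return list(data[start:start + n_bands])
```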


IIT Bombay Slide 23


Formats for Distributing
Remotely Sensed Data
• Suppliers provide image data in different
formats:
– LGSOWG (super-structure format)
– Fast format
– HDF format
– GeoTiff format
– Proprietary software format (e.g., ERDAS IMG)


IIT Bombay Slide 24

Superstructure Format
•Very exhaustive data format
•Levels of processing
•Level 0 (Raw data)
•Level 1 (Radiometrically corrected)
•Level 2 (Radiometrically and geometrically corrected)

•Digital File Volume consists of five files


•Some differences for mag. tapes and CD-ROMs


IIT Bombay Slide 25

Digital File Volume


• Volume Directory File (Volume descriptor, File
Pointers and text record)
• Leader File (Descriptor, Header, ancillary,
Calibration, histogram, map projection, GCP,
annotation, Boundary, and Boundary annotation
Record)
• Image Data File (BIL or BSQ)
• Trailer File (Description and trailer records)
• Null volume File

IIT Bombay Slide 26

CD Product Structure

CDINFO – a file describing the Satellite name, Product
Code, Path, Row, Date Of Pass, Sensor, Volume number
etc. It basically describes the contents of the CD
(e.g., the CDINFO file of an IRS-P6 product).
Product1 – a folder containing files listed in the previous
slide
NOTE: Data were distributed on magnetic tapes prior to
CDs and DVDs. Even today 100 GB tapes are used for backups
in data centres


IIT Bombay Slide 27

File Details

PRODUCT1- a directory containing following files.

VOLUME.SensorCode/DAT – the Volume Directory File.


LEADER.SensorCode/DAT - the Leader File.
IMAGERYb.SensorCode/DAT- the Imagery File.
TRAILER.SensorCode/DAT – the Trailer File.
NULL.SensorCode/DAT - the Null Volume File.

where b = band number (absent for PAN products), and
SensorCode = three-character sensor code


IIT Bombay Slide 28

CD Product Structure
Options possible:

One CD containing one full scene

One scene stored onto two CDs

Two scenes stored on one full CD

All possibilities are accommodated by the


superstructure format
Most commercial software vendors read this format

IIT Bombay Slide 29


IRS Sensors


IIT Bombay Slide 29a

Extension to TIFF
GeoTIFF format

Supports additional tags to represent geographic


information like datum, projection, lat-long coordinates
etc.

Supports storing multiband data (>3 bands) in a single


file

Open format (not proprietary)


IIT Bombay Slide 30


Typical Data of interest
• Image dimensions (in rows, cols)
• Latitude-longitude extents of scene
• Sun angle (to interpret shading differences, shadows
etc.)
• Number of bands
• Type of processing done on the raw data
• Full scene in one CD / multiple CDs / multiple scenes on
single CD
• Ground Control Points


IIT Bombay Slide 31


Some Sample Calculations
Given pixel area on ground: l x b metre2,
Size of image: M x N
Area of image on ground: L x B km2
Number of bands: K
Format of storage: BIL
Extract from the image a window of
Area L1 x B1 (L1 < L, B1 < B)
OR
Size M1 x N1 (M1 < M, N1 < N)


IIT Bombay Slide 32


Image Size and Ground Area
[Diagram: an image of M rows by N columns covering
L km x B km on the ground; one pixel covers l metres x b metres]

IIT Bombay Slide 33

Area of Sub-image
• Number of rows in the sub-image = L1 / l (with L1 and l in the same units)
• Number of cols in the sub-image = B1 / b (with B1 and b in the same units)
• What are the coordinates of the window?

[Sketch: a window of L1 x B1 area inside the full image]


IIT Bombay Slide 34

Sub-image Extraction
• The user must specify the location of the sub-
image in the overall image
• This may be done using interactive facility such
as given the left top coordinate, extract a sub-
image of L1xB1 area, a window of M1 x N1 etc.
• Work out an algorithm to extract this assuming
BIL / BSQ / BIP organization of the data


IIT Bombay Slide 35


Algorithm
• Open the input image for read and output image for write
operations
• Skip the pixels up to the left top corner of the sub-image
• From the left-top corner, copy desired pixels into a
separate array
• Store this array in an output file
• Close the image files
• Remember – this should be done for K bands, stored
in BIL/BSQ/BIP form
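For BSQ storage, the steps above can be sketched as follows (in-memory bytes stand in for the disk file; hypothetical names):

```python
def extract_window_bsq(data, n_rows, n_cols, n_bands, top, left, m1, n1):
    # Skip to the window's top-left corner, copy n1 pixels from each of
    # the m1 desired rows, and repeat for every one of the K bands
    out = bytearray()
    for b in range(n_bands):
        for r in range(top, top + m1):
            start = (b * n_rows + r) * n_cols + left
            out += data[start:start + n1]
    return bytes(out)
```

For BIL or BIP, only the start-offset expression changes.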


IIT Bombay Slide 36


Example
• Given a SPOT-1 multispectral image
covering an area of size 25km x 25km,
extract the middle 10km x 10km window
• Assume BSQ form of storage for input as
well as output
• SPOT-1 acquired the image in three
spectral bands


IIT Bombay Slide 37


Disk File Size of the image
• Rows x Cols x Bands x Bytes per pixel
• For the SPOT window,
– 500 x 500 x 3 x 1 = 750000 bytes ~ 750 KB
• In case of Ikonos image, storage is 2 bytes per pixel,
4 metres resolution, 4 bands
• 10 km x 10 km Ikonos multispectral image size on
disk = 10000/4 x 10000/4 x 4 x 2
• = 10000 x 5000 bytes ~ 50 MB
• Size of panchromatic image =
– 10000 x 10000 x 2 = 10000 x 20000 bytes ~200 MB
• NOTE THE DIFFERENCE IN SIZE OF DATA!
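The arithmetic on this slide can be checked with a small helper (assumed resolutions: SPOT multispectral 20 m, Ikonos multispectral 4 m, Ikonos PAN 1 m):

```python
def image_size_bytes(extent_km, resolution_m, n_bands, bytes_per_pixel):
    # pixels per side = ground extent / pixel size
    side = int(extent_km * 1000 / resolution_m)
    return side * side * n_bands * bytes_per_pixel
```

image_size_bytes(10, 20, 3, 1) reproduces the 750000-byte SPOT window.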


IIT Bombay Slide 37


Images to cover entire globe
• Surface area of Earth = 5.1 x 10^8 km^2
• One Landsat scene covers 185 km x185 km
• If entire Earth is covered by Landsat images without
overlap, number of scenes required =

5.1 x 10^8 / (185)^2 ≈ 14902


• At 270MB/scene, ~3.84 TB

• If covered by images having sidelap and overlap,


more images are required
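The scene count and data volume follow directly:

```python
import math

earth_area_km2 = 5.1e8
scene_area_km2 = 185 * 185
scenes = math.ceil(earth_area_km2 / scene_area_km2)  # non-overlapping scenes
total_tb = scenes * 270 / (1024 * 1024)              # at 270 MB/scene -> TB
```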


Contd…


GNR607
Principles of Satellite Image
Processing
Instructor: Prof. B. Krishna Mohan
CSRE, IIT Bombay
bkmohan@csre.iitb.ac.in
Slot 2
Lecture 6-9 Histogram and Image Enhancement
August 6-19 2019

IIT Bombay Slide 1


Aug 06-19 2019 Lecture 6-9 Histogram & Image Enhan.

Contents of the Lectures


• Histogram of an Image
• Useful information from Histogram
• Contrast in Satellite Image
– Linear Contrast Enhancement
– Log, Exponential Enhancement
– Histogram Equalization and Specification
– Piecewise Enhancement and Pseudocolor
– Miscellaneous Operations
GNR607 Lecture 6-9 B. Krishna Mohan


IIT Bombay Slide 2


Concept of Histogram
• Given a digital image Fm,n of size MxN, we can define
f(j) = #{Fm,n = j, 0 ≤ m ≤ M-1; 0 ≤ n ≤ N-1}
• We refer to the sequence f(j), 0 ≤ j ≤ K-1, where K is the
number of gray levels in the image, as the histogram of
the image.
• f(n) is interpreted as the number of times gray level n
has occurred in the image.
• Obviously,
Sn f(n) = M . N
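The definition maps directly to code; a minimal sketch:

```python
def histogram(image, n_levels):
    # f[j] = number of pixels whose gray level equals j
    f = [0] * n_levels
    for row in image:
        for v in row:
            f[v] += 1
    return f
```

For an M x N image the counts always sum to M·N.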


IIT Bombay Slide 3

Sample Histogram


IIT Bombay Slide 4

Histogram
• With digital images, we have a range of values
that can be found at a given pixel. Depending on
the resolution of the sensor from which the
image is acquired, the gray level values may be
[0-255], [0-1023], [0-2047], [0-63], [0-127] etc. in
each band


IIT Bombay Slide 5


Histogram
• The normalized version of f(n) may be defined as
p(n) = f(n) / (M·N)
– p(n) → probability of the occurrence of gray
level n in the image (in relative freq. sense)
→ Σn p(n) = 1
MIN = min {n | f(n) ≠ 0}
MAX = max {n | f(n) ≠ 0}


IIT Bombay Slide 6


Application of Histogram
• Dynamic range of display system – min
to max range of intensities that can be
displayed
• Normal range is 0 – 255 for gray scale; for
color it is 0 – 255 for red, green and blue
• If Min-Max range of data is comparable to
dynamic range of display device, good
quality display is possible

IIT Bombay Slide 7

Histogram of image with good contrast


IIT Bombay Slide 8


Image


IIT Bombay Slide 9

Histogram of Low Contrast Image


IIT Bombay Slide 10


Low Contrast Image


IIT Bombay Slide 10a


Good Contrast Image


IIT Bombay Slide 10b


Good Contrast Image


IIT Bombay Slide 11


Information from Histogram
• The information conveyed by the occurrence of an
event whose probability of occurrence is p(n) is given by

I(n) = ln{1/p(n)} = -ln{p(n)}

• This implies that if the probability of occurrence of an


event is low, then its occurrence conveys significant
amount of information

• If the probability of an event is high, the information


conveyed by its occurrence is low


IIT Bombay Slide 12


Average Information – Entropy
• Average information conveyed by a set of events with
probabilities p(i), i=1,2,…, is given by

H = - Σn p(n) ln {p(n)}

• H is called entropy and is extensively used in image
processing operations
• H is highest when all gray levels are equally likely:
if p(n) = k = 1/K for all K levels, Hmax = -Σn k·ln(k) = ln K
• H is zero when p(j)=1 for some j, and p(k) = 0 for all k ≠ j
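Computed from a histogram, skipping empty levels (a sketch):

```python
import math

def entropy(hist):
    # H = -sum p(n) * ln p(n), with p(n) = f(n) / (M*N)
    total = sum(hist)
    return -sum((c / total) * math.log(c / total) for c in hist if c)
```

A flat histogram over K levels gives H = ln K; a single populated level gives H = 0.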


IIT Bombay Slide 13

Role of Entropy
• Indicator whether very few gray levels are
actually present, or wide range of levels in
sufficient numbers.
• Entropy is also used for threshold selection
• e.g., separating image into object of interest and
background


IIT Bombay Slide 14

Applications of histogram
• Can be related to the discrete probability density
function
• The gray level corresponding to the highest
frequency of occurrence is called the modal
level, as seen in the histogram
• If the image has two classes, the histogram may
be bimodal with means m1 and m2 and standard
deviations s1 and s2 respectively.


IIT Bombay Slide 15

Applications of Histogram
• A threshold or cutoff T between m1 and m2
• All gray levels below T – one class
• Gray levels T and above – second class
• Finding the value of T - Threshold
selection.
• Indicate the classes by 0 and 255 when
displayed on the screen.


IIT Bombay Slide 16

A Bimodal Histogram
[Bimodal histogram f(n) vs n: Peak 1 at Mode 1 = m1 and
Peak 2 at Mode 2 = m2, each peak with width ≈ s]

IIT Bombay Slide 17


Image Statistics from Histogram
• MIN gray level MIN = min {n : f(n) ≠ 0}
• MAX gray level MAX = max {n : f(n) ≠ 0}
• Mean gray level
m = Σn n·f(n) / (M·N)
• Variance
s² = Σn f(n)·(n-m)² / (M·N)
• Median
Med = smallest k such that Σ(n=0..k) f(n) ≥ (M·N)/2
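These statistics can be read off the histogram without revisiting the image; a sketch (the median is taken as the smallest level whose cumulative count reaches half the pixels):

```python
def stats_from_histogram(f):
    total = sum(f)                                      # = M*N
    mean = sum(n * c for n, c in enumerate(f)) / total
    var = sum(c * (n - mean) ** 2 for n, c in enumerate(f)) / total
    cum = 0
    for n, c in enumerate(f):
        cum += c
        if cum >= total / 2:                            # median level reached
            return mean, var, n
```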

IIT Bombay Slide 18

Image Statistics from Histogram


• Skewness Sk = [1 / ((MN-1)·s³)] · Σn (n - m)³ f(n)

Skewness is positive if the bulk of the histogram lies
to the left of the mean, i.e., it has a long tail
towards the higher gray levels
Skewness is negative if the bulk lies to the right of
the mean, with a long tail towards the lower gray levels


IIT Bombay Slide 19

Skewness
[Two sketches of f(n) vs n: a negatively skewed
histogram (left) and a positively skewed histogram (right)]


IIT Bombay Slide 20


Image Statistics from Histogram
• Kurtosis Ku = [1 / ((MN-1)·s⁴)] · Σn (n - m)⁴ f(n) - 3

• For Gaussian distributions the raw kurtosis equals 3, so
the -3 term above gives the excess kurtosis. Positive
excess kurtosis indicates a sharply peaked distribution,
and negative excess kurtosis denotes a flat distribution,
with the uniform distribution being the limiting case.


IIT Bombay Slide 20a


Image Statistics from Histogram

http://www.statisticshowto.com/pearson-mode-skewness/
http://mvpprograms.com/help/mvpstats/distributions/SkewnessKurtosis


Digital Image Enhancement

IIT Bombay Slide 21


Motivation for Image
Enhancement
• Image data when received in its original
form often has poor visible appearance,
lacking in adequate contrast to perceive
the important features in it
• The visual appearance needs to be
enhanced through image enhancement
procedures


IIT Bombay Slide 22

Example

IIT Bombay Slide 23


What is Contrast?
• Contrast is the difference in the intensity of the
object of interest compared to the background
(rest of the image)
• The perceptual contrast does not change
linearly with the difference in the intensity
• The perceptual contrast is a function of the
logarithm of the difference in the object and
background intensities
• This means that in the darker regions, small
changes in intensity can be noticed, but in
brighter regions, the difference has to be much
more

IIT Bombay Slide 24

Contrast


IIT Bombay Slide 25

Simultaneous contrast

From Digital Image Processing by R.C. Gonzalez and R.W. Woods, Prentice-
Hall, 3rd ed.


IIT Bombay Slide 26


Histogram for Image Enhancement
• Given a 1-D histogram (computed for a
black/white image or for one band in a
multispectral image), it conveys
information about the quality of the image.
• Positively skewed histogram – darkish
image
• Negatively skewed histogram – lightish
image


IIT Bombay Slide 27

Negatively skewed histogram (left); positively skewed histogram (right)


IIT Bombay Slide 28


Information from Histogram
• Minimum and maximum gray levels in the
image
• Imin = min {i | h(i) ≠ 0}
• Imax = max {i | h(i) ≠ 0}
• A poor contrast image will have (Imax – Imin)
range much less than the display range of
the monitor or printer.


IIT Bombay Slide 29


Low Contrast Image


IIT Bombay Slide 30

Image Histogram


IIT Bombay Slide 31


Quality of Image Display
• Separation between minimum and
maximum levels in the image should
match the dynamic range of the display
system
• High Imin or small Imax will result in poor
visual quality images due to lack of
contrast – difference in brightness of
object of interest relative to background

IIT Bombay Slide 32


Adjustment of Contrast
• This is done in several ways
– Linearly, from minimum to maximum level
– Preferential adjustment, emphasis on dark
levels
– Preferential adjustment, emphasis on bright
levels
– Preferential adjustment, emphasis on number
of pixels at each gray level
– Based on a desired histogram shape


IIT Bombay Slide 33


Global Operations on Images
• Global operations are applied to the pixels
in the image, without taking note of their
locations.
• g = H(f) where H is some operation on the
image f.
• If location of the pixel is also included,
then it is a local operation


IIT Bombay Slide 34

Point Operations
• Point Operations are applied to pixels
solely on the basis of the gray levels found
there, without taking into account the pixel
position.
• Point operations lead to mapping of gray
levels from one set of values to another
set.
• gij = H[fij], where H is some transformation

IIT Bombay Slide 35


Point Operations
• In case of point operations, gray level transformations
need NOT be computed at each pixel in the image
• If the number of bits assigned to each pixel is L, then the
transformation has to be computed only for 2L gray
values, 0 , 1, … , 2L-1
• This permits creating a look-up table for mapping each
gray level to its new level, supported by display
hardware and facilitated by programming libraries
Input f1 f2 f3 f4 … fn
Output g1 g2 g3 g4 … gn


IIT Bombay Slide 36

Linear Contrast Stretch


• Suppose the display range of the monitor
is Omin to Omax, which means the monitor
can display (Omax – Omin + 1) levels
• Example: Omin = 0
Omax = 255
• Let the input range be Imin to Imax.


IIT Bombay Slide 37


Linear Contrast Stretch
• When the input image has poor contrast, then
the range of gray levels in the image is much
less than the display range of the monitor
• (Omax – Omin) >> (Imax – Imin)
• If Imax is in the left half of the gray scale, then the
image appears dark
• If Imin appears in the right half of the gray scale,
then the image appears light or faded out


IIT Bombay Slide 38


Un-enhanced image in ERDAS
• After loading the image into the viewer,
select
• Raster – Contrast – General Contrast –
Linear – (Gain=1.0 and Offset=0)
• This step removes any default contrast
enhancement performed by ERDAS and
displays the image as it originally is.


IIT Bombay Slide 39


Linear Contrast Stretch
• Low contrast images can be linearly enhanced
using simple contrast stretch operations. Then
the linear contrast stretch operation is defined by
y = [(Omax - Omin) / (Imax - Imin)] · (x - Imin)
  = m·(x - Imin), where
m = (Omax - Omin) / (Imax - Imin)
x is the input level and y is the output level
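As a lookup table, a sketch follows; the + Omin offset and the clipping generalize the slide's formula to Omin ≠ 0 and to inputs outside [Imin, Imax]:

```python
def linear_stretch_lut(i_min, i_max, o_min=0, o_max=255, n_levels=256):
    # y = Omin + m*(x - Imin), clipped to the display range
    m = (o_max - o_min) / (i_max - i_min)
    return [max(o_min, min(o_max, round(o_min + m * (x - i_min))))
            for x in range(n_levels)]
```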

IIT Bombay Slide 40

Linear Contrast Stretch
[Graph: output level (Omin to Omax) vs input level (Imin to Imax),
with the m = 1 line shown for reference]
m is the slope of the line
m > 1 → stretching
m < 1 → compressing

IIT Bombay Slide 41


Low Contrast Image


IIT Bombay Slide 42

After
linear
contrast
stretch


IIT Bombay Slide 43


Low Contrast Image


IIT Bombay Slide 44

After Enhancement

IIT Bombay Slide 45


Use of the Histogram
• The previous methods just make use of
the extreme gray levels in the input image
• The number of pixels at that gray level is
not considered.
• If one pixel is present at 0 and one pixel
present with gray level 255, then the entire
dynamic range of the display device is
considered occupied


IIT Bombay Slide 46

Effective Gray Scale Limits
[Histogram h(n) over gray levels 0 to 255, with the effective
limits A and B marked where the populated range begins and ends]

IIT Bombay Slide 47


Interactive Choice of Limits
• A and B can be either interactively chosen or
• Apply a minimum count of the number of pixels
at the extreme gray levels
• Omin = 0; Omax = 255;
• Imin = A; Imax = B
• Apply the standard linear contrast stretch
procedure
• y = m.(x-Imin)


IIT Bombay Slide 48

Automatic Choice of Limits

• Most software packages perform default contrast


enhancement prior to display of an image.
• In such cases, automatic computation of Imax and Imin
are computed as
• Imin = m – k.s
• Imax = m + k.s
• k is an integer, often equal to 1 or 2
• This is also referred to as Standard Deviation
Stretch
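A sketch of the limit computation, clipped to an 8-bit gray range:

```python
def stddev_stretch_limits(mean, sigma, k=2):
    # Imin = m - k*s, Imax = m + k*s, clipped to [0, 255]
    return max(0, round(mean - k * sigma)), min(255, round(mean + k * sigma))
```

The limits then feed the ordinary linear contrast stretch.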


IIT Bombay Slide 49


Non-linear Stretch
• Human visual system is not linear; so are
films and computer monitors
• When we wish to examine the details in
the dark portion of the image at the
expense of the bright portion, then linear
contrast stretch is not very useful


IIT Bombay Slide 50


Logarithmic Stretch
• y = k.log(1+x) + c
• Nature of log curve – rapid rise initially,
and levels off later
• Greater difference in values of log function
for smaller gray levels, smaller difference
for larger gray levels
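A sketch with c = 0 and k chosen so the top input level maps to the top of the display range:

```python
import math

def log_stretch_lut(n_levels=256, o_max=255):
    # y = k * log(1 + x), with k = o_max / log(1 + (n_levels - 1))
    k = o_max / math.log(n_levels)
    return [round(k * math.log(1 + x)) for x in range(n_levels)]
```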


IIT Bombay Slide 51

Logarithmic transformation
[Graph: output vs input, rising steeply at low input levels
and levelling off at high input levels]


IIT Bombay Slide 52


IIT Bombay Slide 53


IIT Bombay Slide 54

Another Example


IIT Bombay Slide 55


After Logarithmic Stretch


IIT Bombay Slide 56


Exponential Stretch
• Exponential stretch is the opposite of log
stretch, and enhances the details in the
brighter portion of the gray scale
• y = k·x^r + c, r > 1
• The curve rises much faster towards the
higher input levels, expanding the bright
end of the gray scale
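An analogous sketch for the power-law mapping y = k·x^r (c = 0), normalized so the top level maps to the top of the display range:

```python
def power_stretch_lut(r=2.0, n_levels=256, o_max=255):
    # r > 1 compresses the dark end and expands the bright end
    k = o_max / (n_levels - 1) ** r
    return [round(k * x ** r) for x in range(n_levels)]
```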


IIT Bombay Slide 57


IIT Bombay Slide 58


IIT Bombay Slide 59


Another Example


IIT Bombay Slide 60

After Exponential
Stretch


IIT Bombay Slide 61


Lookup Table
• The contrast enhancement operations can be
implemented using a Lookup Table.
Input   Output
x0      y0
x1      y1
...
x(L-1)  y(L-1)

• The advantage here is that the operations have to be


performed just on the number of gray levels, and
stored in a table

IIT Bombay Slide 62


Point Operations
• Point operations are implemented in real-time
• Display systems have look-up tables, whose entries
by default are of the form y = x (no change of values
as found on the disk or in RAM)
• Manipulation of the table entries results in change in
the displayed image
• For color display, the display system has three
lookup tables
• Device manufacturers provide access to the tables
through software


IIT Bombay Slide 63


Point Operations
• In Turbo C, the function SetRGBPalette
allows the user to set the values of the
RGB palette so that the colors displayed
are instantly modified.


IIT Bombay Slide 64


Histogram Based Enhancement
• Enhancement based on the extreme gray
levels does not take into account the
population of pixels at each gray level.
• One of the ways of enhancement
considering the complete histogram is
based on histogram equalization.
• Another approach is histogram
specification, built on equalization.


IIT Bombay Slide 65


Motivation
• Enhancement of images taking into
account the relative frequency of
occurrence of gray levels in the image
• Frequently occurring gray levels should be
given preferential treatment and they
should be well separated on the gray scale
to result in better display quality


IIT Bombay Slide 66

Low Contrast Image



IIT Bombay Slide 67

Histogram Equalized Image



IIT Bombay Slide 68

So how do we achieve
histogram equalization?


IIT Bombay Slide 69


Histogram Equalization
• When the image contains very few similar
valued gray levels, then the ability to interpret it
is hampered. It is desirable that the dynamic
range of the display device is better utilized.
• One way to achieve this is by transforming the
image such that all gray levels have equal
likelihood of occurrence.


IIT Bombay Slide 70


Principle of Histogram Equalization

Given an imperfect histogram, and an


ideal histogram that has equal
population of all gray levels, map the
input histogram to approximate the
“equalized” histogram.


IIT Bombay Slide 71

Principle of Histogram Equalization


If the input image is considered as a random variable
x, with a probability density function fX(x), then let us
define a new random variable z = FX(x) where
FX(x) = ∫ fX(y) dy, integrated from y = -∞ to x

By adopting the above theory to discrete images, fX (x)


will be the histogram, and FX (x) is the cumulative
histogram

IIT Bombay Slide 72


Histogram Equalization
• For the random variable z what is the
resulting probability density function?
fz(z) = fx(x) / | dz/dx|
• Since Z = FX(x), dz/dx = fX(x)
• Therefore fZ(z) = 1 which means that z will
have a constant pdf. (This type of
mathematical distribution is called uniform
distribution).

IIT Bombay Slide 73


Translation of Theory into Practice
• Essentially, the enhanced image which
has an equalized histogram has (ideally)
equal number of pixels at each gray level.
In practice, we can only achieve an
approximation of it.
• For an equalized histogram, the
cumulative histogram is known given the
size of the image and the number of gray
levels.

IIT Bombay Slide 74


How to equalize the histogram?
• For a gray level, corresponding to its cumulative
frequency, find the nearest gray level that
matches the ideal cumulative frequency
• Therefore image enhancement by histogram
equalization is achieved by the mapping of
gray levels is based on the actual cumulative
histogram of the image and the desired
cumulative histogram


IIT Bombay Slide 75


Equalization
Histogram


IIT Bombay Slide 76


Desired Cumulative Histogram
• Desired cumulative histogram (for a 512x512 image, 16 levels):
Level:      0      1      2      3      4      5      6      7
Cum. freq.: 16384  32768  49152  65536  81920  98304  114688 131072
Level:      8      9      10     11     12     13     14     15
Cum. freq.: 147456 163840 180224 196608 212992 229376 245760 262144


IIT Bombay Slide 77

Interpretation
• E.g., consider gray level 4 in the input
• h(4) = 13108, cum.freq(4) = 31460.
• For the ideal equalized histogram, the
nearest matching cum. freq above
31460 is
cum.freq(1) = 32768.
• Therefore input gray level 4 maps to
gray level 1 in the output.
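The mapping rule of the last few slides can be sketched as follows (hypothetical helper name):

```python
import math

def equalize_mapping(hist, n_levels):
    # Map each input level to the smallest output level whose ideal
    # cumulative count (total/n_levels per level) covers the actual one
    total = sum(hist)
    ideal = total / n_levels
    mapping, cum = [], 0
    for c in hist:
        cum += c
        mapping.append(min(n_levels - 1, max(0, math.ceil(cum / ideal) - 1)))
    return mapping
```

For the slide's example, cum.freq(4) = 31460 with ideal steps of 16384 gives ceil(31460/16384) - 1 = 1, i.e. input level 4 maps to output level 1.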


IIT Bombay Slide 78

Interpretation of Histogram Equalization


•Number of distinct non-zero gray levels in the output
reduced
•This happens when the input histogram has long tails
•Histogram equalization optimizes the utilization of the
available display range
•Merges gray levels having very few pixels
•Separates those levels that are heavily populated.

IIT Bombay Slide 78a


Example of Histogram Equalization

Input Image

IIT Bombay Slide 78b


Example of Histogram Equalization

After equalization

IIT Bombay Slide 79


Alternative Computation of Equalized
Histogram Levels
•Probability density function of equalized image = constant
•ps(sk) = 1/(L-1), where L is the number of levels
•pr(rk) = nk / MN, where the image dimensions are M rows
and N columns, and nk is the number of pixels having gray
level rk
•Based on the definition of the transformed pixel, we can
write sk = (L-1)·Σ(j=0..k) pr(rj)


IIT Bombay Slide 80


Histogram Specification
• If a good-contrast image of a given area is available from one sensor, its histogram can be used as the reference (target) when modifying the histogram of a low-contrast image
• This process, called Histogram Specification, is implemented through a procedure similar to histogram equalization

IIT Bombay Slide 81


Histogram Specification
Basic idea
• Generate cumulative histogram of input image
• Generate cumulative histogram of the image
supposedly having good contrast
• Match cumulative histogram of input image with
that of good contrast image
• Complete process by mapping gray levels from
input image to levels from the good contrast
image
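A minimal sketch of this matching procedure, assuming 8-bit single-band images held as NumPy arrays (`match_histograms` is an illustrative name, not a routine from the course software):

```python
import numpy as np

def match_histograms(src, ref, levels=256):
    """Histogram specification: send each source gray level to the
    reference gray level whose cumulative distribution value is closest,
    i.e. match the two cumulative histograms."""
    src_cdf = np.cumsum(np.bincount(src.ravel(), minlength=levels)) / src.size
    ref_cdf = np.cumsum(np.bincount(ref.ravel(), minlength=levels)) / ref.size
    # for every source level, find the reference level with the nearest CDF
    mapping = np.array([np.abs(ref_cdf - c).argmin() for c in src_cdf],
                       dtype=np.uint8)
    return mapping[src]
```

For color or multiband images, the mapping is applied to each band separately.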


IIT Bombay Slide 82

Original artwork from the book Digital Image Processing by R.C. Gonzalez and
R.E. Woods © R.C. Gonzalez and R.E. Woods, reproduced with permission
granted to instructors by the authors on the website www.imageprocessingplace.com


IIT Bombay Slide 83

Piece-wise Enhancement
• In piece-wise contrast enhancement, the
input gray scale is divided into several
sub-ranges, and a different enhancement
may be applied to each sub-range. This
requires prior knowledge of the gray scale
range of the objects of interest


IIT Bombay Slide 84


Figure: piecewise contrast enhancement mapping y(x), with breakpoints at x1 … x5


IIT Bombay Slide 85


Thresholding
A trivial form of enhancement of the input
image is to map all values below a
threshold gray level to a constant value,
and those gray levels from the threshold
value and above to another constant
value. This can be expressed as
Y = y1, for x < T
Y = y2 for all other values of x.

IIT Bombay Slide 86


Thresholding
• Another option is to map graylevels
between two bounds to a single value,
while mapping all others to a second
value.
• Y = y1 if T1 < x < T2
• Y = y2 otherwise
• This assumes that the gray level range of
the desired object is known
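Both forms of thresholding can be sketched in a few lines (the thresholds and output values here are illustrative):

```python
import numpy as np

def threshold(x, T, y1=0, y2=255):
    """Map gray levels below T to y1 and the rest to y2."""
    return np.where(x < T, y1, y2).astype(np.uint8)

def band_threshold(x, T1, T2, y1=255, y2=0):
    """Map gray levels strictly between T1 and T2 to y1, all others to y2."""
    return np.where((x > T1) & (x < T2), y1, y2).astype(np.uint8)

x = np.array([10, 60, 90, 200], dtype=np.uint8)
print(threshold(x, 60))            # [  0 255 255 255]
print(band_threshold(x, 50, 100))  # [  0 255 255   0]
```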

IIT Bombay Slide 87


IIT Bombay Slide 88

Threshold at gray level 60


IIT Bombay Slide 89


Density Slicing
• Density slicing is a simple extension of thresholding,
where a separate threshold is used for every sub-range
of gray levels in the image. For instance, if the input
image is thresholded using m different thresholds, then
the resultant image Y is given by the equations
Y = y1 if 0 ≤ x < T1
Y = y2 if T1 ≤ x < T2
…
Y = ym if Tm-1 ≤ x ≤ Tm
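Since these equations simply bin the gray scale, `np.digitize` expresses density slicing directly (the thresholds and output values below are hypothetical):

```python
import numpy as np

def density_slice(x, thresholds, values):
    """Replace each pixel by values[i], where i is the index of the
    threshold interval its gray level falls in.
    len(values) must equal len(thresholds) + 1."""
    return np.asarray(values, dtype=np.uint8)[np.digitize(x, thresholds)]

x = np.array([5, 40, 90, 220], dtype=np.uint8)
print(density_slice(x, thresholds=[32, 64, 128], values=[0, 85, 170, 255]))
```

Assigning a distinct display color to each output value gives the pseudo-colored rendering shown on the following slides.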


IIT Bombay Slide 90

Input image
and its
histogram


IIT Bombay Slide 91

Density Sliced
to four levels


IIT Bombay Slide 92

Pseudo-
coloring to
visualize
objects at
different
intensity
ranges


IIT Bombay Slide 93


IIT Bombay Slide 94

More Examples of
Pseudocoloring


IIT Bombay Slide 95


Negative of Image
• Negative of an image reverses the bright
and dark portions of the image. This is a
simple transformation given by
• y = Omax – x
• For 256 level images, y = 255 - x


IIT Bombay Slide 96

Input
Image


IIT Bombay Slide 97

Negative of
input image


IIT Bombay Slide 98


Adaptive Image Enhancement
Algorithm
• Enhancement varies according to local
conditions in the image
• Divide image into zones and apply
enhancement separately for each zone
• Alternatively estimate image content –
edges or lines and vary enhancement
accordingly

IIT Bombay Slide 99

Summary of Contrast
Enhancement methods
• Linear Contrast Enhancement
– Identify minimum and maximum gray levels
in the input
– Specify minimum and maximum gray levels
in the output image
– Compute the gray level mapping based on
the line y = m.x + c


IIT Bombay Slide 100


Summary of Contrast Enhancement
methods
• Logarithmic Contrast Stretch
– Identify minimum and maximum gray levels in
the input
– Apply the transformation y = k.log(1+x)
– k is user-specified parameter.
– The resulting floating point values of y are
scaled linearly to the desired output range as
before


IIT Bombay Slide 101


Summary of Contrast Enhancement
methods
• Exponential Contrast Stretch
  – Identify the input minimum and maximum gray levels
  – Apply the transformation y = k·x^r
  – The parameter r controls the rate at which x^r rises
  – The resulting values of y are rescaled to the desired output range using linear stretching
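A minimal sketch of the logarithmic and exponential stretches, sharing the final linear rescaling step (the function names are illustrative):

```python
import numpy as np

def rescale(y, out_min=0, out_max=255):
    """Linearly rescale floating point values to the desired output range."""
    y = y.astype(float)
    return ((y - y.min()) * (out_max - out_min) / (y.max() - y.min())
            + out_min).astype(np.uint8)

def log_stretch(x, k=1.0):
    return rescale(k * np.log1p(x.astype(float)))   # y = k.log(1+x)

def power_stretch(x, r=2.0, k=1.0):
    return rescale(k * x.astype(float) ** r)        # y = k.x^r
```

The log stretch expands the dark end of the histogram; the power stretch with r > 1 expands the bright end.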

IIT Bombay Slide 102

Common Features of Contrast Enhancement Methods
• All these methods map one gray level to another
• Location of gray level in the image is not
relevant
• All the methods can be implemented in real time
using look-up tables
• All the methods operate on one color or one
band at a time in case of color or multiband
images
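The look-up-table remark can be made concrete: a point operation depends only on the gray level, so the transform can be precomputed once for all 256 levels and applied by indexing (sketch; `apply_lut` is an illustrative name):

```python
import numpy as np

def apply_lut(image, transform):
    """Precompute transform(g) for g = 0..255, then apply it to the
    whole image by simple indexing -- fast enough for real time."""
    lut = np.array([transform(g) for g in range(256)], dtype=np.uint8)
    return lut[image]

img = np.array([[0, 128], [200, 255]], dtype=np.uint8)
print(apply_lut(img, lambda g: 255 - g))    # image negative via a LUT
```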

GNR607
Principles of Satellite Image
Processing
Instructor: Prof. B. Krishna Mohan
CSRE, IIT Bombay
bkmohan@csre.iitb.ac.in
Slot 2
Lecture 10-14 Neighborhood Operations
August – Sept 2019

IIT Bombay Slide 1


August 2019 Lecture 10-14 Neighborhood Opns

Contents of the Lectures


Neighborhood Operations
• Concept of Neighborhood Operations
• Utility of neighborhood in smoothing and edge
enhancement
• Image smoothing algorithms
• Gradient operations
• Edge enhancement using gradient operators
GNR607 Lecture 10-14 B. Krishna Mohan


IIT Bombay Slide 2

NEIGHBORHOOD
OPERATIONS


IIT Bombay Slide 3


Pixel and Neighborhood
A B C
D X E
F G H

• Pixel under consideration X


• Neighbors of X are A, B, C, D, E, F, G, H
• Size of neighborhood = 3x3
• For neighborhoods of size m×n, m and n are odd, so that there is a unique pixel at the centre of the neighborhood

IIT Bombay Slide 4


4-neighborhoods
A B C
D X E
F G H

• B, D, E and G form the 4-neighborhood of X


• 4-neighbors are physically closest to X, at
one-unit distance


IIT Bombay Slide 5


8-neighborhood
A B C
D X E
F G H
• When A, C, F and H are ALSO included with B, D, E, G as
neighbors, the resulting 8-pixel set is the 8-neighborhood of X
• A, C, F and H are the diagonal neighbors, sqrt(2)
times farther from X


IIT Bombay Slide 6


Larger Neighborhoods
o o o o o
o o o o o
o o X o o 5 x 5 neighborhood
o o o o o
o o o o o

• Larger neighborhoods are used based on need;
the computational load grows with the square of
the neighborhood side
• 3x3 → 9 neighbors; 5x5 → 25 neighbors …


IIT Bombay Slide 7


Point Operations v/s Neighborhood
Operations
• Point operations do not alter the sharpness or
resolution of the image
• Gray level associated with a pixel is manipulated
independent of the gray levels associated with
neighbors
• Pixel operations cannot deal with noise in the
image, nor highlight local features like object
boundaries


IIT Bombay Slide 8


Neighborhood Effect
15 17 16 16 17 19
18 17 15 18 15 45
17 14 16 16 20 17
Normal Noise?

• Abnormalities can be located by


comparing a pixel with neighboring pixels


IIT Bombay Slide 9


Neighborhood Effect
15 17 16 16 17 38
18 17 15 18 40 39
17 14 16 39 40 38
Normal region Boundary
• Sharp transitions from one region to
another are marked by large differences in
pixel values at neighboring positions


IIT Bombay Slide 10


Neighborhood Operations
• Results of operations performed on the
neighborhood are posted at the location of the
central pixel
• The values in the input image are not
overwritten, instead the results are stored in an
output array or file
• Cannot be computed in real time via look-up
tables, since the number of possible gray-level
configurations in a neighborhood is very large


IIT Bombay Slide 11


Neighborhood Operations
• Simple averaging
A B C
D X E
F G H
• g(X) = (1/9)[f(A) + f(B) + f(C) + f(D) + f(X) +
f(E) + f(F) + f(G) + f(H)]
• The output gray level is the average of the gray
levels of all the pixels in the 3x3 neighborhood


IIT Bombay Slide 12


Example
15 17 16 15 17 16
18 17 15 18 37 15
17 14 16 17 14 16
Case 1 Case 2
• In case 1, after averaging, the central element
17 is replaced by the local average 16 –
negligible change
• In case 2, after averaging, the central element
37 is replaced by 18 – significant change
• Averaging is a powerful tool to deal with random
noise

IIT Bombay Slide 13


Neighborhood Operations -
Procedure
• The procedure involves applying the
computational step at every pixel, considering its
value and the values at the neighboring pixels
• Then the neighborhood is shifted by one pixel to
the right and the centre pixel of the new
neighborhood is in focus
• This process continues from left to right, top to
bottom


IIT Bombay Slide 14

Figure: image processing step

IIT Bombay Slide 15


Mathematical form for averaging
• In general, we can write

  g(X) = (1/|N(X)|) Σ_{i=1..K} f(A_i)

  where K is the number of neighbors A_i; for a 3x3
  neighborhood, A_5 refers to X, the central pixel
• It is obvious that all neighbors are given equal
weightage during the averaging process

IIT Bombay Slide 16


General form for averaging
• In case different weights are preferred for different
neighbors, then we can write

  g(X) = Σ_{i=1..K} w_i f(A_i) / Σ_{i=1..K} w_i

• For simple averaging over a 3x3 neighborhood, w_i = 1/9, i = 1, 2, …, 9
• We can alter, for example, the weights for 4-neighbors
and 8-neighbors. In such a case, w_i is not a constant for
all values of i.

IIT Bombay Slide 17


Averaging as Space Invariant Linear
Filtering
• Simple averaging can be represented as a linear space
invariant operation:

  g_{i,j} = Σ_{k=-w..w} Σ_{l=-w..w} h_{k,l} f_{i-k,j-l}

  h_{k,l} = 1/((2w+1)(2w+1)),  k, l = -w, …, 0, …, w

  For a 3x3 window, w = 1; for a 5x5 window, w = 2, …

• This is the 2-d discrete convolution of h with f: g = f*h

IIT Bombay Slide 18


Concept of Convolution
• Convolution is a weighted summation of inputs
to produce an output; weights do not change
anytime during the processing of the entire data
• If the input shifts in time or position, the output
also shifts in time or position; character of the
processing operation will not change
• The weights with which the pixels in the image
are modified are represented by the term filter


IIT Bombay Slide 19

Filter Mask
• The filter can be compactly represented using the
weights or multiplying coefficients:
• e.g., 3x3 averaging filter
• 0.111 0.111 0.111 1 1 1
• 0.111 0.111 0.111 or (1/9) 1 1 1
• 0.111 0.111 0.111 1 1 1
• This implies that the pixels in the image are multiplied
with corresponding filter coefficients and the products
are added
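A direct, unoptimized sketch of this multiply-and-add operation, applied with the 3x3 averaging mask to the noisy neighborhood from the earlier example:

```python
import numpy as np

def convolve2d(f, h):
    """Naive 2-D discrete convolution (valid region only): the mask h is
    mirror-reflected and slid over the image, as described in the text."""
    w = h.shape[0] // 2
    hr = h[::-1, ::-1]                       # mirror reflection of the mask
    g = np.zeros((f.shape[0] - 2*w, f.shape[1] - 2*w))
    for i in range(g.shape[0]):
        for j in range(g.shape[1]):
            g[i, j] = np.sum(hr * f[i:i + 2*w + 1, j:j + 2*w + 1])
    return g

f = np.array([[15, 17, 16],
              [18, 37, 15],
              [17, 14, 16]], dtype=float)
h = np.full((3, 3), 1/9)                     # 3x3 averaging mask
print(round(convolve2d(f, h)[0, 0]))         # noisy centre 37 -> local mean 18
```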


IIT Bombay Slide 20


Reduced neighborhood influence
0.05 0.15 0.05
0.15 0.20 0.15
0.05 0.15 0.05
• Central pixel is given 20% weight, 4-
neighbors 15% weight. Diagonal
neighbors given 5% weight.
• Note that the weights are all positive,
and sum to unity


IIT Bombay Slide 21


Discrete Convolution
• In general, all the filter coefficients need
not be equal or symmetric
• In that case, the weighted averaging
operation has to be performed using a
discrete convolution procedure
• This is a general operation, assuming that
the process is space invariant.


IIT Bombay Slide 22


Discrete Convolution
• g_{i,j} = Σ_{k=-w..w} Σ_{l=-w..w} h_{k,l} f_{i-k,j-l}

• The filter coefficients are mirror-reflected around


the central element, and then the filter is slid on
the input image
• The filter moves from top left to bottom right,
moving one position at a time
• For each position of the filter, an output value is
computed

IIT Bombay Slide 23

Figure: filter mask sliding over the image

IIT Bombay Slide 24


Border Effect
• The computation of the filtering operation
is applicable at those positions of the
image where the filter completely fits
inside.
• At the boundary positions, only part of the
filter fits inside the image. At such
positions, the computation is arbitrarily
defined

IIT Bombay Slide 25


Smoothing
• In image processing literature, the weighted averaging
operation is referred to as image smoothing
• By smoothing, it is implied that local differences between
pixels are reduced
• For simplicity, images are often filtered using the same
operator throughout, implying shift-invariance
• Most image display adaptors have hardware convolvers
built in to perform 3x3 convolutions in real time
• Shift-variant filtering is chosen when local information is
to be preserved


IIT Bombay Slide 26

Original
Image


IIT Bombay Slide 27


3x3 averaging


IIT Bombay Slide 28


Gaussian smoothing
• Gaussian filter: linear smoothing
• Weight matrix:

  w(r,c) = k e^(-(r² + c²)/(2s²))  for all (r,c) ∈ W,

  where k = 1 / Σ_{(r,c)∈W} e^(-(r² + c²)/(2s²))

• W: one or two s from the center

IIT Bombay Slide 28a


Gaussian smoothing
• Use of Gaussian filter: specify the size of the neighborhood
and, given a value of σ, determine the filter coefficients by
varying r, c in the range [-W/2, +W/2]
• Alternatively, given a value of σ, find the size of the
neighbourhood from the 3σ limits
• About 99% of the Gaussian distribution is covered within
the range mean ± 3σ
• r, c vary in the range [-3σ, +3σ]
• For example, if σ = 1, then the range is [-3, 3], i.e., the
size of the neighbourhood is 7x7
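A sketch of the weight-matrix computation under the ±3σ sizing rule described above (`gaussian_kernel` is an illustrative name):

```python
import numpy as np

def gaussian_kernel(sigma):
    """Gaussian smoothing weights sampled on a grid spanning +/- 3 sigma
    (covering ~99% of the distribution) and normalized so the
    coefficients sum to one."""
    w = int(np.ceil(3 * sigma))
    r, c = np.mgrid[-w:w + 1, -w:w + 1]
    k = np.exp(-(r**2 + c**2) / (2 * sigma**2))
    return k / k.sum()

k = gaussian_kernel(1.0)
print(k.shape)          # (7, 7) for sigma = 1
```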


IIT Bombay Slide 28b


Figure: Gaussian curve p(x), with markers at m-2s, m-s, m, m+s, m+2s

IIT Bombay Slide 29


Shift-Variant Filtering
• When the filtering operation is required to
adapt to the local intensity variations then
the filter coefficients should vary according
to the position in the image.
• Shift-variant filters can preserve the
object boundaries better, while smoothing
the image
• One example is the sigma filter

IIT Bombay Slide 30


Sigma filter
• The underlying principle here is to take the subset of pixels
in the neighborhood whose gray levels lie within c·s of the
central pixel's gray level

  g_{i,j} = Σ_{k=-w..w} Σ_{l=-w..w} h_{i,j,k,l} f_{i-k,j-l}

• h_{i,j,k,l} = 0 if |f_{i-k,j-l} - f_{i,j}| > c·s_{i,j}; otherwise
h_{i,j,k,l} = 1/n_{i,j}, where n_{i,j} counts the pixels that pass
the test (so the selected pixels are averaged)
• s_{i,j} is the local standard deviation of the gray levels within
the neighborhood centred at pixel (i,j)
• To save time, one can also use the global std. dev.
• c = 1 or 2 depending on the size of neighborhood


IIT Bombay Slide 31


Sigma Filter Algorithm
• Consider neighborhood size, and value of c
• Find the mean and standard deviation of the pixels within
the neighborhood
• Find the neighbors of the central pixel whose gray levels
are within c.s of the central pixel’s gray level
• Compute the average of the pixels meeting the above
criterion
• Replace the central pixel’s value by the average
• This cannot be replaced by a convolution since the
filter response varies for each position in the image
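One possible implementation of this algorithm (a sketch: borders are left unchanged, and the standard deviation is computed per window):

```python
import numpy as np

def sigma_filter(f, w=1, c=2.0):
    """Sigma filter: replace each pixel by the mean of those window
    pixels whose gray level lies within c * (local std. dev.) of it."""
    f = f.astype(float)
    g = f.copy()
    for i in range(w, f.shape[0] - w):
        for j in range(w, f.shape[1] - w):
            win = f[i - w:i + w + 1, j - w:j + w + 1]
            keep = np.abs(win - f[i, j]) <= c * win.std()
            g[i, j] = win[keep].mean()
    return g

f = np.array([[10, 10, 10, 50, 50],
              [10, 10, 10, 50, 50],
              [10, 10, 10, 50, 50]], dtype=float)
g = sigma_filter(f)
print(g[1, 2], g[1, 3])   # 10.0 50.0 : the step edge survives the smoothing
```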


IIT Bombay Slide 31a


Comments on Sigma Filter
• Degradation of a smoothed image is due to blurring of
object boundaries
• Here boundaries are better preserved by limiting the
smoothing only to a homogeneous subset of pixels
in the neighborhood
• The selected subset comprises those pixels that have
similar intensities
• Pixels with very different intensities are excluded by
making corresponding weights equal to 0


IIT Bombay Slide 32


Lee filter
Simple Lee filter
• gij = fmean + k·(fij - fmean)
• k varies between 0 and 2
  k = 0: gij = fmean → simple averaging
  k = 1: gij = fij → no smoothing at all
  k = 2: gij = fij + (fij - fmean)
• Interpretation of (fij - fmean): it is the local detail (edge)
component, so k = 2 adds it back and sharpens the image
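A minimal sketch of the simple Lee filter (borders left unchanged; `simple_lee` is an illustrative name):

```python
import numpy as np

def simple_lee(f, k=0.5, w=1):
    """Simple Lee filter: g = mean + k * (f - mean) over a local window.
    k = 0 gives plain averaging, k = 1 leaves the image unchanged,
    k = 2 amplifies the local detail component."""
    f = f.astype(float)
    g = f.copy()
    for i in range(w, f.shape[0] - w):
        for j in range(w, f.shape[1] - w):
            m = f[i - w:i + w + 1, j - w:j + w + 1].mean()
            g[i, j] = m + k * (f[i, j] - m)
    return g

f = np.array([[15, 17, 16],
              [18, 37, 15],
              [17, 14, 16]])
print(round(simple_lee(f, k=0.0)[1, 1]))   # k = 0: plain local average -> 18
```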


IIT Bombay Slide 33

Lee filter
a. Original
image
b. Wallis filter
c. K=2
d. K=3
e. K=0.5
f. K=0


IIT Bombay Slide 34


General form of Lee filter
• The general form of Lee filter is given by

  g_ij = f_mean + k_ij (f_ij - f_mean)

• k_ij is given by

  k_ij = s_ij² / (f_mean² s_v² + s_ij²)

• Greater noise, smaller k_ij, hence more smoothing

IIT Bombay Slide 34a

Comments on General form of


Lee filter
• The noise variance s_v² has to be estimated from
homogeneous areas
• Unless the noise variance is very low, this filter
smoothes the image like the average filter


IIT Bombay Slide 35


Gradient Inverse Filter
• The gradient inverse filter applies weights to the
neighbors in inverse proportion to their difference
from the central pixel (i,j)'s gray level
• Let

  u(i,j,k,l) = 1 / |f(i+k, j+l) - f(i,j)|   if f(i+k, j+l) ≠ f(i,j)

• Else, u(i,j,k,l) = 2.0

IIT Bombay Slide 36


Gradient Inverse Filter
• The gradient inverse filter is defined by

  g_{i,j} = Σ_{k=-w..w} Σ_{l=-w..w} h_{i,j,k,l} f_{i+k,j+l}

• h_{i,j,0,0} = 0.5 (weight for the centre pixel)
• h_{i,j,k,l} = 0.5 [ u_{i,j,k,l} / Σ_{k,l} u_{i,j,k,l} ] for the neighbors

IIT Bombay Slide 37


K-Nearest Neighbor algorithm
• K-nearest neighbor average: compute equally
weighted average of k-nearest neighbors – k
neighbors whose gray levels are closest to the
central pixel in the neighborhood
• Sort the neighbors on the basis of similarity of
gray level to the central pixel
• Compute the average of K neighbors whose
gray levels are closest


IIT Bombay Slide 38


Example
Consider the neighborhood
33 41 37
32 46 39
30 29 28
K=4
Closest 4 gray levels to 46 are 41, 39, 37, 33
Including the central pixel, the average is
(1/5)(46 + 41 + 39 + 37 + 33) = 39.20 ~ 39
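The computation above can be sketched for a single window (illustrative; `knn_average` is not a standard routine):

```python
import numpy as np

def knn_average(window, k=4):
    """K-nearest-neighbor smoothing for one window: average the centre
    with the k neighbors whose gray levels are closest to it."""
    centre = window[window.shape[0] // 2, window.shape[1] // 2]
    flat = np.delete(window.ravel(), window.size // 2)   # neighbors only
    nearest = flat[np.argsort(np.abs(flat - centre))][:k]
    return (centre + nearest.sum()) / (k + 1)

win = np.array([[33, 41, 37],
                [32, 46, 39],
                [30, 29, 28]])
print(knn_average(win))        # (46+41+39+37+33)/5 = 39.2
```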


IIT Bombay Slide 39


Non-linear filtering
• Nonlinear filters have certain advantages
over linear filters when dealing with noise
• Common examples are the rank order
filters
• A typical rank order filter is of the form
• gij = H[fi,j,k,l], where H represents a user-
specified rank criterion


IIT Bombay Slide 40


Rank filtering
• Modal filter
• Central pixel is assigned the gray level that
occurs most frequently in the neighborhood
• gij = mode {fi-k,j-l | k,l = -w, …, 0, …, w}
• e.g., fn = 11 12 14 15 12 16 11 15 15
• The central 12 is replaced by 15, which occurs most
frequently in the neighborhood


IIT Bombay Slide 41


Median Filter
• Median filter is the most commonly used
non-linear filter for image smoothing
• When the image is corrupted by random
salt-and-pepper noise, median operation is
very effective in removing the noise,
without degrading the input image
• gij = median {fi-k,j-l | k,l = -w, …, 0, …, w}


IIT Bombay Slide 42


Mean v/s Median filter
• Consider an example:
• 15 17 16 15 17 17
• 18 17 15 157 18 15
• 17 14 16 17 14 16
• Case 1 Case 2
• Mean=16 Mean=32
• Median=16 Median=17
• In arithmetic averaging, noise is distributed over the
neighbours
• In median filtering, the extreme values are pushed to one
end of the sequence after sorting, hence ignored when
filtered


IIT Bombay Slide 43


Algorithm
• Consider the size of the window around the pixel
• Collect all the pixels in the window and sort them
in ascending / descending order
• Select the gray level after sorting, according to
the rank criterion
• It can easily be verified that median and mode
filters are nonlinear, according to the definition of
linearity
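A generic rank-filter sketch following these steps, with the median as the default rank criterion; the test image reproduces the salt-noise example above:

```python
import numpy as np

def rank_filter(f, w=1, rank=np.median):
    """Rank filtering sketch: collect the window values and keep one
    according to the rank criterion (median by default).
    Border pixels are left unchanged."""
    g = f.astype(float).copy()
    for i in range(w, f.shape[0] - w):
        for j in range(w, f.shape[1] - w):
            g[i, j] = rank(f[i - w:i + w + 1, j - w:j + w + 1].ravel())
    return g

f = np.array([[15, 17, 17],
              [18, 157, 15],
              [17, 14, 16]])
print(rank_filter(f)[1, 1])    # impulse 157 -> median 17
```

Passing `np.min` or `np.max` as the rank criterion gives the other common rank order filters.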


IIT Bombay Slide 44

Median filtering example (over a 7x7 neighborhood)


IIT Bombay Slide 45


Trimmed Mean Filter
• Trimmed-Mean Operator:
• trimmed-mean: the first k and the last k sorted gray levels
are not used
• trimmed-mean: equal weighted average of the
central N-2k elements

  z_trimmed-mean = (1/(N-2k)) Σ_{n=k+1..N-k} x_(n)


IIT Bombay Slide 46


Some Comments
• Shift variant filters can adapt to the image
conditions better
• More computations are involved in shift
variant filtering
• Gaussian smoothing has some optimal
properties for which it is popular
• Degree of smoothing can be controlled by
varying the width s of the Gaussian filter


IIT Bombay Slide 47


Comments contd…
• Simple averaging type filters fare poorly in
case of signal dependent noise
• Particularly with SAR images noise
suppression is challenging
• Noise filtering is performed in case of SAR
prior to image formation or after image
formation
• Shift variant and nonlinear filters more
successful with SAR images

IIT Bombay Slide 48


Comments contd…
• An important requirement of image smoothing:
the sharpness in the image should be least
affected
• Many comparative studies to evaluate
methods
• Estimating noise statistics key to improving
quality of data like SAR images
• Additional techniques – based on
mathematical morphology

Edge Enhancement
Methods


IIT Bombay Slide 49


Edge
• Edge: boundary where brightness values differ
significantly among neighbors
• At an edge, the brightness value appears to jump
up (or down) abruptly


IIT Bombay Slide 50

Original image (left), Sharpened Image (right)


IIT Bombay Slide 51


Edge Detection
Essential to mark the boundaries of objects
Area, shape, size, perimeter, etc. can be computed from clearly
identified object boundaries
Intensity / color / texture / surface orientation gradient
employed to detect edges
Gradient magnitude denotes the strength of edge
Gradient direction relates to direction of change of intensity /
color


IIT Bombay Slide 52


How is an edge perceived?
• An edge is a set of connected pixels that lie
on the boundary between two regions
• The pixels on an edge are called edge
points
• Gray level / color / texture discontinuity
across an edge causes edge perception
• Position & orientation of edge are key
properties

IIT Bombay Slide 53


Different Edges
A

Different colors Different Intensities

Different brightness

IIT Bombay Slide 54


Different Edges

Different textures Different surfaces


IIT Bombay Slide 55


Types Of Edges
Gray level profiles and their 1st and 2nd derivatives (figure):
• Step edge
• Ramp edge
• Peak edge

IIT Bombay Slide 56


Locating an Edge
• Locating an edge is important, since the
shape of an object, its area, perimeter and
other such measurements are possible
only when the boundary is accurately
determined
• Edge is a local feature, marked by sharp
discontinuity in the image property on
either side of it

IIT Bombay Slide 57


Edge
• Edge: boundary where brightness
values significantly differ
among neighbors
edge: brightness value appears to abruptly
jump up (or down)


IIT Bombay Slide 58


Principle of Gradient Operator
The interpretation of this operator is that the
intensity gradient is computed in two
perpendicular directions, followed by the
resultant whose magnitude and orientation
are computed by treating the values from
the two masks as two projections of the
edge vector


IIT Bombay Slide 59


Gradient Edge Detection
• Given an image f(x,y), compute

  ∇f = [ ∂f/∂x , ∂f/∂y ]

• Squared gradient magnitude:

  |∇f|² = (∂f/∂x)² + (∂f/∂y)²

• Gradient direction = arctan( (∂f/∂y) / (∂f/∂x) )

IIT Bombay Slide 60


Gradient Directions

Vertical gradient Horizontal gradient

Diagonal gradient

IIT Bombay Slide 61


Gradient Edge Detectors
• As seen, two mutually perpendicular
gradient detectors are required to
detect edges in an image, since edges
may occur in any orientation.
• Using two mutually perpendicular
orientations, an edge in any direction
can be resolved in terms of these two
orthogonal components

IIT Bombay Slide 62


Roberts Operator
• Roberts operator: two 2x2 masks to calculate the
gradient; operates on a 2x2 neighborhood

   1  0      0  1      A B
   0 -1     -1  0      C D

• gradient magnitude: sqrt(r1² + r2²)
• r1 = f(A) - f(D); r2 = f(B) - f(C)
• r1, r2 are the gradient outputs from the masks;
direction = arctan(r2/r1)

IIT Bombay Slide 63


Gradient Edge Detectors
• Prewitt Operator

   1  1  1     -1  0  1
   0  0  0     -1  0  1
  -1 -1 -1     -1  0  1
  Prewitt 1    Prewitt 2

• gradient magnitude: g = sqrt(p1² + p2²)
• gradient direction: θ = arctan(p1/p2), clockwise w.r.t. the
column axis
• p1, p2 are the gradient outputs from the masks

IIT Bombay Slide 64


Gradient Edge Detectors
• Prewitt Edge Detector (one part of it)

  x-1  x  x+1
  Forward difference:  f'(x) ≈ f(x+1) - f(x)
  Backward difference: f'(x) ≈ f(x) - f(x-1)
  Central difference:  [-1 0 1] → f(x+1) - f(x-1)

• More stable than Roberts, robust to noise in the image, and
produces better edges, but more time consuming

IIT Bombay Slide 65

Input image

IIT Bombay Slide 66

Prewitt Operator Output



IIT Bombay Slide 67


Gradient Edge Detectors
Sobel edge detector

   1  2  1     -1  0  1
   0  0  0     -2  0  2
  -1 -2 -1     -1  0  1
  Sobel 1      Sobel 2

Compare with Prewitt!
gradient magnitude: g = sqrt(s1² + s2²)
gradient direction: θ = arctan(s1/s2)

IIT Bombay Slide 68


Gradient Edge Detectors
• Frei and Chen Operators

1 sqrt(2) 1 -1 0 1
0 0 0 - sqrt(2) 0 sqrt(2)
-1 - sqrt(2) -1 -1 0 1

The above are two of nine masks: four are formed by
rotation in steps of 90°, four are line detectors, and one
is a simple 3x3 smoothing operator


IIT Bombay Slide 69


Compass Gradient Operators
• Frei and Chen edge detector: nine orthogonal masks (3x3)


IIT Bombay Slide 70


Kirsch Compass Gradient Operator
-3 -3  5    -3 -3 -3    -3 -3 -3    -3 -3 -3
-3  0  5    -3  0  5    -3  0 -3     5  0 -3
-3 -3  5    -3  5  5     5  5  5     5  5 -3
   K1          K2          K3          K4

 5 -3 -3     5  5 -3     5  5  5    -3  5  5
 5  0 -3     5  0 -3    -3  0 -3    -3  0  5
 5 -3 -3    -3 -3 -3    -3 -3 -3    -3 -3 -3
   K5          K6          K7          K8

IIT Bombay Slide 71


Kirsch Gradient Edge Detectors
• Kirsch: set of eight compass template edge masks

  gradient magnitude: g = max_k e_k,  k = 1, 2, …, 8
  gradient direction: θ = 45° · arg max_k e_k

  where e_k is the response of mask K_k

IIT Bombay Slide 72


Robinson Gradient Detector
 Robinson: compass template mask set with only
±0, ±1, ±2 as weights

  -1  0  1    -2 -1  0    -1 -2 -1     0 -1 -2
  -2  0  2    -1  0  1     0  0  0     1  0 -1
  -1  0  1     0  1  2     1  2  1     2  1  0
     R1          R2          R3          R4

The other four masks are mirror reflections of the first
four
Gradient magnitude and direction are computed as for Kirsch

IIT Bombay Slide 73


Actual Edges
• The edge enhanced images are
thresholded in order to suppress the
interior portions of the image and retain
only the edges
• This helps in identifying the outlines of the
objects of interest


IIT Bombay Slide 74

Prewitt operator output thresholded at 40


IIT Bombay Slide 75


Laplacian Operator
• The Laplacian operator is based on the
Laplace equation given by

  ∂²f/∂x² + ∂²f/∂y² = 0
• Laplacian operator is discretized version of
the above equation and is based on
second derivatives along x and y
directions

IIT Bombay Slide 76


Laplacian Operator
• Filter coefficients
• The discrete version of the second derivative operator:
[1 -2 1] and [1 -2 1]^T in the horizontal and vertical
directions
• Superimposing the two (with the sign reversed), we get
the discrete Laplace operator:

   0 -1  0
  -1  4 -1
   0 -1  0

IIT Bombay Slide 77


Properties of Laplace Operator
• Isotropic operator – cannot give
orientation information
• Any noise in image gets amplified
• Faster since only one filter mask
involved
• Smoothing the image first prior to
Laplace operator is often needed for
reliable edges

IIT Bombay Slide 78


Zero-Crossing Edge Detectors
• First derivative maximum: exactly where
second derivative zero crossing
• In order to detect edges, we look at pixels
where the intensity gradient is high, or the
first derivative magnitude is maximum
• First derivative maximum implies a zero
when the second derivative is computed
• Edges are located at those positions where
there is a positive value on one side and a
negative value on the other side, in other
words a zero-crossing


IIT Bombay Slide 79

Figure: a step edge, whose first derivative is an impulse and
whose second derivative shows a transition from positive to
negative
The edge location corresponds to the point where the sign
change occurs from positive to negative (or vice versa)

IIT Bombay Slide 80


Zero-Crossing Edge Detectors
• Laplacian of a function I(r,c):

  ∇²I = (∂²/∂r² + ∂²/∂c²) I = ∂²I/∂r² + ∂²I/∂c²

Figure: two commonly used masks for the Laplacian operator


IIT Bombay Slide 81


Zero Crossing Edge Detector
• Direct operation on the image using the
Laplacian operator results in a very noisy
result
• Derivative operator amplifies the high
frequency noise
• Preprocess the input image by a
smoothing operator prior to application of
the Laplacian

IIT Bombay Slide 82


Zero Crossing Edge Detector
• The Gaussian shaped smoothing operator
is found to be ideal as a preprocessing
operator
• Therefore the Laplacian operator is
applied on Gaussian smoothed input
image
• ZC(image) = Laplacian [gaussian(image)]


IIT Bombay Slide 83


LOG operator
• Both Laplacian operator and Gaussian
operator are linear, and hence can be
combined into one Laplacian of Gaussian
(LoG) operator
• Laplacian[Gaussian(image)] =
[Laplacian(Gaussian)](image)


IIT Bombay Slide 84


LOG operator
• Laplacian[Gaussian(image)] =
[Laplacian(Gaussian)](image)
1 r 2  c2
1  ( ) r2
LOG (r , c)   e 2 s2
(1  )
2s 4 s2
1 r 2  c2
1  ( ) c2
Verify! [ e 2 s2
(1  )]
2s 4 s2
1 r 2  c2
1 r 2  c2  ( )
 (2  )e 2 s2

2s 4 s2

IIT Bombay Slide 85


LOG operator
• LoG operator is a sampled version of the
function
1 r 2  c2
1 r 2  c2  ( )
LOG (r , c)   (2  )e 2 s2
2s 4 s2

• For a given value of s, the size of the Gaussian
filter is −3s to +3s
• Computationally more expensive due to
convolution with large filter masks
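A minimal sketch of sampling this kernel over −3s…+3s (the function name and the final zero-sum normalisation are additions of this sketch, not from the slides):

```python
import numpy as np

def log_kernel(s, size=None):
    """Sampled LoG: -(1/(2*pi*s**4)) * (2 - (r^2+c^2)/s^2) * exp(-(r^2+c^2)/(2*s^2))."""
    if size is None:
        size = 2 * int(np.ceil(3 * s)) + 1   # cover -3s .. +3s
    half = size // 2
    r, c = np.mgrid[-half:half + 1, -half:half + 1]
    q = (r**2 + c**2) / (2.0 * s**2)
    k = -(1.0 / (2 * np.pi * s**4)) * (2 - (r**2 + c**2) / s**2) * np.exp(-q)
    return k - k.mean()   # force zero sum so flat regions give zero response
```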

IIT Bombay Slide 86

Zero-Crossing Edge Detectors


Properties
• Edges depend on the value of s
• For small value of s all edges are detected
• For large value of s only major edges are
detected
• Any minor difference in intensity between
neighbors can be captured using LoG filter
• Significant zero crossings can be identified using
suitable threshold


IIT Bombay Slide 87

Zero-Crossing Edge Detectors


• A pixel at (m,n) is declared to have a zero
crossing if
f’’(m,n) > T and f’’(m+dm, n+dn) < -T
OR
f’’(m,n) < -T and f’’(m+dm, n+dn) > T
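A sketch of this zero-crossing test over right and below neighbours (the helper name and the neighbour choice are assumptions; `lap` is the Laplacian/LoG output and T the threshold):

```python
import numpy as np

def zero_crossings(lap, T):
    """Mark pixels where 'lap' changes sign across a horizontal or
    vertical neighbour with magnitude above the threshold T."""
    zc = np.zeros(lap.shape, dtype=bool)
    for dm, dn in ((0, 1), (1, 0)):              # right and below neighbours
        a = lap[:lap.shape[0] - dm, :lap.shape[1] - dn]
        b = lap[dm:, dn:]
        hit = ((a > T) & (b < -T)) | ((a < -T) & (b > T))
        zc[:lap.shape[0] - dm, :lap.shape[1] - dn] |= hit
    return zc
```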


IIT Bombay Slide 88


Edge Detection in Multispectral
Images
• Simple approaches:
– Compute gradient by taking Euclidean distance
between multispectral vectors of data at adjacent
pixels instead of differences in gray levels
– Find independent gradients and edges for different
bands, and combine the edges
– Find independent gradients, combine gradients, and
find edge from multiband gradient


IIT Bombay Slide 89


Edge Detection in Multispectral
Images


IIT Bombay Slide 90


Edge Detection in Multispectral
Images


IIT Bombay Slide 92


Image Sharpening
For example,
• Sharpened image =
Original image + k. gradient magnitude
• Scale factor k can determine whether
gradient magnitude is added as it is or a
fraction of it. The sum may be rescaled to
0-255 to display like an image
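A sketch of this sharpening rule, using simple first differences as the gradient (the gradient operator choice is an assumption; the slides do not fix one):

```python
import numpy as np

def sharpen(img, k=1.0):
    """Sharpened = original + k * gradient magnitude, rescaled to 0-255."""
    f = img.astype(float)
    gr = np.zeros_like(f)
    gc = np.zeros_like(f)
    gr[:-1, :] = np.diff(f, axis=0)              # row differences
    gc[:, :-1] = np.diff(f, axis=1)              # column differences
    grad = np.hypot(gr, gc)                      # gradient magnitude
    out = f + k * grad
    out = (out - out.min()) / max(out.max() - out.min(), 1e-9) * 255.0
    return out.astype(np.uint8)
```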


IIT Bombay Slide 93

Original image (left), Sharpened Image (right)


IIT Bombay Slide 94

Unsharp Masking
• Sample convolution mask

0 0 0   0 0 0         1 1 1
0 1 0 + 0 1 0 − (1/9) 1 1 1
0 0 0   0 0 0         1 1 1

G = F + (F − Fmean)
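A sketch of the mask's effect, G = F + (F − Fmean) with a 3×3 box mean over interior pixels (the helper name is hypothetical):

```python
import numpy as np

def unsharp(f):
    """Unsharp masking: add back the difference from the 3x3 box average."""
    f = f.astype(float)
    fmean = f.copy()
    for r in range(1, f.shape[0] - 1):
        for c in range(1, f.shape[1] - 1):
            fmean[r, c] = f[r-1:r+2, c-1:c+2].mean()
    return f + (f - fmean)
```

Flat regions pass through unchanged; intensity steps are overshot on both sides, which is what sharpens the edge.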


IIT Bombay Slide 95


IIT Bombay Slide 96


Line Enhancement
Difference between a line and an edge

Line is a physical entity


Edge is a perceptual entity


IIT Bombay Slide 97

Lines


IIT Bombay Slide 98


Line Enhancement
Detection of a physical line involves

a high-to-low transition followed by a low-to-high transition

OR

a low-to-high transition followed by a high-to-low transition


IIT Bombay Slide 99


Line Enhancement Masks

• These masks look for positive to


negative and negative to positive
transitions in
vertical/horizontal/diagonal directions

IIT Bombay Slide 100


Summary of Gradient Operators
• Edges or boundaries convey very
important information for image
understanding
• Gradient operators emphasize the local
intensity or other property differences
thereby making visible object boundaries
• Gradient operations in normal course are
only the first step in reliable edge
extraction

IIT Bombay Slide 101


Summary of Neighborhood
Operators
• Image processing operations involving
neighborhoods of pixels are important in
many tasks
• Smoothing filters are composed of non-
negative coefficients which add up to 1
• Gradient filters are composed of both
positive and negative coefficients which
must add up to 0 so that in images where
there is no edge, the output is zero.

Shape Fitting by Hough Transform

IIT Bombay Slide 102

Fitting Lines/Circles to Edge Pixels


• The human visual system can interpolate and “see”
circles, lines, or any other known shapes when
the outline is available as a collection of pixels
• The challenge is how a computer can see a line
or a circle
• The Hough (pronounced “Huff”) transform is
a popular tool for this purpose


IIT Bombay Slide 103

Hough Transform
• A method for finding global relationships
between pixels.
Example: We want to find straight lines in an
image
• Apply Hough transform to the edge enhanced –
thresholded image
• Any curve that can be represented by a
parametric equation can be extracted by Hough
transform


IIT Bombay Slide 104

Line Fitting

[Figure: edge pixels (left), lines fit to the edges (right)]


IIT Bombay Slide 105

Hough transform Procedure


Consider a set of points (xi, yi) on a line y = a·x + b; a and
b are the parameters (slope and intercept)

All these points satisfy the equation yi = a·xi + b

Now fix (xi, yi) and treat a and b as the variables; then b = −xi·a + yi

Vary a and find the corresponding b: in the a-b space this
traces a line

For each point (xi, yi) there is a line in the a-b space

IIT Bombay Slide 106

Hough transform Procedure


Consider an array (called accumulator array) with ‘a’
varying along the columns and ‘b’ along the rows
Initialize the array with count 0.
Vary ‘a’ for a given point (xi,yi) and compute
corresponding ‘b’. Increment count in cell (a,b) by 1
When points (xi,yi), i=1,2,…,N lie on the same line, the
lines in the “a-b” space pass through a common cell
corresponding to the slope and intercept of the line
in the ‘x-y’ space
In other words, the count in the accumulator array will
be high for one cell corresponding to the line
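The steps above can be sketched directly in (a, b) space (array shapes and bin handling are assumptions of this sketch):

```python
import numpy as np

def hough_ab(points, a_vals, b_vals):
    """Accumulate votes in (a, b) space for lines y = a*x + b."""
    acc = np.zeros((len(b_vals), len(a_vals)), dtype=int)
    db = b_vals[1] - b_vals[0]                  # b-axis bin width
    for x, y in points:
        for j, a in enumerate(a_vals):
            b = y - a * x                       # b = -a*x + y
            i = int(round((b - b_vals[0]) / db))
            if 0 <= i < len(b_vals):
                acc[i, j] += 1                  # one vote per (a, b) cell
    return acc
```

Collinear points all vote for the one cell holding their common slope and intercept, so the accumulator peaks there.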


IIT Bombay Slide 107

[Figure: a line in the x-y space (left) and its representation in the a-b or parameter space (right)]



IIT Bombay Slide 108

Problem with the line model y=ax+b


In reality we have a problem with y = ax + b because
the slope ‘a’ reaches infinity for vertical lines
In 1972, American researchers and pattern
recognition experts Richard Duda & Peter Hart
proposed a Standard HT (SHT). They used the
polar coordinate equation of a straight line:
x·cosθ + y·sinθ = r
For vertical lines, θ → π/2; no problem projecting
any line into the (θ, r) space

IIT Bombay Slide 109

Standard Hough Transform

r  x cos( )  y sin( )
Y
y  ax  b


r X


IIT Bombay Slide 110

Accumulator Array Creation


Select the point (x,y) on a line
Create an array in which θ varies along
columns, from θ = 0 to θ = 2π in small
increments (e.g., 15°)
For each θ, find the value of r. Increment the
count in the accumulator array for cell (θ, r) by 1
For each point (x,y) there is a sinusoid in the
(θ, r) space.
All points (xi, yi) on a given line will have some
(θ, r) in common
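The same voting, in (θ, r) space with r = x·cosθ + y·sinθ (the bin counts and r range are arbitrary choices of this sketch):

```python
import numpy as np

def hough_rtheta(points, n_theta=180, r_max=2.0):
    """Vote in (theta, r) space; each point contributes one sinusoid."""
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    n_r = 2 * n_theta
    acc = np.zeros((n_r, n_theta), dtype=int)
    for x, y in points:
        r = x * np.cos(thetas) + y * np.sin(thetas)
        idx = np.round((r + r_max) / (2 * r_max) * (n_r - 1)).astype(int)
        ok = (idx >= 0) & (idx < n_r)
        acc[idx[ok], np.arange(n_theta)[ok]] += 1
    return acc, thetas
```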


IIT Bombay Slide 111


Original artwork from the
book Digital Image
Processing by R.C.
Gonzalez and R.E.
Woods © R.C. Gonzalez
and R.E. Woods,
reproduced with
permission granted to
instructors by authors on
the website
www.imageprocessingpl
ace.com

Hough Space


IIT Bombay Slide 112


Example of Line and Accumulator
θ = 45° = 0.785 rad
r = √2/2 = 0.707
θ: 0 to 3.14 (rad)
r: 0 to 1.55

Brightest point
gets 20 votes
Original artwork from the book Digital Image Processing by R.C. Gonzalez and
R.E. Woods © R.C. Gonzalez and R.E. Woods, reproduced with permission
granted to instructors by authors on the website www.imageprocessingplace.com


IIT Bombay Slide 113

Mechanics of the Hough transform

• Difficulties
– How big should the cells be? (Too big, and we
cannot distinguish between quite different lines;
too small, and noise causes lines to be missed)
• How many lines?
– Count the peaks in the Hough array
• Who belongs to which line?
– Tag the votes

Hardly ever satisfactory in practice, because
problems with noise and cell size defeat it

IIT Bombay Slide 114

Noisy Line

Brightest point = 6 votes


Original artwork from the book Digital Image Processing by R.C. Gonzalez and
R.E. Woods © R.C. Gonzalez and R.E. Woods, reproduced with permission
granted to instructors by authors on the website www.imageprocessingplace.com


IIT Bombay Slide 115

Totally Chaotic!

Original artwork from the book Digital Image Processing by R.C. Gonzalez and
R.E. Woods © R.C. Gonzalez and R.E. Woods, reproduced with permission
granted to instructors by authors on the website www.imageprocessingplace.com


IIT Bombay Slide 116


Improvements to Simple Hough
Transform
Noise tolerance: Most edge detectors give edge direction.
Consider only those directions in accumulator array
corresponding to edge direction at pixels

Speed up: A two-stage process can be considered. First,
generate a coarse (r, θ) array. Find approximate lines.
Next, find precise values of (r, θ) by searching around
the coarse values


IIT Bombay Slide 117

High Order Parametric Curves


Circle: (x-a)2 + (y-b)2 = r2
Parameter space is 3-dimensional
Highly computation intensive
Searching for maxima in 3-D arrays is
computationally expensive
Efficient data structures are important
Ref: A.M. Cross, Detection of circular
geological features using the Hough
transform, International Journal of Remote
Sensing, vol. 9, no. 9, pp. 1519-1528, 1988


IIT Bombay Slide 118


Circle Detection

If the radius of the circle is known in advance,
the problem simplifies to a 2-parameter problem
(the centre coordinates a and b)
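A sketch of the 2-parameter case: each edge pixel votes for candidate centres at the known radius around itself (grid size and angle sampling are assumptions of this sketch):

```python
import numpy as np

def hough_circle_fixed_r(edge_pts, radius, shape):
    """Known radius: each edge pixel votes for centres (a, b) lying on a
    circle of that radius around itself."""
    acc = np.zeros(shape, dtype=int)
    angles = np.linspace(0, 2 * np.pi, 90, endpoint=False)
    for x, y in edge_pts:
        a = np.round(x - radius * np.cos(angles)).astype(int)
        b = np.round(y - radius * np.sin(angles)).astype(int)
        ok = (a >= 0) & (a < shape[0]) & (b >= 0) & (b < shape[1])
        np.add.at(acc, (a[ok], b[ok]), 1)      # counts duplicate votes
    return acc
```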


IIT Bombay Slide 119

High Order Parametric Curves


Ellipse: (x−r)²/a² + (y−c)²/b² = 1
Parameter space is 4-dimensional
Even more computation intensive than circle
detection

If ellipse parameters are known a priori, the search for
the ellipses matching these parameters can be a simpler
problem


GNR607
Principles of Satellite Image
Processing
Instructor: Prof. B. Krishna Mohan
CSRE, IIT Bombay
bkmohan@csre.iitb.ac.in
Slot 2
Lecture 16-17 Image Corrections
Sept. 06, 09 2019

IIT Bombay Slide 1


Sept. 06,09 2019 Lecture 16-17 Image Corrections

Contents of the Lecture


• Distortions in Satellite Image
– Radiometric Distortions
– Geometric Distortions
• Radiometric Corrections
• Geometric Distortions
– Sources of Radiometric Distortions
– Correction of Distortions
• Image Registration and Mosaicing
GNR607 Lect16-17 B. Krishna Mohan


IIT Bombay Slide 2


Distortions in Satellite Images
• Nature of Distortion
– Systematic (predictable)
– Random
• Types of Distortions
– Geometric Distortions
• Position of pixel in the image in error
• shape of pixel in the image in error
– Radiometric Distortions
• Recorded value in error

IIT Bombay Slide 3

Background
• The signal received at the satellite
depends on several factors
– Performance of the onboard electronics
– Atmospheric conditions
– Terrain elevation
– Terrain slope and
– Reflectance characteristics of objects
• The first four factors can result in distortions in
the signal received

IIT Bombay Slide 4


Atmospheric Scattering
Three types of scattering are considered
– Rayleigh scattering, where the particle size is small
compared to the wavelength of radiation. Only short
wavelengths are significantly affected
– Mie scattering, where the particle size is comparable
to the wavelength of radiation. Smoke and dust
are the influencing factors
– Non-selective scattering, where the particle size (e.g.,
water droplets) is much larger than the wavelength of
radiation; all wavelengths are scattered roughly equally.


IIT Bombay Slide 5


Scattering Phenomena

Reproduced
with
permission
from the
lecture notes
of Prof. John
Jensen,
University of
South
Carolina


IIT Bombay Slide 6


Absorption Windows
Reproduced with permission from the lecture notes of
Prof. John Jensen, University of South Carolina


IIT Bombay Slide 7


Detector Errors
• Shot noise (random bad pixels)
• Detector malfunction resulting in row or
column drop-outs
• Detector malfunction resulting in delayed
row or column start
• Detector malfunction resulting in a striping
effect (sensor not adapting to changes in
terrain conditions)


IIT Bombay Slide 8

Striping Errors
IRS-1C Panchromatic Sensor


IIT Bombay Slide 9


Shot Noise
• Shot noise pixels can be eliminated by
comparing them with their neighboring
pixels
• If the gray levels at the neighboring pixels
are very different from that of the pixel
under observation, then the pixel is a
noise pixel, whose gray level is replaced
by the average of the neighboring
pixels.
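A sketch of this rule (the threshold T and the 8-neighbour mean are assumptions; the slides do not fix the exact test):

```python
import numpy as np

def remove_shot_noise(img, T=50):
    """Replace a pixel by the average of its 8 neighbours when it differs
    from that average by more than T grey levels."""
    f = img.astype(float)
    out = f.copy()
    for r in range(1, f.shape[0] - 1):
        for c in range(1, f.shape[1] - 1):
            mean_nb = (f[r-1:r+2, c-1:c+2].sum() - f[r, c]) / 8.0
            if abs(f[r, c] - mean_nb) > T:
                out[r, c] = mean_nb            # flagged as shot noise
    return out
```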

IIT Bombay Slide 10

Removal of Shot Noise


Reproduced
with
permission
from the
lecture notes
of Prof. John
Jensen,
University of
South
Carolina


IIT Bombay Slide 11


Line or Column Drop-outs
• In case of Landsat satellite with an electro-
mechanical scanner, malfunction of sensor
results in line (row) drop outs, i.e., during
the scan from left to right the detector
does not function
• In case of pushbroom sensors like SPOT,
IRS, Ikonos etc., due to malfunctioning of
some of the elements entire columns may
be blank

IIT Bombay Slide 12

Scanning Mechanism
Reproduced
with
permission
from the
lecture notes
of Prof. John
Jensen,
University of
South
Carolina


IIT Bombay Slide 13


Correction of Line or Column Drop-outs
• By comparing the histograms of pixels in
different rows (or columns), the defective
rows (or columns) can easily be
highlighted. How?
• (Assuming that successive rows or
columns are not defective), the defective
row (or column) is replaced by the average
of the rows above and below (or columns
to the left and right)
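A sketch of the row-replacement step (assumes, as the slide does, that consecutive rows are not both defective and the bad rows are interior):

```python
import numpy as np

def fix_dropout_rows(img, bad_rows):
    """Replace each defective row by the average of the rows above and below."""
    out = img.astype(float).copy()
    for r in bad_rows:
        out[r, :] = 0.5 * (out[r - 1, :] + out[r + 1, :])
    return out
```

The same operation transposed handles column drop-outs in pushbroom imagery.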

IIT Bombay Slide 14

Atmospheric Corrections
• Absolute correction
• Relative correction

Algorithms are available for both


techniques


IIT Bombay Slide 15

Histogram Adjustment
• From the histogram, find the MIN level
• If MIN ≠ 0, all pixels in the image may
have received a certain constant intensity
contribution from the atmosphere due to
scattering
• Subtract the MIN value from the intensity of
each pixel
• Repeat for all bands
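A sketch of this per-band minimum subtraction (stacking the bands as a 3-D array is an assumption of this sketch):

```python
import numpy as np

def haze_correct(bands):
    """Dark-object subtraction: shift each band so its minimum becomes 0.
    'bands' is a (n_bands, rows, cols) array."""
    bands = np.asarray(bands, dtype=float)
    return bands - bands.min(axis=(1, 2), keepdims=True)
```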


IIT Bombay Slide 16


Reproduced with permission from the lecture notes
of Prof. John Jensen, University of South Carolina

Histogram Adjustment

IIT Bombay Slide 17


Absolute Radiometric Correction
• Absolute radiometric correction nullifies
the effects of the atmosphere
• Digital numbers in data correlate with
spectral reflectance
• This enables comparison of reflectance
from an object in one part of the globe
with reflectance from that object in
another part of the globe

IIT Bombay Slide 18

Contribution of Atmosphere
Reproduced
with
permission
from the
lecture notes
of Prof. John
Jensen,
University of
South
Carolina


IIT Bombay Slide 19


Correction Algorithms
• Information required
• Latitude and longitude of the scene
• Date and exact time of satellite pass
• Image acquisition altitude
• Mean elevation of the scene
• Atmospheric model (e.g., summer, winter, tropical)
• Temperature, pressure, humidity
• Radiometrically calibrated image radiance data
• Data about each specific band (i.e., its mean and full-width
at half-maximum (FWHM))
• Local atmospheric visibility at the time of satellite
pass


IIT Bombay Slide 21


Line or Column Drop-outs
Landsat MSS
Image with line
dropout problem

From “Remote Sensing


Tutorial” developed by
Dr. Nick Short; used with
permission


IIT Bombay Slide 22


Multiple-date image
normalization using regression
• Selecting a base image and then transforming the
spectral characteristics of all other images obtained on
different dates to have approximately the same
radiometric scale as the base image.
• Selecting pseudo-invariant features (PIFs) or region
(points) of interest is important:
– Spectral characteristic of PIFs change very little through time,
(deep water body, bare soil, rooftop)
– PIFs should be in the same elevation as others
– No or rare vegetation,
– The PIF must be relatively flat
• Then PIFs will be used to normalize the multiple-date
imagery

IIT Bombay Slide 23


Example
SPOT Band 1, 8/10/91

Reproduced
with
permission
from the
lecture
notes of
SPOT Band 3, 8/10/91

Prof. John
Jensen,
University
of South
Carolina


IIT Bombay Slide 24

Regression Procedure
• SPOT image of 8/10/1991 is selected as the base image
• PIFs (wet and dry) were selected for generating the
relationship between the base image and the others
• The resulting regression equation will be used to
normalize the entire image of 4/4/87 to 8/10/91 for
change detection.
• The additive component corrects the path radiance
among dates, and the multiplicative term corrects the
detector calibration, sun angle, earth-sun distance,
atmospheric attenuation, and phase angle between dates.


IIT Bombay Slide 25

• Regression
equations for all
images, all
based on the
SPOT image of
8/10/91


IIT Bombay Slide 26

Popular Correction Models


• ACORN – Atmospheric Correction
• ATCOR – Atmospheric Correction
• ATREM – Atmosphere Removal
• FLAASH – Fast Line of Sight
Atmospheric Analysis of Spectral
Hypercubes (specially developed for
hyperspectral imagery)


IIT Bombay Slide 27

Example of ATCOR
Reproduced with permission from the
lecture notes of Prof. John Jensen,
University of South Carolina


IIT Bombay Slide 28

Test Site Preparation for Calibration


Large objects with known
reflectances
Prepare calibration tables
for all the bands.

Reproduced with permission


from the lecture notes of Prof.
John Jensen, University of
South Carolina


Geometric Distortions and Corrections


IIT Bombay Slide 29

Geometric Distortions
• Nature of geometric distortion
– Positional errors
– Shape of pixel
• Sources of distortion
– Earth curvature (Internal or Systematic error)
– Relative motion between satellite and earth (Internal
or Systematic error)
– Satellite attitude (Random or external error)
– Satellite altitude variations (Random or external error)
– Errors in case of electromechanical scanners
(Random or external error)

IIT Bombay Slide 30

Example: Earth Rotation


Image shifts to the west due to earth rotation
[Figure: successive scan lines of pixels offset westward against the satellite motion; earth rotates from west to east]

IIT Bombay Slide 31

Example: Panoramic Distortion


Resolution at nadir is higher than at off-nadir locations:
for altitude h and scan angle θ, the slant range grows as
h/cosθ, so the pixel width on the ground increases away
from nadir.

Read section 2.3 in Richards and Jia’s book

IIT Bombay Slide 32

External Geometric Errors


• External geometric errors are induced by
satellite attitude (roll, pitch and yaw) and
variations in altitude
• Both result in geometric distortions, that
can be corrected by modeling the imaging
process


IIT Bombay Slide 33

Altitude Errors
• Remote sensing systems flown at a constant altitude
above ground level result in imagery with a uniform
scale all along the flightline.
• Increasing the altitude will result in smaller-scale
imagery. That is the size of pixel on the ground
increases, lowering the sensor resolution. Decreasing
the altitude of the sensor system will result in larger-
scale imagery, due to reduction in size of pixel on the
ground, increasing the spatial resolution above the
specification value.


IIT Bombay Slide 34

Reproduced
with
permission
from the
lecture
notes of
Prof. John
Jensen,
University
of South
Carolina


IIT Bombay Slide 35

Attitude Errors
• Prominent Errors
– Roll: Spacecraft vibrates about the direction of
motion
– Pitch: Spacecraft vibrates in a vertical plane
perpendicular to the direction of motion
– Yaw: Spacecraft moves along an angle to the
direction of motion
• Both altitude and attitude errors cause
geometric distortions


IIT Bombay Slide 36

Geometric Corrections
• Nature of geometric corrections
– From modeling ALL errors
– Mapping image pixels to a reference coordinate
system with desired pixel size and shape
• Modeling Errors
– Systematic errors can be estimated in advance
– Other errors can be estimated based on telemetry
data
• A combined approach is commonly followed

IIT Bombay Slide 37

Mathematical Transformation
• Pixel mapping using mathematical
transformations
– A reference coordinate system is established, with
desired pixel size and shape
– Correspondence between pixel in the reference frame
and the image is established
– Pixel value in the reference frame is computed from
the known values in the image
• Result is a corrected image generated in the
desired frame of reference.


IIT Bombay Slide 38


Polynomial Image Correction
• Let the uncorrected image be v(x’,y’)
• Let the image in the corrected reference frame be
u(x,y)
• The task is to find the mapping connecting the two
frames of reference. For an affine transformation,
• Let x’ = a1x + a2y + a3
• y’ = b1x + b2y + b3
• We need minimum six equations to solve for six
coefficients but in practice more equations (min. 8)
are ALWAYS used to detect errors in any of the
equations while solving for the above six unknowns
• Using an affine transformation, we can handle
translation, scaling, rotation and shearing
distortions

IIT Bombay Slide 39


Polynomial Image Correction
• In case a sensor’s IFOV on the ground is, say, 5.8m x
5.8m, we can convert it to 6m x 6m or 5.5m x 5.5m
during image corrections

• For higher order transformations,

• x’ = a1x + a2y + a3xy + a4x2 + a5y2 + a6


• y’ = b1x + b2y + b3xy + b4x2 + b5y2 + b6

• There are 12 variables to be found for which we need


minimum 12 equations. Again, more equations are
ALWAYS used while solving for the 12 unknowns


IIT Bombay Slide 40


Polynomial Coefficients
• In order to find the coefficients, we need to precisely
identify points in the reference frame as well as in
the uncorrected image

• For six variables in the affine transform, three pairs


of points (x,y) and (x’,y’) are the minimum required
• More points are ALWAYS USED to detect any errors
in the selection of the points

• A minimum of six pairs of points is required for a
second-order polynomial, but more are always used
for error-checking purposes.


IIT Bombay Slide 41


Control Points
How to select the corresponding points?


IIT Bombay Slide 42


Ground Control Point
• A ground control point is a location on
the surface of Earth that can be
accurately located in the image as well
as on a reference frame such as a map
• The mathematical transformation that
maps the pixels in the (distorted) image
onto the reference map is known as the
geometrical or spatial transformation

IIT Bombay Slide 43

Intensity Corrections after


Geometric Correction
• The computation of the pixel values (gray
levels) after the geometric transformation is
often referred to as resampling that is
essentially a spatial interpolation
• The geometric correction is influenced by the
choice of spatial transformation and the
resampling procedure


IIT Bombay Slide 44


Solving for Coefficients
• Selection of GCPs allow us to compute the
transformation coefficients.
x1’ = a1 .x1 + b1 .y1 + c1
y1 ’ = a2 .x1 + b2 .y1 + c2
x2’ = a1 .x2 + b1 .y2 + c1
y2’ = a2 .x2 + b2 .y2 + c2
x3’ = a1 .x3 + b1 .y3 + c1
y3’ = a2 .x3 + b2 .y3 + c2
x4’ = a1 .x4 + b1 .y4 + c1
y4’ = a2 .x4 + b2 .y4 + c2
• More points are needed to check the accuracy of the
control points selected
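A sketch of solving these equations for the six coefficients by least squares when more than three GCP pairs are supplied (the function name is hypothetical):

```python
import numpy as np

def affine_coeffs(ref_pts, img_pts):
    """Least-squares fit of x' = a1*x + b1*y + c1, y' = a2*x + b2*y + c2
    from >= 3 control-point pairs (reference (x, y) -> image (x', y'))."""
    ref = np.asarray(ref_pts, dtype=float)
    img = np.asarray(img_pts, dtype=float)
    A = np.column_stack([ref[:, 0], ref[:, 1], np.ones(len(ref))])
    cx, *_ = np.linalg.lstsq(A, img[:, 0], rcond=None)   # (a1, b1, c1)
    cy, *_ = np.linalg.lstsq(A, img[:, 1], rcond=None)   # (a2, b2, c2)
    return cx, cy
```

With exactly three points the system is solved exactly; with more, the residuals expose badly chosen control points.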

IIT Bombay Slide 45


Source of Ground Control Points
• GCPs are obtained from:
– Survey of India topographic maps (digital
or paper) at 1:25,000 or 1:50,000 scale
– Other maps with ground reference
– Global Positioning Systems (GPS)
• It is important to choose GCPs that are
invariant with time since the map and
image are often years apart in time

IIT Bombay Slide 46

Control Point Selection


Reproduced with permission from the
lecture notes of Prof. John Jensen,
University of South Carolina


IIT Bombay Slide 47


Computation of Spatial
Transformation
• The first order affine transformation is adequate
to account for a several forms of distortions:
– Skew
– Rotation
– Scale changes in x and y directions
– Translation in x and y directions


IIT Bombay Slide 48


Computation of Spatial
Transformation
• Given a map reference, we define the pixel size
such that after geometric correction, the image
aligns with the map reference, with a pixel size
chosen by the user.
• It may be noted that the size of pixel as
acquired by the satellite can be selected
different from the pixel size after geometric
correction


IIT Bombay Slide 49

Spatial Transformation
Reproduced
with
permission
from the
lecture
notes of
Prof. John
Jensen,
University of
South
Carolina


IIT Bombay Slide 50


Errors in Transformation
• If the GCPs selected are in error, the
transformation maps the points in the image
inaccurately onto the reference. The error can
be measured in terms of the Root Mean
Squared (RMS) Error
• RMSerror = √[ (1/N) Σᵢ₌₁ᴺ ( (x′orig − x′comp)² + (y′orig − y′comp)² ) ]
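A sketch of this RMS computation over the N control points:

```python
import numpy as np

def rms_error(orig_pts, comp_pts):
    """Root-mean-square of residual distances between the original and
    the computed (transformed) control-point positions."""
    d2 = np.sum((np.asarray(orig_pts, float) -
                 np.asarray(comp_pts, float))**2, axis=1)
    return float(np.sqrt(d2.mean()))
```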


IIT Bombay Slide 51


Effect of Errors in Transformation
Coefficients
• Error for each point is given by
√[ (x′orig − x′comp)² + (y′orig − y′comp)² ]

• It is common to select initially more GCPs and


choose those that result in the smallest RMS
error


IIT Bombay Slide 52


Higher Order Transformations
• Sometimes the 1st order affine transformation may not
accurately transform the image onto the map in which
case one can choose a higher order polynomial
transformation such as
x '  a1 x 2  b1 xy  c1 y 2  d1 x  e1 y  f1
y '  a2 x 2  b2 xy  c2 y 2  d 2 x  e2 y  f 2
Based on the order of transformation, the number of
coefficients vary. Accordingly the number of minimum GCPs
also vary. Commercial products support 1st – 5th order
transformations.

IIT Bombay Slide 53


Resampling or Intensity
Interpolation
• The transformation is of two types:
– Forward mapping or input to output mapping, i.e., for
every pixel in the input image find the corresponding
location in the reference map according to the
determined transformation
– Reverse mapping or output to input mapping, i.e., for
every pixel in the output frame find the corresponding
location in the input image according to the
determined transformation


IIT Bombay Slide 54


Intensity Transformation

Reproduced with
permission from
the lecture notes
of Prof. John
Jensen, University
of South Carolina


IIT Bombay Slide 55


Intensity Interpolation
• In this phase, gray level values are
computed for the transformed pixels since
they are now at different locations from
where they collected the reflected energy
• This step involves intensity interpolation
since the computed values are weighted
averages of existing measured values


IIT Bombay Slide 56


Interpolation Strategy
• It is more convenient to use reverse
mapping or output to input mapping when
geometrically correcting multispectral
images
• The reference frame can be assigned a
given pixel size, and each pixel can then
be located in the input image through the
spatial transformation

IIT Bombay Slide 57


Intensity Interpolation


[Figure: pixel centres (dots) of the reference frame overlaid on the input image grid]

IIT Bombay Slide 58


Nearest Neighbor Interpolation
[Figure: transformed point P falls among four measured pixels A, B, C, D]

Standard Interpolation Methods:
• Nearest Neighbor
• Bilinear Interpolation
• Higher order interpolation (bicubic)


IIT Bombay Slide 59


Nearest Neighbor Interpolation
• P is the location to which a point from the
reference frame gets transformed
• Measured values exist at A, B, C and D
• Let DAP be the distance of P from A, likewise
DBP, DCP, and DDP
• P is assigned the value of
element K ∈{A,B,C,D} in case of Nearest
Neighbor Interpolation where DKP =
Min{DAP, DBP, DCP, DDP}


IIT Bombay Slide 60


Issues in NN Interpolation
• Fastest to compute
• No new values introduced – only the same
values recorded by the sensors retained
• Renders the image blocky if large pixel
size to small pixel size resampling is
performed
• e.g., resampling an IRS-1D LISS-III image
to 1 metre pixel size

IIT Bombay Slide 61


Bilinear Interpolation
• As opposed to nearest neighbor
interpolation, all the four known points are
employed in estimating the value at the
unknown point
• The weightages assigned to the four
points are dependent on the proximity of
the unknown point to these known points.


IIT Bombay Slide 62


Bilinear Interpolation Principle

[Figure: point P lies among the four known pixels A, B, C, D; d(C,P) denotes the distance from C to P]


IIT Bombay Slide 63


Bilinear Interpolation
• Denoting the estimated gray level at point
P by f(P), and the known values by f(A),
f(B), f(C) and f(D),

f(P) = [wA·f(A) + wB·f(B) + wC·f(C) + wD·f(D)] / (wA + wB + wC + wD)
• The weight wA = 1/d(A,P), where d(A,P) is
the distance between point A and point P.
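A sketch of this weighting scheme as written on the slide, i.e. inverse-distance weighting over the four known pixels (the helper name is hypothetical):

```python
import numpy as np

def interp_idw(pt, known):
    """f(P) = sum(w_K * f_K) / sum(w_K) with w_K = 1/d(K, P).
    'known' is a list of ((x, y), value) pairs for pixels A, B, C, D."""
    num = den = 0.0
    for (kx, ky), fk in known:
        d = np.hypot(pt[0] - kx, pt[1] - ky)
        if d == 0:
            return fk                  # P coincides with a known pixel
        w = 1.0 / d
        num += w * fk
        den += w
    return num / den
```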


Cubic Convolution
• Use of a bigger neighborhood to estimate
the pixel gray level allows a smooth image
since local differences are averaged out.
• For the location (XR, YR) marked by the
colored circle, the neighboring 16
elements around pixel (i,j) are employed.

[Figure: 4×4 grid of pixels centered on pixel (i,j), with the resampled location (XR, YR) marked by a colored circle]


Cubic Convolution Technique


• The estimated value at location (XR, YR) is given by

VR = Σn=1..4 [ V(i−1, j+n−2) × f(d(i−1, j+n−2) + 1) +
V(i, j+n−2) × f(d(i, j+n−2)) +
V(i+1, j+n−2) × f(d(i+1, j+n−2) − 1) +
V(i+2, j+n−2) × f(d(i+2, j+n−2) − 2) ]

V(m,n) is the value of the pixel at location (m,n)
f(x) is the weight function
d(x,y) is the (Euclidean) distance between pixels x and y.


Cubic Convolution Kernel
• The weighting function f(x) is defined as
(Ref: ERDAS Field Guide)
f(x) = (a+2)|x|³ − (a+3)|x|² + 1      if |x| < 1
       a|x|³ − 5a|x|² + 8a|x| − 4a    if 1 ≤ |x| < 2
       0                              otherwise

a = −1 (constant)
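The kernel and the 4×4 neighborhood sum can be sketched in Python; the separable form below (one kernel weight per axis) is one common realization of the 16-element sum given above, and the 6×6 test image is illustrative:

```python
import math

def cubic_kernel(x, a=-1.0):
    """Cubic convolution weight f(x) (ERDAS Field Guide form, a = -1)."""
    x = abs(x)
    if x < 1:
        return (a + 2) * x**3 - (a + 3) * x**2 + 1
    if x < 2:
        return a * x**3 - 5 * a * x**2 + 8 * a * x - 4 * a
    return 0.0

def cubic_interpolate(img, xr, yr):
    """Estimate the value at fractional location (xr, yr) from the
    surrounding 4x4 neighborhood, applying the kernel separably in x and y."""
    i, j = math.floor(xr), math.floor(yr)
    value = 0.0
    for m in range(i - 1, i + 3):          # 4 rows
        for n in range(j - 1, j + 3):      # 4 columns
            value += img[m][n] * cubic_kernel(xr - m) * cubic_kernel(yr - n)
    return value

# Since f(0) = 1 and f(1) = f(2) = 0, integer locations reproduce the pixel
img = [[float(6 * r + c) for c in range(6)] for r in range(6)]
print(cubic_interpolate(img, 2.0, 2.0))  # 14.0 == img[2][2]
```

At exact pixel centers the kernel passes the original value through unchanged; only fractional locations produce new (averaged) values.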



Comments on Cubic Convolution
Strengths:
• Due to larger neighborhoods, the mean
and variance of the input image and the
output image match closely
• Useful to filter out noise and improve the
image
• Well suited when the image is resampled
from a large pixel size to a small pixel size

Weaknesses:
• Does produce new values due to averaging
of the original recorded values
• Extremely slow compared to other methods


Comments on Intensity Interpolation
• The choice of order of transformation, and the
type of resampling method used will affect the
image quality
• The distribution of control points is very
important to ensure that the image is properly
registered to the map frame on all sides
• Nearest neighbor method is adequate if the
resolution of the input image and the corrected
image are the same


Comments on Interpolation
• Bilinear interpolation is adequate if the
resampled image and the input image have only
a small difference in the pixel size
• Cubic convolution is best to produce a smooth
resampled image, and is ideal when the pixel
size of the resampled image is very different
from that of the input image
• Nearest neighbor is fastest, and cubic
convolution is slowest.


Image to Image Registration
• When a reference image is available to be used
instead of the map, we register the input image
to the reference image.
• Registration is the process of making an
image conform to another image. If image A
is not geo-referenced and it is being used
with image B, then image B must be
registered to image A so that they conform to
each other.
• In this example, image A is not rectified to a
particular map projection, so there is no need to
rectify image B to a map projection.


Image Registration
• Much of the procedure remains the same,
except that if the pixel sizes of the input
and reference images are different, one
should first be zoomed in / zoomed out to
bring it to the size of the other.
• This step is vital when images from
different sensors are to be fused into one
data set.


Image Mosaicing
• If the study area is large, it may be
covered by two adjoining scenes.
• Remote sensing data providers always
keep a small overlap between adjacent
scenes.
• Mosaicing is the procedure of joining
overlapping images into a single large
image


Image Mosaicing
• It is possible that the two adjoining images
are acquired on two different dates due to
which the atmospheric conditions may
vary
• The brightness levels of the images may
be different, and the place where the two
images are joined, called the seam, will be
quite visible
• Example: Google Earth images
Seam of Mosaic

[Figure: mosaic of two images with a visible seam along the join]



Mosaicing Process
• Geo-referencing both images
• Identification of the overlap area
• Adjustment of the brightness levels of the
two images
• Adjustment of brightness across the
overlap area (called feathering)
• Filling out the blank areas with black/white
values
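The brightness adjustment across the overlap (feathering) can be illustrated on a single scanline; the linear weight ramp and the pixel values below are a hypothetical sketch, not the exact scheme of any particular software:

```python
def feather_blend(left, right, overlap):
    """Join two overlapping 1-D scanlines into one mosaic line.

    The last `overlap` pixels of `left` cover the same ground as the
    first `overlap` pixels of `right`.  Inside the overlap the weight
    ramps linearly from the left image to the right image, so the
    brightness changes gradually instead of jumping at the seam.
    """
    mosaic = left[:-overlap]                        # left-only region
    for k in range(overlap):                        # blended overlap region
        w = (k + 1) / (overlap + 1)                 # ramps from ~0 to ~1
        mosaic.append((1 - w) * left[len(left) - overlap + k] + w * right[k])
    mosaic += right[overlap:]                       # right-only region
    return mosaic

# Bright left image (100) meets darker right image (60): the seam is smoothed
print(feather_blend([100.0] * 4, [60.0] * 4, overlap=2))
```

With no feathering the mosaic would step from 100 to 60 at one pixel; the ramp spreads that change across the whole overlap area.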


Mosaicing Process
[Figure: two overlapping input images sharing an Overlap Area; the mosaic is the union image that contains both the input images]


Example
Reproduced with
permission from
the lecture notes
of Prof. John
Jensen, University
of South Carolina


Contd…
