Image Processing
Husseina Ozigi Otaru


What is the function of Image Processing?
In high-resolution work, in addition to the usual preprocessing functions (offset, dark and flat corrections), the usefulness of image processing can be divided into two main functions: increasing the contrast of planetary details and reducing the noise.
Increasing the contrast of planetary detail
 Increasing the contrast of small details is the aim of many processing algorithms, which all act in the same way: they amplify the high frequencies in the image. This is why they are called high-pass filters, and probably the most famous of them is unsharp masking. This technique is well known but hard to use in astrophotography. In digital image processing, the general principle of unsharp masking is as follows (see: What is an MTF curve?):
 a fuzzy image (blue curve) is made from the initial image (red curve) by applying a low-pass (Gaussian) filter whose strength is adjustable; the high frequencies are suppressed,
 this fuzzy image is subtracted from the initial image; the result (green curve) contains only the small details (high frequencies), but its appearance is very strange and unaesthetic (unfortunately, this image also contains noise),
 this detail image is then amplified and added back to the initial image to give the final, sharpened result (a code sketch follows).
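Below is a minimal sketch of these steps in Python, assuming NumPy and SciPy are available; the blur strength sigma and the amplification factor amount are illustrative choices, not values from the text.

import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp_mask(image, sigma=2.0, amount=1.5):
    # Sharpen by amplifying the high-frequency content of the image.
    image = image.astype(np.float64)
    # Step 1: the "fuzzy image" -- an adjustable-strength Gaussian
    # low-pass filter suppresses the high frequencies.
    blurred = gaussian_filter(image, sigma=sigma)
    # Step 2: subtracting the fuzzy image leaves only the small details
    # (high frequencies), together with the noise.
    detail = image - blurred
    # Step 3: add the amplified detail back to obtain the sharpened image.
    return image + amount * detail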
[Figure: MTF curve]
What is Sampling?
 Sampling is choosing which points will represent a given image. Given an analog image, sampling is a mapping from a continuum of points in space (and possibly time, if it is a moving image) to a discrete set. Given a digital image, sampling is a mapping from one discrete set of points to another (smaller) set.
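As a minimal sketch of the digital case, the subsampling below maps one discrete set of pixels to a smaller one by keeping every factor-th pixel; the factor of 4 is only an example.

import numpy as np

def subsample(image, factor=4):
    # Keep every `factor`-th pixel in each direction, mapping the
    # image onto a smaller discrete set of points.
    return image[::factor, ::factor]

original = np.random.rand(512, 512)   # stand-in for a real image
sampled = subsample(original)
print(sampled.shape)                  # (128, 128)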
[Figures: original picture; "Manroc" sampled]
LINEAR FILTERING
Low pass filters
 Low pass filtering, otherwise known as "smoothing", is employed to remove high spatial frequency noise from a digital image. Noise is often introduced during the analog-to-digital conversion process as a side effect of the physical conversion of patterns of light energy into electrical patterns.
There are several common
approaches to removing this noise:
 If several copies of an image have been obtained from the source (some static image), it may be possible to sum the values for each pixel from each image and compute an average. This is not possible, however, if the image is from a moving source or there are other time or size restrictions.
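A minimal sketch of this averaging, assuming the copies are stacked in a NumPy array; the frame count and noise level are made up for the demonstration.

import numpy as np

def average_frames(frames):
    # frames: array of shape (n_frames, height, width) holding several
    # copies of the same static image. Averaging n frames reduces
    # uncorrelated noise by roughly sqrt(n).
    return np.mean(frames, axis=0)

clean = np.zeros((64, 64))
frames = clean + np.random.normal(0.0, 0.1, size=(16, 64, 64))
print(average_frames(frames).std())   # about 0.025, down from 0.1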
[Figure: intensity histogram adjustment of a bone marrow image]
 If such averaging is not possible, or if it is insufficient, some form of low pass spatial filtering may be required. There are two main types:
 reconstruction filtering, where an image is restored based on some knowledge of the type of degradation it has undergone. Filters that do this are often called "optimal filters".
 enhancement filtering, which attempts to improve the (subjectively measured) quality of an image for human or machine interpretability. Enhancement filters are generally heuristic and problem-oriented.
Moving window operations
 Low-pass filters usually take the form of some sort of moving window operator. The operator usually affects one pixel of the image at a time, changing its value by some function of a "local" region of pixels ("covered" by the window). The operator "moves" over the image to affect all the pixels. Some common types are:
 Neighborhood-averaging filters
 Median filters
 Mode filters
Neighborhood-averaging filters
 These replace the value of each pixel by a weighted average of the pixels in some neighborhood around it, i.e. a weighted sum in which the weights are non-negative; such filters are therefore linear. If all the weights are equal, this is a mean filter.
Median filters
 This replaces each pixel value by the median of its neighbors, i.e. the value such that 50% of the values in the neighborhood are above it and 50% are below. This can be difficult and costly to implement because the neighborhood values must be sorted. However, this method is generally very good at preserving edges.
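The explicit loop below is a sketch that makes the per-pixel sorting cost visible; in practice a library routine such as scipy.ndimage.median_filter does the same job far faster.

import numpy as np

def median_filter(image, size=3):
    # Replace each pixel by the median of its size x size neighborhood.
    pad = size // 2
    padded = np.pad(image, pad, mode='edge')
    out = np.empty(image.shape, dtype=np.float64)
    h, w = image.shape
    for y in range(h):
        for x in range(w):
            window = padded[y:y + size, x:x + size]
            out[y, x] = np.median(window)   # the costly sort happens here
    return out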
Mode filters
 Each pixel value is replaced by its most common neighbor. This is a particularly useful filter for classification procedures, where each pixel corresponds to an object which must be placed into a class; in remote sensing, for example, each class could be some type of terrain, crop type, water, etc.
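A minimal sketch for class-label images; the 3x3 window is again an illustrative choice.

import numpy as np

def mode_filter(labels, size=3):
    # Replace each pixel by the most common value in its size x size
    # neighborhood -- suited to images whose pixels are class labels.
    pad = size // 2
    padded = np.pad(labels, pad, mode='edge')
    out = np.empty_like(labels)
    h, w = labels.shape
    for y in range(h):
        for x in range(w):
            window = padded[y:y + size, x:x + size].ravel()
            values, counts = np.unique(window, return_counts=True)
            out[y, x] = values[np.argmax(counts)]
    return out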
These are all space invariant, in that the same operation is applied at each pixel location.
 A non-space-invariant filter can be obtained from the above filters by changing the type of filter, or the weightings used for the pixels, in different parts of the image.
 Non-linear filters also exist which are not space invariant; these attempt to locate edges in the noisy image before applying smoothing (a difficult task at best) in order to reduce the blurring of edges caused by smoothing.
High Pass Filter
 A high pass filter is used in digital image processing to remove or suppress the low frequency component, resulting in a sharpened image. High pass filters are often used in conjunction with low pass filters: for example, the image may be smoothed using a low pass filter, and then a high pass filter applied to sharpen it, thereby preserving boundary detail.
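A minimal sketch of this smooth-then-sharpen combination; the Gaussian sigma and the 3x3 sharpening kernel are common illustrative choices, not prescribed by the text.

import numpy as np
from scipy.ndimage import convolve, gaussian_filter

# A common 3x3 sharpening kernel: it suppresses the local average (low
# frequencies) and boosts the center pixel, and its weights sum to 1 so
# flat regions pass through unchanged.
SHARPEN = np.array([[ 0, -1,  0],
                    [-1,  5, -1],
                    [ 0, -1,  0]], dtype=np.float64)

def smooth_then_sharpen(image, sigma=1.0):
    # Low pass first to suppress noise, then high pass to restore
    # boundary detail.
    smoothed = gaussian_filter(image.astype(np.float64), sigma=sigma)
    return convolve(smoothed, SHARPEN, mode='nearest')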
What Is An Edge?
 An edge may be regarded as a
boundary between two dissimilar
regions in an image.
 These may be different surfaces of
the object, or perhaps a boundary
between light and shadow falling on
a single surface.
More about Edges
 Edges have been loosely defined as pixel intensity discontinuities within an image. Two experimenters processing the same image for the same purpose may not see the same edge pixels in the image, and two working on different applications may never agree.
 In short, edge detection is usually a subjective task.
 In principle an edge is easy to find, since differences in pixel values between regions are relatively easy to calculate by considering gradients. Many edge extraction techniques can be broken up into two distinct phases:
 Finding pixels in the image where edges are likely to occur, by looking for discontinuities in gradients (a sketch of this phase follows the list).
 Candidate points for edges in the image are usually referred to as edge points, edge pixels, or edgels.
 Linking these edge points in some way to produce descriptions of edges in terms of lines, curves, etc.
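A minimal sketch of the first phase, using Sobel operators from SciPy to estimate gradients; the threshold is an illustrative fraction of the maximum gradient magnitude.

import numpy as np
from scipy.ndimage import sobel

def edge_points(image, threshold=0.2):
    # Estimate the gradient with Sobel operators.
    img = image.astype(np.float64)
    gx = sobel(img, axis=1)            # horizontal gradient
    gy = sobel(img, axis=0)            # vertical gradient
    magnitude = np.hypot(gx, gy)
    direction = np.arctan2(gy, gx)     # gradient direction, kept for linking
    # Candidate edge points (edgels): pixels whose gradient magnitude
    # exceeds the threshold.
    mask = magnitude > threshold * magnitude.max()
    return mask, direction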
Gradient based methods
 An edge point can be regarded as a point in an image where a discontinuity (in gradient) occurs across some line. A discontinuity may be classified as one of three types:
Types of Edges
Gradient Discontinuity
 -- where the gradient of the pixel
values changes across a line. This
type of discontinuity can be classed
as
 roof edges
 ramp edges
 convex edges
 concave edges
-- by noting the sign of the component of the gradient perpendicular to the edge on either side of the edge.
 Ramp edges have the same signs in the gradient components on either side of the discontinuity, while roof edges have opposite signs in the gradient components.
A Jump or Step Discontinuity
 -- where pixel values themselves
change suddenly across some line.
A Bar Discontinuity
 -- where pixel values rapidly increase
then decrease again (or vice versa)
across some line.
For example, if the pixel values are depth values,
 jump discontinuities occur where one object occludes another (or another part of itself).
 Gradient discontinuities usually occur between adjacent faces of the same object.
If the pixel values are intensities,
 a bar discontinuity would represent cases like a thin black line on a white piece of paper.
 Step edges may separate different objects, or may occur where a shadow falls across an object.
Disadvantages of the use of second order derivatives
 Since first derivative operators exaggerate the effects of noise, second derivatives exaggerate noise twice as much.
 No directional information about the
edge is given.
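A small numerical check of this noise amplification, under the simple assumption of uncorrelated unit-variance noise: differencing roughly doubles the noise variance, and differencing twice roughly triples it again.

import numpy as np

noise = np.random.normal(0.0, 1.0, size=100_000)
d1 = np.diff(noise, n=1)    # first-derivative estimate
d2 = np.diff(noise, n=2)    # second-derivative estimate
print(noise.var(), d1.var(), d2.var())   # about 1.0, 2.0, 6.0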
Edge Linking
 Edge detectors yield the pixels in an image that lie on edges.
 The next step is to collect these pixels together into a set of edges.
 The aim is to replace the many individual points on edges with a few descriptions of the edges themselves.
Problems…
 Small pieces of edges may be
missing,
 Small edge segments may appear to
be present due to noise where there
is no real edge, etc.
Local Edge Linkers
 -- where edge points are grouped to
form edges by considering each
point's relationship to any
neighbouring edge points.
Global Edge Linkers
 -- where all edge points in the image
plane are considered at the same
time and sets of edge points are
sought according to some similarity
constraint, such as points which
share the same edge equation.
Local Edge Linking Methods
 Most edge detectors yield information about the magnitude of the gradient at an edge point and, more importantly, the direction of the edge in the locality of the point.
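A minimal local-linking sketch that uses exactly that information: it connects neighbouring edge points whose directions agree within a tolerance. The inputs match the hypothetical edge_points() sketch above, and the tolerance is an arbitrary choice.

import numpy as np

def link_local(edge_mask, direction, angle_tol=np.pi / 8):
    # Group edge points by pairing each one with any neighbouring edge
    # point whose direction is similar.
    links = []
    h, w = edge_mask.shape
    for y in range(h):
        for x in range(w):
            if not edge_mask[y, x]:
                continue
            for dy, dx in ((0, 1), (1, -1), (1, 0), (1, 1)):  # forward neighbours
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w and edge_mask[ny, nx]:
                    diff = abs(direction[y, x] - direction[ny, nx])
                    diff = min(diff, 2 * np.pi - diff)         # angular wrap-around
                    if diff < angle_tol:
                        links.append(((y, x), (ny, nx)))
    return links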
Texture Analysis
 In many machine vision and image processing algorithms, simplifying assumptions are made about the uniformity of intensities in local image regions. However, images of real objects often do not exhibit regions of uniform intensities.
 Image texture, defined as a function
of the spatial variation in pixel
intensities (gray values), is useful in
a variety of applications and has
been a subject of intense study by
many researchers. One immediate
application of image texture is the
recognition of image regions using
texture properties.
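As a minimal illustration of such a texture property, the sketch below computes the local gray-value variance in a window around each pixel; smooth regions score low and strongly textured regions score high. The window size is an arbitrary choice, and real systems use richer texture features.

import numpy as np
from scipy.ndimage import uniform_filter

def local_variance(image, size=9):
    # Var[X] = E[X^2] - (E[X])^2, computed over a size x size window.
    img = image.astype(np.float64)
    mean = uniform_filter(img, size=size)
    mean_sq = uniform_filter(img * img, size=size)
    return mean_sq - mean * mean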
Texture Segmentation
Texture boundaries can be found even
if the texture surfaces cannot be
classified. The goal of texture
segmentation is to obtain the
boundary map separating the
differently textured regions in an
image.
Texture Synthesis
 Texture synthesis is often used for
image compression applications. It is
also important in computer graphics
where the goal is to render object
surfaces which are as realistic
looking as possible.
Shape From Texture
 The shape from texture problem is one instance of a general class of vision problems known as "shape from X". The goal is to extract three-dimensional surface shape from variations in textural properties in the image. The distortions that the imaging process and perspective projection impose on the texture features provide information about surface orientation and shape.