Cp467 12 Lecture17 Segmentation


Lecture 16

Image Segmentation
1. The basic concepts of segmentation
2. Point, line, edge detection
3. Thresholding
4. Region-based segmentation
5. Segmentation with Matlab
What is segmentation
• What is segmentation
  – Segmentation subdivides an image into its constituent regions or objects, until the objects of interest in an application have been isolated.
• Segmentation conditions
  Partition an image R into n regions R_1, ..., R_n such that:
  (a) R_1 ∪ R_2 ∪ ... ∪ R_n = R
  (b) R_i is a connected set, i = 1, 2, ..., n
  (c) R_i ∩ R_j = ∅ for all i and j, i ≠ j
  (d) Q(R_i) = TRUE for i = 1, 2, ..., n
  (e) Q(R_i ∪ R_j) = FALSE for any adjacent regions R_i and R_j
  where Q(R_i) is a logical predicate defined over the points in set R_i.

• Segmentation problem: to partition the image into regions satisfying the above conditions.
Two principal approaches
• Edge-based segmentation
  – partition an image based on abrupt changes in intensity (edges)
• Region-based segmentation
  – partition an image into regions that are similar according to a set of predefined criteria.
Detection of Discontinuities
• Detect the three basic types of gray-level discontinuities
  – points, lines, edges
• Use the image sharpening techniques
  – The first-order derivatives produce thicker edges
  – The second-order derivatives have a strong response to fine detail, such as thin lines and isolated points, and to noise
  – Laplacian operation
• Can be done by running a mask through the image; the response at each point is
  R = w_1 z_1 + w_2 z_2 + ... + w_9 z_9 = ∑_{k=1}^{9} w_k z_k
Point Detection
Steps for point detection (see the sketch below):
1. Apply a Laplacian filter to the image to obtain R(x, y)
2. Create a binary image by thresholding:
   g(x, y) = 1 if |R(x, y)| ≥ T, and 0 otherwise,
   where T is a nonnegative threshold
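A minimal NumPy/SciPy sketch of the two steps above; the 8-neighbour Laplacian mask and the choice of T as a fraction of the maximum response are illustrative assumptions, not values fixed by the slides.

import numpy as np
from scipy.ndimage import convolve

def detect_points(image, rel_threshold=0.9):
    """Isolated-point detection: Laplacian response followed by thresholding."""
    img = image.astype(float)
    # 3x3 Laplacian mask (8-neighbour variant); other Laplacian masks work too.
    laplacian = np.array([[1,  1, 1],
                          [1, -8, 1],
                          [1,  1, 1]], dtype=float)
    R = convolve(img, laplacian, mode='reflect')
    # T chosen here as a fraction of the maximum |R|; any nonnegative T is valid.
    T = rel_threshold * np.abs(R).max()
    g = (np.abs(R) >= T).astype(np.uint8)
    return g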
Example
Line Detection
• A special mask is needed to detect a special type of line
• Examples:
  – Horizontal mask has high response when a line passes through the middle row of the mask.
Multiple Lines Detection
• Apply every mask to the image and find the maximum response (see the sketch below).

Example:

Let R1, R2, R3, R4 denote the responses of the horizontal, +45 degree, vertical and -45 degree masks, respectively. If, at a certain point in the image, |Ri| > |Rj| for all j ≠ i, that point is said to be more likely associated with a line in the direction of mask i.
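A sketch of the maximum-response rule with the four standard 3x3 line masks; the +45/-45 labelling depends on the row/column convention, so treat the exact mask values as assumptions.

import numpy as np
from scipy.ndimage import convolve

# Horizontal, +45 degree, vertical, -45 degree line masks (textbook-style values).
LINE_MASKS = {
    'horizontal': np.array([[-1, -1, -1], [ 2,  2,  2], [-1, -1, -1]], dtype=float),
    '+45':        np.array([[ 2, -1, -1], [-1,  2, -1], [-1, -1,  2]], dtype=float),
    'vertical':   np.array([[-1,  2, -1], [-1,  2, -1], [-1,  2, -1]], dtype=float),
    '-45':        np.array([[-1, -1,  2], [-1,  2, -1], [ 2, -1, -1]], dtype=float),
}

def dominant_line_direction(image):
    """Return, per pixel, the index of the mask with the largest |response|."""
    img = image.astype(float)
    responses = np.stack([np.abs(convolve(img, m, mode='reflect'))
                          for m in LINE_MASKS.values()])
    # Index 0..3 follows the dictionary order: horizontal, +45, vertical, -45.
    return responses.argmax(axis=0), responses.max(axis=0)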
Edge Detection
• Edge detection is the approach for segmenting images based on abrupt changes in intensity.
• What is an edge
  – an edge is a set of connected pixels that lie on the boundary between two regions.
  – an edge is a "local" concept whereas a region boundary, owing to the way it is defined, is a more global idea.
• Edge models
  – Step edge (ideal edge), ramp edge (thick edge), and roof edge
Example of edges in an image
• An image may contain all three types of edges
First and Second derivatives at the edge
1. The magnitude of the first derivative can be used to detect the presence of an edge at a point.
2. The second derivative produces two values for every edge in an image. An imaginary straight line joining the extreme positive and negative values of the second derivative would cross zero near the midpoint of the edge. The zero-crossing point marks the center of thick edges.
Noisy Images
• First column: images and gray-level profiles of a ramp edge corrupted by random Gaussian noise of mean 0 and σ = 0.0, 0.1, 1.0 and 10.0, respectively.
• Second column: first-derivative images and gray-level profiles.
• Third column: second-derivative images and gray-level profiles.
Steps in edge detection
1. Image smoothing for noise reduction
2. Detection of edge points: find candidate points that lie on an edge
3. Edge localization: select from the candidates the true edge points
Image gradient
• The gradient is a vector
  ∇f = [g_x, g_y]^T = [∂f/∂x, ∂f/∂y]^T
• The magnitude of the gradient
  M(x, y) = mag(∇f) = √(g_x² + g_y²) = √((∂f/∂x)² + (∂f/∂y)²)
• The direction of the gradient vector
  α(x, y) = tan⁻¹(g_y / g_x)
• The direction of an edge at (x, y) is perpendicular to the direction of the gradient vector at that point
Example

∇f = [∂f/∂x, ∂f/∂y]^T = [2, 2]^T,   M(x, y) = 2√2,   α(x, y) = tan⁻¹(2/2) = 45°
Gradient Operators and Masks

g_x = ∂f/∂x = f(x+1, y) - f(x, y)
g_y = ∂f/∂y = f(x, y+1) - f(x, y)

Roberts cross-gradient operators:
g_x = ∂f/∂x = z_9 - z_5
g_y = ∂f/∂y = z_8 - z_6

Prewitt operators:
g_x = ∂f/∂x = (z_7 + z_8 + z_9) - (z_1 + z_2 + z_3)
g_y = ∂f/∂y = (z_3 + z_6 + z_9) - (z_1 + z_4 + z_7)

Sobel operators:
g_x = ∂f/∂x = (z_7 + 2z_8 + z_9) - (z_1 + 2z_2 + z_3)
g_y = ∂f/∂y = (z_3 + 2z_6 + z_9) - (z_1 + 2z_4 + z_7)

(A code sketch of the Sobel gradient follows below.)
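A sketch of computing M(x, y) and α(x, y) with the Sobel masks above, using SciPy's convolve; this is an illustrative NumPy implementation, not the slides' Matlab code.

import numpy as np
from scipy.ndimage import convolve

SOBEL_X = np.array([[-1, -2, -1],
                    [ 0,  0,  0],
                    [ 1,  2,  1]], dtype=float)   # approximates df/dx (x along rows)
SOBEL_Y = SOBEL_X.T                               # approximates df/dy (y along columns)

def sobel_gradient(image):
    """Gradient magnitude M(x, y) and direction alpha(x, y) with Sobel masks."""
    img = image.astype(float)
    gx = convolve(img, SOBEL_X, mode='reflect')
    gy = convolve(img, SOBEL_Y, mode='reflect')
    magnitude = np.hypot(gx, gy)          # sqrt(gx**2 + gy**2)
    direction = np.arctan2(gy, gx)        # radians; the edge is perpendicular to this
    return magnitude, direction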
Prewitt and Sobel masks for diagonal edges

• Sobel masks have slightly superior noise-suppression characteristics, which is an important issue when dealing with derivatives.
Example

Example

Example

Example
Laplacian

Laplacian operator:
∇²f = ∂²f/∂x² + ∂²f/∂y²

∇²f = f(x+1, y) + f(x-1, y) + f(x, y+1) + f(x, y-1) - 4f(x, y)
Laplacian of Gaussian
• Laplacian combined with smoothing as a precursor to finding edges via zero-crossings.

G(x, y) = e^{-(x² + y²)/(2σ²)}

∇²G(x, y) = ∂²G/∂x² + ∂²G/∂y² = [(x² + y² - 2σ²)/σ⁴] e^{-(x² + y²)/(2σ²)}

g(x, y) = [∇²G(x, y)] ★ f(x, y)
Marr-Hildreth edge detection algorithm (a code sketch follows below)
1. Filter the input image with an n×n Gaussian lowpass filter
2. Compute the Laplacian of the image from step 1
3. Find the zero crossings of the image from step 2
To approximate the zero crossings from the LoG image, threshold the LoG image by setting all its positive values to white and all negative values to black; the zero crossings occur between positive and negative values of the thresholded LoG.
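A compact sketch of the three steps, using SciPy's gaussian_laplace for steps 1-2 and a sign-change test for step 3; the sigma value and the contrast threshold are illustrative assumptions.

import numpy as np
from scipy.ndimage import gaussian_laplace

def marr_hildreth(image, sigma=2.0, threshold=0.0):
    """LoG filtering followed by a simple zero-crossing test."""
    log = gaussian_laplace(image.astype(float), sigma=sigma)
    # Zero crossing: sign change between a pixel and its right/lower neighbour,
    # optionally requiring the local contrast to exceed a threshold.
    edges = np.zeros_like(log, dtype=bool)
    sign = log > 0
    dx = sign[:, :-1] != sign[:, 1:]
    dy = sign[:-1, :] != sign[1:, :]
    strength_x = np.abs(log[:, :-1] - log[:, 1:])
    strength_y = np.abs(log[:-1, :] - log[1:, :])
    edges[:, :-1] |= dx & (strength_x > threshold)
    edges[:-1, :] |= dy & (strength_y > threshold)
    return edges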
Zero crossing vs. gradient
• Attractive
  – Zero crossing produces thinner edges
  – Noise reduction
• Drawbacks
  – Zero crossing creates closed loops ("spaghetti effect")
  – More sophisticated computation
• The gradient is more frequently used.
Example
a) Original image
b) Sobel gradient
c) Spatial Gaussian smoothing function
d) Laplacian mask
e) LoG
f) Thresholded LoG
g) Zero crossings
Edge Linking and Boundary Detection
• An edge detection algorithm is typically followed by linking procedures to assemble edge pixels into meaningful edges.
• Basic approaches
  – Local Processing
  – Global Processing via the Hough Transform
  – Global Processing via Graph-Theoretic Techniques
Local Processing
• Analyze the characteristics of pixels in a small neighborhood S_xy (say, 3x3 or 5x5) about every edge pixel (x, y) in an image.
• All points that are similar according to a set of predefined criteria are linked, forming an edge of pixels that share those criteria.
• Criteria
  |M(s, t) - M(x, y)| ≤ E, where E is a positive magnitude threshold
  |α(s, t) - α(x, y)| ≤ A, where A is a positive angle threshold
• Algorithm steps
  1. Compute ∇f, M(x, y) and α(x, y)
  2. Form a binary image: g(x, y) = 1 if M(x, y) > T_M and α(x, y) = A ± T_A, and 0 otherwise
  3. Scan the rows of g and fill all gaps in each row that do not exceed a specified length K
  4. To detect gaps in any other direction θ, rotate g by this angle, apply the horizontal scanning procedure in step 3, and rotate the result back by -θ
Example of local processing

T_M = 30%, A = 90°, T_A = 45°, gap length = 25
Hough Transformation (Line)

y_i = a·x_i + b   ⇔   b = -a·x_i + y_i

All points (x_i, y_i) contained on the same line must have lines in parameter space that intersect at (a', b').
-
plane

• problem of using equation y = ax + b is that value of a is


infinite for a vertical line.
• To avoid the problem, use equation x cos + y sin  =  to
represent a line instead.
• vertical line has  = 90 with  equals to the positive y-
intercept or  =
-90 with  equals to the negative y-intercept 31
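A sketch of the ρθ accumulator for a binary edge image; the bin counts n_theta and n_rho are arbitrary assumptions.

import numpy as np

def hough_lines(binary_edges, n_theta=180, n_rho=400):
    """Accumulate votes in the rho-theta plane for a binary edge image."""
    h, w = binary_edges.shape
    thetas = np.deg2rad(np.linspace(-90.0, 89.0, n_theta))
    rho_max = np.hypot(h, w)
    rhos = np.linspace(-rho_max, rho_max, n_rho)
    accumulator = np.zeros((n_rho, n_theta), dtype=np.int64)
    ys, xs = np.nonzero(binary_edges)
    for x, y in zip(xs, ys):
        # Each edge point votes along its sinusoid rho = x cos(theta) + y sin(theta).
        rho_vals = x * np.cos(thetas) + y * np.sin(thetas)
        rho_idx = np.digitize(rho_vals, rhos) - 1
        accumulator[rho_idx, np.arange(n_theta)] += 1
    return accumulator, thetas, rhos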
Generalized Hough Transformation
The method can be used for any function of the form g(v, c) = 0, where v is a vector of coordinates and c is a vector of coefficients.

Example: circles, (x - c1)² + (y - c2)² = c3²

Use cube-like cells in (c1, c2, c3) space and accumulators of the form A(i, j, k): increment c1 and c2, solve for the c3 that satisfies the equation, and update the accumulator corresponding to the cell associated with the triplet (c1, c2, c3).
Edge-linking based on the Hough Transformation
1. Compute the gradient of an image and threshold it to obtain a binary image.
2. Specify subdivisions in the ρθ-plane.
3. Examine the counts of the accumulator cells for high pixel concentrations.
4. Examine the relationship (principally for continuity) between pixels in a chosen cell.
   Continuity:
   – based on computing the distance between disconnected pixels identified during traversal of the set of pixels corresponding to a given accumulator cell.
   – a gap at any point is significant if the distance between that point and its closest neighbor exceeds a certain threshold.
Example
Link criteria:
1) the pixels belong to one of the set of pixels linked according to the highest count
2) no gaps were longer than 5 pixels
Image Segmentation II
1. Thresholding
2. Region-based segmentation
3. Segmentation using watersheds
4. Segmentation with Matlab
Thresholding
• Problem: how to determine the threshold value for segmentation?
• Histogram

Single threshold T:
g(x, y) = 1 if f(x, y) > T, and 0 if f(x, y) ≤ T

Dual thresholds T1 < T2:
g(x, y) = a if f(x, y) > T2;  b if T1 < f(x, y) ≤ T2;  c if f(x, y) ≤ T1
The role of noise in image thresholding
• When the noise is small the method works; otherwise it may not. Noise reduction has to be done before choosing the threshold value.
The Role of Illumination
Illumination plays an important role:
f(x, y) = i(x, y) r(x, y)
Approach:
1. g(x, y) = k·i(x, y)
2. h(x, y) = f(x, y)/g(x, y) = r(x, y)/k
Basic Global Thresholding
Based on visual inspection of the histogram:
1. Select an initial estimate for T.
2. Segment the image using T. This will produce two groups of pixels: G1, consisting of all pixels with gray-level values > T, and G2, consisting of pixels with gray-level values ≤ T.
3. Compute the average gray-level values m1 and m2 for the pixels in regions G1 and G2.
4. Compute a new threshold value T = 0.5 (m1 + m2).
5. Repeat steps 2 through 4 until the difference in T in successive iterations is smaller than a predefined parameter T_0. (A code sketch follows below.)
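A sketch of the iterative procedure above; the initial estimate (the image mean) and the stopping parameter delta_t, standing in for T_0, are assumptions.

import numpy as np

def basic_global_threshold(image, delta_t=0.5):
    """Iterative global threshold estimate (steps 1-5 above)."""
    f = image.astype(float)
    T = f.mean()                      # step 1: initial estimate
    while True:
        g1 = f[f > T]                 # G1: pixels with values > T
        g2 = f[f <= T]                # G2: pixels with values <= T
        if g1.size == 0 or g2.size == 0:
            return T                  # degenerate histogram, nothing to split
        T_new = 0.5 * (g1.mean() + g2.mean())   # steps 3-4
        if abs(T_new - T) < delta_t:  # step 5: stop when the change is below T_0
            return T_new
        T = T_new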
Basic Global Thresholding
Use T midway between the max and min gray levels to generate the binary image.

T_0 = 1, 3 iterations, result T = 125
Optimum global thresholding
Choose threshold values that maximize the between-class variance (Otsu's method):
1. Compute the normalized histogram of the image: p_i, i = 0, 1, 2, ..., L-1
2. Compute the cumulative sums: P_1(k) = ∑_{i=0}^{k} p_i, k = 0, 1, 2, ..., L-1
3. Compute the cumulative means: m(k) = ∑_{i=0}^{k} i·p_i, k = 0, 1, 2, ..., L-1
4. Compute the global intensity mean: m_G = ∑_{i=0}^{L-1} i·p_i
5. Compute the between-class variance:
   σ_B²(k) = [m_G·P_1(k) - m(k)]² / { P_1(k)·[1 - P_1(k)] }, k = 0, 1, 2, ..., L-1
6. Obtain the Otsu threshold k* as the value of k that maximizes σ_B²(k); if the maximum is not unique, take k* as the average of all such k
7. Compute the separability measure: η* = σ_B²(k*) / σ_G²
(A code sketch follows below.)
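A NumPy sketch of steps 1-7, assuming an 8-bit image (L = 256).

import numpy as np

def otsu_threshold(image, L=256):
    """Otsu's method: maximize the between-class variance sigma_B^2(k)."""
    hist = np.bincount(image.ravel(), minlength=L).astype(float)  # assumes uint8 input
    p = hist / hist.sum()                     # step 1: normalized histogram
    P1 = np.cumsum(p)                         # step 2: cumulative sums P1(k)
    m = np.cumsum(np.arange(L) * p)           # step 3: cumulative means m(k)
    mG = m[-1]                                # step 4: global intensity mean
    # step 5: between-class variance (ignore divide-by-zero where P1 is 0 or 1)
    with np.errstate(divide='ignore', invalid='ignore'):
        sigma_b2 = (mG * P1 - m) ** 2 / (P1 * (1.0 - P1))
    sigma_b2 = np.nan_to_num(sigma_b2)
    k_star = int(np.mean(np.flatnonzero(sigma_b2 == sigma_b2.max())))  # step 6
    separability = sigma_b2[k_star] / ((np.arange(L) - mG) ** 2 @ p)   # step 7
    return k_star, separability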
Example

Example

Example
Using edges to improve global thresholding
1. Compute the edge image g(x, y) of f(x, y)
2. Specify a threshold value T for g(x, y)
3. Threshold the edge image g(x, y) using T to obtain g_T(x, y)
4. Compute a histogram of f(x, y) using only the pixels (x, y) with g_T(x, y) = 1
5. Use the above histogram to segment f(x, y) globally (see the sketch below).
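A sketch of the procedure, assuming an 8-bit image; it reuses the otsu_threshold function from the earlier sketch and approximates the edge image with np.gradient, both of which are assumptions rather than the slides' exact choices.

import numpy as np

def edge_guided_otsu(image, edge_percentile=99.0):
    """Histogram only the pixels on strong edges, then threshold globally with Otsu."""
    gy, gx = np.gradient(image.astype(float))
    mag = np.hypot(gx, gy)                         # step 1: edge (gradient) image
    T_edge = np.percentile(mag, edge_percentile)   # step 2: threshold for the edge image
    mask = mag >= T_edge                           # step 3: g_T(x, y) = 1 on strong edges
    k_star, _ = otsu_threshold(image[mask])        # step 4: Otsu on the masked histogram
    return image > k_star                          # step 5: segment f(x, y) globally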
Example
Multiple thresholding
Otsu's method can be extended to an arbitrary number of thresholds:
1. Compute the normalized histogram of the image: p_i, i = 0, 1, 2, ..., L-1
2. Compute the cumulative sums and means for the K classes C_1, ..., C_K:
   P_k = ∑_{i ∈ C_k} p_i,   m_k = (1/P_k) ∑_{i ∈ C_k} i·p_i
3. Compute the global mean m_G = ∑_{i=0}^{L-1} i·p_i and the between-class variance:
   σ_B²(k_1, ..., k_{K-1}) = ∑_{i=1}^{K} P_i (m_i - m_G)²
4. Obtain the thresholds k_1*, ..., k_{K-1}* such that
   σ_B²(k_1*, ..., k_{K-1}*) = max over k_1 < ... < k_{K-1} of σ_B²(k_1, ..., k_{K-1})
5. Threshold the image:
   g(x, y) = a_1 if f(x, y) ≤ k_1*
             a_2 if k_1* < f(x, y) ≤ k_2*
             ...
             a_K if f(x, y) > k_{K-1}*
Example
Basic Adaptive Thresholding
• Subdivide the original image into small areas.
• Utilize a different threshold to segment each subimage.
• The threshold used for each pixel depends on the location of the pixel in terms of the subimages. (A sketch follows below.)
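A sketch that thresholds fixed-size subimages independently; the tile size and the per-tile mean as the local threshold are illustrative assumptions, since the slides do not fix a particular rule.

import numpy as np

def adaptive_threshold(image, block=32):
    """Split the image into block x block subimages and threshold each one separately."""
    f = image.astype(float)
    g = np.zeros(f.shape, dtype=np.uint8)
    for r in range(0, f.shape[0], block):
        for c in range(0, f.shape[1], block):
            tile = f[r:r + block, c:c + block]
            T = tile.mean()                        # per-subimage threshold (mean used here)
            g[r:r + block, c:c + block] = (tile > T).astype(np.uint8)
    return g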
Example: Adaptive Thresholding
2. Region-Based Segmentation
• Basic formulation
  (a) R_1 ∪ R_2 ∪ ... ∪ R_n = R
  (b) R_i is a connected region, i = 1, 2, ..., n
  (c) R_i ∩ R_j = ∅ for all i and j, i ≠ j
  (d) P(R_i) = TRUE for i = 1, 2, ..., n
  (e) P(R_i ∪ R_j) = FALSE for i ≠ j
  where P(R_i) is a logical predicate defined over the points in set R_i,
  e.g. P(R_i) = TRUE if all pixels in R_i have the same gray level.
Two basic approaches
• Region Growing
  – start with a set of "seed" points
  – grow by appending to each seed those neighbors that have similar properties, such as specific ranges of gray level
    e.g. Q = TRUE if the absolute difference of the intensities between the seed and the pixel at (x, y) is ≤ T, and FALSE otherwise
• Region splitting and merging
  – Iteratively divide a region into smaller regions until all regions become TRUE
  – Merge adjacent regions as long as the resulting region is still TRUE
Region Growing
Select all seed points with gray level 255.
Criteria (a code sketch follows below):
1. the absolute gray-level difference between any pixel and the seed has to be less than 65
2. the pixel has to be 8-connected to at least one pixel in that region (if more, the regions are merged)
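A sketch of seed-based growing with the two criteria above (8-connectivity, absolute difference below 65); for simplicity it does not merge regions that touch, which criterion 2 of the slide would additionally require.

import numpy as np
from collections import deque

def region_grow(image, seeds, max_diff=65):
    """Grow regions from seed pixels; 8-connected neighbours join a region when
    their absolute gray-level difference from the seed is below max_diff."""
    h, w = image.shape
    labels = np.zeros((h, w), dtype=np.int32)
    for label, (sr, sc) in enumerate(seeds, start=1):
        if labels[sr, sc]:
            continue                      # already claimed by an earlier region
        seed_val = float(image[sr, sc])
        queue = deque([(sr, sc)])
        labels[sr, sc] = label
        while queue:
            r, c = queue.popleft()
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    rr, cc = r + dr, c + dc
                    if (0 <= rr < h and 0 <= cc < w and labels[rr, cc] == 0
                            and abs(float(image[rr, cc]) - seed_val) < max_diff):
                        labels[rr, cc] = label
                        queue.append((rr, cc))
    return labels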
Region splitting and merging
Quadtree:
1. Split into four disjoint quadrants any region R_i for which P(R_i) = FALSE
2. Merge any adjacent regions R_j and R_k for which P(R_j ∪ R_k) = TRUE
3. Stop when no further merging or splitting is possible.
Example

Q = TRUE if σ > a and 0 < m < b, and FALSE otherwise,
where m and σ are the mean and standard deviation of the pixels in a quadregion.
Example

P(R_i) = TRUE if at least 80% of the pixels in R_i have the property |z_j - m_i| ≤ 2σ_i,
where
  z_j is the gray level of the jth pixel in R_i,
  m_i is the mean gray level of that region, and
  σ_i is the standard deviation of the gray levels in R_i.
Segmentation using Watersheds
• The concept of watersheds
  – View an image as a 3-D surface
  – Three types of points
    • Local minimum
    • Points at which water would flow to a single local minimum; these form the catchment basin, or watershed, of that minimum
    • Points at which water could equally flow to more than one local minimum; these form the divide lines, or watershed lines
• Segmentation by watershed lines
Example
Watershed algorithm
Let M_1, M_2, ..., M_R be sets denoting the coordinates of the regional minima of image g(x, y), and let C(M_i) be the set of pixels of the catchment basin of M_i.

T[n] = {(s, t) | g(s, t) < n}, for n = min+1 to max+1
C_n(M_i) = C(M_i) ∩ T[n]
Q[n] denotes the set of connected components of T[n]

For n = min+1 to max+1 do
  For each connected component q of Q[n]:
    if q ∩ C[n-1] is empty, let C[n] = C[n-1] ∪ q (q is a new minimum)
    else if q ∩ C[n-1] contains one connected component of C[n-1], let C[n] = C[n-1] ∪ q
    else (q ∩ C[n-1] contains more than one connected component of C[n-1]), build a dam within q
(A library-based sketch follows below.)
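The flooding procedure above is what library routines implement; as an illustration (not the slides' pseudocode verbatim), here is a marker-based watershed sketch using SciPy's watershed_ift on the distance transform of a binary mask, the usual recipe for splitting touching objects. The marker-detection window size is an assumption.

import numpy as np
from scipy import ndimage as ndi

def split_touching_objects(binary_mask):
    """Marker-based watershed on the inverted distance transform of a binary mask."""
    binary_mask = binary_mask.astype(bool)
    distance = ndi.distance_transform_edt(binary_mask)
    # Local maxima of the distance map act as the regional minima of the inverted surface.
    peaks = (distance == ndi.maximum_filter(distance, size=7)) & binary_mask
    markers, _ = ndi.label(peaks)
    markers = markers.astype(np.int16)
    markers[~binary_mask] = -1                     # background marker
    # Flood the inverted distance map ("elevation" image) from the markers.
    elevation = distance.max() - distance
    elevation = np.round(255.0 * elevation / max(float(elevation.max()), 1e-9)).astype(np.uint8)
    return ndi.watershed_ift(elevation, markers)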
Dam construction

Example
