Image Segmentation
Image segmentation is the process of partitioning an image into multiple segments; it is
typically used to locate objects and boundaries in images.
Segmentation partitions an image into distinct regions, each containing pixels with similar
attributes. To be meaningful and useful for image analysis and interpretation, the regions
should strongly relate to depicted objects or features of interest. Meaningful segmentation is
the first step from low-level image processing transforming a greyscale or colour image into
one or more other images to high-level image description in terms of features, objects, and
scenes. The success of image analysis depends on the reliability of segmentation, but an accurate
partitioning of an image is generally a very challenging problem.
Image segmentation is a vital part of the image analysis process. It differentiates between the
objects we want to inspect further and the other objects or their background.
Segmentation
Split-and-merge Segmentation
One of the simplest and most common algorithms for labelling connected regions after
greyscale or colour thresholding exploits the "grassfire" or "wave propagation" principle:
after a "fire" or "wave" starts at one pixel, it propagates to any of the pixel's 4- or 8-
neighbours detected by thresholding. Each already visited (i.e. "burnt away" or "wet") pixel
cannot be visited again, and after the entire connected region is labelled, its pixels are
assigned a region number, and the procedure continues to search for the next connected
region. To label a region, the fire starts from its first chosen pixel.
Figure: magenta and yellow stars indicate the fire (wave) front and the burnt-away pixels, respectively.
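A minimal sketch of this labelling procedure, assuming the thresholded input is a binary NumPy array; the function name label_regions and the queue-based wave front are illustrative choices rather than the only possible implementation:

```python
from collections import deque
import numpy as np

def label_regions(binary, connectivity=4):
    """Grassfire / wave-propagation labelling of a 0/1 image: each connected
    foreground region gets a distinct positive label, background stays 0."""
    if connectivity == 4:
        neighbours = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    else:  # 8-connectivity
        neighbours = [(dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                      if (dy, dx) != (0, 0)]
    h, w = binary.shape
    labels = np.zeros((h, w), dtype=int)
    current = 0
    for y in range(h):
        for x in range(w):
            if binary[y, x] and labels[y, x] == 0:
                current += 1                     # a new fire starts at this pixel
                labels[y, x] = current           # the seed pixel is "burnt away"
                front = deque([(y, x)])
                while front:                     # propagate the wave front
                    cy, cx = front.popleft()
                    for dy, dx in neighbours:
                        ny, nx = cy + dy, cx + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and binary[ny, nx] and labels[ny, nx] == 0):
                            labels[ny, nx] = current   # burnt pixels are never revisited
                            front.append((ny, nx))
    return labels
```

For example, applied to a 0/1 map with two separate blobs, the function returns an array in which one blob is filled with 1s and the other with 2s.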
Intra-region signal variations can be restricted with a similar predicate: P(R) = TRUE
if |f(x,y) − μR| ≤ Δ and FALSE otherwise, where (x,y) is a pixel from the
region R and μR is the mean value of the signals f(x,y) over the entire region R.
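A direct sketch of this predicate, assuming the region is given as a list of (y, x) coordinates into a greyscale NumPy array; the names uniform and delta are illustrative:

```python
import numpy as np

def uniform(image, region_pixels, delta):
    """P(R): TRUE iff every pixel of R deviates from the region mean by at most delta."""
    values = np.array([image[y, x] for (y, x) in region_pixels], dtype=float)
    mu = values.mean()  # mean signal over the entire region R
    return bool(np.all(np.abs(values - mu) <= delta))
```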
1.3. Region growing: The bottom-up region growing algorithm starts from a set of seed
pixels defined by the user and sequentially adds a pixel to a region provided that the
pixel has not been assigned to any other region, is a neighbour of that region, and its
addition preserves uniformity of the growing region.
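A minimal single-seed sketch of this algorithm, assuming a greyscale NumPy array; grow_region and delta are illustrative names, and the region mean μR is approximated by a running mean that is updated as pixels are added (a common simplification of the predicate above):

```python
from collections import deque
import numpy as np

def grow_region(image, seed, delta):
    """Grow one region from a seed pixel, adding 4-neighbours whose grey level
    stays within delta of the current (running) region mean."""
    h, w = image.shape
    in_region = np.zeros((h, w), dtype=bool)
    sy, sx = seed
    in_region[sy, sx] = True
    total, count = float(image[sy, sx]), 1      # running sum and count give the region mean
    front = deque([seed])
    while front:
        cy, cx = front.popleft()
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ny, nx = cy + dy, cx + dx
            if 0 <= ny < h and 0 <= nx < w and not in_region[ny, nx]:
                if abs(float(image[ny, nx]) - total / count) <= delta:
                    in_region[ny, nx] = True    # the pixel joins the growing region
                    total += float(image[ny, nx])
                    count += 1
                    front.append((ny, nx))
    return in_region
```

In a full segmentation, a global label map would also record which pixels have already been assigned to other regions, so that each pixel joins at most one region.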
In the splitting stage, any region whose uniformity predicate P(R) is FALSE is subdivided,
typically into four quadrants. The splitting stage alternates with a merging stage, in which
two adjacent regions Ri and Rj are combined into a new, larger region if the uniformity
predicate for the union of these two regions, P(Ri ∪ Rj), is TRUE.
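A sketch of the merge test alone, assuming regions are stored as lists of pixel coordinates and the Δ-based uniformity predicate above; merge_if_uniform is an illustrative name:

```python
import numpy as np

def merge_if_uniform(image, region_i, region_j, delta):
    """Merge two adjacent regions (lists of (y, x) pixels) when the uniformity
    predicate P(Ri u Rj) holds, i.e. all grey levels of the union stay within
    delta of the union's mean; returns the merged region or None."""
    union = region_i + region_j
    values = np.array([float(image[y, x]) for (y, x) in union])
    if np.all(np.abs(values - values.mean()) <= delta):   # P(Ri ∪ Rj) is TRUE
        return union
    return None
```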
2.1. Simple thresholding: The most common image property to threshold is pixel grey
level: g(x,y) = 0 if f(x,y) < T and g(x,y) = 1 if f(x,y) ≥ T, where T is the threshold.
Using two thresholds, T1 < T2, a range of grey levels related to region 1 can be
defined: g(x,y) = 0 if f(x,y) < T1 OR f(x,y) > T2 and g(x,y) = 1 if T1 ≤ f(x,y) ≤ T2.
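Both rules translate directly into element-wise comparisons; a minimal sketch assuming a greyscale NumPy array, with threshold and band_threshold as illustrative names:

```python
import numpy as np

def threshold(image, T):
    """g(x,y) = 1 where f(x,y) >= T, else 0."""
    return (image >= T).astype(np.uint8)

def band_threshold(image, T1, T2):
    """g(x,y) = 1 where T1 <= f(x,y) <= T2, else 0 (two-threshold rule)."""
    return ((image >= T1) & (image <= T2)).astype(np.uint8)
```

For example, band_threshold(image, 100, 150) keeps only the pixels whose grey levels lie between T1 = 100 and T2 = 150.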
2.2. Adaptive thresholding: A threshold separates the background from the object; the
adaptive separation may take account of empirical probability distributions of object
(e.g. dark) and background (bright) pixels. Such a threshold has to equalise two kinds
of expected errors: assigning a background pixel to the object and assigning an
object pixel to the background. More complex adaptive thresholding techniques use a
spatially varying threshold to compensate for local spatial context effects (such a
spatially varying threshold can be thought of as a background normalisation).
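As an illustration only, one common spatially varying scheme thresholds each pixel against the mean of its local neighbourhood; the window size and offset C below are illustrative parameters, and many other adaptive rules exist:

```python
import numpy as np

def adaptive_threshold(image, window=15, C=5):
    """Threshold each pixel against the mean of its (window x window) neighbourhood
    minus a small offset C, which acts as a background normalisation."""
    h, w = image.shape
    pad = window // 2
    padded = np.pad(image.astype(float), pad, mode='edge')
    out = np.zeros((h, w), dtype=np.uint8)
    for y in range(h):
        for x in range(w):
            local_mean = padded[y:y + window, x:x + window].mean()
            out[y, x] = 1 if image[y, x] >= local_mean - C else 0
    return out
```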
2.3. Colour thresholding: Colour segmentation may be more accurate because there is more
information at the pixel level compared to greyscale images. The standard Red-
Green-Blue (RGB) colour representation has strongly interrelated colour components,
and a number of other colour systems (e.g. HSI: Hue-Saturation-Intensity) have been
designed in order to exclude redundancy, determine actual object / background
colours irrespective of illumination, and obtain a more stable segmentation. Colour
thresholding can focus on an object of interest much better than its greyscale analogue.
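A sketch of one widely used RGB-to-HSI conversion (the arccos formulation found in standard image processing textbooks), assuming RGB values already scaled to [0, 1]; rgb_to_hsi is an illustrative name and the small eps guards against division by zero:

```python
import numpy as np

def rgb_to_hsi(rgb):
    """Convert an (h, w, 3) RGB image with values in [0, 1] to HSI.
    Hue is in degrees [0, 360); saturation and intensity are in [0, 1]."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    eps = 1e-8
    intensity = (r + g + b) / 3.0
    saturation = 1.0 - 3.0 * np.minimum(np.minimum(r, g), b) / (r + g + b + eps)
    num = 0.5 * ((r - g) + (r - b))
    den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + eps
    theta = np.degrees(np.arccos(np.clip(num / den, -1.0, 1.0)))
    hue = np.where(b <= g, theta, 360.0 - theta)
    return np.stack([hue, saturation, intensity], axis=-1)
```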
Segmentation of colour images involves a partitioning of the colour space, i.e. the RGB or
HSI space. One simple approach is based on some reference (or dominant) colour
(R0, G0, B0) and thresholding of the Cartesian (Euclidean) distances to it from every pixel
colour f(x,y) = (R(x,y), G(x,y), B(x,y)):
g(x,y) = 1 if √((R(x,y) − R0)² + (G(x,y) − G0)² + (B(x,y) − B0)²) ≤ T and g(x,y) = 0 otherwise,
where T is a chosen distance threshold and g(x,y) is the binary region map after thresholding.
This thresholding rule defines a sphere of radius T in RGB space, centred on the reference
colour. All pixels inside or on the sphere belong to the region indexed with 1 and all other
pixels are in the region 0.
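A minimal sketch of this rule, assuming an (h, w, 3) RGB array; colour_threshold, the reference colour and the sphere radius T are illustrative parameters:

```python
import numpy as np

def colour_threshold(rgb, reference, T):
    """g(x,y) = 1 for pixels whose Euclidean (Cartesian) distance in RGB space
    from the reference colour (R0, G0, B0) is at most T (inside or on the sphere)."""
    diff = rgb.astype(float) - np.asarray(reference, dtype=float)
    distance = np.sqrt(np.sum(diff ** 2, axis=-1))
    return (distance <= T).astype(np.uint8)
```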
How fine the partitioning should be depends on the application domain. In many cases
colour segmentation exploits only a few dominant colours corresponding to distinct
peaks of the pixel-wise colour distribution.
Figure: Colour image “baboon” and its 6×6×6 colour histogram
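A sketch of how such a coarse histogram and its dominant peaks might be computed for an 8-bit RGB image; colour_histogram, dominant_colours and the choice of 6 bins per channel are illustrative:

```python
import numpy as np

def colour_histogram(rgb, bins=6):
    """Coarse bins x bins x bins histogram of an (h, w, 3) 8-bit RGB image."""
    idx = (rgb.astype(int) * bins) // 256          # map 0..255 to bin 0..bins-1
    hist = np.zeros((bins, bins, bins), dtype=int)
    for r, g, b in idx.reshape(-1, 3):
        hist[r, g, b] += 1
    return hist

def dominant_colours(hist, k=4):
    """Return the k most populated histogram cells, i.e. candidate dominant colours."""
    flat = np.argsort(hist, axis=None)[::-1][:k]
    return [np.unravel_index(i, hist.shape) for i in flat]
```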