Image Processing Notes
Image Segmentation and Analysis
1. Introduction: Image segmentation is the process of partitioning a digital image into multiple segments to
simplify image analysis. It helps isolate regions of interest (ROI), such as objects or boundaries.
2. Threshold-Based Segmentation: Divides image pixels based on intensity threshold values. Simple and
fast. Used in separating foreground and background.
3. Edge-Based Segmentation: Detects object boundaries using edge detection techniques like Sobel, Prewitt,
or Canny. Emphasizes high-intensity changes.
4. Edge Detection: Identifies points in an image where brightness changes sharply. Common algorithms:
Canny, Laplacian, Sobel.
5. Edge Linking: Joins detected edges to form complete object boundaries using techniques like Hough
Transform or graph-based methods.
6. Hough Transform: Detects geometric shapes (lines, circles) by mapping edge points into a parameter
space.
7. Watershed Transform: Treats grayscale images as topographic surfaces and finds object boundaries
where water would naturally divide regions.
8. Clustering Techniques: Groups pixels into clusters based on color or texture. Common methods: K-means,
Fuzzy C-means.
9. Region-Based Approach: Builds regions by merging neighboring pixels that have similar properties. Includes
region growing and region splitting-and-merging.
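As a minimal sketch of threshold-based segmentation (item 2 above), using NumPy and a synthetic 4×4 image — both assumptions, not part of the notes — a global threshold splits foreground from background in one comparison:

```python
import numpy as np

def threshold_segment(image, t):
    """Return a binary mask: True where pixel intensity exceeds t."""
    return image > t

# Toy 4x4 grayscale image: a bright "object" on a dark background.
img = np.array([
    [ 10,  12,  11,  10],
    [ 12, 200, 210,  11],
    [ 10, 205, 198,  12],
    [ 11,  10,  12,  10],
], dtype=np.uint8)

mask = threshold_segment(img, 128)
print(mask.sum())  # 4 foreground pixels
```

In practice the threshold t is often chosen automatically (e.g. Otsu's method) rather than hard-coded as here.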
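Edge detection with the Sobel operator (items 3-4) can likewise be sketched with plain NumPy; the hand-rolled valid-mode convolution and the synthetic step-edge image below are illustrative assumptions, not anything from the notes:

```python
import numpy as np

def convolve2d(img, kernel):
    """Valid-mode 2-D convolution (kernel flipped, as in true convolution)."""
    k = np.flipud(np.fliplr(kernel))
    h, w = k.shape
    out = np.zeros((img.shape[0] - h + 1, img.shape[1] - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + h, j:j + w] * k)
    return out

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

img = np.zeros((6, 6))
img[:, 3:] = 255.0                 # vertical step edge at column 3

gx = convolve2d(img, SOBEL_X)      # horizontal gradient
gy = convolve2d(img, SOBEL_Y)      # vertical gradient
mag = np.hypot(gx, gy)             # gradient magnitude: large only at the edge
```

The magnitude image responds only in the two output columns whose 3×3 window straddles the step, which is exactly the "sharp brightness change" the notes describe.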
Image Compression and Object Recognition
1. Image Compression:
- Introduction: Reduces file size for storage and transmission.
- Need: Saves bandwidth and memory; speeds up processing.
- Run-Length Coding: Encodes repeated pixel values efficiently.
- Shannon-Fano Coding: Uses symbol probabilities to generate binary codes.
- Huffman Coding: Builds an optimal prefix-code tree from symbol frequencies; used for entropy coding in JPEG.
- Scalar & Vector Quantization: Maps individual values (scalar) or blocks of values (vector) to a smaller set of representative levels; a lossy step that reduces the data to be encoded.
- JPEG/MPEG Standards: JPEG for still images; MPEG for video with motion compensation.
- Video Compression: Reduces temporal and spatial redundancies in video sequences.
2. Object Recognition:
- Introduction: Identifies and classifies objects within an image.
- Computer Vision: Field that includes image recognition, analysis, and interpretation.
- Tensor Methods: Represent images and feature maps as tensors in deep learning; recognition networks operate on them via convolutions and matrix multiplications.
- Classification Algorithms: SVM, KNN, CNNs classify detected objects.
- Object Detection: Locates and identifies objects (e.g., YOLO, SSD).
- Object Tracking: Continuously tracks moving objects across frames.
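The run-length coding bullet above can be made concrete with a short sketch (the pair-list representation is one common choice, assumed here, not specified in the notes):

```python
def rle_encode(pixels):
    """Encode a 1-D pixel sequence as (value, run_length) pairs."""
    runs = []
    for p in pixels:
        if runs and runs[-1][0] == p:
            runs[-1][1] += 1          # extend the current run
        else:
            runs.append([p, 1])       # start a new run
    return [tuple(r) for r in runs]

def rle_decode(runs):
    """Expand (value, run_length) pairs back into the pixel sequence."""
    return [v for v, n in runs for _ in range(n)]

row = [0, 0, 0, 255, 255, 0, 0, 0, 0]
encoded = rle_encode(row)
print(encoded)                        # [(0, 3), (255, 2), (0, 4)]
assert rle_decode(encoded) == row     # lossless round trip
```

The scheme pays off exactly when runs are long — e.g. binary masks or scan lines of flat background — and can expand the data when neighboring pixels rarely repeat.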
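Huffman coding can also be sketched briefly. This is a minimal illustration using Python's standard `heapq` and `collections.Counter` (an implementation choice, not the JPEG codec itself); frequent symbols end up with shorter codes:

```python
import heapq
from collections import Counter

def huffman_codes(data):
    """Build a Huffman code table {symbol: bitstring} from symbol frequencies."""
    freq = Counter(data)
    # Heap entries: (frequency, tiebreak, tree); a tree is a symbol or a (left, right) pair.
    heap = [(f, i, s) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    count = len(heap)
    if count == 1:                    # degenerate case: single symbol gets code "0"
        return {heap[0][2]: "0"}
    while len(heap) > 1:              # repeatedly merge the two rarest subtrees
        f1, _, t1 = heapq.heappop(heap)
        f2, _, t2 = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, count, (t1, t2)))
        count += 1
    codes = {}
    def walk(tree, prefix):           # left edge = "0", right edge = "1"
        if isinstance(tree, tuple):
            walk(tree[0], prefix + "0")
            walk(tree[1], prefix + "1")
        else:
            codes[tree] = prefix
    walk(heap[0][2], "")
    return codes

codes = huffman_codes("aaaabbc")
# The most frequent symbol receives the shortest code.
assert len(codes["a"]) <= len(codes["b"]) <= len(codes["c"])
```

The resulting code is an optimal prefix code: no codeword is a prefix of another, so the bitstream decodes unambiguously.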
Image Restoration
1. Introduction: Recovers original image from a degraded one using mathematical models to remove blur or
noise.
2. Degradation Model: g(x, y) = h(x, y) * f(x, y) + n(x, y), where f is the original image, h is the blur (point
spread) function, n is additive noise, and * denotes convolution.
3. Noise Models:
- Gaussian: Random intensity variation.
- Salt-and-Pepper: Sharp black and white pixel disturbances (impulse noise).
- Speckle: Multiplicative noise in ultrasound/radar.
- Poisson: Quantum noise in photon-limited imaging.
4. Restoration Techniques:
- Spatial Domain: Works on pixels (mean/median filters).
- Frequency Domain: Modifies image in Fourier space (Wiener filter).
- Model-Based: Uses known or estimated degradation models.
5. Blind Deconvolution: Estimates both the image and the blur kernel when the degradation is unknown.
6. Lucy-Richardson Filter: Iterative deconvolution method that assumes Poisson noise; commonly applied to motion-blur restoration when the blur kernel is known.
7. Wiener Filter: Frequency-domain filter that minimizes the mean square error between the restored and original images; optimal in the MMSE sense when the signal and noise power spectra are known.
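The degradation model and Wiener filter above can be demonstrated end to end. This sketch assumes NumPy, a synthetic 32×32 test image, a 3×3 box blur as h, and a constant K in place of the true noise-to-signal ratio — all illustrative choices, not from the notes:

```python
import numpy as np

rng = np.random.default_rng(0)

# Original image f: bright square on a dark background.
f = np.zeros((32, 32))
f[12:20, 12:20] = 1.0

# Blur kernel h: 3x3 box blur, zero-padded to the image size (circular convolution).
h = np.zeros((32, 32))
h[:3, :3] = 1.0 / 9.0

# Degradation model g = h * f + n, with the convolution done via the FFT.
F, H = np.fft.fft2(f), np.fft.fft2(h)
n = 0.005 * rng.standard_normal((32, 32))
g = np.real(np.fft.ifft2(H * F)) + n

# Wiener filter: conj(H) / (|H|^2 + K), with K approximating the noise-to-signal ratio.
K = 0.01
G = np.fft.fft2(g)
f_hat = np.real(np.fft.ifft2(np.conj(H) * G / (np.abs(H) ** 2 + K)))

# Restoration should reduce the error relative to the degraded image.
err_degraded = np.mean((g - f) ** 2)
err_restored = np.mean((f_hat - f) ** 2)
```

The K term is what distinguishes the Wiener filter from naive inverse filtering (dividing by H alone), which would blow up noise at frequencies where |H| is small.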
Medical Image Processing
1. Introduction: Uses imaging (MRI, CT, PET) for diagnosis and treatment planning.
2. Image Enhancement: Improves visibility using filters and contrast techniques.
3. Segmentation: Separates organs/tissues using thresholding, watershed, deep learning.
4. Analysis:
- Brain MRI: Detect tumors or strokes.
- Cardiac MRI: Assess heart structure and function.
- Breast MRI: Detect and classify tumors.
Satellite Image Processing
1. Remote Sensing: Collects data about the Earth's surface and atmosphere using satellite sensors.
2. GPS: Satellite-based navigation for location data.
3. GIS: Manages, analyzes spatial data; overlays maps/images.
4. Photographic Systems: Cameras, sensors, lenses on satellites.
5. Photogrammetry: 3D measurements from 2D images for maps.
6. Spectral Sensing:
- Multispectral: 3-10 bands; land use, vegetation.
- Thermal: Detects heat; used in fire, temperature mapping.
- Hyperspectral: Hundreds of narrow bands; used in pollution, mineral analysis.
7. Earth Resource Satellites: Landsat, Sentinel, SPOT, Resourcesat for agriculture, environment monitoring.
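One standard use of multispectral bands for vegetation mapping (item 6 above) is the Normalized Difference Vegetation Index, NDVI = (NIR − Red) / (NIR + Red). The band values and the 0.3 cutoff below are illustrative assumptions, not sensor data from any particular satellite:

```python
import numpy as np

# Hypothetical red and near-infrared reflectance values (0-1) for a 2x2 scene.
red = np.array([[0.10, 0.40],
                [0.12, 0.45]])
nir = np.array([[0.60, 0.42],
                [0.55, 0.44]])

# NDVI = (NIR - Red) / (NIR + Red); values near +1 indicate dense vegetation.
ndvi = (nir - red) / (nir + red)
vegetation_mask = ndvi > 0.3   # rule-of-thumb cutoff for vegetated pixels
```

Healthy vegetation reflects strongly in the near-infrared while absorbing red light, which is why the ratio separates vegetated from bare or built-up pixels.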