Web Site: www.ijaiem.org Email: editor@ijaiem.org, editorijaiem@gmail.com Volume 2, Issue 9, September 2013 ISSN 2319 - 4847
Resolution Enhancement of images with Interpolation and DWT-SWT Wavelet Domain Components
Mr. G.M. Khaire1, Prof. R.P.Shelkikar2
1 PG Student, College of Engineering, Osmanabad. 2 Associate Professor, College of Engineering, Osmanabad.
ABSTRACT
Display resolutions are increasing rapidly, while imaging-system resolution is not keeping pace with display technology; this gap highlights the need for resolution enhancement methods. In this paper we propose a new method for super resolution. The proposed method uses the high-frequency sub-bands of the SWT and the DWT. PSNR is used as the quality measure to compare the proposed method with other methods, and the proposed method proves superior to other state-of-the-art super resolution methods.
1. INTRODUCTION
Image resolution is a key feature of all kinds of images. With the ever-increasing sizes of displays, the need for super resolution images has also increased, and this need is compounded by the limited size of digital image sensors. Although widespread commercial cameras provide very high resolution images, most scientific cameras still have a resolution of only 512 × 512. Resolution enhancement has always been associated with interpolation techniques. Research suggests that interpolation methods increase the intensity of low-frequency components, which means an interpolated image has fewer sharp intensity transitions per pixel. A new method for resolution enhancement that preserves the high-frequency content of the image is suggested in this paper.

Spatial-domain techniques lag in the extraction and preservation of the high-frequency components of an image. This suggests that a technique outside the spatial domain should be used: the image is converted to some other domain, processed there, and then converted back to the spatial domain. That domain can be the Fourier domain, the wavelet domain, or another transform domain. The Fourier domain is more suitable for spectral filtering, which removes particular frequencies from the image. The wavelet domain separates the components of an image into individual matrices; these matrices can then be processed separately and combined to obtain the desired result. Fast algorithms for implementing the discrete wavelet transform (DWT) have encouraged the use of the wavelet domain for image resolution improvement, and various image processing algorithms can be implemented with the DWT [1]. The DWT decomposes an image into four sub-bands: low-low (LL), low-high (LH), high-low (HL) and high-high (HH). These sub-bands are half the dimensions of the image under consideration. The stationary wavelet transform (SWT) is also used for image resolution enhancement [2].
The SWT likewise has four sub-bands similar to the DWT, but in the SWT the sub-bands are the same size as the image. Here we propose a new method for image resolution enhancement based on a combination of DWT and SWT components together with interpolation, and we show that the proposed technique is better than previously available resolution-improvement techniques. In Section II, a literature review of image resolution enhancement techniques is given. In Section III, the proposed method is described in detail. Results are demonstrated in Section IV, and concluding remarks are presented in Section V.
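The size difference between the two transforms can be illustrated with a minimal, pure-NumPy sketch of a single-level Haar DWT and SWT (the simplest wavelet, used here only for illustration; a real implementation would typically use a library such as PyWavelets): the DWT sub-bands are half the image size, while the undecimated SWT keeps each sub-band at the full image size.

```python
import numpy as np

def haar_dwt2(img):
    # Single-level 2-D Haar DWT: filter and decimate rows, then columns.
    img = img.astype(float)
    lo = (img[:, 0::2] + img[:, 1::2]) / 2.0   # row low-pass + decimate
    hi = (img[:, 0::2] - img[:, 1::2]) / 2.0   # row high-pass + decimate
    LL = (lo[0::2] + lo[1::2]) / 2.0
    LH = (lo[0::2] - lo[1::2]) / 2.0
    HL = (hi[0::2] + hi[1::2]) / 2.0
    HH = (hi[0::2] - hi[1::2]) / 2.0
    return LL, LH, HL, HH

def haar_swt2(img):
    # Single-level 2-D Haar SWT: same filters, but no decimation,
    # so every sub-band keeps the size of the input image.
    img = img.astype(float)
    r = np.roll(img, -1, axis=1)
    lo, hi = (img + r) / 2.0, (img - r) / 2.0
    cl, ch = np.roll(lo, -1, axis=0), np.roll(hi, -1, axis=0)
    LL, LH = (lo + cl) / 2.0, (lo - cl) / 2.0
    HL, HH = (hi + ch) / 2.0, (hi - ch) / 2.0
    return LL, LH, HL, HH

img = np.arange(64, dtype=float).reshape(8, 8)
print(haar_dwt2(img)[0].shape)  # (4, 4): half the image size
print(haar_swt2(img)[0].shape)  # (8, 8): same size as the image
```

The high-frequency sub-bands (LH, HL, HH) carry the edge detail that plain interpolation loses, which is why both transforms are of interest for resolution enhancement.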
4. Image Enhancement
Denote a two-dimensional digital image of gray-level intensities by I. The image I is ordinarily represented in software-accessible form as an M × N matrix of indexed elements I(i, j), where 0 ≤ i ≤ M − 1 and 0 ≤ j ≤ N − 1. The elements I(i, j) are samples of the image intensities, usually called pixels (picture elements). For simplicity, we assume that these come from a finite integer-valued range. This is not unreasonable, since a finite word length must be used to represent the intensities. Typically the pixels represent optical intensity, but they may also represent other attributes of sensed radiation, such as radar, electron micrographs, X-rays, or thermal imagery. This allows selective enhancement based on the contrast sensitivity function of the human visual system. We also propose an evaluation method for measuring the performance of the algorithm and for comparing it with existing approaches. The selective enhancement of the proposed approach is especially suitable for digital television applications, to improve the perceived visual quality of images when the source contains an unsatisfactory amount of high frequencies for various reasons, including the interpolation used to convert standard-definition sources into high-definition images. Such processing can generate new frequency components and is therefore attractive in some applications.
4.1 Automatic image enhancement
Camera or computer image-editing programs often offer basic automatic image enhancement features that correct color hue and brightness imbalances, as well as other editing features such as red-eye removal, sharpness adjustment, zoom features and automatic cropping. These are called automatic because they generally happen without user interaction, or are offered with one click of a button or by selecting an option from a menu. Additionally, some automatic editing features offer a combination of editing actions with little or no user interaction.
4.2 Point Operations
Often, images obtained via photography, digital photography, flatbed scanning, or other sensors can be of low quality due to poor image contrast or, more generally, poor usage of the available range of gray levels. The images may suffer from overexposure or from underexposure, as in the mandrill image. In performing image enhancement, we seek to compute J, an enhanced version of I. The most basic methods of image enhancement involve point operations, where each pixel in the enhanced image is computed as a one-to-one function of the corresponding pixel in the original image: J(i, j) = f[I(i, j)]. The most common point operation is the linear contrast stretching operation, which seeks to maximally utilize the available gray-scale range. If a is the minimum intensity value in image I and b is the maximum, the point operation for linear contrast stretching is defined by

J(i, j) = (K − 1) · [I(i, j) − a] / (b − a),   (1)
assuming that the pixel intensities are bounded by 0 ≤ I(i, j) ≤ K − 1, where K is the number of available pixel intensities. The result image J then has maximum gray level K − 1 and minimum gray level 0, with the other gray levels distributed in between according to Eq. (1). Several point operations utilize the image histogram, which is a graph of the frequency of occurrence of each gray level in I. The histogram value H_I(k) equals n if the image I contains exactly n pixels with gray level k. Qualitatively, an image that has a flat or well-distributed histogram often strikes an excellent balance between contrast and preservation of detail. Histogram flattening, also called histogram equalization in Gonzalez and Woods (1), may be used to transform an image I into an image J with an approximately flat histogram. This transformation can be achieved by assigning

J(i, j) = [(K − 1)/(MN)] · Σ_{k=0}^{I(i,j)} H_I(k).
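Both point operations are simple to state in code. The sketch below implements linear contrast stretching per Eq. (1) and histogram equalization via the cumulative histogram, assuming 8-bit images (K = 256); the function names are illustrative, not from the paper.

```python
import numpy as np

def contrast_stretch(I, K=256):
    # Eq. (1): map [a, b] linearly onto the full range [0, K-1].
    I = I.astype(float)
    a, b = I.min(), I.max()
    return np.round((K - 1) * (I - a) / (b - a)).astype(np.uint8)

def histogram_equalize(I, K=256):
    # Map each gray level through the scaled cumulative histogram,
    # producing an approximately flat output histogram.
    hist = np.bincount(I.ravel(), minlength=K)   # H_I(k)
    cdf = np.cumsum(hist) / I.size               # running sum / (M*N)
    return np.round((K - 1) * cdf[I]).astype(np.uint8)

I = np.array([[50, 60], [70, 80]], dtype=np.uint8)
print(contrast_stretch(I))     # gray levels spread over 0..255
print(histogram_equalize(I))
```

For this tiny image, contrast stretching maps 50 → 0 and 80 → 255, fully using the available range.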
A third point operation, frame averaging, is useful when it is possible to obtain multiple images G_i, i = 1, …, n, of the same scene, each a version of the ideal image I to which deleterious noise has been unintentionally added:

G_i = I + N_i,   i = 1, …, n,   (4)

where each noise image N_i is an M × N matrix of discrete random variables with zero mean and variance σ². The noise may arise as electrical noise, noise in a communications channel, thermal noise, or noise in the sensed radiation. If the noise images are not mutually correlated, then averaging the n frames,

J = (1/n) Σ_{i=1}^{n} G_i,   (5)

forms an effective estimate of the uncorrupted image I, with a noise variance of only σ²/n.
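The σ²/n variance reduction is easy to verify numerically. The sketch below simulates the model of Eq. (4) on a synthetic constant scene (an assumed test setup, not from the paper) and compares the residual noise variance of a single frame against the n-frame average.

```python
import numpy as np

# Simulate n noisy frames G_i = I + N_i of a static scene and average them.
rng = np.random.default_rng(0)
I = np.full((64, 64), 100.0)                 # ideal image (constant scene)
n, sigma = 16, 8.0
frames = [I + rng.normal(0.0, sigma, I.shape) for _ in range(n)]

J = np.mean(frames, axis=0)                  # frame-averaged estimate of I

err_single = np.var(frames[0] - I)           # ~ sigma^2 = 64
err_avg = np.var(J - I)                      # ~ sigma^2 / n = 4
print(err_single, err_avg)
```

With n = 16 frames, the residual variance drops by roughly a factor of 16, matching the σ²/n prediction.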
This technique is only useful, of course, when multiple frames of the same scene are available, when the information content between frames remains unchanged (disallowing, for example, motion between frames), and when the noise content does change between frames. Such examples arise quite often, however. For example, frame averaging is often used to enhance synthetic aperture radar images, confocal microscope images, and electron micrographs.
4.3 Linear Filters
Linear filters obey the classical linear superposition property, as with other linear systems found in the controls, optics, and electronics areas of electrical engineering (2). Linear filters can be realized by linear convolution in the spatial domain or by pointwise multiplication of discrete Fourier transforms in the frequency domain. Thus, linear filters can be characterized by their frequency selectivity and spectrum shaping. As with 1-D signals, 2-D digital linear filters may be of the low-pass, high-pass or band-pass variety. Much of the current interest in digital image processing can be traced to the rediscovery of the fast Fourier transform (FFT) some 30 years ago (it was known to Gauss). The FFT computes the discrete Fourier transform (DFT) of an N × N image with a computational cost of O(N² log₂ N), whereas naive DFT computation requires O(N⁴) operations. The speedup afforded by the FFT is tremendous. This is significant in linear-filtering-based image enhancement, since linear filters are implemented via convolution:

J = F ∗ G,   (6)

where F is the impulse response of the linear filter, G is the original image, and J is the filtered, enhanced result. The convolution in Eq. (6) may be implemented in the frequency domain by pointwise multiplication (∘) and the inverse Fourier transform (IFFT):

J = IFFT[ FFT(F₀) ∘ FFT(G₀) ],   (7)

where F₀ and G₀ are 2N × 2N zero-padded versions of F and G. By this we mean that F₀(i, j) = F(i, j) for 0 ≤ i, j ≤ N − 1 and F₀(i, j) = 0 otherwise; similarly for G₀.
The zero padding is necessary to eliminate the wraparound effects in the FFTs that occur because of the natural periodicity of sampled data. If G is corrupted as in Eq. (4) and N contains white noise with zero mean, then enhancement means noise smoothing, which is usually accomplished by applying a low-pass filter of fairly wide bandwidth. Typical low-pass filters include the average filter, the Gaussian filter and the ideal low-pass filter. The average filter can be applied by averaging a neighborhood (an m × m neighborhood, for example) of pixels around G(i, j) to compute J(i, j). Likewise, average filtering can be viewed as convolving G with a box-shaped kernel F in Eq. (7).
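The FFT-based route of Eq. (7) can be sketched directly with NumPy. The helper below zero-pads both arrays (to the full linear-convolution size rather than a fixed 2N × 2N, which serves the same anti-wraparound purpose), multiplies the spectra pointwise, and inverts; it is then used with a 3 × 3 box kernel, i.e. the average filter mentioned above.

```python
import numpy as np

def fft_convolve(G, F):
    # Linear convolution via FFT: zero-pad to the full output size so
    # the circular convolution of the DFT has no wraparound, multiply
    # the spectra pointwise, and invert (Eq. (7) in spirit).
    M = G.shape[0] + F.shape[0] - 1
    N = G.shape[1] + F.shape[1] - 1
    J = np.fft.ifft2(np.fft.fft2(G, (M, N)) * np.fft.fft2(F, (M, N)))
    return np.real(J)

G = np.arange(16, dtype=float).reshape(4, 4)   # toy "image"
F = np.ones((3, 3)) / 9.0                      # 3x3 box (average) filter
J = fft_convolve(G, F)
print(J.shape)  # (6, 6): the full linear-convolution result
```

Each interior output pixel is the mean of the corresponding 3 × 3 neighborhood of G, exactly as the spatial-domain description of the average filter predicts.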
5. Interpolation
Interpolation is the process of defining a function that takes on specified values at specified points; equivalently, it is the process of estimating the values of a continuous function from discrete samples. Two closely related interpolants are the piecewise cubic spline and the shape-preserving piecewise cubic known as pchip. Image processing applications of interpolation include image magnification or reduction, subpixel image registration, correction of spatial distortions, and image decompression, among others. Of the many image interpolation techniques available, nearest-neighbour, bilinear and cubic convolution are the most common, and are discussed here. Sinc interpolation provides a perfect reconstruction of a continuous function, provided that the data were obtained by uniform sampling at or above the Nyquist rate. Sinc interpolation does not give good results in an image processing environment, however, since image data are generally acquired at a much lower sampling rate. The mapping between the unknown high-resolution image and the low-resolution image is not invertible, and thus a unique solution to the inverse problem cannot be computed. One of the essential aspects of interpolation is efficiency, since the amount of data associated with digital images is large.
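The two simplest schemes above can be sketched in a few lines of NumPy for integer upscaling by a factor s (function names are illustrative; production code would use a library routine such as scipy.ndimage.zoom): nearest-neighbour replicates each pixel, while bilinear interpolation blends the four surrounding pixels with distance weights.

```python
import numpy as np

def nearest_neighbor(img, s):
    # Replicate each pixel s times along both axes.
    return np.repeat(np.repeat(img, s, axis=0), s, axis=1)

def bilinear(img, s):
    # Sample the image on a finer grid; each output pixel is a
    # distance-weighted blend of its four nearest input pixels.
    h, w = img.shape
    ys = np.linspace(0, h - 1, h * s)
    xs = np.linspace(0, w - 1, w * s)
    y0 = np.clip(np.floor(ys).astype(int), 0, h - 2)
    x0 = np.clip(np.floor(xs).astype(int), 0, w - 2)
    dy = (ys - y0)[:, None]
    dx = (xs - x0)[None, :]
    Y0, X0 = np.ix_(y0, x0)
    top = img[Y0, X0] * (1 - dx) + img[Y0, X0 + 1] * dx
    bot = img[Y0 + 1, X0] * (1 - dx) + img[Y0 + 1, X0 + 1] * dx
    return top * (1 - dy) + bot * dy

img = np.array([[0.0, 10.0], [20.0, 30.0]])
print(nearest_neighbor(img, 2).shape)  # (4, 4)
print(bilinear(img, 2).shape)          # (4, 4)
```

Both outputs pass through the original sample values at the corners; bilinear additionally produces smooth in-between values, which is precisely the low-pass behaviour that motivates the wavelet-domain correction proposed in this paper.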
Figure 2: Pepper and Lena images (a) and their resolution-enhanced outputs with (b) bilinear interpolation, (c) bicubic interpolation, (d) nearest-neighbor interpolation, (e) SDWT, and (f) the proposed method.
References
[1] S. Mallat, A Wavelet Tour of Signal Processing, 2nd ed. New York: Academic, 1999.
[2] J. E. Fowler, "The redundant discrete wavelet transform and additive noise," IEEE Signal Processing Letters, vol. 12, no. 9, pp. 629-632, 2005.
[3] C. B. Atkins, C. A. Bouman, and J. P. Allebach, "Optimal image scaling using pixel classification," in Proc. IEEE International Conference on Image Processing (ICIP), 2001.
[4] H. Demirel and G. Anbarjafari, "Satellite image resolution enhancement using complex wavelet transform," IEEE Geoscience and Remote Sensing Letters, vol. 7, no. 1, pp. 123-126, 2010.
[5] P. Yinji, S. Il-hong, and P. HyunWook, "Image resolution enhancement using inter-subband correlation in wavelet domain," in Proc. IEEE International Conference on Image Processing (ICIP), 2007.
[6] H. Demirel, G. Anbarjafari, and S. Izadpanahi, "Improved motion based localized super resolution technique using discrete wavelet transform for low resolution video enhancement," in Proc. 17th European Signal Processing Conference, Glasgow, Scotland, Aug. 2009.
[7] G. Anbarjafari and H. Demirel, "Image super resolution based on interpolation of wavelet domain high frequency subbands and the spatial domain input image," ETRI Journal, vol. 32, no. 3, pp. 390-394, 2010.
[8] H. Demirel and G. Anbarjafari, "Image super resolution based on interpolation of wavelet domain high frequency subbands and the spatial domain input image," ETRI Journal, vol. 20, no. 5, 2011.
[9] H. Demirel and G. Anbarjafari, "Image resolution enhancement by using discrete and stationary wavelet decomposition," IEEE Transactions on Image Processing, vol. 20, no. 5, pp. 1458-1460, 2011.