DWT Based Pan-Sharpening of Low Resolution Multispectral Satellite Images

G. Mamatha *  M.V. Lakshmaiah ** V. Sumalatha ***  S. Varadarajan ****
* Assistant Professor, ECE Department, JNTUA College of Engineering, Anantapur, AP, India.
** Assistant Professor, ECE Department, SK University, Anantapur, AP, India.
*** Associate Professor, ECE Department, JNTUA College of Engineering, Anantapur, AP, India.
**** Professor, Department of ECE, SVUCE, Tirupati, AP, India.

Abstract

In this paper, a technique for enhancing a low-resolution multispectral image using the Discrete Wavelet Transform (DWT) is proposed, with the help of the corresponding high-resolution panchromatic image. Image fusion, also called pan-sharpening, is a technique used to integrate the geometric detail of a high-resolution panchromatic (PAN) image and the color information of a low-resolution multispectral (MS) image to produce a high-resolution MS image. Remote sensing systems, particularly those deployed on satellites, provide a repetitive and consistent view of the Earth. In order to meet the requirements of different remote sensing applications, these systems offer a wide range of spectral, spatial, radiometric and temporal resolutions. In general, sensors characterized by high spectral resolution do not have an optimal spatial resolution, which may be inadequate for specific identification tasks despite the good spectral resolution. In a panchromatic image with high spatial resolution, detailed geometric features can be easily recognized, while multispectral images contain richer spectral information.

Keywords:

Introduction

The term fusion, in a general context, refers to an approach for extracting relevant information from several domains. Image fusion aims at achieving high-quality images by effective integration of complementary multi-sensor, multi-temporal and multi-focal images [1,4]. The final fused image is more informative than any of the input images. The meaning of quality, and how it is measured, depends on the particular application. Image fusion finds a wide variety of applications in fields such as remote sensing, astronomy and medical imaging. Remote sensing is a continuously growing market, with applications such as vegetation mapping and observation. However, as a result of the demand for higher classification accuracy and the need for enhanced positioning precision, there is always a need to improve spectral and spatial resolution, either through sensors with higher resolving power or through the effective use of fusion techniques.

Ideally, image fusion techniques should allow the combination of images with different spectral and spatial resolutions while preserving the radiometric information. Considerable effort has been put into developing fusion methods that preserve the spectral information and increase the detail information in the hybrid product produced by the fusion process. A multispectral image has a higher spectral resolution than a panchromatic image, while a panchromatic image will often have a higher spatial resolution than a multispectral image [2].

1. Discrete Wavelet Transform

Since the Continuous Wavelet Transform (CWT) is very redundant, a discretization of the scale and translation variables was introduced; this version of the CWT is named the Discrete Wavelet Transform (DWT). The main advantage of the DWT implementation is its flexibility. The implementation of the 1D DWT is presented in Figure 1.
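As an illustration of a single analysis step, the sketch below shows the lowpass/highpass filtering and decimation by 2 that make up one level of the 1D DWT. It is a minimal pure-Python example assuming the Haar filter pair (the paper's experiments use db5 and db8); the function names are illustrative, not from any library.

```python
import math

def dwt1d_haar(signal):
    """One level of the 1D DWT with the Haar filters:
    lowpass [1/sqrt(2), 1/sqrt(2)] and highpass [1/sqrt(2), -1/sqrt(2)],
    each followed by decimation by a factor of 2."""
    s = 1.0 / math.sqrt(2.0)
    approx = [(signal[i] + signal[i + 1]) * s for i in range(0, len(signal) - 1, 2)]
    detail = [(signal[i] - signal[i + 1]) * s for i in range(0, len(signal) - 1, 2)]
    return approx, detail

def idwt1d_haar(approx, detail):
    """Inverse of one Haar analysis step (perfect reconstruction)."""
    s = 1.0 / math.sqrt(2.0)
    out = []
    for a, d in zip(approx, detail):
        out.append((a + d) * s)  # even sample
        out.append((a - d) * s)  # odd sample
    return out
```

Because the Haar pair forms an orthogonal filter bank, the analysis/synthesis round trip reconstructs the signal exactly, which is what makes the decimated (non-redundant) DWT usable for fusion.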

Figure 1. 1D DWT Implementation

Each iteration of the algorithm used for the computation of the 2D DWT involves several operations. First, the lines of the input image (obtained at the end of the previous iteration) are passed through two different filters (a lowpass filter with impulse response m0 and a highpass filter m1), resulting in two different sub-images [5]. Then the lines of the two sub-images at the filter outputs are decimated by a factor of 2. Next, the columns of the two images obtained are filtered with m0 and m1, and the columns of the resulting four sub-images are also decimated by a factor of 2. Four new sub-images, representing the result of the current iteration, are obtained. These sub-images are called sub-bands. The first sub-image, obtained after two lowpass filterings, is called the approximation sub-image (or LL sub-band). The other three are the detail sub-images: LH, HL and HH. The LL sub-image is the input for the next iteration. An iteration of the 2D DWT is shown in Figure 2.
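The row-then-column procedure above can be sketched as follows. This is a minimal pure-Python illustration that again assumes the Haar pair for m0 and m1; a real implementation would use longer Daubechies filters with border handling.

```python
import math

def haar_analysis_2d(image):
    """Single-level 2D DWT: filter and decimate the rows with the lowpass
    (m0) and highpass (m1) Haar filters, then repeat on the columns,
    yielding the LL, LH, HL and HH sub-bands (each half-size)."""
    s = 1.0 / math.sqrt(2.0)

    def analyze_rows(rows):
        lo, hi = [], []
        for r in rows:
            lo.append([(r[i] + r[i + 1]) * s for i in range(0, len(r) - 1, 2)])
            hi.append([(r[i] - r[i + 1]) * s for i in range(0, len(r) - 1, 2)])
        return lo, hi

    def transpose(m):
        return [list(c) for c in zip(*m)]

    low, high = analyze_rows(image)        # horizontal pass over the lines
    ll, lh = analyze_rows(transpose(low))  # vertical pass over the lowpass image
    hl, hh = analyze_rows(transpose(high)) # vertical pass over the highpass image
    return transpose(ll), transpose(lh), transpose(hl), transpose(hh)
```

For a constant image, all the energy lands in the LL sub-band and the three detail sub-bands are zero, matching the interpretation of LL as the approximation image.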

Figure 2. Single level analysis filterbank for 2D DWT

2. Proposed Method

The DWT decomposes a signal into a multi-resolution representation with both low-frequency coarse information and high-frequency detail information [3]. Image fusion using the wavelet transform is shown in Figure 3. In a one-level DWT, an image is decomposed into a set of low-resolution sub-images (DWT coefficients): LL, LH, HL and HH. The LL sub-image is the approximation image, while the LH, HL and HH sub-images contain the details of the image. The LL sub-image of the MS image is retained by the fusion algorithm, and the detail components are replaced by those of the PAN image. Therefore, the fused image contains the extra spatial details from the high-resolution PAN image, as shown in Figure 4. Also, if the fused image is downsampled, the low-resolution fused image will be approximately equivalent to the original low-resolution MS image. That is, the DWT fusion method may outperform the standard fusion methods in terms of minimizing spectral distortion.
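The fusion rule described above can be sketched in code. This is a one-level illustration using the Haar wavelet (the experiments in the paper use db5 and db8 with 3-5 levels), and it assumes the MS band has already been resampled to the PAN image size; all names are illustrative.

```python
import math

S = 1.0 / math.sqrt(2.0)

def _rows_fwd(rows):
    # Haar lowpass/highpass filtering of each row, decimated by 2.
    lo = [[(r[i] + r[i + 1]) * S for i in range(0, len(r) - 1, 2)] for r in rows]
    hi = [[(r[i] - r[i + 1]) * S for i in range(0, len(r) - 1, 2)] for r in rows]
    return lo, hi

def _rows_inv(lo, hi):
    # Exact inverse of _rows_fwd for each row.
    out = []
    for rl, rh in zip(lo, hi):
        row = []
        for a, d in zip(rl, rh):
            row.extend([(a + d) * S, (a - d) * S])
        out.append(row)
    return out

def _t(m):
    return [list(c) for c in zip(*m)]

def dwt2(img):
    """One-level 2D DWT: rows first, then columns."""
    low, high = _rows_fwd(img)
    ll, lh = _rows_fwd(_t(low))
    hl, hh = _rows_fwd(_t(high))
    return _t(ll), _t(lh), _t(hl), _t(hh)

def idwt2(ll, lh, hl, hh):
    """One-level inverse 2D DWT (columns first, then rows)."""
    low = _t(_rows_inv(_t(ll), _t(lh)))
    high = _t(_rows_inv(_t(hl), _t(hh)))
    return _rows_inv(low, high)

def fuse_dwt(ms_band, pan):
    """DWT fusion rule: keep the MS approximation (LL) and take the
    detail sub-bands (LH, HL, HH) from the PAN image."""
    ms_ll, _, _, _ = dwt2(ms_band)
    _, pan_lh, pan_hl, pan_hh = dwt2(pan)
    return idwt2(ms_ll, pan_lh, pan_hl, pan_hh)
```

Decomposing the fused result again shows the property the text relies on: its LL sub-band equals that of the MS band (so downsampling approximately recovers the MS image), while its detail sub-bands are those of the PAN image.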

Figure 3. Image fusion using wavelet transform

Figure 4. PAN Image

3. Evaluation Parameters

BIAS measures the difference between the mean values of the original and fused images, and thus the spectral accuracy [8]. It is defined as

BIAS = 1 − Mean(F) / Mean(O)

where O is the original image and F is the fused image.

Here, the MS image is taken as the original image. The mean of an image X of size l × m is

Mean(X) = (1 / (l × m)) Σi Σj X(i, j)

CC (Correlation Coefficient): It analyzes and compares the spectral quality of the original and fused images. It is defined as [7]

CC(O, F) = Σi Σj (O(i, j) − Mean(O)) (F(i, j) − Mean(F)) / sqrt( Σi Σj (O(i, j) − Mean(O))² · Σi Σj (F(i, j) − Mean(F))² )

Entropy:

Entropy measures the amount of information contained in an image. It is defined as

E = − Σi pi log2(pi)

pi is the probability of occurrence of a particular gray level.

ERGAS (relative dimensionless global error in synthesis) measures the amount of spectral distortion in the image [6]. It is defined as

ERGAS = 100 (h / l) sqrt( (1 / N) Σn (RMSE(n) / Mean(n))² )

where the sum runs over the N spectral bands.

h/l is the ratio of the pixel sizes of the PAN and MS images (2.5 m / 24 m)

Mean(n) = mean of the nth band

RMSE(n) = Root Mean Square Error of the nth band, defined as

RMSE(n) = sqrt( (1 / (l × m)) Σi Σj (On(i, j) − Fn(i, j))² )

l = no. of rows, m = no. of columns, N = no. of bands

Relative Average Spectral Error (RASE) – It characterizes the average performance of the image fusion method across the spectral bands:

RASE = (100 / M) sqrt( (1 / N) Σn RMSE(n)² )

where M is the mean radiance of the N original MS bands.
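For reference, the metrics defined in this section can be computed as in the following sketch (pure Python; `originals` and `fuseds` are lists of 2D bands, and the function and variable names are illustrative):

```python
import math

def _flat(img):
    return [v for row in img for v in row]

def _mean(xs):
    return sum(xs) / len(xs)

def bias(original, fused):
    """BIAS = 1 - Mean(F)/Mean(O); 0 means the means are identical."""
    return 1.0 - _mean(_flat(fused)) / _mean(_flat(original))

def cc(original, fused):
    """Pearson correlation coefficient between the two images."""
    x, y = _flat(original), _flat(fused)
    mx, my = _mean(x), _mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))
    return num / den

def entropy(img, levels=256):
    """Shannon entropy of the gray-level histogram, in bits."""
    x = _flat(img)
    hist = [0] * levels
    for v in x:
        hist[int(v)] += 1
    n = len(x)
    return -sum((h / n) * math.log2(h / n) for h in hist if h)

def rmse(original, fused):
    """Root Mean Square Error between two bands of size l x m."""
    x, y = _flat(original), _flat(fused)
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)) / len(x))

def ergas(originals, fuseds, h_over_l):
    """ERGAS = 100 (h/l) sqrt((1/N) sum_n (RMSE(n)/Mean(n))^2)."""
    n = len(originals)
    acc = sum((rmse(o, f) / _mean(_flat(o))) ** 2 for o, f in zip(originals, fuseds))
    return 100.0 * h_over_l * math.sqrt(acc / n)

def rase(originals, fuseds):
    """RASE = (100/M) sqrt((1/N) sum_n RMSE(n)^2), M = mean of the original bands."""
    n = len(originals)
    m = _mean([v for o in originals for v in _flat(o)])
    acc = sum(rmse(o, f) ** 2 for o, f in zip(originals, fuseds))
    return (100.0 / m) * math.sqrt(acc / n)
```

A quick sanity check: when the fused band equals the original, BIAS, RMSE, ERGAS and RASE are all zero and CC is 1, which is the ideal case against which Table 1 values are judged.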

4. Results & Discussion

The low-resolution multispectral satellite image is obtained using the LISS-III sensor, with a resolution of 24 m. The multispectral satellite image consists of 4 bands, viz., band 2, band 3, band 4 and band 5, which are shown in Figures 5-11.

As the dimensions of the panchromatic image are of the order of 13000 × 13000, a portion of it with dimensions 512 × 512 is cropped. The multispectral image corresponding to the cropped PAN image is determined and cropped. The proposed algorithm is carried out on the resultant PAN and multispectral images using the db5 and db8 wavelets with decomposition levels ranging from 3 to 5, in order to study the pan-sharpening process with different wavelets and different levels of decomposition. The pan-sharpened images are shown in Figures 12-29. It can be observed that the greater the level of wavelet decomposition, the higher the spatial quality of the pan-sharpened image. However, with a continued increase in the level of decomposition, the spectral content (color) tends to become uniform. The corresponding performance parameters are shown in Table 1.

Figure 5. Multispectral Image (Band 2)

Figure 6. Multispectral Image (Band 3)

Figure 7. Multispectral Image (Band 4)

Figure 8. Multispectral Image (Band 5)

Figure 9. Multispectral Image (Bands – 2, 3 & 4)

Figure 10. Multispectral Image (Bands 3, 4 & 5)

Figure 11. Multispectral Image (Bands 2, 3, 4 & 5) (ERDAS)

Figure 12. Fused (Pan-sharpened) Image (Bands 2, 3 & 4) using db5 with three level decomposition

Figure 13. Fused (Pan-sharpened) Image (Bands 3, 4 & 5) using db5 with three level decomposition

Figure 14. Fused (Pan-sharpened) Image (Bands 2, 3, 4 & 5) using db5 with three level decomposition (ERDAS)

Figure 15. Fused (Pan-sharpened) Image (Bands 2, 3 & 4) using db5 with four level decomposition

Figure 16. Fused (Pan-sharpened) Image (Bands 3, 4 & 5) using db5 with four level decomposition

Figure 17. Fused (Pan-sharpened) Image (Bands 2, 3, 4 & 5) using db5 with four level decomposition (ERDAS)

Figure 18. Fused (Pan-sharpened) Image (Bands 2, 3 & 4) using db5 with five level decomposition

Figure 19. Fused (Pan-sharpened) Image (Bands 3, 4 & 5) using db5 with five level decomposition

Figure 20. Fused (Pan-sharpened) Image (Bands 2, 3, 4 & 5) using db5 with five level decomposition (ERDAS)

Figure 21. Fused (Pan-sharpened) Image (Bands 2, 3 & 4) using db8 with three level decomposition

Figure 22. Fused (Pan-sharpened) Image (Bands 3, 4 & 5) using db8 with three level decomposition

Figure 23. Fused (Pan-sharpened) Image (Bands 2, 3, 4 & 5) using db8 with three level decomposition (ERDAS)

Figure 24. Fused (Pan-sharpened) Image (Bands 2, 3 & 4) using db8 with four level decomposition

Figure 25. Fused (Pan-sharpened) Image (Bands 3, 4 & 5) using db8 with four level decomposition

Figure 26. Fused (Pan-sharpened) Image (Bands 2, 3, 4 & 5) using db8 with four level decomposition (ERDAS)

Figure 27. Fused (Pan-sharpened) Image (Bands 2, 3 & 4) using db8 with five level decomposition

Figure 28. Fused (Pan-sharpened) Image (Bands 3, 4 & 5) using db8 with five level decomposition

Figure 29. Fused (Pan-sharpened) Image (Bands 2, 3, 4 & 5) using db8 with five level decomposition (ERDAS)

Table 1. Performance Parameters of fused image using db5 & db8 wavelet transform with different levels of decomposition ranging from 3 to 5

Conclusion

The information content of satellite images can be enhanced if the advantages of both high spatial and high spectral resolution are integrated into a single image. Detailed features of such an integrated image can be easily recognized, benefiting many applications, such as urban and environmental studies.

References

[1]. D. Lee Fugal (2009). Conceptual Wavelets in Digital Signal Processing. Space & Signals Technical Publishing, San Diego, CA, USA.
[2]. Gene Rose. PAN Sharpening. Retrieved from http://www.imstrat.ca/uploads/files/brochures/pansharpening.pdf
[3]. Israa Amro, Javier Mateos, Miguel Vega (2011). A survey of classical methods and new trends in pansharpening of multispectral images. EURASIP Journal on Advances in Signal Processing.
[4]. L. Wald (1999). Some terms of reference in data fusion. IEEE Transactions on Geoscience and Remote Sensing, Vol. 37, No. 3, pp. 1190-1193.
[5]. S. Mallat (1999). A Wavelet Tour of Signal Processing. Academic Press.
[6]. Lucien Wald (2000). Fusion of Earth data: merging point measurements, raster maps and remotely sensed images. Sophia Antipolis, France, January 26-28.
[7]. Melissa Strait, Sheida Rahmani, Daria Markurjev (2008). Evaluation of Pan-sharpening Parameters. UCLA Department of Mathematics.
[8]. Brian Alan Johnson, Ryutaro Tateishi and Nguyen Thanh Hoan (2012). Satellite Image Pan-sharpening Using a Hybrid Approach for Object-Based Image Analysis. ISPRS International Journal of Geo-Information, pp. 228-241.