Document Type: Research Article
Authors
1 Biosystems Engineering Department, Shahid Bahonar University of Kerman, Kerman, Iran
2 Agricultural Engineering Research Department, Fars Agricultural and Natural Resource Research and Education Center, AREEO, Shiraz, Iran
3 Department of Plant Production and Genetics, School of Agriculture, Shiraz University, Shiraz, Iran
Abstract
Remote sensing is the science of acquiring data about an object, an area, or a phenomenon at a geographic location without physical contact with it. The use of remote sensing data is expanding rapidly, and researchers have long been interested in accurately classifying land-cover phenomena using multispectral images. One factor that reduces the accuracy of a classification map is the presence of uneven surfaces and high-altitude areas. High-altitude points make it difficult for sensors to obtain accurate reflectance information from the surface of the phenomena. Radar imagery, together with a digital elevation model (DEM), is effective in identifying and delineating such elevated features. Combining data from two completely different sensors in order to exploit the capabilities of each is known as image fusion. In this study, the feasibility of employing the fusion technique to improve the overall accuracy of land-cover classification was investigated using time-series NDVI images derived from Sentinel-2 satellite imagery and PALSAR radar imagery from the ALOS satellite. In addition, the predicted versus measured areas of fields under wheat, barley, and canola cultivation were compared.
Materials and Methods
Thirteen Sentinel-2 multispectral satellite images with 10-meter spatial resolution, covering the Bajgah region in Fars province from November 2018 to June 2019, were downloaded at the L1C processing level to classify the cultivated lands and other phenomena. Ground-truth data were collected during several field visits using a handheld GPS to pinpoint the different phenomena in the study region. Seven land-cover classes were distinguished in the region: 1) wheat, 2) barley, 3) canola, 4) trees, 5) residential areas, 6) soil, and 7) others. After preprocessing operations, such as radiometric and atmospheric corrections using the built-in algorithms of ENVI 5.3 recommended by other researchers, and cropping the region of interest (ROI) from the original images, the Normalized Difference Vegetation Index (NDVI) was calculated for each individual image. The DEM was obtained from the 12.5-meter spatial resolution PALSAR radar imagery of the ALOS satellite. After preprocessing and cropping the ROI, a binary mask was created from the radar-derived DEM in ENVI 5.3 using elevation thresholds of 1764 to 1799 meters above sea level. The NDVI time series was then composed from all 13 images and integrated with the radar-derived mask using a pixel-level fusion method; the purpose of this process was to remove the high-altitude points in the study area that would otherwise reduce the accuracy of the classification map. The image fusion process was also performed in ENVI 5.3. The Support Vector Machine (SVM) classification method, as recommended in the literature, was used to train the classifier for both the fused and unfused images.
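As a rough illustration of the masking and fusion steps described above, the following Python sketch builds the NDVI time series, derives a binary elevation mask from the DEM, and applies it at the pixel level. It is a minimal sketch only: the study itself used ENVI 5.3, and the file names, band order, and co-registration assumptions below are hypothetical.

```python
# Minimal sketch of the NDVI time series, DEM-based masking, and pixel-level
# fusion steps. Assumes all rasters are already co-registered and resampled
# to a common 10 m grid; file names and band indices are hypothetical.
import numpy as np
import rasterio

def ndvi(red, nir):
    """Normalized Difference Vegetation Index, guarding against division by zero."""
    return (nir - red) / np.maximum(nir + red, 1e-6)

# 1) Build the NDVI time series from the 13 Sentinel-2 scenes.
ndvi_stack = []
for path in [f"S2_scene_{i:02d}.tif" for i in range(1, 14)]:  # hypothetical paths
    with rasterio.open(path) as src:
        red = src.read(4).astype("float32")   # band 4 = red (10 m)
        nir = src.read(8).astype("float32")   # band 8 = NIR (10 m)
    ndvi_stack.append(ndvi(red, nir))
ndvi_stack = np.stack(ndvi_stack)             # shape: (13, rows, cols)

# 2) Binary mask from the ALOS PALSAR DEM: keep only pixels whose elevation
#    lies within the study-area thresholds (1764-1799 m above sea level).
with rasterio.open("alos_palsar_dem.tif") as src:  # hypothetical path
    dem = src.read(1).astype("float32")
mask = ((dem >= 1764) & (dem <= 1799)).astype("float32")

# 3) Pixel-level fusion: apply the mask to every NDVI layer so that
#    high-altitude pixels are excluded from the subsequent classification.
fused = ndvi_stack * mask                      # shape: (13, rows, cols)
```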
To evaluate the effectiveness of image fusion, the overall accuracy and the commission and omission errors were calculated from a confusion matrix. To assess the accuracy of the estimated area under cultivation of the main crops in the region against the actual measured areas, a regression equation and the percentage difference were calculated.
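For reference, the accuracy metrics named above can be computed from a confusion matrix as in the minimal sketch below; the label arrays and area values are toy data, not the study's results.

```python
# Minimal sketch of the accuracy assessment, assuming y_true and y_pred hold the
# reference and predicted class labels of the test pixels (toy data only).
import numpy as np
from sklearn.metrics import confusion_matrix, cohen_kappa_score

y_true = np.array([0, 0, 1, 1, 2, 2, 2, 3])    # toy reference labels
y_pred = np.array([0, 1, 1, 1, 2, 2, 3, 3])    # toy classifier output

cm = confusion_matrix(y_true, y_pred)           # rows = reference, cols = predicted

overall_accuracy = np.trace(cm) / cm.sum()
# Commission error per class: fraction of pixels assigned to a class that
# actually belong to another class (1 - user's accuracy).
commission = 1 - np.diag(cm) / cm.sum(axis=0)
# Omission error per class: fraction of reference pixels of a class that the
# classifier missed (1 - producer's accuracy).
omission = 1 - np.diag(cm) / cm.sum(axis=1)
kappa = cohen_kappa_score(y_true, y_pred)

# Percentage difference between predicted and measured cultivated area
# (a positive value indicates overestimation). Toy values, in hectares.
predicted_area, measured_area = 103.0, 100.0
percent_diff = 100 * (predicted_area - measured_area) / measured_area

print(overall_accuracy, commission, omission, kappa, percent_diff)
```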
Results and Discussion
Visual inspection of the classified output maps shows how the fused and unfused images differ in separating similar classes, such as buildings and structures versus regions covered with bare soil, and cultivated land versus natural vegetation at high-altitude points. These visual observations were confirmed by the statistical metrics as well. The SVM classifier applied to the fused image achieved an overall accuracy of 98.06% and a kappa coefficient of 0.97, which is 7.5% higher than the accuracy obtained with the unfused image.
As stated earlier, the similarity between the soil class (stones and rocks in the mountains) and man-made buildings and infrastructure increases the omission error and misclassification in the unfused classification. The same problem arose for croplands as well, owing to the sparse vegetation at high-altitude points. These results are consistent with previous studies that reported similar misclassification among analogous classes. Comparison of the predicted and measured areas under cultivation showed that wheat and barley were overestimated by 3% and 1.5%, respectively, whereas the canola area was underestimated by 3.5% with respect to the ground-truth measurements.
Conclusion
The main focus of this study was to employ the image fusion technique to improve the classification accuracy of satellite imagery. Integrating PALSAR sensor data from the ALOS radar satellite with the multispectral imagery of the Sentinel-2 satellite acceptably enhanced the classification quality of the output maps by eliminating the high-altitude points and the biases caused by rocks and natural vegetation on hills and mountains. Statistical metrics such as the overall accuracy, kappa coefficient, and commission and omission errors also confirmed the visual findings for the fused versus unfused classification maps.
Keywords
- Confusion Matrix
- Normalized Difference Vegetation Index (NDVI)
- Radar Image
- Sentinel-2 satellite
- Support Vector Machine