Image Processing
I. Ahmadi
Abstract
In the context of plant diseases, the selection of appropriate preventive measures, such as correct pesticide application, is only possible when plant diseases have been diagnosed quickly and accurately. In this study, a transfer learning model based on the pre-trained EfficientNet model was implemented to detect and classify some diseases of tomato crops, using an augmented training dataset of 2,340 images of tomato plants. The study's findings indicate that during the model's validation phase, images were classified at roughly 5 fps (frames per second), a reasonable rate for a deep learning model running on a laptop computer with a standard CPU. Furthermore, the model was trained adequately, since increasing the number of epochs no longer improved its accuracy: the curves of the training and test accuracies, as well as the losses versus epoch number, remained largely horizontal for epoch numbers greater than 20. Notably, the highest coefficient of variation across these four curves was only 7%. In addition, the cells on the main diagonal of the confusion matrix held larger values than the remaining cells; specifically, 88.8%, 7.7%, and 3.3% of the off-diagonal cells were filled with 0, 1, and 2, respectively. The model's performance metrics were: sensitivity 85%, specificity 98%, precision 86%, F1-score 84%, and accuracy 85%.
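As a concrete illustration of the approach described above, the following is a minimal transfer-learning sketch in Keras. The EfficientNetB0 variant, the frozen base, the pooling head, and all hyperparameters are assumptions for illustration only, not the authors' exact configuration.

import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import EfficientNetB0

NUM_CLASSES = 10  # assumption: the abstract does not state the number of disease classes

# Load ImageNet weights and freeze the convolutional base so that only
# the new classification head is trained.
base = EfficientNetB0(include_top=False, weights="imagenet",
                      input_shape=(224, 224, 3))
base.trainable = False

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.2),  # illustrative regularization
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=20)  # train_ds/val_ds are hypothetical datasets

Freezing the base is the standard first stage of transfer learning; in practice, the top layers of the base are often unfrozen afterwards for fine-tuning at a lower learning rate.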
Precision Farming
A. Naderi Beni; H. Bagherpour; J. Amiri Parian
Abstract
Introduction
Detection of tree leaf diseases plays a crucial role in horticulture. These diseases can originate from viruses, bacteria, fungi, and other pathogens. If proper attention is not given, they can drastically affect trees, reducing both the quality and quantity of yields. Given the importance of quince in Iran's export market, its diseases can cause significant economic losses to the country. Therefore, if leaf diseases can be identified automatically, appropriate actions can be taken in advance to mitigate these losses. Traditionally, the identification and detection of tree diseases rely on experts' naked-eye observations. However, the expert's physical condition, such as eyesight, fatigue, and work pressure, can affect their decision-making. Today, deep convolutional neural networks (DCNNs), a novel approach to image classification, have become the leading detection method. DCNNs improve detection and classification accuracy through models with many hidden layers that extract optimal features, and they have significantly enhanced the classification and identification of diseases affecting plants and trees. This study employs a novel CNN algorithm alongside two pre-trained models to effectively identify and classify various types of quince diseases.
Materials and Methods
Images of healthy and diseased leaves were acquired from several databases. The majority were sourced from the Agricultural Research Center of Isfahan Province in Iran, supplemented by contributions from researchers who had previously studied this field; other supporting datasets were obtained from internet sources. This study incorporated a total of 1,600 images: 390 images of fire blight, 384 of leaf blight, 406 of powdery mildew, and 420 of healthy leaves. Of all the images obtained, 70%, 20%, and 10% were randomly selected for the network's training, validation, and testing, respectively. Image flipping, rotation, and zooming were applied to augment the training dataset. A proposed convolutional neural network (CNN) combined with image processing was developed to classify quince leaf diseases into four distinct classes. Three CNN models, including Inception-ResNet-v2, ResNet-101, and the proposed CNN model, were investigated, and their performances were compared using standard indices: precision, sensitivity, F1-score, and accuracy. To optimize the models' performance, the impact of dropout with a 50% probability and of the number of neurons in the hidden layers was examined. The proposed CNN model consists of four convolutional layers, with 224 × 224 RGB images as input to the first layer, which has 16 filters, followed by convolutional layers with 32, 64, and 128 filters, respectively. ReLU activations combined with max-pooling were used at each convolutional layer, and a Softmax activation was applied in the last layer to convert the output into a probability distribution.
Results and Discussion
Three confusion matrices based on the test dataset were constructed for all the CNN models to compare and evaluate the performance of the classifiers. The indices obtained from the confusion matrices indicated that Inception-ResNet-v2 and ResNet-101 achieved accuracies of 79% and 72%, respectively.
While all models exhibited promising efficiency in classifying leaf diseases, the proposed shallow CNN model stood out with an impressive accuracy of 91%, marking it as the most effective solution. The comprehensive results indicate that the optimized CNN model, featuring four convolutional layers, one hidden layer with 64 neurons, and a dropout rate of 0.5, outperformed the transfer learning models.
Conclusion
The findings of this study demonstrate that the proposed CNN model provides a high-performance solution for the rapid identification of quince leaf diseases. It excels in real-time detection and monitoring, achieving remarkable accuracy. Notably, it identifies fire blight and powdery mildew with a precision exceeding 95%.
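A sketch of the proposed architecture, as described above, is given below in Keras: four convolutional blocks with 16/32/64/128 filters, ReLU activations with max pooling, a 64-neuron hidden layer with 50% dropout, and a four-way Softmax output. Kernel sizes, padding, and the optimizer are not specified in the abstract and are assumptions here.

import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(224, 224, 3)),          # 224 x 224 RGB input
    layers.Conv2D(16, (3, 3), activation="relu", padding="same"),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(32, (3, 3), activation="relu", padding="same"),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation="relu", padding="same"),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(128, (3, 3), activation="relu", padding="same"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),        # hidden layer with 64 neurons
    layers.Dropout(0.5),                        # 50% dropout, as examined in the study
    layers.Dense(4, activation="softmax"),      # four quince leaf classes
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])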
Precision Farming
M. Saadikhani; M. Maharlooei; M. A. Rostami; M. Edalat
Abstract
Introduction
Remote sensing is defined as data acquisition about an object or a phenomenon related to a geographic location without physical contact. The use of remote sensing data is expanding rapidly, and researchers have long been interested in accurately classifying land-cover phenomena using multispectral images. One factor that reduces the accuracy of a classification map is the existence of uneven surfaces and high-altitude areas: high-altitude points make it difficult for the sensors to obtain accurate reflection information from the surface of the phenomena. Radar imagery used with a digital elevation model (DEM) is effective for identifying and delineating elevated features. Image fusion is a technique that combines two sensors with completely different specifications and takes advantage of both sensors' capabilities. In this study, the feasibility of employing the fusion technique to improve the overall accuracy of classifying land-cover phenomena using time-series NDVI images from Sentinel-2 satellite imagery and PALSAR radar imagery from the ALOS satellite was investigated. Additionally, the predicted and measured areas of fields under cultivation of wheat, barley, and canola were compared.
Materials and Methods
Thirteen Sentinel-2 multispectral satellite images with 10-meter spatial resolution of the Bajgah region in Fars province, Iran, from November 2018 to June 2019, were downloaded at the Level-1C processing level to classify the cultivated lands and other phenomena. Ground-truth data were collected through several field visits using a handheld GPS to pinpoint different phenomena in the study region. The seven distinguished land-cover classes were (1) wheat, (2) barley, (3) canola, (4) trees, (5) residential regions, (6) soil, and (7) others. After preprocessing operations such as radiometric and atmospheric corrections, performed in ENVI 5.3 using predefined built-in algorithms recommended by other researchers, and cropping the region of interest (ROI) from the original image, the Normalized Difference Vegetation Index (NDVI) was calculated for each image. The DEM was obtained from the PALSAR sensor radar image of the ALOS satellite with 12.5-meter spatial resolution. After preprocessing and cropping the ROI, a binary mask of the radar images was created in ENVI 5.3 using threshold altitudes between 1,764 and 1,799 meters above sea level. The NDVI time series was then composed of all 13 images and integrated with the radar images using pixel-level fusion; the purpose of this step was to remove the high-altitude points in the study area that would reduce the accuracy of the classification map (sketched below). The image fusion process was also performed in ENVI 5.3. The Support Vector Machine (SVM) classification method was employed to train the classifier for both fused and unfused images, as suggested by other researchers. To evaluate the effectiveness of image fusion, commission and omission errors and the overall accuracy were calculated using a confusion matrix.
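Before turning to the area estimates, here is a minimal sketch of the NDVI and elevation-mask computations described above, assuming the Sentinel-2 red and near-infrared bands and the PALSAR-derived DEM have already been loaded as NumPy arrays; band loading and the ENVI workflow itself are not reproduced.

import numpy as np

def ndvi(red: np.ndarray, nir: np.ndarray) -> np.ndarray:
    # NDVI = (NIR - Red) / (NIR + Red); a small epsilon avoids
    # division by zero over no-data pixels.
    return (nir - red) / (nir + red + 1e-10)

def altitude_mask(dem: np.ndarray, lo: float = 1764.0, hi: float = 1799.0) -> np.ndarray:
    # Binary mask keeping pixels whose elevation (m above sea level)
    # lies inside the threshold range reported in the study.
    return ((dem >= lo) & (dem <= hi)).astype(np.uint8)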
To assess the accuracy of the estimated area under cultivation of the main crops in the region against the actual measured areas, a regression equation and the percentage difference were calculated.
Results and Discussion
Visual inspection of the classified output maps shows the difference between the fused and unfused images in classifying similar classes, such as buildings and structures versus regions covered with bare soil, and lands under cultivation versus natural vegetation at high-altitude points. Statistical metrics verified these visual evaluations: the SVM algorithm in fusion mode achieved 98.06% overall accuracy and a 0.97 kappa coefficient, 7.5% higher accuracy than with the unfused images. As stated earlier, the similarity between the soil class (stones and rocks in the mountains) and man-made buildings and infrastructure increases the omission error and misclassification in unfused-image classification. The same misclassification occurred for the visually similar croplands and shallow vegetation at high-altitude points. These results are consistent with previous literature reporting the same misclassification of analogous classes. The predicted areas under cultivation of wheat and barley were overestimated by 3 and 1.5 percent, respectively, whereas the canola area was underestimated by 3.5 percent.
Conclusion
The main focus of this study was employing the image fusion technique to improve the classification accuracy of satellite imagery. Integration of PALSAR sensor data from the ALOS radar satellite with multispectral imagery of the Sentinel-2 satellite enhanced the classification accuracy of the output maps by eliminating the high-altitude points and the biases due to rocks and natural vegetation on hills and mountains. Statistical metrics such as the overall accuracy, kappa coefficient, and commission and omission errors confirmed the visual findings from the fused vs. unfused classification maps.
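As a supplement, the following is a brief sketch of the confusion-matrix evaluation described in this abstract, computed with scikit-learn instead of ENVI, purely for illustration; the label arrays are hypothetical.

import numpy as np
from sklearn.metrics import confusion_matrix, accuracy_score, cohen_kappa_score

y_true = np.array([0, 0, 1, 1, 2, 2])  # hypothetical ground-truth class labels
y_pred = np.array([0, 1, 1, 1, 2, 0])  # hypothetical classifier output

cm = confusion_matrix(y_true, y_pred)             # rows: truth, columns: prediction
overall_accuracy = accuracy_score(y_true, y_pred)
kappa = cohen_kappa_score(y_true, y_pred)

# Per-class omission error = 1 - recall (truth rows);
# commission error = 1 - precision (prediction columns).
recall = cm.diagonal() / cm.sum(axis=1)
precision = cm.diagonal() / cm.sum(axis=0)
omission_error, commission_error = 1 - recall, 1 - precision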