
Document Type: Research Article

Authors

Biosystems Department, Shiraz University, Shiraz, Iran

Abstract

Introduction: The quality of agricultural products is associated with their color, size, and health, so grading of fruits is regarded as an important step in post-harvest processing. In most cases, manual sorting and inspection depend on available manpower, are time consuming, and their accuracy cannot be guaranteed. Machine vision is known to be a useful tool for measuring external features (e.g., size, shape, color, and defects), and in recent decades machine vision technology has been used for shape sorting.
The main purpose of this study was to develop a new method for tomato grading and sorting using an adaptive neuro-fuzzy inference system (ANFIS) and to compare the accuracy of the ANFIS predictions with the grades suggested by a human expert.
Materials and Methods: In this study, a total of 300 tomatoes (Rev ground) were randomly harvested, imaged, and classified into 3 ripeness stages, 3 sizes, and 2 health conditions.
The grading and sorting mechanism consisted of a lighting chamber (cloudy-sky illumination), a lighting source, and a digital camera connected to a computer.
The images were recorded in this chamber under indirect (cloudy-sky) illumination provided by four fluorescent lamps on each side; the camera lens entered the lighting chamber through a hole that was the chamber's only opening to the outside and was completely covered by the lens.
Three types of features were extracted from the final images: shape, color, and texture. To extract these features, images in both color and binary format were required, following the procedure shown in Fig. 1.
For the first group, image characteristics were analyzed to provide information on surface area (S.A.), maximum diameter (Dmax), minimum diameter (Dmin), and average diameter. Given the importance of color in consumer acceptance of food quality, the following classification was used to estimate the apparent color of the tomato (a coding sketch follows this list):
1. Red (more than 90% of the surface red)
2. Light red (60-90% red or bold pink)
3. Pink (30-60% red)
4. Turning (10-30% red; the color is changing from green to pink)
5. Breakers (less than 10% red; the color is changing from green to yellow)
6. Green (the whole fruit area is green)
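As an illustration, the percentage-based classes above can be mapped in code once the fraction of red pixels on the fruit surface has been computed; the helper below is a minimal, hypothetical sketch (the function name and the green-only flag are assumptions, while the thresholds are those listed above):

```python
def ripeness_class(red_fraction, whole_fruit_green=False):
    """Map the fraction of red pixels (0-1) on the fruit surface to the
    six apparent-color classes listed above."""
    if whole_fruit_green:          # class 6: the whole fruit area is green
        return "green"
    if red_fraction > 0.90:        # class 1
        return "red"
    if red_fraction >= 0.60:       # class 2
        return "light red"
    if red_fraction >= 0.30:       # class 3
        return "pink"
    if red_fraction >= 0.10:       # class 4
        return "turning"
    return "breakers"              # class 5
```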
To estimate the quality of a tomato, the background of the image must first be estimated and removed; for this purpose the procedure shown in Fig. 2 was followed.
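The segmentation procedure itself is given in Fig. 2 and is not reproduced in the abstract; as a sketch of the general idea only, the background can be removed by global thresholding and the shape features named earlier (S.A., Dmax, Dmin) read off the resulting binary mask. The OpenCV-based helper below is illustrative and assumes the fruit is the largest bright object against a darker chamber background:

```python
import cv2

def segment_and_measure(image_path):
    """Illustrative background removal and shape measurement
    (not the paper's exact Fig. 2 procedure)."""
    bgr = cv2.imread(image_path)
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    # Otsu threshold: assumes the fruit is brighter than the background
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    fruit = max(contours, key=cv2.contourArea)       # largest blob = the fruit
    surface_area = cv2.contourArea(fruit)            # S.A. in pixels
    (_, _), (w, h), _ = cv2.minAreaRect(fruit)       # minimum-area bounding box
    d_max, d_min = max(w, h), min(w, h)              # rough Dmax and Dmin
    return bgr, mask, surface_area, d_max, d_min
```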
Following the flowchart in Fig. 1, the samples pass through the stages shown in Fig. 3.
Fig. 4 shows that during tomato ripening the red color increases and the green color decreases, indicating chlorophyll degradation while lycopene production begins.
According to Fig. 6, the R and G values of the tomato were used for the ripening decision; the ripening data were the mean red and green values of the fruit pixels used for this purpose. For correct processing of the last group (health), the image edges, where the fruit color is poorly represented, were removed before the color coefficients were determined; with only slight error, the system could then detect all damaged regions. The damaged area was reported as a proportion of the total tomato area.
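A minimal sketch of how the ripening data (mean R and G over the fruit pixels, with the unreliable edge region removed) and the damaged-area proportion could be computed is given below; the erosion kernel size and the separate damage mask are assumptions, not the paper's calibrated settings:

```python
import cv2
import numpy as np

def ripening_and_damage(bgr, fruit_mask, damage_mask):
    """bgr: color image; fruit_mask / damage_mask: binary (0/255) masks of
    the whole fruit and of the damaged regions (both assumed given)."""
    # erode the fruit mask so edge pixels with unreliable color are excluded
    inner = cv2.erode(fruit_mask, np.ones((7, 7), np.uint8))
    pixels = bgr[inner > 0]                      # N x 3 array in BGR order
    mean_g = float(pixels[:, 1].mean())          # mean green value
    mean_r = float(pixels[:, 2].mean())          # mean red value
    # damaged area reported as a proportion of the total fruit area
    damage_ratio = np.count_nonzero(damage_mask) / np.count_nonzero(fruit_mask)
    return mean_r, mean_g, damage_ratio
```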
In the present work, five input factors were considered and the corresponding linguistic variables were arranged in four levels: size, color (ripening), health, and a final level that classified the tomatoes into 8 classes. At the size level, the inputs were minimum diameter and surface area, and they classified the tomatoes into 3 groups. At the color level, the inputs were the red and green component values, which also classified the tomatoes into 3 groups. At the health level, the input was the proportion of damaged area to total tomato area, which classified the tomatoes into 2 groups. At the final level, the outputs of the previous levels served as inputs and classified the tomatoes into 8 final groups.
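The four-level structure amounts to a cascade of three first-stage classifiers (size, color, health) feeding a final classifier. The sketch below mirrors that architecture with simple placeholder rules standing in for the trained ANFIS models; all thresholds, labels, and the final grouping are illustrative assumptions, since the abstract gives neither the tuned membership functions nor the exact mapping to the 8 classes:

```python
def size_level(d_min, surface_area):
    # stand-in for the trained size ANFIS (inputs: minimum diameter, surface area)
    score = 0.5 * d_min + 0.5 * surface_area ** 0.5   # arbitrary combination
    return "small" if score < 60 else ("medium" if score < 80 else "large")

def color_level(mean_r, mean_g):
    # stand-in for the trained color/ripening ANFIS (inputs: mean R and G values)
    ratio = mean_r / max(mean_g, 1e-6)
    return "red" if ratio > 1.5 else ("turning" if ratio > 1.0 else "green")

def health_level(damage_ratio):
    # stand-in for the trained health ANFIS (input: damaged area / total area)
    return "healthy" if damage_ratio < 0.05 else "unhealthy"

def final_level(size_class, color_class, health_class):
    # the final level maps the three outputs onto 8 classes; the exact grouping
    # is not stated in the abstract, so the raw combination is returned here
    return (size_class, color_class, health_class)
```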
Results and Discussion: The system can classify the tomatoes into 8 groups using rules alone, so the accuracy of the system was first measured before training. These accuracies were 70.7, 82.0, 95.7, and 75.5% for the size, color, health, and final systems, respectively. To exploit the full classification ability of ANFIS, the same measurements were repeated after training; the results were 80.9, 89.5, 95.7, and 81% for the size, color, health, and final systems, respectively, indicating that the accuracy of the system rose by about 10%. A validation step was also carried out in which the accuracy of the system was measured against a human expert on 60 samples; the machine accuracies were 75.9, 83.8, 94.2, and 76.5%. Analysis of the results with a chi-square test indicated that there is no significant difference between the machine results and the human expert's choices, proving that the system is useful for this purpose.
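One way such a machine-versus-expert comparison can be set up is a chi-square test on the class counts of the 60 validation samples. The counts below are made up for illustration only (the paper's actual contingency table is not given in the abstract):

```python
from scipy.stats import chi2_contingency

# hypothetical counts per final class for the 60 validation samples
table = [
    [9, 8, 7, 8, 7, 7, 7, 7],   # machine assignments
    [8, 8, 8, 7, 8, 7, 7, 7],   # human expert assignments
]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")   # p > 0.05 -> no significant difference
```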
Conclusions: This research evaluated the use of machine vision and ANFIS in grading machines, working in off-line mode, and led to the following general conclusions:
1. To validate the dimension estimates, sample sizes were measured with calipers and with machine vision; the results showed that this system can be used to obtain fruit dimensions.
2. For size grading, the minimum diameter and the surface area of the image were used, which yielded 67% and 62% accuracy for determining the mass, respectively, whereas the ANFIS system performed at 81%.
3. For color grading, the red and green components were used, as they give a better description of quality; the ANFIS system used for this purpose performed at 89.5%.
4. For selection grading (separating the rotten fruit from the good), an optical robot was used; the ANFIS system and the optical robot produced the same selections in 95% of cases.
5. Overall, the above criteria were used as inputs for the final grading and classification; based on these inputs, the output was categorized into 8 groups, and the accuracy of this final classification was determined to be 81.5%.
6. According to the chi-square test, this system can replace human workers. Moreover, with further adjustments to the system and its grading criteria, an even better system can be built to replace human workers.

Keywords
