Article Type: English Research Paper
Authors
Department of Biosystems Engineering, Faculty of Agriculture, Bu-Ali Sina University, Hamedan, Iran
Abstract
Hand-picking Damask roses is very difficult because of the many thorns on their stems, so real-time detection of bloomed Damask roses in open fields is essential for designing a robot to harvest this flower automatically. Given the high speed and suitable accuracy of deep convolutional neural networks (DCNNs), the aim of this study is to investigate the potential of an optimized YOLOv8s model for detecting bloomed Damask roses. To evaluate the effect of YOLO model size on performance, the accuracy and detection speed of other YOLO versions, including v5s and v6s, were also examined. To this end, images of Damask roses were acquired under normal light conditions (from twilight to sunrise) and intense light conditions (from sunrise to 10 AM). The evaluation results showed that the YOLOv8s model performed best, with a mean average precision (mAP50) of 98% and a detection speed of 243.9 frames per second (fps); compared with the YOLOv5s and YOLOv6s models, its mAP50 was 0.3% and 4.1% higher and its detection speed 169.3 and 198.6 fps higher, respectively. The experimental results also show that YOLOv8s performs better on images captured in normal light than on those captured in intense light: the 5.2% drop in mAP50 and 2.4% drop in detection speed reflect the negative effect of intense ambient light on the model's effectiveness. This research shows that the YOLOv8s model provides an acceptable solution for real-time detection of Damask roses and offers a good guide for detecting other, similar plants.
Introduction
Damask rose (Rosa damascena Mill.) is a precious rose species that has been used extensively in the cosmetic, health, and pharmaceutical industries. Bulgaria, Turkey, India, and Iran rank first through fourth in the cultivation area dedicated to this crop, and Bulgaria, Turkey, and Iran hold the top three positions in its oil and essential oil production (Ucar, Kazaz, Eraslan, and Baydar, 2017; Yousefi and Jaimand, 2018). Harvesting is the most labor-intensive aspect of Damask rose production because the blooms emerge rapidly and only once a year, over a short period of 15 to 20 days. The plants produce numerous bloomed and fully-opened flowers each day, which must be harvested between 4:00 AM and 7:00 AM to obtain Damask rose oil of the highest quantity and quality. Most Damask rose buds bloom fully in the early morning and should be harvested the same day; withered flowers that bloomed the previous day are not harvested (Rusanov, Kovacheva, Rusanova, and Atanassov, 2011). Beyond this narrow harvesting window, the crop is inherently difficult to harvest and has not yet been fully mechanized, so manual harvesting remains the traditional approach. While picking the flowers, workers may be injured by the thorns on the stems, so adequate training for these workers is imperative. The labor challenges, together with the cost and time required for worker training, contribute significantly to the total expense of producing this crop (Manikanta, Rao, and Venkatesh, 2017). Consequently, real-time identification of bloomed Damask roses in open fields is crucial for developing a machine or robot capable of harvesting them autonomously. One approach to achieving high efficiency in this task is the use of machine vision techniques.
In recent years, convolutional neural networks (CNNs) have emerged as machine learning methods attracting substantial attention from researchers for flower classification and qualitative evaluation (Guru, Kumar, and Manjunath, 2011; Wang, Underwood, and Walsh, 2018; Sun, Wang, Liu, and Liu, 2021; Zhang, Su, and Wen, 2021; Bataduwaarachchi et al., 2023). A CNN model designed to detect apple blossoms identified apple tree blossoms with an accuracy of over 79%; without retraining, the same model identified apple, peach, and pear blossoms on the trees with accuracies of over 67%, 86%, and 94%, respectively (Dias, Tabb, and Medeiros, 2018). Wu, Lv, Jiang, and Song (2020) developed a channel-pruning-based YOLOv4 for apple blossom thinning robots. By pruning the low-load weights of the apple blossom detection model with the channel pruning method, they achieved a lighter model that identified apple blossoms with a mean average precision (mAP) of 97.31% and a detection speed of 72.33 fps; compared with the base YOLOv4, the mAP decreased by only 0.24% while the model size shrank by 231.51 MB and the detection speed improved by 39.47%. Wang et al. (2022) used a developed YOLOv4, called YOLO-PEFL, to estimate pear orchard performance by detecting and counting flowers. ShuffleNetv2, embedded with the SENet (Squeeze-and-Excitation Networks) module, replaced the original backbone network of YOLOv4 to form the backbone of the YOLO-PEFL model. The empirical findings indicated that the mean accuracy of YOLO-PEFL was 96.71%, its size was reduced by approximately 80%, and its mean recognition time was 0.027 s. Compared with the YOLOv4 and YOLOv4-tiny frameworks, YOLO-PEFL performed better in model size, recognition precision, and recognition speed, effectively decreasing deployment cost and enhancing effectiveness. A YOLO network trained on drone-captured images was employed to map pumpkin flower distribution in the field, achieving an mAP50 of 91% (Mithra and Nagamalleswari, 2023). To aid the marketing of roses, Anjani, Pratiwi, and Nurhuda (2021) developed a CNN model capable of categorizing rose varieties without manual sorting, achieving an accuracy of 96.33% on the evaluation dataset. Shinoda et al. (2023) noted that strategic planning of cut-flower production is pivotal because demand varies throughout the year, yet manually counting all rose blossoms in a greenhouse is time-intensive and arduous. They used YOLOv5 to identify small rose blossoms from various angles during camera motion, reducing detection errors and attaining an F1 score of 0.950.
A review of the literature reveals a gap concerning the precise, real-time identification of bloomed Damask rose flowers in agricultural fields for the purpose of automating the harvesting process. The present study addresses this deficiency by leveraging deep learning models, focusing on the compact YOLO models known for identifying various types of flowers accurately and swiftly. After the models were trained and their weights fine-tuned, the performance of each model was assessed on a collection of images captured during harvest time. To examine the effect of ambient lighting on the detection proficiency of the chosen model, it was trained and evaluated on images captured under two distinct lighting conditions: normal light and intense light.
Materials and Methods
Data collection and preparation
To extract the optimal essence from high-quality Damask rose petals, the blooms must be harvested during the early hours of the morning (Kumar, Sharma, Sood, Agnihotri, and Singh, 2013; Thakur, Sharma, and Kumar, 2019). To train the models, two distinct sets of videos were acquired from Damask rose fields situated in the village of Sarab, Dehgolan, Kurdistan Province, Iran, using a Samsung Galaxy Note 9 smartphone camera during May–June 2022. The first set, labeled "Normal Light Condition," was recorded in the morning from twilight until sunrise. The second set, labeled "Intense Light Condition," was recorded from sunrise to 10 AM to assess the impact of intense illumination on the efficacy of the chosen model trained on these images (Fig. 1). Sharma and Kumar (2018) explored the six distinct flowering stages of Damask rose that affect the yield and quality of the essence: 1) sepals intact with dark immature petals, 2) sepals separated from petals, petal whorl closed, 3) petal whorl loosened, 4) petal whorl opened, 5) fully opened flower, and 6) flower opened the previous day. Their study indicated that flowers harvested at the early stages (1, 2, and 3) exhibited scent characteristics that differed from those of fully bloomed flowers. Moreover, the maximum essential oil content differed notably across harvest stages and hydrodistillation durations: the fourth flowering stage (fully opened petal whorl), combined with a hydrodistillation duration of 5 hours, yielded the highest-quality essential oil. Immature or overly mature flowers not only diminish essential oil yield but also compromise oil quality. Consequently, stages 4 and 5 were identified as the target harvesting stages for rose flowers, classified as bloomed flowers in this study. Fig. 2 depicts the flower opening stages 1 to 6 described above.
Labeling
The LabelImg v1.8.0 software was used to annotate the images of Damask roses. The software generates output files in the TXT format tailored for YOLO networks. Fig. 3 illustrates the contents of the output file for two individual flowers. In this illustration, the symbols No, Nc, Xc, Yc, W, and H represent the number of objects in the image, the object class, the longitudinal and transverse coordinates of the bounding-box center, and the width and height of the bounding box, respectively; all values are normalized to the range of zero to one. Given that the main focus of the present study was the identification of bloomed Damask roses, a single class was considered, denoted as "Ripe = 0".
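To make the format concrete, the short sketch below parses one such YOLO-format label line and converts the normalized box back to pixel coordinates; the example line and image size are hypothetical.

```python
def yolo_to_pixels(line: str, img_w: int, img_h: int):
    """Convert one YOLO-format label line (class xc yc w h, all values
    normalized to [0, 1]) into pixel-space corner coordinates."""
    nc, xc, yc, w, h = line.split()
    xc, yc = float(xc) * img_w, float(yc) * img_h
    w, h = float(w) * img_w, float(h) * img_h
    x1, y1 = xc - w / 2, yc - h / 2  # top-left corner
    x2, y2 = xc + w / 2, yc + h / 2  # bottom-right corner
    return int(nc), (round(x1), round(y1), round(x2), round(y2))

# Example: a "Ripe" (class 0) flower centered in a 512 x 512 image.
cls, box = yolo_to_pixels("0 0.5 0.5 0.25 0.25", 512, 512)
print(cls, box)  # -> 0 (192, 192, 320, 320)
```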
Fig. 1. (a) Geographical coordinates of the garden, (b) the garden
Fig. 2. (1) sepals intact with dark immature petals, (2) sepals separated from petals, petals whorl closed, (3) petals whorl loosened, (4) petal whorl opened, (5) fully opened flower, and (6) flower opened the day before
In this study, five hundred frames were extracted from the collected videos; extracting consecutive frames from video is essential for the stability of YOLO detection (Tung et al., 2019). To reduce computational cost and speed up image processing and model training, all images were resized to 512 × 512 pixels. To check the robustness of the model and guard against overfitting and underfitting, K-fold cross-validation was applied: the dataset was divided into 10 folds, and each fold was run 5 times. Ten percent of the images were allocated for testing purposes.
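A minimal sketch of this split, assuming the 500 extracted frames sit in a flat directory and using scikit-learn's KFold; the paths and fold handling are illustrative rather than the authors' exact pipeline.

```python
from pathlib import Path
from sklearn.model_selection import KFold, train_test_split

images = sorted(Path("damask_frames").glob("*.jpg"))  # hypothetical directory of 500 frames

# Hold out 10% of the images for the final test set.
trainval, test = train_test_split(images, test_size=0.10, random_state=42)

# 10-fold cross-validation over the remaining images; each fold's
# training run would be repeated 5 times, as described in the text.
kfold = KFold(n_splits=10, shuffle=True, random_state=42)
for fold, (train_idx, val_idx) in enumerate(kfold.split(trainval)):
    train_imgs = [trainval[i] for i in train_idx]
    val_imgs = [trainval[i] for i in val_idx]
    print(f"fold {fold}: {len(train_imgs)} train / {len(val_imgs)} val images")
```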
Fig. 3. (a) original Damask rose, (b) labeled desired flowers, and (c) annotation results in .txt format
YOLO Model
The YOLO model marked a notable development in the field of real-time object detection. By employing a convolutional network that evaluates an image in a single pass, YOLO detects objects directly and calculates precise object coordinates. This approach has substantially enhanced detection speed (Redmon, Divvala, Girshick, and Farhadi, 2016; Silva, Monteiro, Ferreira, Carvalho, and Corte-Real, 2019).
In January 2023, Ultralytics unveiled the YOLOv8 model, building upon their prior release of the YOLOv5 model; at the time, it was the most advanced model in the series. The YOLOv8 model, trained on ImageNet, demonstrated higher accuracy and detection speed than the YOLOv5 and YOLOv6 models that had undergone similar training (Jocher, Chaurasia, and Qiu, 2023). A comprehensive schematic of the YOLOv8 model is shown in Fig. 4. The model retains the primary network of YOLOv5 but replaces its CSP layer with the C2f module, which improves detection accuracy by combining high-level features with contextual information. YOLOv8 is an anchor-free model that employs a decoupled head to process the objectness, classification, and regression tasks independently; this design lets each branch concentrate on its own task and enhances the overall precision of the model. In the output layer of YOLOv8, the sigmoid function serves as the activation function for objectness, while the softmax function is employed for class probabilities (Terven and Cordova-Esparza, 2023).
Among the different scales of each architecture in the YOLO family, only those meeting the following criteria were considered: 1) a parameter count below 20 million, and 2) a detection speed of less than 1.5 ms per image on the COCO dataset using an A100 GPU. For each architecture, the scale with the highest mAP50-95 was then selected; within the YOLO family, only YOLOv8s, YOLOv6s, and YOLOv5s met these criteria (Ultralytics, n.d.).
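For orientation, a minimal sketch (assuming the Ultralytics Python package) that loads the selected small-scale checkpoint and verifies its parameter count against the first criterion:

```python
from ultralytics import YOLO

# Load the COCO-pretrained small-scale YOLOv8 checkpoint used in this study.
model = YOLO("yolov8s.pt")

# Count parameters with plain PyTorch to check the scale-selection criterion.
n_params = sum(p.numel() for p in model.model.parameters())
print(f"YOLOv8s parameters: {n_params / 1e6:.1f} M")  # on the order of 11 M
```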
Fig. 4. The architecture of YOLOv8 used in the detection of Damask rose
Evaluation Parameters of YOLOv8s
Adjustable parameters of the YOLOv8s model, pre-trained on the COCO dataset, primarily include the input size, batch size, number of classes, learning rate, and number of epochs (Table 1). Additionally, to generalize the model's detection to other field conditions close to the flower-harvest timeframe, data augmentation techniques were applied during training. By adjusting the hyperparameters of these techniques, the color values (HSV color space), image brightness, and clarity were varied, and images were rotated and flipped in different directions.
Table 1. Training parameters of the YOLOv8s model

| Parameter | Value |
|---|---|
| Input size | 512 × 512 |
| Learning rate | 1 × 10⁻³ |
| Batch size | 32 |
| Classes | 1 |
| Epochs | 75 |
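A sketch of a training call matching Table 1 with the Ultralytics API follows; the dataset YAML path and the augmentation magnitudes are illustrative assumptions, since the paper names the augmentation types but not their values.

```python
from ultralytics import YOLO

model = YOLO("yolov8s.pt")  # COCO-pretrained weights
results = model.train(
    data="damask_rose.yaml",  # hypothetical dataset config: 1 class ("Ripe")
    imgsz=512,                # input size from Table 1
    epochs=75,
    batch=32,
    lr0=1e-3,                 # initial learning rate
    # Augmentations named in the text; the magnitudes below are assumed values.
    hsv_h=0.015, hsv_s=0.7, hsv_v=0.4,  # HSV color / brightness jitter
    degrees=10.0,                        # random rotation
    flipud=0.5, fliplr=0.5,              # vertical / horizontal flips
)
```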
A loss function is a mathematical function that quantifies the difference between predicted and actual values in a machine learning model. According to Equations 1 to 8, the loss function used in training YOLO models mainly comprises three terms: the bounding-box location loss ($L_{CIoU}$), the confidence loss ($L_{confidence}$), and the class loss ($L_{class}$) (Wu et al., 2020):
$$K = \mathbb{1}^{\text{obj}}_{i,j} \tag{8}$$
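Equations 1 through 7 are not reproduced here; based on the CIoU formulation cited from Wu et al. (2020) and the symbol definitions below, they presumably take the standard form:

$$L = L_{CIoU} + L_{confidence} + L_{class}$$

$$L_{CIoU} = 1 - IoU + \frac{c^2}{d^2} + \alpha v, \qquad
v = \frac{4}{\pi^2}\left(\arctan\frac{w^{gt}}{h^{gt}} - \arctan\frac{w}{h}\right)^2, \qquad
\alpha = \frac{v}{(1 - IoU) + v}$$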
IoU is defined as the ratio of the intersection to the union of the predicted bounding box and the ground-truth bounding box, with c and d denoting the distance between the centers of the two bounding boxes and the diagonal distance of their union, respectively. The parameters $w^{gt}$ and $h^{gt}$ represent the width and height of the ground-truth bounding box, while $w$ and $h$ correspond to the width and height of the predicted bounding box. The variable S stands for the number of grids, while B signifies the number of anchors associated with each grid. K is a weight that takes the value 1 if there is an object in the j-th anchor of the i-th grid and 0 otherwise. Moreover, the class terms denote the actual and predicted classes of the j-th anchor in the i-th grid, and p represents the probability of the object being a Damask rose flower. The mean average precision (mAP), precision, recall, F1 score, F2 score, and detection speed were employed to assess the efficacy of the models:
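The metric equations (referenced below as Equations 10 to 13) are likewise not reproduced here; the standard definitions consistent with the descriptions in the text are:

$$Precision = \frac{TP}{TP + FP}, \qquad Recall = \frac{TP}{TP + FN}$$

$$F_1 = \frac{2 \cdot Precision \cdot Recall}{Precision + Recall}, \qquad
F_2 = \frac{5 \cdot Precision \cdot Recall}{4 \cdot Precision + Recall}$$

$$mAP = \frac{1}{c} \sum_{i=1}^{c} AP_i, \qquad AP_i = \int_0^1 p_i(r)\, dr$$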
where c refers to the number of classes (here, c = 1), and TP, FP, FN, and TN denote true positives (bloomed flowers correctly classified as bloomed), false positives (background regions classified as bloomed flowers), false negatives (bloomed flowers classified as background), and true negatives (all background areas in the image except the regions where bloomed flowers are present), respectively.
Results and Discussion
Comparison of different detection algorithms
Fig. 5 depicts the training mAP50 of the three models on images captured under normal light conditions. The training curves indicated that the YOLOv8s model reached saturation faster than the other models, exhibited lower fluctuations, and maintained a more uniform curve. Table 2 presents the performance of the YOLOv8s model compared with the YOLOv5s and YOLOv6s models in detecting bloomed Damask roses. Based on the results, the mAP50 scores of YOLOv8s, YOLOv5s, and YOLOv6s were 98.0%, 97.7%, and 93.9%, respectively; YOLOv8s thus demonstrated the highest mAP50 among the three models. A preliminary analysis suggested that the CSPDarknet53 feature extractor serving as the backbone of YOLOv8, followed by the novel C2f module in place of the traditional YOLO neck architecture, is more competent at extracting diverse and complex target features, playing a fundamental role in the detection accuracy improvement of YOLOv8.
Fig. 5. Comparing mAP50 of different YOLO models obtained from the training dataset
Table 2. Performance of the YOLOv8s, YOLOv6s, and YOLOv5s models in detecting bloomed Damask roses

| Algorithm | Recall (%) | Precision (%) | mAP50 (%) | Detection speed (fps) | F1 (%) | F2 (%) | Model size (MB) |
|---|---|---|---|---|---|---|---|
| YOLOv8s | 93.7 | 97.3 | 98.0 | 243.9 | 95.5 | 94.4 | 21.5 |
| YOLOv6s | 84.7 | 88.2 | 93.9 | 45.3 | 86.4 | 85.4 | 41.3 |
| YOLOv5s | 94.1 | 95.1 | 97.7 | 74.6 | 94.6 | 94.3 | 14.1 |
The analysis of the results indicates that all models detected the bloomed Damask roses with high accuracy. Notably, the YOLOv8s model exhibited the highest mAP50, at 98%, and a remarkable detection speed of 243.9 fps, outperforming the other models. The YOLOv5s model achieved a close mAP50 of 97.7% and a smaller size of 14.1 MB, but its detection speed was 3.27 times lower, underscoring the YOLOv8s model's exceptional suitability for real-time detection tasks. The YOLOv6s model achieved a detection precision of 88.2%; nevertheless, its applicability to real-time and robotic tasks was limited by its low detection speed of 45.3 fps and substantial size of 41.3 MB (Wu et al., 2020). This limitation is especially significant considering that the frame rate of most videos is 30 fps and that economical robot controllers typically possess limited memory capacity. A channel-pruned YOLOv5s was designed explicitly for real-time detection tasks such as apple thinning and crop yield estimation before thinning; with its parameters and size optimized through channel pruning and weight adjustments, yielding a size of 1.4 MB and a detection speed of 125 fps, it performed well (Wang and He, 2021).
Furthermore, the YOLOv8s model surpassed the YOLOv6s model in every aspect: its precision was higher, its detection speed was 198.6 fps greater, and its size was 19.8 MB smaller. This positions the proposed model as an ideal choice for real-time detection of bloomed Damask roses, effectively addressing the challenges of precision, size, and speed; consequently, it can be seamlessly integrated into mobile phone applications or employed in Damask rose harvesting robots.
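For context, detection-speed figures of this kind are typically measured as in the sketch below, which times warmed-up single-image inference over the test images and converts mean latency to fps; the weights path, image directory, and timing protocol are assumptions, as the paper does not detail its measurement procedure.

```python
import time
from pathlib import Path
from ultralytics import YOLO

model = YOLO("runs/detect/train/weights/best.pt")        # hypothetical fine-tuned weights
test_images = sorted(Path("damask_test").glob("*.jpg"))  # hypothetical test split

model.predict(test_images[0], imgsz=512, verbose=False)  # warm-up run

start = time.perf_counter()
for img in test_images:
    model.predict(img, imgsz=512, verbose=False)
elapsed = time.perf_counter() - start

print(f"mean latency: {1000 * elapsed / len(test_images):.2f} ms "
      f"({len(test_images) / elapsed:.1f} fps)")
```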
The efficacy of the various YOLO versions depends on their scale (quantified by the number of parameters) and on the dataset employed for training and evaluation; hence, it is essential to assess the performance of the candidate models. Apeinans et al. (2024) created a cherry dataset (CherryBBCH81) for training neural networks, aiming to find the best YOLO model for fruit detection. YOLOv5m performed better on CherryBBCH81, achieving an mAP50 of 0.886 compared with 0.870 for YOLOv8m, whereas YOLOv8m showed better results on the Pear640 dataset, reaching 0.951 compared with 0.943 for YOLOv5m. Estrada, Vasconez, Fu, and Cheein (2024) tested YOLO versions 5, 7, and 8 at various scales (n, s, m, l, and x) for peach fruit detection and found that the YOLOv7x model exhibited the highest performance.
Fig. 6. Loss of training YOLOv8s
Fig. 7. mAP50 of training YOLOv8s
Ambient light effect on YOLOv8s performance
Figs. 6 and 7 display the model's loss curves and mAP50 during training on the image sets captured under normal and intense light conditions. These figures reveal that, on the blooming Damask rose bushes, the model learned efficiently and converged quickly in the early stages of object detection training. As training progressed, the learning curve gradually flattened, indicating a slower rate of improvement until the model's learning reached a saturation point. The loss function stabilized at a constant value after the 64th epoch for normal light and the 71st epoch for intense light, indicating that the training process was complete and had produced a stable, well-optimized detection model.
Fig. 8 illustrates the confusion matrices obtained from the YOLOv8s results for the normal and intense light conditions. The matrix for normal light highlights the potential of this method for detecting bloomed flowers: only fifteen flowers (6.6%) were incorrectly classified as background, whereas under intense light conditions, 31 samples (13%) were incorrectly classified as background. These matrices indicate the negative impact of intense lighting conditions on model performance.
To analyze and compare the performance of the DCNN models, four key metrics (precision, recall, F1, and F2) were extracted from these matrices based on Equations 10 to 13, respectively.
Fig. 8. Confusion matrix of YOLOv8s for: (a) normal, and (b) intense light conditions
Table 3 presents the YOLOv8s training results on the two image sets captured under normal and intense light conditions. For images captured under normal light, the performance metrics were: mAP50 of 98%, precision of 97.3%, recall of 93.7%, F1 of 95.5%, and F2 of 94.4%. For images taken under intense light, the corresponding metrics were an mAP50 of 92.8%, precision of 88.1%, recall of 86.8%, F1 of 93%, and F2 of 87.1%, while the detection speed reached 243.9 and 238.1 fps, respectively. These data suggest that the model performed significantly better under normal lighting conditions, indicating that direct sunlight adversely impacts its effectiveness. In general, the results of this research can be applied in the open field; however, we cannot infer that other object detection tasks will exhibit mAP values similar to those of the present study.
Table 3. Performance of the YOLOv8s model on images captured under normal and intense light conditions

| Light condition | Recall (%) | Precision (%) | mAP50 (%) | Detection speed (fps) | F1 (%) | F2 (%) |
|---|---|---|---|---|---|---|
| Normal (twilight to sunrise) | 93.7 | 97.3 | 98.0 | 243.9 | 95.5 | 94.4 |
| Intense (sunrise to 10 AM) | 86.8 | 88.1 | 92.8 | 238.1 | 93.0 | 87.1 |
Tung et al. (2019) pointed out that Ultralytics uses images sourced from COCO, ImageNet, and other datasets, focused primarily on solitary objects positioned at the center of the image, for training and assessing YOLO models. These images were acquired with diverse cameras featuring distinct configurations, positioned at varying distances and under different lighting conditions. The results published by the company therefore cannot comprehensively capture the influence of environmental variables, such as lighting conditions, on model efficacy. Accordingly, Tung et al. demonstrated the impact of ambient light on the performance of YOLO models.
Fig. 9. (a, b) Original images, and (c, d) the results of YOLOv8s in detecting desired Damask rose flowers
Fig. 9 visually illustrates the YOLOv8s output for two input images. Beyond ambient lighting conditions, various other factors can affect the precision and speed of bloomed Damask rose detection, including the care given to the flowers, variations in background, placement, orientation, flower size, distance from the camera, occlusion by foliage and other flowers, and the presence of only a few flowers in certain frames. These complexities can sometimes confuse even researchers and experts when labeling the flowers, as depicted in Fig. 10: flower number 2 was wrongly detected as fully bloomed, whereas flower number 1 was not identified because it was blocked by leaves.
Fig. 10. (a) Original image, and (b) result of target detection by YOLOv8s; a flower that (1) could not be detected or (2) was wrongly detected
Conclusion
In this study, the YOLOv8s detection model was introduced for the real-time identification of bloomed Damask roses in natural field settings. The model achieved high-precision, real-time detection, reaching an mAP50 of 98% on data collected under normal light conditions; performance degraded under intense light, underscoring the influence of ambient lighting, which can introduce noise into the detection process. The YOLOv8s model outperformed the YOLOv5s and YOLOv6s models in overall detection performance, combining a compact footprint with superior detection speed and precision. Consequently, it is well suited for integration into mobile applications, such as crop yield estimation, and for operating Damask rose harvesting robots. This study highlights the efficacy and practicality of the YOLOv8s detection model for real-time detection tasks in agriculture, particularly the precise identification of bloomed Damask rose flowers, positioning it as a valuable tool for enhancing the efficiency of crop management and automation in Damask rose harvesting.
Conflict of Interest
The authors declare no competing interests.
Authors Contribution
F. Fatehi: Conceptualization, Methodology, Software services, Validation, Data acquisition, Writing original draft preparation.
H. Bagherpour: Supervision, Conceptualization, Methodology, Technical advice, Validation, Text mining, Review and editing.
J. Amiri Parian: Review and editing.
References
- Anjani, I. A., Pratiwi, Y. R., and Nurhuda, S. N. B. (2021, March). Implementation of deep learning using convolutional neural network algorithm for classification rose flower. In Journal of Physics: Conference Series (Vol. 1842, No. 1, p. 012002). IOP Publishing.
- Apeinans, I., Sondors, M., Litavniece, L., Kodors, S., Zarembo, I., and Feldmane, D. (2024, June). Cherry fruitlet detection using YOLOv5 or YOLOv8? In Environment. Technologies. Resources. Proceedings of the International Scientific and Practical Conference (Vol. 2, pp. 29-33).
- Bataduwaarachchi, S. D., Sattarzadeh, A. R., Stewart, M., Ashcroft, B., Morrison, A., and North, S. (2023). Towards autonomous cross-pollination: Portable multi-classification system for in situ growth monitoring of tomato flowers. Smart Agricultural Technology, 4, 100205.
- Dias, P. A., Tabb, A., and Medeiros, H. (2018). Apple flower detection using deep convolutional networks. Computers in Industry, 99, 17-28.
- Estrada, J. S., Vasconez, J. P., Fu, L., and Cheein, F. A. (2024). Deep learning based flower detection and counting in highly populated images: A peach grove case study. Journal of Agriculture and Food Research, 15, 100930.
- Guru, D. S., Kumar, Y. S., and Manjunath, S. (2011). Textural features in flower classification. Mathematical and Computer Modelling, 54(3-4), 1030-1036.
- Jocher, G., Chaurasia, A., and Qiu, J. (2023). YOLO by Ultralytics. https://github.com/ultralytics/ultralytics
- Kumar, R., Sharma, S., Sood, S., Agnihotri, V. K., and Singh, B. (2013). Effect of diurnal variability and storage conditions on essential oil content and quality of damask rose (Rosa damascena Mill.) flowers in north western Himalayas. Scientia Horticulturae, 154, 102-108.
- Manikanta, Y., Rao, S. S., and Venkatesh, R. (2017). The design and simulation of rose harvesting robot. International Journal of Mechanical and Production Engineering Research and Development, 9(1), 191-200.
- Mithra, S., and Nagamalleswari, T. Y. J. (2023). Cucurbitaceous family flower inferencing using deep transfer learning approaches: CuCuFlower UAV imagery data. Soft Computing, 27(12), 8345-8356.
- Redmon, J., Divvala, S., Girshick, R., and Farhadi, A. (2016). You only look once: Unified, real-time object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 779-788).
- Rusanov, K., Kovacheva, N., Rusanova, M., and Atanassov, I. (2011). Traditional Rosa damascena flower harvesting practices evaluated through GC/MS metabolite profiling of flower volatiles. Food Chemistry, 129(4), 1851-1859.
- Sharma, S., and Kumar, R. (2018). Influence of harvesting stage and distillation time of damask rose (Rosa damascena Mill.) flowers on essential oil content and composition in the western Himalayas. Journal of Essential Oil-Bearing Plants, 21(1), 92-102.
- Shinoda, R., Motoki, K., Hara, K., Kataoka, H., Nakano, R., Nakazaki, T., and Noguchi, R. (2023). RoseTracker: A system for automated rose growth monitoring. Smart Agricultural Technology, 100271.
- Silva, G., Monteiro, R., Ferreira, A., Carvalho, P., and Corte-Real, L. (2019). Face detection in thermal images with YOLOv3. In Advances in Visual Computing: 14th International Symposium on Visual Computing, ISVC 2019, Lake Tahoe, NV, USA, October 7-9, 2019, Proceedings, Part II (pp. 89-99). Springer International Publishing.
- Sun, K., Wang, X., Liu, S., and Liu, C. (2021). Apple, peach, and pear flower detection using semantic segmentation network and shape constraint level set. Computers and Electronics in Agriculture, 185, 106150.
- Terven, J., and Cordova-Esparza, D. (2023). A comprehensive review of YOLO: From YOLOv1 to YOLOv8 and beyond. arXiv preprint arXiv:2304.00501.
- Thakur, M., Sharma, S., Sharma, U., and Kumar, R. (2019). Study on effect of pruning time on growth, yield and quality of scented rose (Rosa damascena Mill.) varieties under acidic conditions of western Himalayas. Journal of Applied Research on Medicinal and Aromatic Plants, 13, 100202.
- Tung, C., Kelleher, M. R., Schlueter, R. J., Xu, B., Lu, Y. H., Thiruvathukal, G. K., ... and Lu, Y. (2019, March). Large-scale object detection of images from network cameras in variable ambient lighting conditions. In 2019 IEEE Conference on Multimedia Information Processing and Retrieval (MIPR) (pp. 393-398). IEEE.
- Ucar, Y., Kazaz, S., Eraslan, F., and Baydar, H. (2017). Effects of different irrigation water and nitrogen levels on the water use, rose flower yield and oil yield of Rosa damascena. Agricultural Water Management, 182, 94-102.
- Ultralytics. (n.d.). YOLOv8: Model architecture. Retrieved from https://docs.ultralytics.com/models/yolov8/
- Wang, C., Wang, Y., Liu, S., Lin, G., He, P., Zhang, Z., and Zhou, Y. (2022). Study on pear flowers detection performance of YOLO-PEFL model trained with synthetic target images. Frontiers in Plant Science, 13.
- Wang, Z., Underwood, J., and Walsh, K. B. (2018). Machine vision assessment of mango orchard flowering. Computers and Electronics in Agriculture, 151, 501-511.
- Wang, D., and He, D. (2021). Channel pruned YOLO V5s-based deep learning approach for rapid and accurate apple fruitlet detection before fruit thinning. Biosystems Engineering, 210, 271-281.
- Wu, D., Lv, S., Jiang, M., and Song, H. (2020). Using channel pruning-based YOLO v4 deep learning algorithm for the real-time and accurate detection of apple flowers in natural environments. Computers and Electronics in Agriculture, 178, 105742.
- Yousefi, B., and Jaimand, K. (2018). Chemical variation in the essential oil of Iranian Rosa damascena landraces under semi-arid and cool conditions. International Journal of Horticultural Science and Technology, 5(1), 81-92.
- Zhang, M., Su, H., and Wen, J. (2021). Classification of flower image based on attention mechanism and multi-loss attention network. Computer Communications, 179, 307-317.
©2025 The author(s). This is an open access article distributed under Creative Commons Attribution 4.0 International License (CC BY 4.0)