Z. Khosrobeygi; Sh. Rafiee; S. S. Mohtasebi; A. Nasiri
Abstract
Introduction
Increasing production efficiency is an important goal in precision farming. Precision farming requires a great deal of manual labor, and because some agricultural operations are hazardous, it is not advisable for humans to perform them directly. Agricultural operations therefore need to be carried out automatically, and for this reason the application of robotics in agricultural environments, especially greenhouses, is increasing. The first step toward automatic farming is autonomous navigation. For autonomous navigation, a robot must be able to perceive its environment and recognize its position. In other words, a robot must be able to build a map of an unknown environment, locate itself on that map, and finally plan a path. This problem is addressed by Simultaneous Localization and Mapping (SLAM). SLAM is a recursive estimation process: as a robot moves through an unknown environment, mapping and localization errors accumulate, and a recursive estimation process is used to reduce these two errors.
Materials and Methods
In this research, two webcams made by Microsoft Corporation, with a resolution of 960×544, were connected to a computer via USB 2.0 to form a parallel stereo camera. The study was conducted in a greenhouse located in Arak, Iran. Before taking stereo images, a camera path consisting of straight and curved segments was designed and implemented in the greenhouse. The entire path traversed by the stereo camera was 32.7 m, along which 150 stereo images were taken. The graph-SLAM algorithm was used for simultaneous localization and mapping in the greenhouse. Using the ROS framework, the SLAM algorithm was implemented as a set of nodes and a network connecting them.
Results and Discussion
For evaluation, the stereo camera location at every step was measured manually and compared with the location estimated by the graph-SLAM algorithm. The position error was calculated as the Euclidean distance (DE) between the estimated and actual points. The results showed that the proposed algorithm achieved an average error of 0.0679412, a standard deviation of 0.0456431, and a root mean square error (RMSE) of 0.0075569 for camera localization. In this research, only a stereo camera was used to map the environment, whereas other studies have relied on combinations of multiple sensors. A further advantage of this work over previous studies is the creation of a 3D map (point cloud) of the environment and loop closure detection. In the 3D map, in addition to determining the exact location of each plant, plant height can also be estimated. Plant height estimation is important in agricultural operations such as spot spraying, harvesting, and pruning.
Conclusion
Because of the hazards of agricultural activities, the use of robotics is essential, and autonomous navigation is one of its branches. Autonomous navigation requires a map of the environment and localization within that map. The purpose of our research was to provide simultaneous localization and mapping (SLAM) in agricultural environments. ROS is a powerful framework for solving the SLAM problem, since the problem can be solved by combining different nodes in ROS. The method relied only on information from the stereo camera, because the stereo camera provides accurate distance information. We believe that this study will contribute to the field of autonomous robot applications in agriculture. In future studies, an actual robot equipped with various sensors could be used in the greenhouse for SLAM and path planning.
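As a minimal sketch of the evaluation step described above, the snippet below computes the Euclidean position error between estimated and manually measured camera locations and summarizes it with the mean, standard deviation, and RMSE. The function name and the toy trajectories are illustrative assumptions, not the study's code or data.

```python
# Hypothetical evaluation sketch: compare graph-SLAM camera positions
# against manually measured ground truth (toy values, not the study's data).
import numpy as np

def localization_errors(estimated, ground_truth):
    """Euclidean distance between each estimated and measured camera position."""
    estimated = np.asarray(estimated, dtype=float)
    ground_truth = np.asarray(ground_truth, dtype=float)
    return np.linalg.norm(estimated - ground_truth, axis=1)

if __name__ == "__main__":
    # Two toy 2D trajectories in metres; the study compared 150 stereo frames.
    est = [[0.00, 0.00], [0.52, 0.01], [1.03, 0.05]]
    gt = [[0.00, 0.00], [0.50, 0.00], [1.00, 0.00]]
    d = localization_errors(est, gt)
    print("mean error:", d.mean())
    print("std dev:   ", d.std(ddof=1))
    print("RMSE:      ", np.sqrt((d ** 2).mean()))
```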
A. Azizi; Y. Abbaspour Gilandeh; T. Mesri Gundoshmian; H. Abrishami Moghaddam
Abstract
Introduction
Stereo vision is an approach to recovering 3D information from multiple 2D views of a scene. The 3D information can be extracted from a pair of images, known as a stereo pair, by estimating the relative depth of points in the scene. Soil aggregate size distribution is one of the most important issues in the agricultural sector and strongly affects the energy consumed for preparing the field before planting. The mean weight diameter (MWD) of clods is a standard metric for determining the size of clods (large aggregates). Conventional methods are based on sieving soil samples to calculate the MWD. However, they face several challenges at larger scales and in practical applications. Furthermore, because of the inherent limitations of the soil environment and the tedious nature of the work, traditional methods tend to estimate the metric higher or lower than its actual value. As an alternative, researchers use computer vision techniques as a virtual sieve, so that the size of clods can be determined by processing digital images taken of the soil surface. Although image-based methods have solved many of the earlier problems, their accuracy is not very high, owing to the complexity of the soil environment and overlapping clods, and needs to be improved. To overcome these challenges, a stereo vision method was developed in the current study so that the third dimension, the height of the clods, can be extracted and used to assign clods to their correct class.
Materials and Methods
In this study, the Fujifilm W3 stereo camera, equipped with two 10-megapixel CCD sensors for the left and right lenses and a baseline spacing of 7.5 cm, was used. The distance between the camera lenses and the ground was set to 60 cm. To obtain the three coordinates (x, y, z) of the soil clods, a point cloud was constructed. For this, local features were extracted using the SIFT feature detector. The SIFT algorithm is robust against scale, rotation, and illumination changes, which makes it a strong tool in the field of stereo vision. The extracted features (keypoints) were then matched between the two images of each stereo pair by means of the brute-force algorithm, the locations of all corresponding points were determined, and the point cloud was obtained. At the final stage, three features, the length, width, and height of all six classes of soil clods, were entered into a linear classifier, discriminant analysis. This classifier, acting as a linear separator, classified the six classes based on appropriate discriminant functions in a five-dimensional space.
Results and Discussion
The results of the classification model showed that the height (thickness) of the clods was the most discriminative feature for distinguishing different soil clods. The reason is overlapping: most clods were touching each other after sieving. Consequently, the length and width of the clods did not have a significant effect on soil aggregate classification. To analyze the classification results, a confusion matrix was calculated, and the overall classification accuracy was 83.7%. The lowest and highest accuracies were obtained for class 1 (the smallest class) and class 6 (the biggest class), respectively, owing to their low and high heights above the soil surface.
Conclusion
In this research, basic geometrical features, including length, width, and height, were extracted from stereo pairs of digital images via stereo vision techniques to classify six classes of soil clods. This aim was achieved by 3D reconstruction of the image data, so that the height of each clod, as the third component of (x, y, z), was obtained in addition to its length and width. The classification results indicated that the stereo vision technique performed satisfactorily in determining the aggregate size distribution, which is one of the most important indices of tilled soil quality.
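A minimal sketch of the keypoint-matching stage described above, assuming OpenCV (4.4 or later for the built-in SIFT): SIFT features are detected on a rectified stereo pair, matched with a brute-force matcher, and converted to depth from disparity. The file names, focal length, and depth-conversion constants are placeholders, not the study's calibration or code.

```python
# Illustrative SIFT + brute-force matching on a rectified stereo pair.
# FOCAL_PX and the file names are assumptions; BASELINE_M is the 7.5 cm
# baseline reported for the W3 camera.
import cv2

FOCAL_PX = 1200.0   # assumed focal length in pixels
BASELINE_M = 0.075  # stereo baseline in metres

left = cv2.imread("left.jpg", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.jpg", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp_l, des_l = sift.detectAndCompute(left, None)
kp_r, des_r = sift.detectAndCompute(right, None)

# Brute-force matching with cross-check to reject one-sided matches.
bf = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)
matches = bf.match(des_l, des_r)

points = []
for m in matches:
    xl, yl = kp_l[m.queryIdx].pt
    xr, _ = kp_r[m.trainIdx].pt
    disparity = xl - xr
    if disparity > 0:                        # keep geometrically valid matches
        z = FOCAL_PX * BASELINE_M / disparity
        points.append((xl, yl, z))           # sparse point cloud: image x, y and depth

print(len(points), "matched points in the sparse point cloud")
```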
S. M. Hosseini; A. A. Jafari
Abstract
Introduction
Large areas of the world's orchards are dedicated to the cultivation of the grapevine. Grape vineyards are normally pruned twice a year. Among the operations of grape production, winter pruning of the bushes is the only one that has still not been fully mechanized, even though it is known as one of the most laborious jobs on the farm. Some grape-producing countries use various mechanical machines to prune grapevines, but in most cases these machines do not perform well. An intelligent pruning machine therefore seems necessary, as it could reduce the labor required to prune vineyards. In this study, it was attempted to develop an algorithm that uses image processing techniques to identify which parts of the grapevine should be cut. Stereo vision was used to obtain three-dimensional images of the bare bushes after their leaves had fallen in autumn. Stereo vision systems determine depth from two images taken at the same time but from slightly different viewpoints using two cameras. Each pair of images of a common scene is related by epipolar geometry, and corresponding points in the image pairs are constrained to lie on pairs of conjugate epipolar lines.
Materials and Methods
Photos were taken in vineyards of the Research Center for Agriculture and Natural Resources of Fars province, Iran. First, the distance between the plants and the cameras had to be determined; this distance can be obtained using stereo vision, so two pictures of each plant were taken with the left and right cameras. The algorithm was written in MATLAB. To facilitate segmentation of the branches from the rows behind, a blue plate with dimensions of 2×2 m was used as the background. After loading the images, the branches were segmented from the background to produce a binary image, and the distance of the plant from the cameras was calculated using stereo vision. In the next stage, the main trunk and one-year-old branches were identified, and branches thinner than 7 mm were removed from the image. To remove these branches, consecutive dilation and erosion operations were applied with circular structuring elements having radii of 2 and 4 pixels. Then, based on branch diameter, the one-year-old branches were detected and pruned according to the pruning parameters. The branches were pruned so that only three buds were left on them; for this purpose, the branches should be cut to a length of 15 cm. To truncate the branches to 15 cm, the length of the main stem was measured for each branch, and branches shorter than 15 cm were omitted from the images. The main skeleton of the grapevine was then determined. Using this skeleton, the attachment points of the branches, as well as their attachment points to the trunk, were identified, and the spacing between the branches was maintained. In the last step, the cutting points on the branches were determined by labeling the removed branches at each step.
Results and Discussion
The results indicated that the color components of the branch texture could not be used to identify one-year-old branches. Evaluation of the algorithm showed that it performed acceptably: in all photos, the one-year-old branches were correctly identified and the pruning points of the grapevines were correctly marked. Among the 254 cut-off points extracted from 20 images, only 7 pruning points were misidentified. These results show that the accuracy of the algorithm was about 96.8 percent.
Conclusion
Based on the reasonable performance of the algorithm, it can be concluded that machine vision routines can be used to determine the most suitable cut-off points for pruning robots. With an intelligent pruning robot, the one-year-old branches are identified properly and the cut-off points of the plants are determined. This can reduce the labor required for winter pruning in vineyards, which in turn reduces the time and cost of pruning.
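The paper's algorithm was written in MATLAB; the sketch below approximates the thin-branch removal step in Python with OpenCV, using morphological opening with circular structuring elements of radii 2 and 4 pixels. The input file name and the assumed image scale (how many pixels correspond to the 7 mm threshold) are hypothetical.

```python
# Approximate thin-branch suppression on the segmented binary vine image.
# "binary.png" is a placeholder; kernel sizes follow the radii of 2 and 4 px
# described above, assuming that scale corresponds to the 7 mm threshold.
import cv2

binary = cv2.imread("binary.png", cv2.IMREAD_GRAYSCALE)
_, binary = cv2.threshold(binary, 127, 255, cv2.THRESH_BINARY)

# A disk of radius r is approximated by an elliptical kernel of size 2r + 1.
disk_r2 = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
disk_r4 = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (9, 9))

# Opening (erosion followed by dilation) removes structures narrower than
# the structuring element while preserving the thicker trunk and branches.
cleaned = cv2.morphologyEx(binary, cv2.MORPH_OPEN, disk_r2)
cleaned = cv2.morphologyEx(cleaned, cv2.MORPH_OPEN, disk_r4)

cv2.imwrite("branches_thicker_than_threshold.png", cleaned)
```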
A. Nasiri; H. Mobli; S. Hosseinpour; Sh. Rafiee
Abstract
Introduction
Stereo vision is the capability of extracting depth by analyzing two images taken from different angles of one scene. The result of stereo vision is a collection of three-dimensional points that describes the details of the scene in proportion to the resolution of the acquired images. Automatic vehicle steering and crop growth monitoring are two important operations in precision agriculture. The essential aspects of automated steering are the position and orientation of the agricultural equipment relative to the crop rows, the detection of obstacles, and path planning between the crop rows. A map of the environment can provide this information in real time. Machine vision has the capability to perform these tasks in order to execute operations such as cultivation, spraying, and harvesting. In the greenhouse environment, it is possible to develop a map and perform automatic control by detecting and localizing the cultivation platforms, which are the main moving obstacles. The current work presents a stereo vision-based method for detecting and localizing the platforms and then producing a two-dimensional map of the cultivation platforms in the greenhouse environment.
Materials and Methods
In this research, two webcams made by Microsoft Corporation, with a resolution of 960×544, were connected to a computer via USB 2.0 to form a parallel stereo camera. Owing to the structure of the cultivation platforms, the number of points in the point cloud is reduced by extracting only the upper and lower edges of each platform. The proposed method extracts the edges based on depth discontinuity features in the region of the platform edge. By computing the disparity image of the platform edges from the rectified stereo images and translating its data into 3D space, the point cloud model of the environment is constructed. Then, by projecting the points onto the XZ plane and stitching the local maps together based on visual odometry, the global map of the environment is built. To evaluate the accuracy of the algorithm in estimating the positions of the platform corners, the Euclidean distances between the corner coordinates measured with a Leica total station and those obtained from the local maps were computed.
Results and Discussion
The results showed that the lower edges were detected with better accuracy than the upper ones. The upper edges were not extracted satisfactorily because they were close to the pots, whereas the lower edges, owing to the gap between them and the ground surface, were extracted with higher quality. Since the upper and lower edges of a platform run in the same direction, only the lower edges were used to produce an integrated map of the greenhouse environment. The total length of the cultivation platform edges was 106.6 m, of which 94.79% was detected by the proposed algorithm. Some regions of the platform edges were not detected because they did not fall within the view angle of the stereo camera. The proposed algorithm detected 83.33% of the cultivation platform corners, with an average error of 0.07309 m and a mean squared error of 0.0076. The undetected corners were those outside the camera's view angle. The maximum and minimum localization errors, according to the Euclidean distance, were 0.169 and 0.0001 m, respectively.
Conclusion
Stereo vision is the perception of 3D depth from the disparity between two images. In navigation, stereo vision is used to localize obstacles to movement, and cultivation platforms are the main such obstacles in greenhouses. It is therefore possible to build an integrated map of the greenhouse environment and perform automatic control by localizing the cultivation platforms. In this research, the depth discontinuity feature at the edge locations was used to localize the edges of the cultivation platforms. Using this feature, the number of points required to build the point cloud model, and hence the processing time, decreased, which improved the accuracy of determining the coordinates of the platform corners.
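As a rough sketch of the local-map construction described above (not the authors' implementation), the following projects a 3D edge point cloud onto the XZ plane and accumulates the points into a 2D grid. The grid resolution, map size, and the toy point cloud are assumptions.

```python
# Hypothetical projection of 3D platform-edge points onto the XZ (ground)
# plane to form a 2D local map; values are illustrative only.
import numpy as np

def project_to_xz(points_xyz, resolution=0.05, size_m=20.0):
    """Drop the Y (height) coordinate and bin X, Z into a 2D occupancy-count grid."""
    cells = int(size_m / resolution)
    grid = np.zeros((cells, cells), dtype=np.uint16)
    ix = np.clip((points_xyz[:, 0] / resolution).astype(int) + cells // 2, 0, cells - 1)
    iz = np.clip((points_xyz[:, 2] / resolution).astype(int), 0, cells - 1)
    np.add.at(grid, (iz, ix), 1)          # count edge points falling in each cell
    return grid

if __name__ == "__main__":
    # Toy point cloud: a straight platform edge 1.2 m in front of the camera.
    edge = np.column_stack([np.linspace(-1.0, 1.0, 200),   # X (lateral)
                            np.full(200, 0.9),             # Y (height, discarded)
                            np.full(200, 1.2)])            # Z (depth)
    local_map = project_to_xz(edge)
    print("occupied cells:", int((local_map > 0).sum()))
```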