Z. Khosrobeygi; Sh. Rafiee; S. S. Mohtasebi; A. Nasiri
Abstract
Introduction Increasing production efficiency is an important goal in precision farming. Precision farming requires a great deal of labor, and because agricultural operations carry risks, performing them directly by humans is not recommended. It is therefore necessary for agricultural operations to be carried out automatically. For this reason, the application of robotics in agricultural environments, especially in greenhouses, is increasing. The first step in automatic farming is autonomous navigation. For autonomous navigation, a robot must have the ability to understand its environment and recognize its position. In other words, a robot must be able to create a map of an unknown environment, locate itself on this map, and finally plan its path. This problem is solved by Simultaneous Localization and Mapping (SLAM). The SLAM problem is a recursive estimation process: when a robot moves in an unknown environment, mapping and localization errors grow incrementally, and a recursive estimation process is used to reduce these two errors.

Materials and Methods In this research, two webcams made by Microsoft Corporation, with a resolution of 960×544, were connected to a computer via USB 2.0 to form a parallel stereo camera. The study was conducted in a greenhouse located in Arak, Iran. Before taking stereo images, a camera path, which could include straight or curved segments, was designed and then implemented in the greenhouse. The entire path traversed by the stereo camera was 32.7 m, along which 150 stereo images were taken. The Graph-SLAM algorithm was used for simultaneous localization and mapping in the greenhouse. Using the ROS framework, the SLAM algorithm was designed as a set of nodes and a network connecting them.

Results and Discussion For evaluation, the stereo camera location at every step was measured manually and compared with the location estimated by the Graph-SLAM algorithm. The position error was calculated as the Euclidean distance (DE) between the estimated points and the actual points. The results of this study showed that the proposed algorithm has an average error of 0.0679412, a standard deviation of 0.0456431, and a root mean square error (RMSE) of 0.0075569 for camera localization. In this research, only a stereo camera was used to map the environment, whereas other studies have used combinations of multiple sensors. Further advantages of this work are the construction of a 3D map (point cloud) of the environment and loop closure detection. In the 3D map, in addition to determining the exact location of a plant, the plant height can also be estimated. Plant height estimation is important in agricultural operations such as spot spraying, harvesting, and pruning.

Conclusion Due to the risks of agricultural activities, the use of robotics is essential. Autonomous navigation is one of the branches of robotics, and it requires a map of the environment and localization within that map. The purpose of our research was to provide simultaneous localization and mapping (SLAM) in agricultural environments. ROS is a strong framework for solving the SLAM problem, since the problem can be solved by combining different nodes in ROS. The method relied only on information from the stereo camera, because the stereo camera provides accurate distance information.
We believe that this study will contribute to the field of autonomous robot applications in agriculture. In future studies, it is possible to use an actual robot in the greenhouse with various sensors for SLAM and path planning.
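To make the evaluation procedure above concrete, the sketch below shows one way to compute the per-step Euclidean distance between the manually measured and estimated camera positions, together with the reported summary statistics (mean error, standard deviation, and RMSE). It is a minimal illustration, not the authors' code: the function name, the use of NumPy, and the sample coordinates are assumptions.

    import numpy as np

    def localization_errors(estimated, measured):
        # estimated, measured: (N, 2) or (N, 3) arrays of camera positions in metres
        estimated = np.asarray(estimated, dtype=float)
        measured = np.asarray(measured, dtype=float)

        # Euclidean distance between each estimated point and its measured point
        d = np.linalg.norm(estimated - measured, axis=1)

        return {
            "mean_error": d.mean(),            # average localization error
            "std_error": d.std(ddof=1),        # standard deviation of the error
            "rmse": np.sqrt(np.mean(d ** 2)),  # root mean square error
        }

    # Hypothetical usage with a few measured/estimated position pairs (metres)
    measured_xy  = [[0.00, 0.00], [1.00, 0.20], [2.10, 0.50]]
    estimated_xy = [[0.05, 0.01], [0.96, 0.26], [2.05, 0.44]]
    print(localization_errors(estimated_xy, measured_xy))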
A. Nasiri; H. Mobli; S. Hosseinpour; Sh. Rafiee
Abstract
Introduction Stereo vision is the capability of extracting depth by analyzing two images of the same scene taken from different angles. The result of stereo vision is a collection of three-dimensional points that describes the details of the scene in proportion to the resolution of the acquired images. Automatic vehicle steering and crop growth monitoring are two important operations in precision agriculture. The essential aspects of automated steering are the position and orientation of the agricultural equipment relative to the crop rows, the detection of obstacles, and path planning between the crop rows. The developed map can provide this information in real time. Machine vision has the capability to perform these tasks in order to execute operations such as cultivation, spraying, and harvesting. In a greenhouse environment, it is possible to develop a map and perform automatic control by detecting and localizing the cultivation platforms as the main moving obstacles. The current work presents a stereo-vision-based method for detecting and localizing the platforms and then providing a two-dimensional map of the cultivation platforms in the greenhouse environment.

Materials and Methods In this research, two webcams made by Microsoft Corporation, with a resolution of 960×544, were connected to a computer via USB 2.0 to form a parallel stereo camera. Owing to the structure of the cultivation platforms, the number of points in the point cloud is decreased by extracting only the upper and lower edges of the platform. The proposed method extracts the edges based on depth-discontinuity features in the region of the platform edge. By computing the disparity image of the platform edges from the rectified stereo images and translating its data into 3D space, the point cloud model of the environment is constructed. Then, by projecting the points onto the XZ plane and stitching the local maps together based on visual odometry, the global map of the environment is constructed. To evaluate the accuracy of the algorithm in estimating the positions of the corners, the Euclidean distances between the corner coordinates surveyed with a Leica Total Station and those obtained from the local maps were computed.

Results and Discussion The results showed that the lower edges were detected with better accuracy than the upper ones. The upper edges were not extracted as desired because they are close to the pots; in contrast, owing to the distance between the lower edge and the ground surface, the lower edges were extracted with higher quality. Since the upper and lower edges of a platform run in the same direction, only the lower edges were used for producing an integrated map of the greenhouse environment. The total length of the cultivation platform edges was 106.6 m, of which 94.79% was detected by the proposed algorithm. Some regions of the platform edges were not detected because they were not located within the viewing angle of the stereo camera. With the proposed algorithm, 83.33% of the cultivation platforms' corners were detected, with an average error of 0.07309 m and a mean squared error of 0.0076. The undetected corners were those that did not fall within the camera's viewing angle. The maximum and minimum localization errors, according to the Euclidean distance, were 0.169 and 0.0001 m, respectively.

Conclusion Stereo vision is the perception of 3D depth from the disparity between two images.
In navigation, stereo vision is used for localizing obstacles to movement. Cultivation platforms are the main obstacles to movement in greenhouses. Therefore, it is possible to build an integrated map of the greenhouse environment and perform automatic control by localizing the cultivation platforms. In this research, the depth-discontinuity feature at the locations of the edges was used to localize the cultivation platforms' edges. Using this feature, the number of points required for establishing the point cloud model, and consequently the associated processing time, decreased, resulting in improved accuracy in determining the coordinates of the platforms' corners.
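The disparity-to-map steps described in Materials and Methods can be sketched with OpenCV as follows. This is a rough illustration under stated assumptions rather than the authors' implementation: the matcher parameters, the gradient threshold standing in for the depth-discontinuity criterion, and the function name are hypothetical, and the reprojection matrix Q is assumed to come from the stereo calibration (e.g., cv2.stereoRectify). Successive local maps produced this way would then be aligned using visual odometry and merged into the global map.

    import cv2
    import numpy as np

    def platform_edge_local_map(left_gray, right_gray, Q):
        # left_gray, right_gray: rectified grayscale stereo pair
        # Q: 4x4 reprojection matrix from the stereo calibration

        # 1. Dense disparity from the rectified pair (parameters are illustrative)
        matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=96, blockSize=7)
        disparity = matcher.compute(left_gray, right_gray).astype(np.float32) / 16.0

        # 2. Keep only pixels with a strong depth discontinuity (large local jumps
        #    in disparity), which is where the platform edges appear
        gy, gx = np.gradient(disparity)
        edge_mask = (np.hypot(gx, gy) > 5.0) & (disparity > 0)

        # 3. Re-project the retained pixels into 3D camera coordinates
        points_3d = cv2.reprojectImageTo3D(disparity, Q)
        edge_points = points_3d[edge_mask]

        # 4. Project onto the XZ plane (drop the height axis) to obtain a 2D local map
        return edge_points[:, [0, 2]]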