Background
According to the United Nations' latest revision of world population projections, the world's population could surpass 9.7 billion by 2050 (United Nations, 2019). Food and fiber production must therefore increase at a steady pace to meet the expected demand for agricultural products (Hunter et al., 2017; Voss-Fels et al., 2019). Plant phenotyping is an important tool for crop improvement, but collecting plant traits in the field is challenging and time-consuming, and it is still considered a bottleneck for plant breeding programs (Song et al., 2021). Implementing innovative technologies to accelerate crop production and overcome this bottleneck will be key to the sustainability of agriculture (Morisse et al., 2022).
Recent advances in proximal and remote sensing, together with researchers' continuing need to study plants under conditions similar to their target environments, have driven the development of innovative methods for collecting and processing in-field crop data. New approaches for field-based high-throughput phenotyping (HTP), including image analysis, three-dimensional (3D) technologies, and methods based on artificial intelligence, have attracted considerable interest and are improving field operations and optimizing the use of resources (Costa et al., 2019). However, in-field phenotyping methods have traditionally been developed ad hoc for specific crops, and generalizing these methods and workflows to other species or field layouts may require significant effort (Fiorani and Schurr, 2013). Current field-based HTP solutions aim to close this gap by improving system modularity through the integration of different remote sensing technologies, which will enable complex traits to be assessed directly in the field in a more flexible manner.
Objectives
The main goal of this work was to fully automate the phenotyping process under field conditions using robotics. The specific objectives were:
PLATFORM DEVELOPMENT
Develop a crop phenotyping platform for autonomous in-field terrestrial laser scanning (TLS) crop surveying.
DATA PROCESSING
Develop a data processing pipeline to register and process point cloud data automatically.
EVALUATION
Conduct field experiments to assess the performance of the proposed methodology.
Results
The proposed system is capable of producing high-quality point clouds of an entire field in which the morphological characteristics of individual plants are readily discernible.

Materials and Methods
System Integration
The autonomous phenotyping robot was developed around a modular design with three basic subsystems. The control module managed the global operation of the platform; it consisted of a Husky platform running the Robot Operating System (ROS) as the master. The plant phenotyping module collected 3D LiDAR data from the crop; it consisted of a high-resolution FARO laser scanner and a Jetson TX2 as the interface controller, was integrated over the CAN protocol, and was conceived as a ROS slave to allow additional sensors to be integrated easily. Finally, the navigation module provided pose information and guided the platform during the mission. It consisted of a dual GNSS antenna system that provided the robot's position with centimeter-level accuracy using Real-Time Kinematic (RTK) corrections and the robot's orientation using GNSS compassing.
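As a concrete illustration, the following minimal rospy sketch shows one way the phenotyping module could be exposed to the ROS master. The node, topic, and service names are hypothetical, and the scanner acquisition call is a placeholder, since the vendor-specific CAN interface to the FARO scanner is not described here.

#!/usr/bin/env python
# Minimal sketch of the phenotyping module as a ROS node (names assumed).
import rospy
from std_srvs.srv import Trigger, TriggerResponse
from geometry_msgs.msg import PoseStamped

class ScanNode:
    def __init__(self):
        self.last_pose = None
        # Pose published by the dual-GNSS navigation module (assumed topic).
        rospy.Subscriber("/nav/pose", PoseStamped, self.pose_cb)
        # Service the control module calls at each stop-and-go waypoint.
        rospy.Service("/phenotyping/scan", Trigger, self.scan_cb)

    def pose_cb(self, msg):
        # RTK position plus GNSS-compassing heading.
        self.last_pose = msg

    def scan_cb(self, req):
        if self.last_pose is None:
            return TriggerResponse(success=False, message="no pose yet")
        # Placeholder for the vendor-specific FARO acquisition call; the
        # pose is stored with the scan for the later pre-alignment step.
        rospy.loginfo("scanning at %s", self.last_pose.pose.position)
        return TriggerResponse(success=True, message="scan complete")

if __name__ == "__main__":
    rospy.init_node("phenotyping_module")
    ScanNode()
    rospy.spin()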

Point Cloud Processing

The system was deployed to autonomously navigate a cotton breeding field and collect LiDAR data following a stop-and-go approach. Ten TLS scan locations were selected around the field. The pose recorded at each scan location was used to roughly pre-align the individual point clouds into a common coordinate frame, and a cloud-to-cloud registration was then applied to finely co-register them into a single 3D point cloud. This point cloud was further processed to remove noisy points introduced during data collection by ranging errors or wind. A digital elevation model (DEM) was then generated from the lowest points of the point cloud to model the ground surface of the field. The DEM served as the reference for normalizing the heights of the remaining objects in the scene, which was done by subtracting it from the denoised point cloud. To extract individual plots, a height threshold was applied to the normalized point cloud to isolate non-ground points, and a connected-components algorithm was used to segment the plots. A sketch of this pipeline is given below.
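For concreteness, the following sketch outlines the pipeline using Open3D and NumPy. The file names, grid resolution, ICP correspondence distance, and height threshold are illustrative assumptions, and DBSCAN clustering stands in for the connected-components step; this is not the exact implementation used in this work.

import numpy as np
import open3d as o3d

def pose_to_matrix(x, y, z, yaw):
    # 4x4 rigid transform from an RTK position and GNSS-compassing
    # heading (radians); assumes a locally level field.
    T = np.eye(4)
    c, s = np.cos(yaw), np.sin(yaw)
    T[:3, :3] = [[c, -s, 0], [s, c, 0], [0, 0, 1]]
    T[:3, 3] = [x, y, z]
    return T

# Hypothetical inputs: one scan per stop plus the pose recorded there.
scan_files_and_poses = [
    ("scan_00.ply", (0.0, 0.0, 0.0, 0.0)),   # (path, (x, y, z, yaw))
    ("scan_01.ply", (5.0, 0.0, 0.0, 0.1)),
]

# 1. Rough pre-alignment of each scan into the common field frame.
scans = []
for path, pose in scan_files_and_poses:
    pcd = o3d.io.read_point_cloud(path)
    pcd.transform(pose_to_matrix(*pose))
    scans.append(pcd)

# 2. Fine cloud-to-cloud registration: refine each scan against the
#    growing merged cloud with point-to-point ICP.
merged = scans[0]
for pcd in scans[1:]:
    result = o3d.pipelines.registration.registration_icp(
        pcd, merged,
        max_correspondence_distance=0.05,    # 5 cm, assumed
        init=np.eye(4),
        estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint())
    merged += pcd.transform(result.transformation)

# 3. Denoise: drop sparse points left by ranging errors or wind.
merged, _ = merged.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)

# 4. DEM from the lowest points: grid the field, keep the minimum z per cell.
pts = np.asarray(merged.points)
cell = 0.10                                  # 10 cm grid, assumed
keys = [tuple(k) for k in np.floor(pts[:, :2] / cell).astype(int)]
dem = {}
for key, z in zip(keys, pts[:, 2]):
    if dem.get(key, np.inf) > z:
        dem[key] = z

# 5. Height normalization: subtract the ground elevation under each point.
heights = pts[:, 2] - np.array([dem[key] for key in keys])

# 6. Plot segmentation: threshold out the ground, cluster what remains.
canopy = merged.select_by_index(np.where(heights > 0.05)[0])  # 5 cm, assumed
labels = np.array(canopy.cluster_dbscan(eps=0.15, min_points=30))
print("%d plots segmented" % (labels.max() + 1))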
Conclusions
Autonomous robotic systems can efficiently collect high-quality LiDAR data in the field without human intervention. The accurate and stable pose estimates provided by the navigation system during the scanning operation allowed us to register the individual point clouds into a common frame with errors comparable to those obtained using artificial targets. By reducing the labor and time needed to plan TLS missions, this system can greatly improve the efficiency of LiDAR-based field phenotyping. The highly dense point clouds allowed us to reconstruct 3D crop models with high spatial resolution and quality, potentially enabling the estimation of morphological traits at the plot and plant levels.