2025
Jiang, Lizhi; Li, Changying; Fu, Longsheng
Apple tree architectural trait phenotyping with organ-level instance segmentation from point cloud Journal Article
In: Computers and Electronics in Agriculture, vol. 229, pp. 109708, 2025, ISSN: 0168-1699.
@article{Jiang2025,
title = {Apple tree architectural trait phenotyping with organ-level instance segmentation from point cloud},
author = {Lizhi Jiang and Changying Li and Longsheng Fu},
url = {https://www.sciencedirect.com/science/article/pii/S0168169924010998},
doi = {10.1016/j.compag.2024.109708},
issn = {0168-1699},
year = {2025},
date = {2025-01-01},
journal = {Computers and Electronics in Agriculture},
volume = {229},
pages = {109708},
abstract = {Three-dimensional (3D) plant phenotyping techniques measure organ-level traits effectively and provide detailed plant growth information to breeders. In apple tree breeding, architectural traits can determine photosynthesis efficiency and characterize the developmental stages of trees. The overall goal of this study was to develop a deep learning-based organ-level instance segmentation method to quantify the 3D architectural traits of apple trees. This study utilized PointNeXt for the semantic segmentation of apple tree point clouds, classifying them into trunks and branches, and benchmarked its performance against several competitive models, including PointNet, PointNet++, and Point Transformer V2 (PTv2). A cylinder-based constraint method was introduced to refine the semantic segmentation results. Next, the branches were identified with the density-based spatial clustering of applications with noise (DBSCAN) algorithm. The type of 3D skeleton vertices determined whether a cluster represented a single branch or multiple branches. If multiple, a graph-based technique further separated them. This study also directly applied the instance segmentation model SoftGroup++ to the apple tree point clouds and analyzed the segmentation results on the apple tree dataset. Finally, seven architectural traits of apple trees were extracted, including height, volume, and crown width of the tree, as well as height and diameter for the trunk, and length and count for the branches. The experimental results showed that the post-processed mIoU values for PointNet, PointNet++, PTv2, and PointNeXt were 0.8495, 0.8535, 0.9500, and 0.9481, respectively. The final instance segmentation results based on SoftGroup++ and PointNeXt achieved mAP_50 of 0.815 and 0.842, respectively. For traits such as tree height, trunk length and diameter, branch length, and branch count, the method based on PointNeXt achieved R2 values of 0.987, 0.788, 0.877, 0.796, and 0.934, with mean absolute percentage errors of 0.86 %, 2.17 %, 5.93 %, 10.24 %, and 13.55 %, respectively. The segmentation results of PTv2 and SoftGroup++ were also used to extract the phenotypic traits of apple trees, achieving results comparable to those of PointNeXt. The proposed method demonstrates a cost-effective and accurate approach for extracting the architectural traits of apple trees, which will benefit apple breeding programs as well as the precision management of apple orchards.},
keywords = {3D segmentation, Apple tree, Plant phenotyping, Point Transformer V2, PointNeXt, SoftGroup++},
pubstate = {published},
tppubtype = {article}
}
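As an illustration of the branch-instancing step described in the entry above (clustering semantically labelled branch points before skeleton- and graph-based refinement), the following is a minimal Python sketch using DBSCAN. It is not the authors' code: the synthetic point coordinates and the eps/min_samples settings are illustrative assumptions, not values from the paper.

# Minimal sketch (not the authors' implementation): grouping points already
# classified as "branch" into candidate branch instances with DBSCAN.
import numpy as np
from sklearn.cluster import DBSCAN

# Hypothetical (N, 3) array standing in for the branch-labelled points
# produced by the semantic segmentation stage.
rng = np.random.default_rng(0)
branch_points = np.vstack([
    rng.normal(loc=(0.0, 0.0, 1.0), scale=0.02, size=(200, 3)),
    rng.normal(loc=(0.5, 0.1, 1.5), scale=0.02, size=(200, 3)),
])

# eps is the neighbourhood radius (here in metres); both parameters would
# need tuning to the scanner's point density in practice.
labels = DBSCAN(eps=0.05, min_samples=10).fit_predict(branch_points)

clusters = [branch_points[labels == k] for k in set(labels) if k != -1]
print(f"{len(clusters)} candidate branch clusters, "
      f"{int(np.sum(labels == -1))} noise points")

# Each cluster would then be checked against the 3D skeleton: clusters whose
# skeleton vertices indicate several branches are further split with the
# graph-based step mentioned in the abstract.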
Saeed, Farah; Tan, Chenjiao; Liu, Tianming; Li, Changying
3D neural architecture search to optimize segmentation of plant parts Journal Article
In: Smart Agricultural Technology, vol. 10, pp. 100776, 2025, ISSN: 2772-3755.
@article{Saeed2025,
title = {3D neural architecture search to optimize segmentation of plant parts},
author = {Farah Saeed and Chenjiao Tan and Tianming Liu and Changying Li},
url = {https://www.sciencedirect.com/science/article/pii/S2772375525000103},
doi = {10.1016/j.atech.2025.100776},
issn = {2772-3755},
year = {2025},
date = {2025-01-01},
journal = {Smart Agricultural Technology},
volume = {10},
pages = {100776},
abstract = {Accurately segmenting plant parts from imagery is vital for improving crop phenotypic traits. However, current 3D deep learning models for segmentation in point cloud data require specific network architectures that are usually manually designed, which is both tedious and suboptimal. To overcome this issue, a 3D neural architecture search (NAS) was performed in this study to optimize cotton plant part segmentation. The search space was designed using Point Voxel Convolution (PVConv) as the basic building block of the network. The NAS framework included a supernetwork with weight sharing and an evolutionary search to find optimal candidates, with three surrogate learners to predict mean IoU, latency, and memory footprint. The optimal candidate searched from the proposed method consisted of five PVConv layers with either 32 or 512 output channels, achieving mean IoU and accuracy of over 90 % and 96 %, respectively, and outperforming manually designed architectures. Additionally, the evolutionary search was updated to search for architectures satisfying memory and time constraints, with searched architectures achieving mean IoU and accuracy of >84 % and 94 %, respectively. Furthermore, a differentiable architecture search (DARTS) utilizing PVConv operation was implemented for comparison, and our method demonstrated better segmentation performance with a margin of >2 % and 1 % in mean IoU and accuracy, respectively. Overall, the proposed method can be applied to segment cotton plants with an accuracy over 94 %, while adjusting to available resource constraints.},
keywords = {3D Deep learning, 3D Neural architecture search, LiDAR, Plant part segmentation, Plant phenotyping},
pubstate = {published},
tppubtype = {article}
}
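To illustrate the evolutionary-search component described in the entry above, the sketch below optimises a toy search space of five-layer channel configurations (32 or 512 output channels per PVConv layer) using a stand-in surrogate score in place of full training. The function names, mutation rate, and population sizes are assumptions for illustration only, not the paper's implementation.

# Minimal sketch (not the authors' code) of surrogate-guided evolutionary
# architecture search over per-layer channel widths.
import random

CHANNEL_CHOICES = (32, 512)
NUM_LAYERS = 5

def surrogate_miou(arch):
    # Stand-in for a learned surrogate predictor of mean IoU; here it simply
    # rewards wider later layers so the loop has something to optimise.
    return sum(c * (i + 1) for i, c in enumerate(arch)) / (512 * 15)

def mutate(arch, rate=0.3):
    # Resample each layer's channel width with probability `rate`.
    return tuple(random.choice(CHANNEL_CHOICES) if random.random() < rate else c
                 for c in arch)

def evolutionary_search(generations=20, population=16, parents=4):
    pop = [tuple(random.choice(CHANNEL_CHOICES) for _ in range(NUM_LAYERS))
           for _ in range(population)]
    for _ in range(generations):
        # Keep the best-scoring parents and refill the population with mutants.
        ranked = sorted(pop, key=surrogate_miou, reverse=True)[:parents]
        pop = ranked + [mutate(random.choice(ranked))
                        for _ in range(population - parents)]
    return max(pop, key=surrogate_miou)

print("best candidate (per-layer channels):", evolutionary_search())

# A fuller pipeline, as the abstract describes, would train additional
# surrogates for latency and memory footprint and discard candidates that
# violate the stated resource constraints.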