Modern agriculture faces tremendous challenges in sustainably and productively feeding nearly ten billion people by 2050. Addressing these challenges requires deeper knowledge of genotype-by-environment interactions (G×E), and the application of that knowledge to breeding programs that cultivate new crop genotypes suited to various production purposes and environments. Both efforts rely heavily on field-based high-throughput phenotyping (FB-HTP). As engineers, we integrate techniques such as computer vision, robotics, and machine learning to develop state-of-the-art solutions for non-destructive, accurate, and rapid phenotyping of various crops under field conditions. Our lab developed the GPhenoVision system in 2016, and the paper presenting the system received an award at the ASABE Annual International Meeting in 2017.
Awards
2017, Best paper award from the Information Technology, Sensors & Control Systems (ITSC) division of the American Society of Agricultural and Biological Engineers (ASABE).
Publications
2026
Petti, Daniel; Li, Changying; Liu, Ninghao
Contrastive multi-view representation learning for multi-camera plant phenotyping: A cotton field study Journal Article
In: Plant Phenomics, vol. 8, no. 2, pp. 100193, 2026, ISSN: 2643-6515.
@article{Petti2026,
title = {Contrastive multi-view representation learning for multi-camera plant phenotyping: A cotton field study},
author = {Daniel Petti and Changying Li and Ninghao Liu},
url = {https://www.sciencedirect.com/science/article/pii/S2643651526000300},
doi = {10.1016/j.plaphe.2026.100193},
issn = {2643-6515},
year = {2026},
date = {2026-01-01},
journal = {Plant Phenomics},
volume = {8},
number = {2},
pages = {100193},
abstract = {Attempts to deploy computer vision in agricultural tasks often suffer from a shortage of annotated data. One strategy to alleviate the impact of limited data is Self-Supervised Learning (SSL), which involves pre-training a model on a pretext task that utilizes automatically generated annotations. The primary objective of this study is to leverage a multi-camera view dataset of cotton boll images for contrastive learning in order to enable phenotyping tasks with minimal data annotation. This dataset was collected in the field using six camera views. The efficacy of two contrastive learning frameworks (SimCLR and MoCo) in producing representations when positive examples originate from different cameras was investigated, and a comprehensive study of how the camera positions affect performance was conducted. After self-supervised pre-training, linear evaluation and semi-supervised learning experiments were performed on boll detection and plot status downstream tasks. In general, using multiple camera views with SimCLR and MoCo improves cotton boll detection mean average precision by 14% compared to vanilla SimCLR and MoCo. Through careful investigation using synthetic data, it was determined that relative camera poses with an intermediate amount of overlap seem more likely to perform well. Neither MoCo nor SimCLR was consistently superior to the other in this context. The representations embed meaningful features about the cotton plants, such as overall boll density, but also less meaningful ones, such as lighting variations. This technique could potentially accelerate the development of phenotyping algorithms based on data collected from field robots.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
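The cross-camera contrastive objective described in this abstract can be illustrated with a minimal InfoNCE loss in which the positive pair is the same plant region seen from two different cameras. This is an illustrative sketch only; the function names and toy embeddings below are ours, not the paper's implementation, and the real frameworks (SimCLR, MoCo) operate on learned CNN embeddings with large batches or momentum queues.

```python
import math

def cosine(u, v):
    # Cosine similarity between two embedding vectors.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def multiview_info_nce(view_a, view_b, temperature=0.1):
    """InfoNCE over a batch: view_a[i] and view_b[i] are embeddings of
    the same plant region captured by two different cameras (the
    positive pair); every other item in view_b serves as a negative."""
    losses = []
    for i, anchor in enumerate(view_a):
        logits = [cosine(anchor, z) / temperature for z in view_b]
        log_denom = math.log(sum(math.exp(s) for s in logits))
        losses.append(log_denom - logits[i])  # -log softmax(positive)
    return sum(losses) / len(losses)
```

SimCLR draws its negatives from the current batch while MoCo keeps a queue of momentum-encoded negatives; both plug a loss of this shape into pre-training before linear evaluation or semi-supervised fine-tuning.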
2025
Adhikari, Jeevan; Petti, Daniel; Vitrakoti, Deepak; Ployaram, Wiriyanat; Li, Changying; Paterson, Andrew H.
Characterizing season-long floral trajectories in cotton with low-altitude remote sensing and deep learning Journal Article
In: PLANTS, PEOPLE, PLANET, vol. n/a, no. n/a, 2025.
@article{Adhikari2025,
title = {Characterizing season-long floral trajectories in cotton with low-altitude remote sensing and deep learning},
author = {Jeevan Adhikari and Daniel Petti and Deepak Vitrakoti and Wiriyanat Ployaram and Changying Li and Andrew H. Paterson},
url = {https://nph.onlinelibrary.wiley.com/doi/abs/10.1002/ppp3.10644},
doi = {10.1002/ppp3.10644},
year = {2025},
date = {2025-01-01},
journal = {PLANTS, PEOPLE, PLANET},
volume = {n/a},
number = {n/a},
abstract = {Societal Impact Statement Plant breeding is a critical tool for increasing the productivity, climate resilience, and sustainability of agriculture, but current phenotyping methods are a bottleneck due to the amount of human labor involved. Here, we demonstrate high-throughput phenotyping with an unmanned aerial vehicle (UAV) to analyze the season-long flowering pattern in cotton, subsequently mapping relevant genetic factors underpinning the trait. Season-long flowering is a complex trait, with implications for adaptation of perennials to specific environments. We believe our approach can improve the speed and efficacy of breeding for a variety of woody perennials. Summary Many perennial plants make important contributions to agroeconomies and agroecosystems but have complex architecture and/or long flowering duration that hinders measurement and selection. Iteratively tracking productivity over a long flowering/fruiting season may permit the identification of genetic factors conferring different reproductive strategies that might be successful in different environments, ranging from rapid early maturation that avoids stresses, to late maturation that utilizes the full seasonal duration to maximize productivity. In cotton, a perennial plant that is generally cultivated as an annual crop, we apply aerial imagery and deep learning methods to novel and stable genetic stocks, identifying genetic factors influencing the duration and rate of fruiting. Our phenotyping method was able to identify 24 QTLs that affect flowering behavior in cotton. A total of five of these corresponded to previously identified QTLs from other studies. While these factors may have different relationships with crop productivity and quality in different environments, their determination adds potentially important information to breeding decisions. 
With transfer learning of the deep learning models, this approach could be applied widely, potentially improving gains from selection in diverse perennial shrubs and trees essential to sustainable agricultural intensification.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Li, Zhengkun; Xu, Rui; Brown, Nino; Tillman, Barry L.; Li, Changying
Plot-scale peanut yield estimation using a phenotyping robot and transformer-based image analysis Journal Article
In: Smart Agricultural Technology, vol. 12, pp. 101154, 2025, ISSN: 2772-3755.
@article{LI2025101154,
title = {Plot-scale peanut yield estimation using a phenotyping robot and transformer-based image analysis},
author = {Zhengkun Li and Rui Xu and Nino Brown and Barry L. Tillman and Changying Li},
url = {https://www.sciencedirect.com/science/article/pii/S2772375525003867},
doi = {10.1016/j.atech.2025.101154},
issn = {2772-3755},
year = {2025},
date = {2025-01-01},
journal = {Smart Agricultural Technology},
volume = {12},
pages = {101154},
abstract = {Peanuts rank as the seventh-largest crop in the United States with a farm gate value exceeding $1 billion. Conventional peanut yield estimation methods involve digging, harvesting, transporting, and weighing, which are labor-intensive and inefficient for large-scale research operations. This inefficiency is particularly pronounced in peanut breeding, which requires precise pod yield estimations of each plot in order to compare genetic potential for yield to select new, high-performing breeding lines. To improve efficiency and throughput for accelerating genetic improvement, we proposed an automated robotic imaging system to predict peanut yields in the field after digging and inversion of plots. A workflow was developed to estimate yield accurately across different genotypes by counting the pods from stitched plot-scale images. After the robotic scanning in the field, the sequential images of each peanut plot were stitched together using the Local Feature Transformer (LoFTR)-based feature matching and estimated translation between adjusted images, which avoided replicated pod counting in overlapped image regions. Additionally, the Real-Time Detection Transformer (RT-DETR) was customized for pod detection by integrating partial convolution into a lightweight ResNet-18 backbone and refining the up-sampling and down-sampling modules in cross-scale feature fusion. The customized detector achieved a mean Average Precision (mAP50) of 89.3% and a mAP95 of 55.0%, improving by 3.3% and 5.9% over the original RT-DETR model with lighter weights and less computation. To determine the number of pods within the stitched plot-scale image, a sliding window-based method was used to divide it into smaller patches to improve the accuracy of pod detection. 
In a case study of a total of 68 plots across 19 genotypes in a peanut breeding yield trial, the result presented a correlation (R2=0.47) between the yield and predicted pod count, better than the structure-from-motion (SfM) method. The yield ranking among different genotypes using image prediction achieved an average consistency of 84.8% with manual measurement. When the yield difference between two genotypes exceeded 12%, the consistency surpassed 90%. Overall, our robotic plot-scale peanut yield estimation workflow showed promise to replace the human measurement process, reducing the time and labor required for yield determination and improving the efficiency of peanut breeding.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
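The sliding-window step mentioned in the abstract above (dividing the stitched plot-scale image into smaller patches before running pod detection) can be sketched as follows. The window and stride sizes are illustrative placeholders, not values from the paper.

```python
def sliding_windows(width, height, win=640, stride=512):
    """Return (x, y) top-left corners of win x win patches tiling a
    stitched plot image (assumed at least win pixels per side). The
    final row/column is clamped to the image edge so every pixel is
    covered at least once; the detector then runs per patch."""
    def starts(extent):
        s = list(range(0, max(extent - win, 0) + 1, stride))
        if s[-1] + win < extent:  # clamp a last window to the edge
            s.append(extent - win)
        return s
    return [(x, y) for y in starts(height) for x in starts(width)]
```

Detections from overlapping patches would still need de-duplication (e.g., non-maximum suppression in stitched-image coordinates) before counting.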
Petti, Daniel; Li, Changying; Chee, Peng
Real-Time Multi-View Flower Counting With a Ground Mobile Robot Journal Article
In: Journal of Field Robotics, vol. 42, no. 8, pp. 1-27, 2025.
@article{Petti2025a,
title = {Real-Time Multi-View Flower Counting With a Ground Mobile Robot},
author = {Daniel Petti and Changying Li and Peng Chee},
url = {https://onlinelibrary.wiley.com/doi/abs/10.1002/rob.70093},
doi = {10.1002/rob.70093},
year = {2025},
date = {2025-01-01},
journal = {Journal of Field Robotics},
volume = {42},
number = {8},
pages = {1-27},
abstract = {ABSTRACT Although season-long cotton flowering time characterization has value to breeders and growers, a manual data collection process is too laborious to be practical in most cases. In recent years, several fully automated flower counting approaches have been proposed. However, such approaches are typically designed to run offline and require a significant amount of computation. Furthermore, little thought has gone into developing convenient interfaces and integrations so that a layperson can use such systems without extensive training. The goal of this study is to develop a flower tracking system that is deployable on a ground robot and can operate in real time. A previous GCNNMatch++ approach was modified to increase the inference speed. Additionally, data from multiple cameras were fused to avoid canopy occlusions, and three-dimensional flower locations were extracted by integrating GPS data from the robot. It is shown that the approach significantly outperforms UAV-based counting and single-camera counting while running at above 40 FPS on an edge device, achieving a counting error of 15. Overall, it is believed that the highly integrated, automated, and simplified flower counting solution makes significant strides toward a practical commercial cotton phenotyping platform.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
2024
Petti, Daniel; Zhu, Ronghang; Li, Sheng; Li, Changying
Graph Neural Networks for lightweight plant organ tracking Journal Article
In: Computers and Electronics in Agriculture, vol. 225, pp. 109294, 2024, ISSN: 0168-1699.
@article{PETTI2024109294,
title = {Graph Neural Networks for lightweight plant organ tracking},
author = {Daniel Petti and Ronghang Zhu and Sheng Li and Changying Li},
url = {https://www.sciencedirect.com/science/article/pii/S0168169924006859},
doi = {10.1016/j.compag.2024.109294},
issn = {0168-1699},
year = {2024},
date = {2024-01-01},
journal = {Computers and Electronics in Agriculture},
volume = {225},
pages = {109294},
abstract = {Many specific problems within the domain of high throughput phenotyping require the accurate localization of plant organs. To track and count plant organs, we propose GCNNMatch++, a Graph Convolutional Neural Network (GCNN) that is capable of online tracking objects from videos. Based upon the GCNNMatch tracker with an improved CensNet GNN, our end-to-end tracking approach achieves fast inference. In order to adapt this approach to flower counting, we collected a large, high-quality dataset of cotton flower videos by leveraging our custom-built MARS-X robotic platform. Specifically, our system can count cotton flowers in the field with 80% accuracy, achieving a Higher-Order Tracking Accuracy (HOTA) of 51.09 and outperforming more generic tracking methods. Without any optimization (such as employing TensorRT), our association model runs in 44 ms on a central processing unit (CPU). On appropriate hardware, our model holds promise for achieving real-time counting performance when coupled with a fast detector. Overall, our approach is useful in counting cotton flowers and other relevant plant organs for both breeding programs and yield estimation.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Li, Zhengkun; Xu, Rui; Li, Changying; Tillman, Barry; Brown, Nino
Robotic Plot-scale Peanut Counting and Yield Estimation using LoFTR-based Image Stitching and Improved RT-DETR Proceedings Article
In: 2024 ASABE Annual International Meeting, pp. 1, ASABE, St. Joseph, MI, 2024.
@inproceedings{Li2024,
title = {Robotic Plot-scale Peanut Counting and Yield Estimation using LoFTR-based Image Stitching and Improved RT-DETR},
author = {Zhengkun Li and Rui Xu and Changying Li and Barry Tillman and Nino Brown},
url = {https://elibrary.asabe.org/abstract.asp?aid=54774&t=5},
doi = {10.13031/aim.202400615},
year = {2024},
date = {2024-01-01},
journal = {2024 ASABE Annual International Meeting},
pages = {1},
publisher = {ASABE},
address = {St. Joseph, MI},
series = {ASABE Paper No. 2400615},
abstract = {Peanuts, ranking as the seventh-largest crop in the United States with a farm value exceeding $1 billion, are pivotal to global food security. Conventional peanut yield estimation methods involve digging, harvesting, transporting, and weighing, which are labor-intensive and inefficient for large-scale operations. This inefficiency is particularly pronounced in peanut breeding, which requires precise yield estimation of each plot's pods for genotype comparison and selection. We proposed an automated approach utilizing a robotic system equipped with machine vision to predict peanut yields post-digging and inverting. This system leverages a mobile robot with an imaging system that captures sequential images of peanut plots, each representing a different genotype, utilizing spatial geographic information. A robust hierarchical strategy was introduced for plot-scale image stitching, employing a Local Feature Transformer (LoFTR)-based feature matching algorithm. Additionally, the Real-Time Detection Transformer (RT-DETR) was customized for pod detection by integrating partial convolution into a lightweight ResNet-18 backbone and refining the upsampling and downsampling modules in Cross-scale Feature Fusion. Our methods were validated in two breeding fields, where the LoFTR-based stitching achieved approximately three times denser and more uniform feature matching than the conventional Scale-Invariant Feature Transform (SIFT) approach. The customized peanut pod detector demonstrated a mean Average Precision (mAP50) of 89.3% and an mAP95 of 55.0% with lighter weights and less computation, improving by 3.3% and 5.9%, respectively, over the original RT-DETR model. Finally, we deployed the detector on the stitched plot-scale images and calculated the pod count for predicting the yield. Achieving a Mean Absolute Percentage Error (MAPE) of 9% and an R-square of 0.47, our approach outperforms the mainstream Structure from Motion (SfM) based methods. 
This innovative approach significantly reduces the time and labor required for yield determination, thereby advancing the efficiency of peanut breeding operations in complex, dynamic outdoor environments.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
Petti, Daniel J.; Li, Changying
Active Learning for Real-Time Flower Counting with a Ground Mobile Robot Proceedings Article
In: 2024 ASABE Annual International Meeting, pp. 1, ASABE, St. Joseph, MI, 2024.
@inproceedings{Petti2024,
title = {Active Learning for Real-Time Flower Counting with a Ground Mobile Robot},
author = {Daniel J. Petti and Changying Li},
url = {https://elibrary.asabe.org/abstract.asp?aid=54773&t=5},
doi = {10.13031/aim.202400607},
year = {2024},
date = {2024-01-01},
journal = {2024 ASABE Annual International Meeting},
pages = {1},
publisher = {ASABE},
address = {St. Joseph, MI},
series = {ASABE Paper No. 2400607},
abstract = {Modern computer vision has made great strides in object recognition and counting, which have slowly filtered into the agricultural domain. Cotton flower counting is a good example of this. Though season-long flower counts have value to plant breeders, a manual data collection process is too laborious to be practical in most cases. In recent years, several fully automated flower counting approaches have been proposed. However, such approaches are typically designed to run offline and require a significant amount of computation. Furthermore, little thought has gone into developing convenient interfaces and integrations so that a layperson can use such systems without extensive training. The goal of this study is twofold: First, we use self-supervised representations to build a strong, black-box active learning framework. We then adopt this framework in order to build a lightweight flower tracking model that is deployable on a ground robot and can operate in real-time. Second, by using camera and GPS data from the robot, we extract flower locations automatically. We show that our approach can achieve <10% MAPE in flower counts while running in real-time on an Nvidia Jetson Xavier AGX. Overall, we believe that our highly integrated, automated, and simplified flower counting solution makes significant strides towards a practical commercial cotton phenotyping platform.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
2023
Lu, Guoyu; Li, Sheng; Mai, Gengchen; Sun, Jin; Zhu, Dajiang; Chai, Lilong; Sun, Haijian; Wang, Xianqiao; Dai, Haixing; Liu, Ninghao; Xu, Rui; Petti, Daniel; Li, Changying; Liu, Tianming
AGI for Agriculture Journal Article
In: arXiv preprint arXiv:2304.06136, 2023.
@article{lu2023agi,
title = {AGI for Agriculture},
author = {Guoyu Lu and Sheng Li and Gengchen Mai and Jin Sun and Dajiang Zhu and Lilong Chai and Haijian Sun and Xianqiao Wang and Haixing Dai and Ninghao Liu and Rui Xu and Daniel Petti and Changying Li and Tianming Liu},
url = {https://arxiv.org/abs/2304.06136},
year = {2023},
date = {2023-04-12},
urldate = {2023-01-01},
abstract = {Artificial General Intelligence (AGI) is poised to revolutionize a variety of sectors, including healthcare, finance, transportation, and education. Within healthcare, AGI is being utilized to analyze clinical medical notes, recognize patterns in patient data, and aid in patient management. Agriculture is another critical sector that impacts the lives of individuals worldwide. It serves as a foundation for providing food, fiber, and fuel, yet faces several challenges, such as climate change, soil degradation, water scarcity, and food security. AGI has the potential to tackle these issues by enhancing crop yields, reducing waste, and promoting sustainable farming practices. It can also help farmers make informed decisions by leveraging real-time data, leading to more efficient and effective farm management. This paper delves into the potential future applications of AGI in agriculture, such as agriculture image processing, natural language processing (NLP), robotics, knowledge graphs, and infrastructure, and their impact on precision livestock and precision crops. By leveraging the power of AGI, these emerging technologies can provide farmers with actionable insights, allowing for optimized decision-making and increased productivity. The transformative potential of AGI in agriculture is vast, and this paper aims to highlight its potential to revolutionize the industry. },
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Saeed, Farah; Sun, Shangpeng; Rodriguez-Sanchez, Javier; Snider, John; Liu, Tianming; Li, Changying
Cotton plant part 3D segmentation and architectural trait extraction using point voxel convolutional neural networks Journal Article
In: Plant Methods, vol. 19, no. 1, pp. 33, 2023, ISSN: 1746-4811.
@article{Saeed2023,
title = {Cotton plant part 3D segmentation and architectural trait extraction using point voxel convolutional neural networks},
author = {Farah Saeed and Shangpeng Sun and Javier Rodriguez-Sanchez and John Snider and Tianming Liu and Changying Li},
url = {https://doi.org/10.1186/s13007-023-00996-1},
doi = {10.1186/s13007-023-00996-1},
issn = {1746-4811},
year = {2023},
date = {2023-01-01},
urldate = {2023-01-01},
journal = {Plant Methods},
volume = {19},
number = {1},
pages = {33},
abstract = {Plant architecture can influence crop yield and quality. Manual extraction of architectural traits is, however, time-consuming, tedious, and error prone. The trait estimation from 3D data addresses occlusion issues with the availability of depth information while deep learning approaches enable learning features without manual design. The goal of this study was to develop a data processing workflow by leveraging 3D deep learning models and a novel 3D data annotation tool to segment cotton plant parts and derive important architectural traits.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
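As background on the point-voxel models used in this paper: their voxel branch begins by rasterizing the point cloud onto a coarse grid, aggregating point features per voxel, convolving on the grid, and scattering results back to the points. A minimal sketch of that voxel-assignment step is shown below; the function and voxel size are illustrative placeholders, not the paper's code.

```python
from collections import defaultdict

def voxelize(points, voxel_size=0.05):
    """Group 3D points (x, y, z) by the index of the voxel each falls
    into. Point-voxel architectures aggregate point features per voxel
    like this before running coarse 3D convolutions on the grid."""
    grid = defaultdict(list)
    for p in points:
        key = tuple(int(c // voxel_size) for c in p)  # floor to grid index
        grid[key].append(p)
    return dict(grid)
```

The complementary point branch keeps per-point features at full resolution, which is what preserves the fine geometry needed to separate bolls, branches, and main stems.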
Herr, Andrew W.; Adak, Alper; Carroll, Matthew E.; Elango, Dinakaran; Kar, Soumyashree; Li, Changying; Jones, Sarah E.; Carter, Arron H.; Murray, Seth C.; Paterson, Andrew; Sankaran, Sindhuja; Singh, Arti; Singh, Asheesh K.
Unoccupied aerial systems imagery for phenotyping in cotton, maize, soybean, and wheat breeding Journal Article
In: Crop Science, vol. 63, no. 4, pp. 1722-1749, 2023.
@article{Herr2023,
title = {Unoccupied aerial systems imagery for phenotyping in cotton, maize, soybean, and wheat breeding},
author = {Andrew W. Herr and Alper Adak and Matthew E. Carroll and Dinakaran Elango and Soumyashree Kar and Changying Li and Sarah E. Jones and Arron H. Carter and Seth C. Murray and Andrew Paterson and Sindhuja Sankaran and Arti Singh and Asheesh K. Singh},
url = {https://acsess.onlinelibrary.wiley.com/doi/abs/10.1002/csc2.21028},
doi = {10.1002/csc2.21028},
year = {2023},
date = {2023-01-01},
urldate = {2023-01-01},
journal = {Crop Science},
volume = {63},
number = {4},
pages = {1722-1749},
abstract = {Abstract High-throughput phenotyping (HTP) with unoccupied aerial systems (UAS), consisting of unoccupied aerial vehicles (UAV; or drones) and sensor(s), is an increasingly promising tool for plant breeders and researchers. Enthusiasm and opportunities from this technology for plant breeding are similar to the emergence of genomic tools ∼30 years ago, and genomic selection more recently. Unlike genomic tools, HTP provides a variety of strategies in implementation and utilization that generate big data on the dynamic nature of plant growth formed by temporal interactions between growth and environment. This review lays out strategies deployed across four major staple crop species: cotton (Gossypium hirsutum L.), maize (Zea mays L.), soybean (Glycine max L.), and wheat (Triticum aestivum L.). Each crop highlighted in this review demonstrates how UAS-collected data are employed to automate and improve estimation or prediction of objective phenotypic traits. Each crop section includes four major topics: (a) phenotyping of routine traits, (b) phenotyping of previously infeasible traits, (c) sample cases of UAS application in breeding, and (d) implementation of phenotypic and phenomic prediction and selection. While phenotyping of routine agronomic and productivity traits brings advantages in time and resource optimization, the most potentially beneficial application of UAS data is in collecting traits that were previously difficult or impossible to quantify, improving selection efficiency of important phenotypes. In brief, UAS sensor technology can be used for measuring abiotic stress, biotic stress, crop growth and development, as well as productivity. These applications and the potential implementation of machine learning strategies allow for improved prediction, selection, and efficiency within breeding programs, making UAS HTP a potentially indispensable asset.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Tan, Chenjiao; Li, Changying; He, Dongjian; Song, Huaibo
Anchor-free deep convolutional neural network for tracking and counting cotton seedlings and flowers Journal Article
In: Computers and Electronics in Agriculture, vol. 215, pp. 108359, 2023, ISSN: 0168-1699.
@article{Tan2023a,
title = {Anchor-free deep convolutional neural network for tracking and counting cotton seedlings and flowers},
author = {Chenjiao Tan and Changying Li and Dongjian He and Huaibo Song},
url = {https://www.sciencedirect.com/science/article/pii/S0168169923007470},
doi = {10.1016/j.compag.2023.108359},
issn = {0168-1699},
year = {2023},
date = {2023-01-01},
urldate = {2023-01-01},
journal = {Computers and Electronics in Agriculture},
volume = {215},
pages = {108359},
abstract = {Accurate counting of plants and their organs in natural environments is essential for breeders and growers. For breeders, counting plants during the seedling stage aids in selecting genotypes with superior emergence rates, while for growers, it informs decisions about potential replanting. Meanwhile, counting specific plant organs, such as flowers, forecasts yields for different genotypes, offering insights into production levels. The overall goal of this study was to investigate a deep convolutional neural network-based tracking method, CenterTrack, for cotton seedling and flower counting from video frames. The network is extended from a customized CenterNet, which is an anchor-free object detector. CenterTrack predicts the detections of the current frame and displacements of detections between the previous frame and the current frame, which are used to associate the same object in consecutive frames. The modified CenterNet detector achieved high accuracy on both seedling and flower datasets with an overall AP50 of 0.962. The video tracking hyperparameters were optimized for each dataset using orthogonal tests. Experimental results showed that seedling and flower counts with optimized hyperparameters highly correlated with those of manual counts (R2 = 0.98 and R2 = 0.95) and the mean relative errors of 75 cotton seedling testing videos and 50 flower testing videos were 5.5 % and 10.8 %, respectively. An average counting speed of 20.4 frames per second was achieved with an input resolution of 1920 × 1080 pixels for both seedling and flower videos. The anchor-free deep convolution neural network-based tracking method provides automatic tracking and counting in video frames, which will significantly benefit plant breeding and crop management.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
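The association mechanism the abstract describes for CenterTrack (per-detection displacements predicted between consecutive frames) can be sketched as a greedy nearest-neighbor matcher. This is an illustrative simplification under our own assumptions; the names and the distance threshold below are hypothetical, and the actual tracker operates on center heatmaps rather than coordinate lists.

```python
def associate(prev_centers, curr_centers, displacements, max_dist=50.0):
    """CenterTrack-style greedy association: shift each current center
    back by its predicted frame-to-frame displacement, then claim the
    nearest unmatched previous-frame center within max_dist. Returns a
    list of (curr_index, prev_index) pairs; unmatched current
    detections would start new tracks (new seedlings or flowers)."""
    matches, claimed = [], set()
    for i, ((cx, cy), (dx, dy)) in enumerate(zip(curr_centers, displacements)):
        px, py = cx - dx, cy - dy  # projected position in previous frame
        best, best_d = None, max_dist
        for j, (qx, qy) in enumerate(prev_centers):
            if j in claimed:
                continue
            d = ((px - qx) ** 2 + (py - qy) ** 2) ** 0.5
            if d < best_d:
                best, best_d = j, d
        if best is not None:
            claimed.add(best)
            matches.append((i, best))
    return matches
```

Counting then reduces to counting distinct track identities over the video rather than detections per frame, which avoids double-counting the same organ.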
Petti, Daniel J.; Li, Changying
Contrastive Learning from Multiple Cameras for Accurate Plant Organ Counting Proceedings Article
In: 2023 ASABE Annual International Meeting, pp. 1, ASABE, St. Joseph, MI, 2023.
@inproceedings{Petti2023,
title = {Contrastive Learning from Multiple Cameras for Accurate Plant Organ Counting},
author = {Daniel J. Petti and Changying Li},
url = {https://elibrary.asabe.org/abstract.asp?aid=54198&t=5},
doi = {10.13031/aim.202300954},
year = {2023},
date = {2023-01-01},
journal = {2023 ASABE Annual International Meeting},
pages = {1},
publisher = {ASABE},
address = {St. Joseph, MI},
series = {ASABE Paper No. 2300954},
abstract = {Attempts to deploy computer vision in agricultural tasks often suffer from a shortage of annotated data. One strategy to alleviate the impact of limited data is self-supervised learning, which involves pre-training a model on a pretext task that utilizes automatically generated annotations. A ground robot was used to gather a large, multi-view video dataset of cotton plants across several growing seasons. The primary objective of this study is to develop a contrastive learning framework that leverages this dataset to learn useful representations. Specifically, we adopt the SimCLR framework, and investigate the potential benefits of incorporating data from multiple cameras, including whether SimCLR can produce better representations when positive examples originate from different cameras. To evaluate our method, we employ the learned representations to perform linear regression on the number of flowers in various test images, and find that it can reduce counting MAE to 1.09 and 1.64 on our cotton flower datasets (compared to 1.59 and 1.89 respectively with fully-supervised pretraining). In summary, self-supervised learning has the potential to significantly expedite the progress of agricultural computer vision by decreasing the demand for laborious annotations.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
2022
Xu, Rui; Li, Changying
A review of field-based high-throughput phenotyping systems: focusing on ground robots Journal Article
In: Plant Phenomics, vol. 2022, no. Article ID 9760269, pp. 20, 2022.
@article{Xu2022review,
title = {A review of field-based high-throughput phenotyping systems: focusing on ground robots},
author = {Rui Xu and Changying Li},
url = {https://spj.sciencemag.org/journals/plantphenomics/2022/9760269/},
doi = {10.34133/2022/9760269},
year = {2022},
date = {2022-06-18},
urldate = {2022-06-18},
journal = {Plant Phenomics},
volume = {2022},
number = {Article ID 9760269},
pages = {20},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Rodriguez-Sanchez, Javier; Li, Changying; Paterson, Andrew
Cotton yield estimation from aerial imagery using machine learning approaches Journal Article
In: Frontiers in Plant Science, vol. 13, 2022.
@article{RodriguezSanchez2022,
title = {Cotton yield estimation from aerial imagery using machine learning approaches},
author = {Javier Rodriguez-Sanchez and Changying Li and Andrew Paterson},
url = {https://www.frontiersin.org/articles/10.3389/fpls.2022.870181/full},
year = {2022},
date = {2022-04-01},
urldate = {2022-04-01},
journal = {Frontiers in Plant Science},
volume = {13},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Petti, Daniel; Li, Changying
Weakly-supervised learning to automatically count cotton flowers from aerial imagery Journal Article
In: Computers and Electronics in Agriculture, vol. 194, pp. 106734, 2022, ISSN: 0168-1699.
@article{Petti2022,
title = {Weakly-supervised learning to automatically count cotton flowers from aerial imagery},
author = {Daniel Petti and Changying Li},
url = {https://www.sciencedirect.com/science/article/pii/S0168169922000515},
doi = {10.1016/j.compag.2022.106734},
issn = {0168-1699},
year = {2022},
date = {2022-01-01},
urldate = {2022-01-01},
journal = {Computers and Electronics in Agriculture},
volume = {194},
pages = {106734},
abstract = {Counting plant flowers is a common task with applications for estimating crop yields and selecting favorable genotypes. Typically, this requires a laborious manual process, rendering it impractical to obtain accurate flower counts throughout the growing season. The model proposed in this study uses weak supervision, based on Convolutional Neural Networks (CNNs), which automates such a counting task for cotton flowers using imagery collected from an unmanned aerial vehicle (UAV). Furthermore, the model is trained using Multiple Instance Learning (MIL) in order to reduce the required amount of annotated data. MIL is a binary classification task in which any image with at least one flower falls into the positive class, and all others are negative. In the process, a novel loss function was developed that is designed to improve the performance of image-processing models that use MIL. The model is trained on a large dataset of cotton plant imagery which was collected over several years and will be made publicly available. Additionally, an active-learning-based approach is employed in order to generate the annotations for the dataset while minimizing the required amount of human intervention. Despite having minimal supervision, the model still demonstrates good performance on the testing dataset. Multiple models were tested with different numbers of parameters and input sizes, achieving a minimum average absolute count error of 2.43. Overall, this study demonstrates that a weakly-supervised model is a promising method for solving the flower counting problem while minimizing the human labeling effort.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
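The weak-supervision setup described in this abstract treats each image as a bag that is positive if it contains at least one flower. One common way to realize such Multiple Instance Learning (this is an illustrative noisy-OR sketch, not the loss function proposed in the paper) is to aggregate per-patch flower probabilities into a bag probability and apply binary cross-entropy:

```python
import numpy as np

def bag_probability(patch_probs):
    """Noisy-OR aggregation: a bag (image) is positive if at least one
    patch contains a flower. patch_probs are per-patch flower probabilities."""
    patch_probs = np.asarray(patch_probs, dtype=float)
    return 1.0 - np.prod(1.0 - patch_probs)

def mil_loss(patch_probs, bag_label, eps=1e-7):
    """Binary cross-entropy on the aggregated bag-level probability."""
    p = np.clip(bag_probability(patch_probs), eps, 1.0 - eps)
    return -(bag_label * np.log(p) + (1 - bag_label) * np.log(1.0 - p))
```

Under this aggregation, a single confident patch is enough to mark the whole image positive, matching the binary MIL labeling described above.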
Xu, Rui; Li, Changying
A modular agricultural robotic system (MARS) for precision farming: Concept and implementation Journal Article
In: Journal of Field Robotics, vol. 39, no. 4, pp. 387-409, 2022.
@article{Xu2022,
title = {A modular agricultural robotic system (MARS) for precision farming: Concept and implementation},
author = {Rui Xu and Changying Li},
url = {https://onlinelibrary.wiley.com/doi/abs/10.1002/rob.22056},
doi = {10.1002/rob.22056},
year = {2022},
date = {2022-01-01},
journal = {Journal of Field Robotics},
volume = {39},
number = {4},
pages = {387-409},
abstract = {Increasing global population, climate change, and shortage of labor pose significant challenges for meeting the global food and fiber demand, and agricultural robots offer a promising solution to these challenges. This paper presents a new robotic system architecture and the resulting modular agricultural robotic system (MARS) that is an autonomous, multi-purpose, and affordable robotic platform for in-field plant high throughput phenotyping and precision farming. There are five essential hardware modules (wheel module, connection module, robot controller, robot frame, and power module) and three optional hardware modules (actuation module, sensing module, and smart attachment). Various combinations of the hardware modules can create different robot configurations for specific agricultural tasks. The software was designed using the Robot Operating System (ROS) with three modules: control module, navigation module, and vision module. A robot localization method using dual Global Navigation Satellite System antennas was developed. Two line-following algorithms were implemented as the local planner for the ROS navigation stack. Based on the MARS design concept, two MARS designs were implemented: a low-cost, lightweight robotic system named MARS mini and a heavy-duty robot named MARS X. The autonomous navigation of both MARS X and mini was evaluated at different traveling speeds and payload levels, confirming satisfactory performances. The MARS X was further tested for its performance and navigation accuracy in a crop field, achieving a high accuracy over a 537 m long path with only 15% of the path having an error larger than 0.05 m. The MARS mini and MARS X were shown to be useful for plant phenotyping in two field tests. The modular design makes the robots easily adaptable to different agricultural tasks and the low-cost feature makes it affordable for researchers and growers.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
2021
Sun, Shangpeng; Li, Changying; Chee, Peng W.; Paterson, Andrew H.; Meng, Cheng; Zhang, Jingyi; Ma, Ping; Robertson, Jon S.; Adhikari, Jeevan
High resolution 3D terrestrial LiDAR for cotton plant main stalk and node detection Journal Article
In: Computers and Electronics in Agriculture, vol. 187, pp. 106276, 2021, ISSN: 0168-1699.
@article{SUN2021106276,
title = {High resolution 3D terrestrial LiDAR for cotton plant main stalk and node detection},
author = {Shangpeng Sun and Changying Li and Peng W. Chee and Andrew H. Paterson and Cheng Meng and Jingyi Zhang and Ping Ma and Jon S. Robertson and Jeevan Adhikari},
url = {https://www.sciencedirect.com/science/article/pii/S0168169921002933},
doi = {10.1016/j.compag.2021.106276},
issn = {0168-1699},
year = {2021},
date = {2021-01-01},
urldate = {2021-01-01},
journal = {Computers and Electronics in Agriculture},
volume = {187},
pages = {106276},
abstract = {Dense three-dimensional point clouds provide opportunities to retrieve detailed characteristics of plant organ-level phenotypic traits, which are helpful to better understand plant architecture leading to its improvements via new plant breeding approaches. In this study, a high-resolution terrestrial LiDAR was used to acquire point clouds of plants under field conditions, and a data processing pipeline was developed to detect plant main stalks and nodes, and then to extract two phenotypic traits including node number and main stalk length. The proposed method mainly consisted of three steps: first, extract skeletons from original point clouds using a Laplacian-based contraction algorithm; second, identify the main stalk by converting a plant skeleton point cloud to a graph; and third, detect nodes by finding the intersection between the main stalk and branches. Main stalk length was calculated by accumulating the distance between two adjacent points from the lowest to the highest point of the main stalk. Experimental results based on 26 plants showed that the proposed method could accurately measure plant main stalk length and detect nodes; the average R2 and mean absolute percentage error were 0.94 and 4.3% for the main stalk length measurements and 0.7 and 5.1% for node counting, respectively, for point numbers between 80,000 and 150,000 for each plant. Three-dimensional point cloud-based high throughput phenotyping may expedite breeding technologies to improve crop production.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
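The main stalk length computation described in this abstract, accumulating the distance between adjacent points from the lowest to the highest point of the main stalk, reduces to a polyline length over the ordered stalk points. A minimal sketch (assuming the main stalk points have already been isolated by the skeletonization pipeline):

```python
import numpy as np

def main_stalk_length(stalk_points):
    """Sum Euclidean distances between consecutive 3D points along the
    main stalk, ordered from lowest to highest point (polyline length)."""
    pts = np.asarray(stalk_points, dtype=float)
    # Order by height (z) before accumulating, as in the described pipeline.
    pts = pts[np.argsort(pts[:, 2])]
    segments = np.diff(pts, axis=0)
    return float(np.sum(np.linalg.norm(segments, axis=1)))
```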
Xu, Rui; Li, Changying; Bernardes, Sergio
Development and Testing of a UAV-Based Multi-Sensor System for Plant Phenotyping and Precision Agriculture Journal Article
In: Remote Sensing, vol. 13, no. 17, 2021, ISSN: 2072-4292.
@article{Xu2021,
title = {Development and Testing of a UAV-Based Multi-Sensor System for Plant Phenotyping and Precision Agriculture},
author = {Rui Xu and Changying Li and Sergio Bernardes},
url = {https://www.mdpi.com/2072-4292/13/17/3517},
doi = {10.3390/rs13173517},
issn = {2072-4292},
year = {2021},
date = {2021-01-01},
urldate = {2021-01-01},
journal = {Remote Sensing},
volume = {13},
number = {17},
abstract = {Unmanned aerial vehicles have been used widely in plant phenotyping and precision agriculture. Several critical challenges remain, however, such as the lack of cross-platform data acquisition software system, sensor calibration protocols, and data processing methods. This paper developed an unmanned aerial system that integrates three cameras (RGB, multispectral, and thermal) and a LiDAR sensor. Data acquisition software supporting data recording and visualization was implemented to run on the Robot Operating System. The design of the multi-sensor unmanned aerial system was open sourced. A data processing pipeline was proposed to preprocess the raw data and to extract phenotypic traits at the plot level, including morphological traits (canopy height, canopy cover, and canopy volume), canopy vegetation index, and canopy temperature. Protocols for both field and laboratory calibrations were developed for the RGB, multispectral, and thermal cameras. The system was validated using ground data collected in a cotton field. Temperatures derived from thermal images had a mean absolute error of 1.02 °C, and canopy NDVI had a mean relative error of 6.6% compared to ground measurements. The observed error for maximum canopy height was 0.1 m. The results show that the system can be useful for plant breeding and precision crop management.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
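The canopy NDVI validation reported above relies on the standard index computed from the multispectral camera's red and near-infrared bands. A minimal sketch of the plot-level computation (the canopy-mask step is an assumption for illustration, not the paper's calibrated pipeline):

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """NDVI = (NIR - Red) / (NIR + Red), computed per pixel."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + eps)

def plot_mean_ndvi(nir, red, canopy_mask):
    """Mean NDVI over the canopy pixels of a plot (mask excludes soil)."""
    values = ndvi(nir, red)[np.asarray(canopy_mask, dtype=bool)]
    return float(values.mean())
```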
Petti, Daniel J.; Li, Changying
Graph Neural Networks for Plant Organ Tracking Proceedings Article
In: 2021 ASABE Annual International Virtual Meeting, pp. 1, ASABE, St. Joseph, MI, 2021.
@inproceedings{Petti2021,
title = {Graph Neural Networks for Plant Organ Tracking},
author = {Daniel J. Petti and Changying Li},
url = {https://elibrary.asabe.org/abstract.asp?aid=52526&t=5},
doi = {10.13031/aim.202100843},
year = {2021},
date = {2021-01-01},
booktitle = {2021 ASABE Annual International Virtual Meeting},
pages = {1},
publisher = {ASABE},
address = {St. Joseph, MI},
series = {ASABE Paper No. 2100843},
abstract = {Much progress has been made over the past decade on the problem of multi-object tracking. Many recent techniques leverage Convolutional Neural Networks (CNNs) and are focused on the domain of autonomous driving or people-tracking. In contrast, we concern ourselves with how these recent advances can be adapted to the domain of High-Throughput Phenotyping (HTP). HTP leverages automated sensing capabilities in order to speed up the process of measuring plant phenotypic traits to advance breeding programs. Many specific problems within the domain of HTP require the accurate localization of plant organs, as well as the tracking of the organs over time. Mobile robotic platforms (both ground and air) are typically equipped with localization sensors as well as RGB cameras. We propose a Graph Convolutional Neural Network (GCNN) that is capable of extracting and fusing features from RGB cameras over multiple frames, as well as using these features in a Graph Neural Network to solve the tracking association problem. Our end-to-end tracking approach requires minimal hyperparameters and is easier to train than older approaches that separate affinity computation and track association into two steps. Specifically, we demonstrate our system‘s ability to detect and track individual cotton blossoms in the field, which will be important for both breeding programs and yield estimation.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
2018
Sun, S.; Li, C.; Paterson, A. H.; Jiang, Y.; Xu, R.; Robertson, J.; Snider, J.; Chee, P.
In-field high throughput phenotyping and cotton plant growth analysis using LiDAR Journal Article
In: Frontiers in Plant Science, vol. 9, pp. 16, 2018.
@article{Sun2018,
title = {In-field high throughput phenotyping and cotton plant growth analysis using LiDAR},
author = {S. Sun and C. Li and A.H. Paterson and Y. Jiang and R. Xu and J. Robertson and J. Snider and P. Chee},
url = {http://sensinglab.engr.uga.edu//srv/htdocs/wp-content/uploads/2019/11/In-Field-High-Throughput-Phenotyping-of-Cotton-Plant-Height-Using-LiDAR.pdf},
doi = {10.3389/fpls.2018.00016},
year = {2018},
date = {2018-01-30},
urldate = {2018-01-30},
journal = {Frontiers in Plant Science},
volume = {9},
pages = {16},
abstract = {A LiDAR-based high-throughput phenotyping (HTP) system was developed for cotton plant phenotyping in the field. The HTP system consists of a 2D LiDAR and an RTK-GPS mounted on a high clearance tractor. The LiDAR scanned three rows of cotton plots simultaneously from the top and the RTK-GPS was used to provide the spatial coordinates of the point cloud during data collection. Configuration parameters of the system were optimized to ensure the best data quality. A height profile for each plot was extracted from the dense three dimensional point clouds; then the maximum height and height distribution of each plot were derived. In lab tests, single plants were scanned by LiDAR using 0.5° angular resolution and results showed an R2 value of 1.00 (RMSE = 3.46 mm) in comparison to manual measurements. In field tests using the same angular resolution, the LiDAR-based HTP system achieved average R2 values of 0.98 (RMSE = 65 mm) for cotton plot height estimation, compared to manual measurements. This HTP system is particularly useful for large field application because it provides highly accurate measurements, and the efficiency is greatly improved compared to similar studies using the side view scan.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
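The plot height estimate described in this LiDAR work comes from the height profile of each plot's point cloud. A minimal sketch of the idea (ground-plane removal is assumed already done; the percentile option is an illustrative robustness choice, not the paper's exact method):

```python
import numpy as np

def plot_max_height(points_z, ground_level=0.0, percentile=100.0):
    """Maximum plant height of a plot from LiDAR point heights.
    A high percentile (e.g. 99) can be used instead of the true maximum
    to reduce sensitivity to outlier returns."""
    z = np.asarray(points_z, dtype=float) - ground_level
    return float(np.percentile(z, percentile))
```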
2017
Xu, R.; Li, C.; Paterson, A. H.; Jiang, Y.; Sun, S.; Robertson, J.
Aerial Images and Convolutional Neural Network for Cotton Bloom Detection Journal Article
In: Frontiers in Plant Science, vol. 8, pp. 2235, 2017.
@article{Xu2018,
title = {Aerial Images and Convolutional Neural Network for Cotton Bloom Detection},
author = {R. Xu and C. Li and A.H. Paterson and Y. Jiang and S. Sun and J. Robertson},
url = {http://sensinglab.engr.uga.edu//srv/htdocs/wp-content/uploads/2019/11/Aerial-Images-and-Convolutional-Neural-Network-for-Cotton-Bloom-Detection.pdf},
doi = {10.3389/fpls.2017.02235},
year = {2017},
date = {2017-12-19},
urldate = {2017-12-19},
journal = {Frontiers in Plant Science},
volume = {8},
pages = {2235},
abstract = {Monitoring flower development can provide useful information for production management, estimating yield and selecting specific genotypes of crops. The main goal of this study was to develop a methodology to detect and count cotton flowers, or blooms, using color images acquired by an unmanned aerial system. The aerial images were collected from two test fields in 4 days. A convolutional neural network (CNN) was designed and trained to detect cotton blooms in raw images, and their 3D locations were calculated using the dense point cloud constructed from the aerial images with the structure from motion method. The quality of the dense point cloud was analyzed and plots with poor quality were excluded from data analysis. A constrained clustering algorithm was developed to register the same bloom detected from different images based on the 3D location of the bloom. The accuracy and incompleteness of the dense point cloud were analyzed because they affected the accuracy of the 3D location of the blooms and thus the accuracy of the bloom registration result. The constrained clustering algorithm was validated using simulated data, showing good efficiency and accuracy. The bloom count from the proposed method was comparable with the number counted manually with an error of −4 to 3 blooms for the field with a single plant per plot. However, more plots were underestimated in the field with multiple plants per plot due to hidden blooms that were not captured by the aerial images. The proposed methodology provides a high-throughput method to continuously monitor the flowering progress of cotton.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Jiang, Y.; Li, C.; Paterson, A. H.; Sun, S.; Xu, R.; Robertson, J.
Quantitative Analysis of Cotton Canopy Size in Field Conditions Using a Consumer-Grade RGB-D Camera Journal Article
In: Frontiers in Plant Science, vol. 8, pp. 2233, 2017.
@article{Jiang2018,
title = {Quantitative Analysis of Cotton Canopy Size in Field Conditions Using a Consumer-Grade RGB-D Camera},
author = {Y. Jiang and C. Li and A.H. Paterson and S. Sun and R. Xu and J. Robertson},
url = {http://sensinglab.engr.uga.edu//srv/htdocs/wp-content/uploads/2019/11/Quantitative-Analysis-of-Cotton-Canopy-Size-in-Field-Conditions-Using-a-Consumer-Grade-RGB-D-Camera.pdf},
doi = {10.3389/fpls.2017.02233},
year = {2017},
date = {2017-12-19},
urldate = {2017-12-19},
journal = {Frontiers in Plant Science},
volume = {8},
pages = {2233},
abstract = {Plant canopy structure can strongly affect crop functions such as yield and stress tolerance, and canopy size is an important aspect of canopy structure. Manual assessment of canopy size is laborious and imprecise, and cannot measure multi-dimensional traits such as projected leaf area and canopy volume. Field-based high throughput phenotyping systems with imaging capabilities can rapidly acquire data about plants in field conditions, making it possible to quantify and monitor plant canopy development. The goal of this study was to develop a 3D imaging approach to quantitatively analyze cotton canopy development in field conditions. A cotton field was planted with 128 plots, including four genotypes of 32 plots each. The field was scanned by GPhenoVision (a customized field-based high throughput phenotyping system) to acquire color and depth images with GPS information in 2016 covering two growth stages: canopy development, and flowering and boll development. A data processing pipeline was developed, consisting of three steps: plot point cloud reconstruction, plant canopy segmentation, and trait extraction. Plot point clouds were reconstructed using color and depth images with GPS information. In colorized point clouds, vegetation was segmented from the background using an excess-green (ExG) color filter, and cotton canopies were further separated from weeds based on height, size, and position information. Static morphological traits were extracted on each day, including univariate traits (maximum and mean canopy height and width, projected canopy area, and concave and convex volumes) and a multivariate trait (cumulative height profile). Growth rates were calculated for univariate static traits, quantifying canopy growth and development. Linear regressions were performed between the traits and fiber yield to identify the best traits and measurement time for yield prediction.
The results showed that fiber yield was correlated with static traits after the canopy development stage (R2 = 0.35–0.71) and growth rates in early canopy development stages (R2 = 0.29–0.52). Multi-dimensional traits (e.g., projected canopy area and volume) outperformed one-dimensional traits, and the multivariate trait (cumulative height profile) outperformed univariate traits. The proposed approach would be useful for identification of quantitative trait loci (QTLs) controlling canopy size in genetics/genomics studies or for fiber yield prediction in breeding programs and production environments.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
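The excess-green (ExG) color filter used in the segmentation step of this pipeline is a standard chromaticity index, ExG = 2g − r − b on normalized RGB channels. A minimal sketch (the threshold value here is an illustrative assumption, not the paper's tuned value):

```python
import numpy as np

def excess_green(rgb):
    """ExG = 2g - r - b on chromaticity-normalized RGB channels."""
    rgb = np.asarray(rgb, dtype=float)
    total = rgb.sum(axis=-1, keepdims=True)
    total[total == 0] = 1.0  # avoid division by zero on black pixels
    r, g, b = np.moveaxis(rgb / total, -1, 0)
    return 2.0 * g - r - b

def vegetation_mask(rgb, threshold=0.1):
    """Binary vegetation mask: ExG above a (hypothetical) threshold."""
    return excess_green(rgb) > threshold
```

Green-dominated pixels score well above zero while gray soil pixels score near zero, which is why a small positive threshold separates vegetation from background.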
Jiang, Y.; Li, C.; Robertson, J. S.; Sun, S.; Xu, R.; Paterson, A. H.
GPhenoVision: A Ground Mobile System with Multi-modal Imaging for Field-Based High Throughput Phenotyping of Cotton Journal Article
In: Scientific Reports, vol. 8, no. 1, pp. 1213, 2017.
@article{Jiang2018b,
title = {GPhenoVision: A Ground Mobile System with Multi-modal Imaging for Field-Based High Throughput Phenotyping of Cotton},
author = {Y. Jiang and C. Li and J. S. Robertson and S. Sun and R. Xu and A.H. Paterson},
url = {http://sensinglab.engr.uga.edu//srv/htdocs/wp-content/uploads/2019/11/GPhenoVision-A-Ground-Mobile-System-with-Multi-modal-Imaging-for-Field-Based-High-Throughput-Phenotyping-of-Cotton.pdf},
doi = {10.1038/s41598-018-19142-2},
year = {2017},
date = {2017-11-30},
urldate = {2017-11-30},
journal = {Scientific Reports},
volume = {8},
number = {1},
pages = {1213},
abstract = {Imaging sensors can extend phenotyping capability, but they require a system to handle high-volume data. The overall goal of this study was to develop and evaluate a field-based high throughput phenotyping system accommodating high-resolution imagers. The system consisted of a high-clearance tractor and sensing and electrical systems. The sensing system was based on a distributed structure, integrating environmental sensors, real-time kinematic GPS, and multiple imaging sensors including RGB-D, thermal, and hyperspectral cameras. Custom software was developed with a multilayered architecture for system control and data collection. The system was evaluated by scanning a cotton field with 23 genotypes for quantification of canopy growth and development. A data processing pipeline was developed to extract phenotypes at the canopy level, including height, width, projected leaf area, and volume from RGB-D data and temperature from thermal images. Growth rates of morphological traits were accordingly calculated. The traits had strong correlations (r = 0.54–0.74) with fiber yield and good broad sense heritability (H2 = 0.27–0.72), suggesting the potential for conducting quantitative genetic analysis and contributing to yield prediction models. The developed system is a useful tool for a wide range of breeding/genetic, agronomic/physiological, and economic studies.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Patrick, A.; Li, C.
High Throughput Phenotyping of Blueberry Bush Morphological Traits Using Unmanned Aerial Systems Journal Article
In: Remote Sensing, vol. 9, no. 12, pp. 1250, 2017.
@article{Patrick2017,
title = {High Throughput Phenotyping of Blueberry Bush Morphological Traits Using Unmanned Aerial Systems},
author = {A. Patrick and C. Li},
url = {http://sensinglab.engr.uga.edu//srv/htdocs/wp-content/uploads/2019/11/High-Throughput-Phenotyping-of-Blueberry-Bush-Morphological-Traits-Using-Unmanned-Aerial-Systems.pdf},
doi = {10.3390/rs9121250},
year = {2017},
date = {2017-11-30},
urldate = {2017-11-30},
journal = {Remote Sensing},
volume = {9},
number = {12},
pages = {1250},
abstract = {Phenotyping morphological traits of blueberry bushes in the field is important for selecting genotypes that are easily harvested by mechanical harvesters. Morphological data can also be used to assess the effects of crop treatments such as plant growth regulators, fertilizers, and environmental conditions. This paper investigates the feasibility and accuracy of an inexpensive unmanned aerial system in determining the morphological characteristics of blueberry bushes. Color images collected by a quadcopter are processed into three-dimensional point clouds via structure from motion algorithms. Bush height, extents, canopy area, and volume, in addition to crown diameter and width, are derived and referenced to ground truth. In an experimental farm, twenty-five bushes were imaged by a quadcopter. Height and width dimensions achieved a mean absolute error of 9.85 cm before and 5.82 cm after systematic under-estimation correction. Strong correlation was found between manual and image derived bush volumes and their traditional growth indices. Hedgerows of three Southern Highbush varieties were imaged at a commercial farm to extract five morphological features (base angle, blockiness, crown percent height, crown ratio, and vegetation ratio) associated with cultivation and machine harvestability. The bushes were found to be partially separable by multivariate analysis. The methodology developed from this study is not only valuable for plant breeders to screen genotypes with bush morphological traits that are suitable for machine harvest, but can also aid producers in crop management such as pruning and plot layout organization.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Sun, S.; Li, C.; Paterson, A. H.
In-Field High-Throughput Phenotyping of Cotton Plant Height Using LiDAR Journal Article
In: Remote Sensing, vol. 9, no. 4, pp. 377, 2017.
@article{Sun2017,
title = {In-Field High-Throughput Phenotyping of Cotton Plant Height Using LiDAR},
author = {S. Sun and C. Li and A.H. Paterson},
url = {http://sensinglab.engr.uga.edu//srv/htdocs/wp-content/uploads/2019/11/In-Field-High-Throughput-Phenotyping-of-Cotton-Plant-Height-Using-LiDAR-1.pdf},
doi = {10.3390/rs9040377},
year = {2017},
date = {2017-04-13},
urldate = {2017-04-13},
journal = {Remote Sensing},
volume = {9},
number = {4},
pages = {377},
abstract = {A LiDAR-based high-throughput phenotyping (HTP) system was developed for cotton plant phenotyping in the field. The HTP system consists of a 2D LiDAR and an RTK-GPS mounted on a high clearance tractor. The LiDAR scanned three rows of cotton plots simultaneously from the top and the RTK-GPS was used to provide the spatial coordinates of the point cloud during data collection. Configuration parameters of the system were optimized to ensure the best data quality. A height profile for each plot was extracted from the dense three dimensional point clouds; then the maximum height and height distribution of each plot were derived. In lab tests, single plants were scanned by LiDAR using 0.5° angular resolution and results showed an R2 value of 1.00 (RMSE = 3.46 mm) in comparison to manual measurements. In field tests using the same angular resolution, the LiDAR-based HTP system achieved average R2 values of 0.98 (RMSE = 65 mm) for cotton plot height estimation, compared to manual measurements. This HTP system is particularly useful for large field application because it provides highly accurate measurements, and the efficiency is greatly improved compared to similar studies using the side view scan.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
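The per-plot height extraction this abstract describes (ground-referenced point cloud → maximum height and height distribution) can be sketched in a few lines. This is a minimal illustration on synthetic data, not the authors' pipeline; the function name, the fixed ground elevation, and the percentile cutoff are assumptions:

```python
import numpy as np

def plot_height_stats(points, ground_z=0.0, percentile=99):
    """Estimate plot height metrics from a georeferenced LiDAR point cloud.

    points : (N, 3) array of x, y, z coordinates in meters.
    ground_z : assumed ground elevation for the plot (a real pipeline
               would estimate this from the point cloud itself).
    percentile : high percentile used as a robust 'maximum' height.
    """
    heights = points[:, 2] - ground_z          # height above ground
    heights = heights[heights > 0]             # drop ground-level returns
    max_height = np.percentile(heights, percentile)
    hist, edges = np.histogram(heights, bins=20)  # height distribution
    return max_height, hist, edges

# Synthetic plot: 1000 returns clustered around a 1.0 m canopy
rng = np.random.default_rng(0)
pts = np.column_stack([rng.uniform(0, 2, 1000),
                       rng.uniform(0, 5, 1000),
                       rng.normal(1.0, 0.05, 1000)])
h_max, hist, _ = plot_height_stats(pts)
```

Using a high percentile rather than the literal maximum is a common way to keep a single stray LiDAR return from inflating the plot height.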
Patrick, A.; Pelham, S.; Culbreath, A.; Holbrook, C.; Godoy, I. J. d.; Li, C.
High Throughput Phenotyping of Tomato Spot Wilt Disease in Peanuts Using Unmanned Aerial Systems and Multispectral Imaging Journal Article
In: IEEE Instrumentation & Measurement Magazine, vol. 20, no. 3, pp. 4-12, 2017.
@article{Patrick2017b,
title = {High Throughput Phenotyping of Tomato Spot Wilt Disease in Peanuts Using Unmanned Aerial Systems and Multispectral Imaging},
author = {A. Patrick and S. Pelham and A. Culbreath and C. Holbrook and I.J.d. Godoy and C. Li},
url = {http://sensinglab.engr.uga.edu//srv/htdocs/wp-content/uploads/2019/11/High-Throughput-Phenotyping-of-Tomato-Spot-Wilt-Disease-in-Peanuts-Using-Unmanned-Aerial-Systems-and-Multispectral-Imaging.pdf},
doi = {10.1109/MIM.2017.7951684},
year = {2017},
date = {2017-02-08},
urldate = {2017-02-08},
journal = {IEEE Instrumentation & Measurement Magazine},
volume = {20},
number = {3},
pages = {4-12},
abstract = {The amount of visible and near-infrared light reflected by plants varies depending on their health. In this study, multispectral images were acquired by a quadcopter for high throughput phenotyping of tomato spot wilt disease resistance among twenty genotypes of peanuts. The plants were visually assessed to acquire ground truth ratings of disease incidence. Multispectral images were processed into several vegetation indices. The vegetation index image of each plot has a unique distribution of pixel intensities. The percentage and number of pixels above and below varying thresholds were extracted. These features were correlated with manually acquired data to develop a model for assessing the percentage of each plot diseased. Ultimately, the best vegetation indices and pixel distribution feature for disease detection were determined and correlated with manual ratings and yield. The relative resistance of each genotype was then compared. Image-based disease ratings effectively ranked genotype resistance as early as 93 days from seeding.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
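The feature extraction described above, computing a vegetation index per plot and taking the fraction of pixels beyond a threshold, might be sketched as follows. NDVI, the particular threshold values, and the function name are illustrative assumptions; the paper evaluates several indices and thresholds:

```python
import numpy as np

def ndvi_threshold_features(nir, red, thresholds=(0.3, 0.5, 0.7)):
    """Per-plot features from a vegetation index image.

    nir, red : 2D reflectance arrays for one plot (same shape).
    Returns, for each threshold, the fraction of pixels whose NDVI
    falls below it -- a proxy for the diseased portion of the plot.
    """
    ndvi = (nir - red) / (nir + red + 1e-9)    # avoid divide-by-zero
    return {t: float(np.mean(ndvi < t)) for t in thresholds}

# Toy plot: top half healthy (high NDVI), bottom half stressed (low NDVI)
nir = np.array([[0.8, 0.8], [0.3, 0.3]])
red = np.array([[0.1, 0.1], [0.25, 0.25]])
feats = ndvi_threshold_features(nir, red)
```

Features like these can then be regressed against manual disease ratings to pick the index/threshold pair that best predicts plot-level disease incidence.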
2016
Jiang, Y.; Li, C.; Paterson, A. H.
High-throughput phenotyping of cotton plant height using depth images under field conditions Journal Article
In: Computers and Electronics in Agriculture, vol. 130, pp. 57-68, 2016.
@article{Jiang2016b,
title = {High-throughput phenotyping of cotton plant height using depth images under field conditions},
author = {Y. Jiang and C. Li and A.H. Paterson},
url = {http://sensinglab.engr.uga.edu//srv/htdocs/wp-content/uploads/2019/11/High-throughput-phenotyping-of-cotton-plant-height-using-depth-images-under-field-conditions-.pdf},
doi = {10.1016/j.compag.2016.09.017},
year = {2016},
date = {2016-09-26},
urldate = {2016-09-26},
journal = {Computers and Electronics in Agriculture},
volume = {130},
pages = {57-68},
abstract = {Plant height is an important phenotypic trait that can be used not only as an indicator of overall plant growth but also as a parameter to calculate advanced traits such as biomass and yield. Currently, cotton plant height is primarily measured manually, which is laborious and has become a bottleneck for cotton research and breeding programs. The goal of this research was to develop and evaluate a high throughput phenotyping (HTP) system using depth images for measuring cotton plant height under field conditions. For this purpose, a Kinect-v2 camera was evaluated in a static configuration to obtain a performance baseline and in a dynamic configuration to measure plant height in the field. In the static configuration, the camera was mounted on a partially covered wooden frame and oriented towards nadir to acquire depth images of potted cotton plants. Regions of interest of plants were manually selected in the depth images to calculate plant height. In the dynamic configuration, the Kinect-v2 camera was installed inside a partially covered metal frame that was attached to a high-clearance tractor equipped with real-time kinematic (RTK) GPS. A six-step algorithm was developed to measure the maximum and average heights of individual plots by using the depth images acquired by the system. System performance was evaluated on 108 plots of cotton plants. Results showed that the Kinect-v2 camera could acquire valid depth images of cotton plants under field conditions when a shaded environment was provided. The plot maximum and average heights calculated by the proposed algorithm were strongly correlated (adjusted R2 = 0.922–0.987) with those measured manually, with accuracies of over 92%. The average processing time was 0.01 s to calculate the heights of a plot that typically has 34 depth images, indicating that the proposed algorithm was computationally efficient.
Therefore, these results confirmed the ability of the HTP system with depth images to measure cotton plant height under field conditions accurately and rapidly. Furthermore, the imaging-based system has great potential for measuring more complicated geometric traits of plants, which can significantly advance field-based HTP system development in general.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
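The core nadir depth-to-height conversion underlying such a system can be sketched as below, assuming a known camera-to-ground distance. The paper's actual six-step algorithm also handles plot segmentation and GPS registration, which are omitted here, and the function name and thresholds are assumptions:

```python
import numpy as np

def plot_heights_from_depth(depth_frames, camera_height):
    """Plot maximum and average plant height from nadir depth images.

    depth_frames : list of 2D depth arrays (meters from camera to surface),
                   e.g. the ~34 Kinect-v2 frames covering one plot.
    camera_height : camera-to-ground distance in meters.
    Invalid Kinect pixels are encoded as 0 and are masked out.
    """
    heights = []
    for frame in depth_frames:
        valid = frame > 0                      # mask invalid (0) pixels
        h = camera_height - frame[valid]       # height above ground
        heights.append(h[h > 0.05])            # drop near-ground returns
    all_h = np.concatenate(heights)
    return float(all_h.max()), float(all_h.mean())

# Two toy frames from a camera mounted 2.0 m above ground
frames = [np.full((4, 4), 1.2), np.full((4, 4), 1.0)]
h_max, h_avg = plot_heights_from_depth(frames, camera_height=2.0)
```

Because the camera looks straight down, plant height is simply the camera height minus the measured depth, which is why a rigid mount and accurate ground referencing matter so much for this design.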