Modern agriculture faces tremendous challenges in sustainability, productivity, and quality as it works to feed a population projected to reach nearly ten billion by 2050. To address these challenges, we need a deeper understanding of genotype-by-environment interactions (G×E) and must apply that knowledge in breeding programs that develop new crop genotypes suited to diverse production purposes and environments. These efforts rely heavily on field-based high-throughput phenotyping (FB-HTP). As engineers, we integrate techniques from computer vision, robotics, and machine learning to develop state-of-the-art solutions for non-destructive, accurate, and rapid phenotyping of various crops under field conditions. Our lab developed the GPhenoVision system in 2016, and the paper presenting the system received a best paper award at the 2017 ASABE Annual International Meeting.
Awards
2017, Best Paper Award from the Information Technology, Sensors & Control Systems (ITSC) division of the American Society of Agricultural and Biological Engineers (ASABE).
Publications
2023
Lu, Guoyu; Li, Sheng; Mai, Gengchen; Sun, Jin; Zhu, Dajiang; Chai, Lilong; Sun, Haijian; Wang, Xianqiao; Dai, Haixing; Liu, Ninghao; Xu, Rui; Petti, Daniel; Li, Changying; Liu, Tianming
AGI for Agriculture Journal Article
In: arXiv preprint arXiv:2304.06136, 2023.
@article{lu2023agi,
title = {AGI for Agriculture},
author = {Guoyu Lu and Sheng Li and Gengchen Mai and Jin Sun and Dajiang Zhu and Lilong Chai and Haijian Sun and Xianqiao Wang and Haixing Dai and Ninghao Liu and Rui Xu and Daniel Petti and Changying Li and Tianming Liu},
url = {https://arxiv.org/abs/2304.06136},
year = {2023},
date = {2023-04-12},
urldate = {2023-01-01},
abstract = {Artificial General Intelligence (AGI) is poised to revolutionize a variety of sectors, including healthcare, finance, transportation, and education. Within healthcare, AGI is being utilized to analyze clinical medical notes, recognize patterns in patient data, and aid in patient management. Agriculture is another critical sector that impacts the lives of individuals worldwide. It serves as a foundation for providing food, fiber, and fuel, yet faces several challenges, such as climate change, soil degradation, water scarcity, and food security. AGI has the potential to tackle these issues by enhancing crop yields, reducing waste, and promoting sustainable farming practices. It can also help farmers make informed decisions by leveraging real-time data, leading to more efficient and effective farm management. This paper delves into the potential future applications of AGI in agriculture, such as agriculture image processing, natural language processing (NLP), robotics, knowledge graphs, and infrastructure, and their impact on precision livestock and precision crops. By leveraging the power of AGI, these emerging technologies can provide farmers with actionable insights, allowing for optimized decision-making and increased productivity. The transformative potential of AGI in agriculture is vast, and this paper aims to highlight its potential to revolutionize the industry. },
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Saeed, Farah; Sun, Shangpeng; Rodriguez-Sanchez, Javier; Snider, John; Liu, Tianming; Li, Changying
Cotton plant part 3D segmentation and architectural trait extraction using point voxel convolutional neural networks Journal Article
In: Plant Methods, vol. 19, no. 1, pp. 33, 2023, ISSN: 1746-4811.
@article{Saeed2023,
title = {Cotton plant part 3D segmentation and architectural trait extraction using point voxel convolutional neural networks},
author = {Farah Saeed and Shangpeng Sun and Javier Rodriguez-Sanchez and John Snider and Tianming Liu and Changying Li},
url = {https://doi.org/10.1186/s13007-023-00996-1},
doi = {10.1186/s13007-023-00996-1},
issn = {1746-4811},
year = {2023},
date = {2023-01-01},
urldate = {2023-01-01},
journal = {Plant Methods},
volume = {19},
number = {1},
pages = {33},
abstract = {Plant architecture can influence crop yield and quality. Manual extraction of architectural traits is, however, time-consuming, tedious, and error prone. The trait estimation from 3D data addresses occlusion issues with the availability of depth information while deep learning approaches enable learning features without manual design. The goal of this study was to develop a data processing workflow by leveraging 3D deep learning models and a novel 3D data annotation tool to segment cotton plant parts and derive important architectural traits.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
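The point-voxel convolutional networks used above combine fine per-point features with coarse voxel-grid features. As a minimal sketch of the voxel side only, the following Python snippet assigns points to a regular grid and computes per-voxel centroids; the 2 cm voxel size and the function name are illustrative assumptions, not values from the paper.

import numpy as np

def voxelize(points, voxel_size=0.02):
    """Map an (N, 3) point cloud to voxel indices and per-voxel centroids."""
    # Shift so all coordinates are non-negative, then quantize to the grid.
    origin = points.min(axis=0)
    idx = np.floor((points - origin) / voxel_size).astype(np.int64)
    # Group points that fall into the same voxel.
    uniq, inverse = np.unique(idx, axis=0, return_inverse=True)
    inverse = inverse.ravel()
    counts = np.bincount(inverse)
    centroids = np.stack(
        [np.bincount(inverse, weights=points[:, d]) / counts for d in range(3)],
        axis=1,
    )
    return uniq, inverse, centroids

# Example: 1,000 synthetic points in a 1 m cube, 2 cm voxels.
pts = np.random.rand(1000, 3)
voxels, point2voxel, cent = voxelize(pts)
print(voxels.shape, cent.shape)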
Herr, Andrew W.; Adak, Alper; Carroll, Matthew E.; Elango, Dinakaran; Kar, Soumyashree; Li, Changying; Jones, Sarah E.; Carter, Arron H.; Murray, Seth C.; Paterson, Andrew; Sankaran, Sindhuja; Singh, Arti; Singh, Asheesh K.
Unoccupied aerial systems imagery for phenotyping in cotton, maize, soybean, and wheat breeding Journal Article
In: Crop Science, vol. 63, no. 4, pp. 1722-1749, 2023.
@article{Herr2023,
title = {Unoccupied aerial systems imagery for phenotyping in cotton, maize, soybean, and wheat breeding},
author = {Andrew W. Herr and Alper Adak and Matthew E. Carroll and Dinakaran Elango and Soumyashree Kar and Changying Li and Sarah E. Jones and Arron H. Carter and Seth C. Murray and Andrew Paterson and Sindhuja Sankaran and Arti Singh and Asheesh K. Singh},
url = {https://acsess.onlinelibrary.wiley.com/doi/abs/10.1002/csc2.21028},
doi = {10.1002/csc2.21028},
year = {2023},
date = {2023-01-01},
urldate = {2023-01-01},
journal = {Crop Science},
volume = {63},
number = {4},
pages = {1722-1749},
abstract = {High-throughput phenotyping (HTP) with unoccupied aerial systems (UAS), consisting of unoccupied aerial vehicles (UAV; or drones) and sensor(s), is an increasingly promising tool for plant breeders and researchers. Enthusiasm and opportunities from this technology for plant breeding are similar to the emergence of genomic tools ∼30 years ago, and genomic selection more recently. Unlike genomic tools, HTP provides a variety of strategies in implementation and utilization that generate big data on the dynamic nature of plant growth formed by temporal interactions between growth and environment. This review lays out strategies deployed across four major staple crop species: cotton (Gossypium hirsutum L.), maize (Zea mays L.), soybean (Glycine max L.), and wheat (Triticum aestivum L.). Each crop highlighted in this review demonstrates how UAS-collected data are employed to automate and improve estimation or prediction of objective phenotypic traits. Each crop section includes four major topics: (a) phenotyping of routine traits, (b) phenotyping of previously infeasible traits, (c) sample cases of UAS application in breeding, and (d) implementation of phenotypic and phenomic prediction and selection. While phenotyping of routine agronomic and productivity traits brings advantages in time and resource optimization, the most potentially beneficial application of UAS data is in collecting traits that were previously difficult or impossible to quantify, improving selection efficiency of important phenotypes. In brief, UAS sensor technology can be used for measuring abiotic stress, biotic stress, crop growth and development, as well as productivity. These applications and the potential implementation of machine learning strategies allow for improved prediction, selection, and efficiency within breeding programs, making UAS HTP a potentially indispensable asset.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Tan, Chenjiao; Li, Changying; He, Dongjian; Song, Huaibo
Anchor-free deep convolutional neural network for tracking and counting cotton seedlings and flowers Journal Article
In: Computers and Electronics in Agriculture, vol. 215, pp. 108359, 2023, ISSN: 0168-1699.
@article{Tan2023a,
title = {Anchor-free deep convolutional neural network for tracking and counting cotton seedlings and flowers},
author = {Chenjiao Tan and Changying Li and Dongjian He and Huaibo Song},
url = {https://www.sciencedirect.com/science/article/pii/S0168169923007470},
doi = {10.1016/j.compag.2023.108359},
issn = {0168-1699},
year = {2023},
date = {2023-01-01},
urldate = {2023-01-01},
journal = {Computers and Electronics in Agriculture},
volume = {215},
pages = {108359},
abstract = {Accurate counting of plants and their organs in natural environments is essential for breeders and growers. For breeders, counting plants during the seedling stage aids in selecting genotypes with superior emergence rates, while for growers, it informs decisions about potential replanting. Meanwhile, counting specific plant organs, such as flowers, forecasts yields for different genotypes, offering insights into production levels. The overall goal of this study was to investigate a deep convolutional neural network-based tracking method, CenterTrack, for cotton seedling and flower counting from video frames. The network is extended from a customized CenterNet, which is an anchor-free object detector. CenterTrack predicts the detections of the current frame and displacements of detections between the previous frame and the current frame, which are used to associate the same object in consecutive frames. The modified CenterNet detector achieved high accuracy on both seedling and flower datasets with an overall AP50 of 0.962. The video tracking hyperparameters were optimized for each dataset using orthogonal tests. Experimental results showed that seedling and flower counts with optimized hyperparameters highly correlated with those of manual counts (R2 = 0.98 and R2 = 0.95) and the mean relative errors of 75 cotton seedling testing videos and 50 flower testing videos were 5.5 % and 10.8 %, respectively. An average counting speed of 20.4 frames per second was achieved with an input resolution of 1920 × 1080 pixels for both seedling and flower videos. The anchor-free deep convolutional neural network-based tracking method provides automatic tracking and counting in video frames, which will significantly benefit plant breeding and crop management.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
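CenterTrack-style tracking, as described in the abstract above, predicts for each detection in the current frame a displacement back to its previous-frame position, so cross-frame association reduces to nearest-neighbor matching on the shifted centers. The snippet below is a minimal greedy version of that association with an assumed 50-pixel gate; it sketches the general idea, not the authors' implementation.

import numpy as np

def associate(prev_centers, curr_centers, displacements, max_dist=50.0):
    """Greedily match current detections to previous ones via predicted displacements."""
    matches, used = [], set()
    shifted = curr_centers - displacements  # projected previous-frame positions
    for i, c in enumerate(shifted):
        if len(prev_centers) == 0:
            break
        d = np.linalg.norm(prev_centers - c, axis=1)
        d[list(used)] = np.inf  # each previous detection may be matched once
        j = int(np.argmin(d))
        if d[j] < max_dist:  # assumed gate in pixels
            matches.append((j, i))
            used.add(j)
    return matches  # unmatched current detections would start new tracks

prev = np.array([[100.0, 200.0], [400.0, 220.0]])
curr = np.array([[112.0, 204.0], [395.0, 230.0], [50.0, 60.0]])
disp = np.array([[12.0, 4.0], [-5.0, 10.0], [0.0, 0.0]])
print(associate(prev, curr, disp))  # [(0, 0), (1, 1)]; the third starts a track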
2022
Xu, Rui; Li, Changying
A review of field-based high-throughput phenotyping systems: focusing on ground robots Journal Article
In: Plant Phenomics, vol. 2022, Article ID 9760269, 20 pages, 2022.
@article{Xu2022review,
title = {A review of field-based high-throughput phenotyping systems: focusing on ground robots},
author = {Rui Xu and Changying Li},
url = {https://spj.sciencemag.org/journals/plantphenomics/2022/9760269/},
doi = {10.34133/2022/9760269},
year = {2022},
date = {2022-06-18},
urldate = {2022-06-18},
journal = {Plant Phenomics},
volume = {2022},
number = {Article ID 9760269},
pages = {20},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Rodriguez-Sanchez, Javier; Li, Changying; Paterson, Andrew
Cotton yield estimation from aerial imagery using machine learning approaches Journal Article
In: Frontiers in Plant Science, vol. 13, 2022.
@article{RodriguezSanchez2022,
title = {Cotton yield estimation from aerial imagery using machine learning approaches},
author = {Javier Rodriguez-Sanchez and Changying Li and Andrew Paterson},
url = {https://www.frontiersin.org/articles/10.3389/fpls.2022.870181/full},
year = {2022},
date = {2022-04-01},
urldate = {2022-04-01},
journal = {Frontiers in Plant Science},
volume = {13},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Petti, Daniel; Li, Changying
Weakly-supervised learning to automatically count cotton flowers from aerial imagery Journal Article
In: Computers and Electronics in Agriculture, vol. 194, pp. 106734, 2022, ISSN: 0168-1699.
@article{Petti2022,
title = {Weakly-supervised learning to automatically count cotton flowers from aerial imagery},
author = {Daniel Petti and Changying Li},
url = {https://www.sciencedirect.com/science/article/pii/S0168169922000515},
doi = {10.1016/j.compag.2022.106734},
issn = {0168-1699},
year = {2022},
date = {2022-01-01},
urldate = {2022-01-01},
journal = {Computers and Electronics in Agriculture},
volume = {194},
pages = {106734},
abstract = {Counting plant flowers is a common task with applications for estimating crop yields and selecting favorable genotypes. Typically, this requires a laborious manual process, rendering it impractical to obtain accurate flower counts throughout the growing season. The model proposed in this study uses weak supervision, based on Convolutional Neural Networks (CNNs), which automates such a counting task for cotton flowers using imagery collected from an unmanned aerial vehicle (UAV). Furthermore, the model is trained using Multiple Instance Learning (MIL) in order to reduce the required amount of annotated data. MIL is a binary classification task in which any image with at least one flower falls into the positive class, and all others are negative. In the process, a novel loss function was developed that is designed to improve the performance of image-processing models that use MIL. The model is trained on a large dataset of cotton plant imagery which was collected over several years and will be made publicly available. Additionally, an active-learning-based approach is employed in order to generate the annotations for the dataset while minimizing the required amount of human intervention. Despite having minimal supervision, the model still demonstrates good performance on the testing dataset. Multiple models were tested with different numbers of parameters and input sizes, achieving a minimum average absolute count error of 2.43. Overall, this study demonstrates that a weakly-supervised model is a promising method for solving the flower counting problem while minimizing the human labeling effort.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
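In the multiple instance learning setup described above, an image (bag) is labeled positive if at least one patch (instance) contains a flower. A minimal Python sketch of the standard bag-level loss follows, aggregating patch probabilities with a max; the paper develops a novel loss on top of this, so treat the snippet only as the baseline MIL reduction.

import numpy as np

def mil_bce(patch_probs, bag_label, eps=1e-7):
    """Bag-level binary cross-entropy: the bag is positive if any patch is."""
    p_bag = np.clip(patch_probs.max(), eps, 1 - eps)  # max-pooled bag probability
    return -(bag_label * np.log(p_bag) + (1 - bag_label) * np.log(1 - p_bag))

# A positive bag where one patch fires strongly incurs low loss ...
print(mil_bce(np.array([0.02, 0.9, 0.1]), bag_label=1))
# ... while the same patch scores under a negative label are penalized.
print(mil_bce(np.array([0.02, 0.9, 0.1]), bag_label=0))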
Xu, Rui; Li, Changying
A modular agricultural robotic system (MARS) for precision farming: Concept and implementation Journal Article
In: Journal of Field Robotics, vol. 39, no. 4, pp. 387-409, 2022.
@article{Xu2022mars,
title = {A modular agricultural robotic system (MARS) for precision farming: Concept and implementation},
author = {Rui Xu and Changying Li},
url = {https://onlinelibrary.wiley.com/doi/abs/10.1002/rob.22056},
doi = {10.1002/rob.22056},
year = {2022},
date = {2022-01-01},
journal = {Journal of Field Robotics},
volume = {39},
number = {4},
pages = {387-409},
abstract = {Increasing global population, climate change, and shortage of labor pose significant challenges for meeting the global food and fiber demand, and agricultural robots offer a promising solution to these challenges. This paper presents a new robotic system architecture and the resulting modular agricultural robotic system (MARS) that is an autonomous, multi-purpose, and affordable robotic platform for in-field plant high throughput phenotyping and precision farming. There are five essential hardware modules (wheel module, connection module, robot controller, robot frame, and power module) and three optional hardware modules (actuation module, sensing module, and smart attachment). Various combinations of the hardware modules can create different robot configurations for specific agricultural tasks. The software was designed using the Robot Operating System (ROS) with three modules: control module, navigation module, and vision module. A robot localization method using dual Global Navigation Satellite System antennas was developed. Two line-following algorithms were implemented as the local planner for the ROS navigation stack. Based on the MARS design concept, two MARS designs were implemented: a low-cost, lightweight robotic system named MARS mini and a heavy-duty robot named MARS X. The autonomous navigation of both MARS X and mini was evaluated at different traveling speeds and payload levels, confirming satisfactory performances. The MARS X was further tested for its performance and navigation accuracy in a crop field, achieving a high accuracy over a 537 m long path with only 15% of the path having an error larger than 0.05 m. The MARS mini and MARS X were shown to be useful for plant phenotyping in two field tests. The modular design makes the robots easily adaptable to different agricultural tasks and the low-cost feature makes it affordable for researchers and growers.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
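The dual-GNSS localization mentioned in the abstract can be illustrated with the standard baseline calculation: given RTK fixes for two antennas rigidly mounted on the robot, the vector between them yields vehicle heading without any motion. The sketch below assumes a local east/north metric frame and antenna naming of our own; it shows the generic method, not necessarily the exact MARS formulation.

import math

def dual_gnss_heading(front_en, rear_en):
    """Heading in degrees from north, clockwise, from two GNSS antenna fixes."""
    de = front_en[0] - rear_en[0]  # easting difference (m)
    dn = front_en[1] - rear_en[1]  # northing difference (m)
    return math.degrees(math.atan2(de, dn)) % 360.0

# Antennas 1 m apart with the robot pointing northeast.
print(dual_gnss_heading((1 / 2**0.5, 1 / 2**0.5), (0.0, 0.0)))  # ~45.0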
2021
Sun, Shangpeng; Li, Changying; Chee, Peng W.; Paterson, Andrew H.; Meng, Cheng; Zhang, Jingyi; Ma, Ping; Robertson, Jon S.; Adhikari, Jeevan
High resolution 3D terrestrial LiDAR for cotton plant main stalk and node detection Journal Article
In: Computers and Electronics in Agriculture, vol. 187, pp. 106276, 2021, ISSN: 0168-1699.
@article{SUN2021106276,
title = {High resolution 3D terrestrial LiDAR for cotton plant main stalk and node detection},
author = {Shangpeng Sun and Changying Li and Peng W. Chee and Andrew H. Paterson and Cheng Meng and Jingyi Zhang and Ping Ma and Jon S. Robertson and Jeevan Adhikari},
url = {https://www.sciencedirect.com/science/article/pii/S0168169921002933},
doi = {10.1016/j.compag.2021.106276},
issn = {0168-1699},
year = {2021},
date = {2021-01-01},
urldate = {2021-01-01},
journal = {Computers and Electronics in Agriculture},
volume = {187},
pages = {106276},
abstract = {Dense three-dimensional point clouds provide opportunities to retrieve detailed characteristics of plant organ-level phenotypic traits, which are helpful to better understand plant architecture leading to its improvements via new plant breeding approaches. In this study, a high-resolution terrestrial LiDAR was used to acquire point clouds of plants under field conditions, and a data processing pipeline was developed to detect plant main stalks and nodes, and then to extract two phenotypic traits including node number and main stalk length. The proposed method mainly consisted of three steps: first, extract skeletons from original point clouds using a Laplacian-based contraction algorithm; second, identify the main stalk by converting a plant skeleton point cloud to a graph; and third, detect nodes by finding the intersection between the main stalk and branches. Main stalk length was calculated by accumulating the distance between two adjacent points from the lowest to the highest point of the main stalk. Experimental results based on 26 plants showed that the proposed method could accurately measure plant main stalk length and detect nodes; the average R2 and mean absolute percentage error were 0.94 and 4.3% for the main stalk length measurements and 0.7 and 5.1% for node counting, respectively, for point numbers between 80,000 and 150,000 for each plant. Three-dimensional point cloud-based high throughput phenotyping may expedite breeding technologies to improve crop production.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
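The pipeline above contracts the point cloud to a skeleton, converts the skeleton to a graph, traces the main stalk, and locates nodes where branches meet the stalk. The networkx sketch below follows that outline on a toy skeleton: the main stalk is taken as the lowest-to-highest path, its length is accumulated edge by edge, and nodes are branching vertices on the path. The input format and distance weighting are assumptions for illustration.

import networkx as nx

def main_stalk(skeleton_pts, edges):
    """Main stalk length and branch nodes from a skeleton graph."""
    g = nx.Graph()
    for i, j in edges:
        # Weight each edge by the Euclidean distance between its endpoints.
        d = sum((a - b) ** 2 for a, b in zip(skeleton_pts[i], skeleton_pts[j])) ** 0.5
        g.add_edge(i, j, weight=d)
    lo = min(g.nodes, key=lambda n: skeleton_pts[n][2])  # lowest point (z)
    hi = max(g.nodes, key=lambda n: skeleton_pts[n][2])  # highest point (z)
    path = nx.shortest_path(g, lo, hi, weight="weight")  # main stalk
    length = sum(g[u][v]["weight"] for u, v in zip(path, path[1:]))
    nodes = [n for n in path if g.degree[n] >= 3]  # stalk-branch intersections
    return length, nodes

pts = {0: (0, 0, 0), 1: (0, 0, 1), 2: (0, 0, 2), 3: (0.5, 0, 1.2)}
print(main_stalk(pts, [(0, 1), (1, 2), (1, 3)]))  # (2.0, [1])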
Xu, Rui; Li, Changying; Bernardes, Sergio
Development and Testing of a UAV-Based Multi-Sensor System for Plant Phenotyping and Precision Agriculture Journal Article
In: Remote Sensing, vol. 13, no. 17, pp. 3517, 2021, ISSN: 2072-4292.
@article{Xu2021,
title = {Development and Testing of a UAV-Based Multi-Sensor System for Plant Phenotyping and Precision Agriculture},
author = {Rui Xu and Changying Li and Sergio Bernardes},
url = {https://www.mdpi.com/2072-4292/13/17/3517},
doi = {10.3390/rs13173517},
issn = {2072-4292},
year = {2021},
date = {2021-01-01},
urldate = {2021-01-01},
journal = {Remote Sensing},
volume = {13},
number = {17},
pages = {3517},
abstract = {Unmanned aerial vehicles have been used widely in plant phenotyping and precision agriculture. Several critical challenges remain, however, such as the lack of cross-platform data acquisition software system, sensor calibration protocols, and data processing methods. This paper developed an unmanned aerial system that integrates three cameras (RGB, multispectral, and thermal) and a LiDAR sensor. Data acquisition software supporting data recording and visualization was implemented to run on the Robot Operating System. The design of the multi-sensor unmanned aerial system was open sourced. A data processing pipeline was proposed to preprocess the raw data and to extract phenotypic traits at the plot level, including morphological traits (canopy height, canopy cover, and canopy volume), canopy vegetation index, and canopy temperature. Protocols for both field and laboratory calibrations were developed for the RGB, multispectral, and thermal cameras. The system was validated using ground data collected in a cotton field. Temperatures derived from thermal images had a mean absolute error of 1.02 °C, and canopy NDVI had a mean relative error of 6.6% compared to ground measurements. The observed error for maximum canopy height was 0.1 m. The results show that the system can be useful for plant breeding and precision crop management.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
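For the canopy NDVI trait reported above, a minimal sketch of plot-level extraction from registered near-infrared and red reflectance bands is shown below. The 0.4 vegetation threshold is an assumed value for illustration, not one reported in the paper.

import numpy as np

def plot_ndvi_traits(nir, red, cover_threshold=0.4):
    """Per-plot canopy cover and mean canopy NDVI from two registered bands."""
    ndvi = (nir - red) / np.clip(nir + red, 1e-6, None)  # avoid divide-by-zero
    canopy = ndvi > cover_threshold  # assumed vegetation cutoff
    cover = float(canopy.mean())
    mean_ndvi = float(ndvi[canopy].mean()) if canopy.any() else 0.0
    return cover, mean_ndvi

nir = np.random.rand(100, 100) * 0.6 + 0.3  # synthetic reflectance values
red = np.random.rand(100, 100) * 0.2 + 0.05
print(plot_ndvi_traits(nir, red))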
2018
Sun, S.; Li, C.; Paterson, A. H.; Jiang, Y.; Xu, R.; Robertson, J. S.; Snider, J.; Chee, P.
In-field high throughput phenotyping and cotton plant growth analysis using LiDAR Journal Article
In: Frontiers in Plant Science, vol. 9, pp. 16, 2018.
@article{Sun2018,
title = {In-field high throughput phenotyping and cotton plant growth analysis using LiDAR},
author = {S. Sun and C. Li and A.H. Paterson and Y. Jiang and R. Xu and J. S. Robertson and J. Snider and P. Chee},
url = {http://sensinglab.engr.uga.edu//srv/htdocs/wp-content/uploads/2019/11/In-Field-High-Throughput-Phenotyping-of-Cotton-Plant-Height-Using-LiDAR.pdf},
doi = {10.3389/fpls.2018.00016},
year = {2018},
date = {2018-01-30},
urldate = {2018-01-30},
journal = {Frontiers in Plant Science},
volume = {9},
pages = {16},
abstract = {A LiDAR-based high-throughput phenotyping (HTP) system was developed for cotton plant phenotyping in the field. The HTP system consists of a 2D LiDAR and an RTK-GPS mounted on a high clearance tractor. The LiDAR scanned three rows of cotton plots simultaneously from the top and the RTK-GPS was used to provide the spatial coordinates of the point cloud during data collection. Configuration parameters of the system were optimized to ensure the best data quality. A height profile for each plot was extracted from the dense three dimensional point clouds; then the maximum height and height distribution of each plot were derived. In lab tests, single plants were scanned by LiDAR using 0.5° angular resolution and results showed an R2 value of 1.00 (RMSE = 3.46 mm) in comparison to manual measurements. In field tests using the same angular resolution, the LiDAR-based HTP system achieved average R2 values of 0.98 (RMSE = 65 mm) for cotton plot height estimation, compared to manual measurements. This HTP system is particularly useful for large field applications because it provides highly accurate measurements, and the efficiency is greatly improved compared to similar studies using the side view scan.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
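The height-profile extraction used in this paper (and in the 2017 LiDAR study below) amounts to subtracting the local ground elevation from each georeferenced return and summarizing the per-plot height distribution. A short sketch under that reading follows; the percentile set is an illustrative choice rather than the paper's.

import numpy as np

def plot_height_stats(points_z, ground_z, percentiles=(50, 75, 90, 100)):
    """Summarize one plot's canopy height distribution from LiDAR elevations."""
    heights = np.asarray(points_z) - ground_z  # elevation above local ground
    heights = heights[heights > 0]  # drop ground returns
    return {f"p{p}": float(np.percentile(heights, p)) for p in percentiles}

z = np.random.normal(0.8, 0.2, 5000) + 100.0  # synthetic canopy returns (m)
print(plot_height_stats(z, ground_z=100.0))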
2017
Xu, R.; Li, C.; Paterson, A. H.; Jiang, Y.; Sun, S.; Robertson, J. S.
Aerial Images and Convolutional Neural Network for Cotton Bloom Detection Journal Article
In: Frontiers in Plant Science, vol. 8, pp. 2235, 2017.
@article{Xu2018,
title = {Aerial Images and Convolutional Neural Network for Cotton Bloom Detection},
author = {R. Xu and C. Li and A.H. Paterson and Y. Jiang and S. Sun and J. S. Robertson},
url = {http://sensinglab.engr.uga.edu//srv/htdocs/wp-content/uploads/2019/11/Aerial-Images-and-Convolutional-Neural-Network-for-Cotton-Bloom-Detection.pdf},
doi = {10.3389/fpls.2017.02235},
year = {2017},
date = {2017-12-19},
urldate = {2017-12-19},
journal = {Frontiers in Plant Science},
volume = {8},
pages = {2235},
abstract = {Monitoring flower development can provide useful information for production management, estimating yield and selecting specific genotypes of crops. The main goal of this study was to develop a methodology to detect and count cotton flowers, or blooms, using color images acquired by an unmanned aerial system. The aerial images were collected from two test fields in 4 days. A convolutional neural network (CNN) was designed and trained to detect cotton blooms in raw images, and their 3D locations were calculated using the dense point cloud constructed from the aerial images with the structure from motion method. The quality of the dense point cloud was analyzed and plots with poor quality were excluded from data analysis. A constrained clustering algorithm was developed to register the same bloom detected from different images based on the 3D location of the bloom. The accuracy and incompleteness of the dense point cloud were analyzed because they affected the accuracy of the 3D location of the blooms and thus the accuracy of the bloom registration result. The constrained clustering algorithm was validated using simulated data, showing good efficiency and accuracy. The bloom count from the proposed method was comparable with the number counted manually with an error of −4 to 3 blooms for the field with a single plant per plot. However, more plots were underestimated in the field with multiple plants per plot due to hidden blooms that were not captured by the aerial images. The proposed methodology provides a high-throughput method to continuously monitor the flowering progress of cotton.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
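The constrained clustering step described above merges detections of the same bloom from different images by 3D proximity while keeping two detections from the same image apart. The following greedy pass is a simplification of that idea; the 5 cm merge radius is an assumed value.

import numpy as np

def register_blooms(detections, radius=0.05):
    """Count blooms by greedily clustering (xyz, image_id) detections."""
    clusters = []  # each cluster: (list of xyz arrays, set of source image ids)
    for xyz, img in detections:
        xyz = np.asarray(xyz, dtype=float)
        for pts, imgs in clusters:
            center = np.mean(pts, axis=0)
            # Constraint: never merge two detections from the same image.
            if img not in imgs and np.linalg.norm(xyz - center) < radius:
                pts.append(xyz)
                imgs.add(img)
                break
        else:
            clusters.append(([xyz], {img}))
    return len(clusters)  # estimated bloom count

dets = [((0.00, 0.00, 1.0), "img1"),
        ((0.01, 0.00, 1.0), "img2"),
        ((0.02, 0.01, 1.0), "img1")]  # same image as the first detection
print(register_blooms(dets))  # 2 blooms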
Jiang, Y.; Li, C.; Paterson, A. H.; Sun, S.; Xu, R.; Robertson, J. S.
Quantitative Analysis of Cotton Canopy Size in Field Conditions Using a Consumer-Grade RGB-D Camera Journal Article
In: Frontiers in Plant Science, vol. 8, pp. 2233, 2017.
@article{Jiang2018,
title = {Quantitative Analysis of Cotton Canopy Size in Field Conditions Using a Consumer-Grade RGB-D Camera},
author = {Y. Jiang and C. Li and A.H. Paterson and S. Sun and R. Xu and J. S. Robertson},
url = {http://sensinglab.engr.uga.edu//srv/htdocs/wp-content/uploads/2019/11/Quantitative-Analysis-of-Cotton-Canopy-Size-in-Field-Conditions-Using-a-Consumer-Grade-RGB-D-Camera.pdf},
doi = {10.3389/fpls.2017.02233},
year = {2017},
date = {2017-12-19},
urldate = {2017-12-19},
journal = {Frontiers in Plant Science},
volume = {8},
pages = {2233},
abstract = {Plant canopy structure can strongly affect crop functions such as yield and stress tolerance, and canopy size is an important aspect of canopy structure. Manual assessment of canopy size is laborious and imprecise, and cannot measure multi-dimensional traits such as projected leaf area and canopy volume. Field-based high throughput phenotyping systems with imaging capabilities can rapidly acquire data about plants in field conditions, making it possible to quantify and monitor plant canopy development. The goal of this study was to develop a 3D imaging approach to quantitatively analyze cotton canopy development in field conditions. A cotton field was planted with 128 plots, including four genotypes of 32 plots each. The field was scanned by GPhenoVision (a customized field-based high throughput phenotyping system) to acquire color and depth images with GPS information in 2016 covering two growth stages: canopy development, and flowering and boll development. A data processing pipeline was developed, consisting of three steps: plot point cloud reconstruction, plant canopy segmentation, and trait extraction. Plot point clouds were reconstructed using color and depth images with GPS information. In colorized point clouds, vegetation was segmented from the background using an excess-green (ExG) color filter, and cotton canopies were further separated from weeds based on height, size, and position information. Static morphological traits were extracted on each day, including univariate traits (maximum and mean canopy height and width, projected canopy area, and concave and convex volumes) and a multivariate trait (cumulative height profile). Growth rates were calculated for univariate static traits, quantifying canopy growth and development. Linear regressions were performed between the traits and fiber yield to identify the best traits and measurement time for yield prediction. The results showed that fiber yield was correlated with static traits after the canopy development stage (R2 = 0.35–0.71) and growth rates in early canopy development stages (R2 = 0.29–0.52). Multi-dimensional traits (e.g., projected canopy area and volume) outperformed one-dimensional traits, and the multivariate trait (cumulative height profile) outperformed univariate traits. The proposed approach would be useful for identification of quantitative trait loci (QTLs) controlling canopy size in genetics/genomics studies or for fiber yield prediction in breeding programs and production environments.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
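The excess-green segmentation used above has a compact closed form: with chromatic coordinates r, g, b (each channel divided by the R+G+B sum), ExG = 2g - r - b, and vegetation pixels are those exceeding a cutoff. A short sketch with an assumed 0.1 threshold; the paper additionally separates cotton from weeds using height, size, and position rules not shown here.

import numpy as np

def exg_mask(rgb, threshold=0.1):
    """Boolean vegetation mask from the excess-green (ExG) index."""
    rgb = rgb.astype(float)
    s = np.clip(rgb.sum(axis=2), 1e-6, None)  # per-pixel R+G+B
    r, g, b = (rgb[..., i] / s for i in range(3))  # chromatic coordinates
    exg = 2 * g - r - b
    return exg > threshold  # assumed cutoff

img = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
print(exg_mask(img).mean())  # fraction of pixels classified as vegetation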
Jiang, Y.; Li, C.; Robertson, J. S.; Sun, S.; Xu, R.; Paterson, A. H.
GPhenoVision: A Ground Mobile System with Multi-modal Imaging for Field-Based High Throughput Phenotyping of Cotton Journal Article
In: Scientific Reports, vol. 8, no. 1, pp. 1213, 2017.
@article{Jiang2018b,
title = {GPhenoVision: A Ground Mobile System with Multi-modal Imaging for Field-Based High Throughput Phenotyping of Cotton},
author = {Y. Jiang and C. Li and J. S. Robertson and S. Sun and R. Xu and A.H. Paterson},
url = {http://sensinglab.engr.uga.edu//srv/htdocs/wp-content/uploads/2019/11/GPhenoVision-A-Ground-Mobile-System-with-Multi-modal-Imaging-for-Field-Based-High-Throughput-Phenotyping-of-Cotton.pdf},
doi = {10.1038/s41598-018-19142-2},
year = {2017},
date = {2017-11-30},
urldate = {2017-11-30},
journal = {Scientific Reports},
volume = {8},
number = {1},
pages = {1213},
abstract = {Imaging sensors can extend phenotyping capability, but they require a system to handle high-volume data. The overall goal of this study was to develop and evaluate a field-based high throughput phenotyping system accommodating high-resolution imagers. The system consisted of a high-clearance tractor and sensing and electrical systems. The sensing system was based on a distributed structure, integrating environmental sensors, real-time kinematic GPS, and multiple imaging sensors including RGB-D, thermal, and hyperspectral cameras. Custom software was developed with a multilayered architecture for system control and data collection. The system was evaluated by scanning a cotton field with 23 genotypes for quantification of canopy growth and development. A data processing pipeline was developed to extract phenotypes at the canopy level, including height, width, projected leaf area, and volume from RGB-D data and temperature from thermal images. Growth rates of morphological traits were accordingly calculated. The traits had strong correlations (r = 0.54–0.74) with fiber yield and good broad sense heritability (H2 = 0.27–0.72), suggesting the potential for conducting quantitative genetic analysis and contributing to yield prediction models. The developed system is a useful tool for a wide range of breeding/genetic, agronomic/physiological, and economic studies.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Patrick, A.; Li, C.
High Throughput Phenotyping of Blueberry Bush Morphological Traits Using Unmanned Aerial Systems Journal Article
In: Remote Sensing, vol. 9, no. 12, pp. 1250, 2017.
@article{Patrick2017,
title = {High Throughput Phenotyping of Blueberry Bush Morphological Traits Using Unmanned Aerial Systems},
author = {A. Patrick and C. Li},
url = {http://sensinglab.engr.uga.edu//srv/htdocs/wp-content/uploads/2019/11/High-Throughput-Phenotyping-of-Blueberry-Bush-Morphological-Traits-Using-Unmanned-Aerial-Systems.pdf},
doi = {10.3390/rs9121250},
year = {2017},
date = {2017-11-30},
urldate = {2017-11-30},
journal = {Remote Sensing},
volume = {9},
number = {12},
pages = {1250},
abstract = {Phenotyping morphological traits of blueberry bushes in the field is important for selecting genotypes that are easily harvested by mechanical harvesters. Morphological data can also be used to assess the effects of crop treatments such as plant growth regulators, fertilizers, and environmental conditions. This paper investigates the feasibility and accuracy of an inexpensive unmanned aerial system in determining the morphological characteristics of blueberry bushes. Color images collected by a quadcopter are processed into three-dimensional point clouds via structure from motion algorithms. Bush height, extents, canopy area, and volume, in addition to crown diameter and width, are derived and referenced to ground truth. In an experimental farm, twenty-five bushes were imaged by a quadcopter. Height and width dimensions achieved a mean absolute error of 9.85 cm before and 5.82 cm after systematic under-estimation correction. Strong correlation was found between manual and image derived bush volumes and their traditional growth indices. Hedgerows of three Southern Highbush varieties were imaged at a commercial farm to extract five morphological features (base angle, blockiness, crown percent height, crown ratio, and vegetation ratio) associated with cultivation and machine harvestability. The bushes were found to be partially separable by multivariate analysis. The methodology developed from this study is not only valuable for plant breeders to screen genotypes with bush morphological traits that are suitable for machine harvest, but can also aid producers in crop management such as pruning and plot layout organization.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Sun, S.; Li, C.; Paterson, A. H.
In-Field High-Throughput Phenotyping of Cotton Plant Height Using LiDAR Journal Article
In: Remote Sensing, vol. 9, no. 4, pp. 377, 2017.
@article{Sun2017,
title = {In-Field High-Throughput Phenotyping of Cotton Plant Height Using LiDAR},
author = {S. Sun and C. Li and A.H. Paterson},
url = {http://sensinglab.engr.uga.edu//srv/htdocs/wp-content/uploads/2019/11/In-Field-High-Throughput-Phenotyping-of-Cotton-Plant-Height-Using-LiDAR-1.pdf},
doi = {10.3390/rs9040377},
year = {2017},
date = {2017-04-13},
urldate = {2017-04-13},
journal = {Remote Sensing},
volume = {9},
number = {4},
pages = {377},
abstract = {A LiDAR-based high-throughput phenotyping (HTP) system was developed for cotton plant phenotyping in the field. The HTP system consists of a 2D LiDAR and an RTK-GPS mounted on a high clearance tractor. The LiDAR scanned three rows of cotton plots simultaneously from the top and the RTK-GPS was used to provide the spatial coordinates of the point cloud during data collection. Configuration parameters of the system were optimized to ensure the best data quality. A height profile for each plot was extracted from the dense three dimensional point clouds; then the maximum height and height distribution of each plot were derived. In lab tests, single plants were scanned by LiDAR using 0.5° angular resolution and results showed an R2 value of 1.00 (RMSE = 3.46 mm) in comparison to manual measurements. In field tests using the same angular resolution, the LiDAR-based HTP system achieved average R2 values of 0.98 (RMSE = 65 mm) for cotton plot height estimation, compared to manual measurements. This HTP system is particularly useful for large field applications because it provides highly accurate measurements, and the efficiency is greatly improved compared to similar studies using the side view scan.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Patrick, A.; Pelham, S.; Culbreath, A.; Holbrook, C.; Godoy, I. J. d.; Li, C.
High Throughput Phenotyping of Tomato Spot Wilt Disease in Peanuts Using Unmanned Aerial Systems and Multispectral Imaging Journal Article
In: IEEE Instrumentation & Measurement Magazine, vol. 20, no. 3, pp. 4-12, 2017.
@article{Patrick2017b,
title = {High Throughput Phenotyping of Tomato Spot Wilt Disease in Peanuts Using Unmanned Aerial Systems and Multispectral Imaging},
author = {A. Patrick and S. Pelham and A. Culbreath and C. Holbrook and I.J.d. Godoy and C. Li},
url = {http://sensinglab.engr.uga.edu//srv/htdocs/wp-content/uploads/2019/11/High-Throughput-Phenotyping-of-Tomato-Spot-Wilt-Disease-in-Peanuts-Using-Unmanned-Aerial-Systems-and-Multispectral-Imaging.pdf},
doi = {10.1109/MIM.2017.7951684},
year = {2017},
date = {2017-02-08},
urldate = {2017-02-08},
journal = {IEEE Instrumentation & Measurement Magazine},
volume = {20},
number = {3},
pages = {4-12},
abstract = {The amount of visible and near infrared light reflected by plants varies depending on their health. In this study, multispectral images were acquired by a quadcopter for high throughput phenotyping of tomato spot wilt disease resistance among twenty genotypes of peanuts. The plants were visually assessed to acquire ground truth ratings of disease incidence. Multispectral images were processed into several vegetation indices. The vegetation index image of each plot has a unique distribution of pixel intensities. The percentage and number of pixels above and below varying thresholds were extracted. These features were correlated with manually acquired data to develop a model for assessing the percentage of each plot diseased. Ultimately, the best vegetation indices and pixel distribution feature for disease detection were determined and correlated with manual ratings and yield. The relative resistance of each genotype was then compared. Image-based disease ratings effectively ranked genotype resistance as early as 93 days from seeding.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
2016
Jiang, Y.; Li, C.; Paterson, A. H.
High-throughput phenotyping of cotton plant height using depth images under field conditions Journal Article
In: Computers and Electronics in Agriculture, vol. 130, pp. 57-68, 2016.
@article{Jiang2016b,
title = {High-throughput phenotyping of cotton plant height using depth images under field conditions},
author = {Y. Jiang and C. Li and A.H. Paterson},
url = {http://sensinglab.engr.uga.edu//srv/htdocs/wp-content/uploads/2019/11/High-throughput-phenotyping-of-cotton-plant-height-using-depth-images-under-field-conditions-.pdf},
doi = {10.1016/j.compag.2016.09.017},
year = {2016},
date = {2016-09-26},
urldate = {2016-09-26},
journal = {Computers and Electronics in Agriculture},
volume = {130},
pages = {57-68},
abstract = {Plant height is an important phenotypic trait that can be used not only as an indicator of overall plant growth but also a parameter to calculate advanced traits such as biomass and yield. Currently, cotton plant height is primarily measured manually, which is laborious and has become a bottleneck for cotton research and breeding programs. The goal of this research was to develop and evaluate a high throughput phenotyping (HTP) system using depth images for measuring cotton plant height under field conditions. For this purpose, a Kinect-v2 camera was evaluated in a static configuration to obtain a performance baseline and in a dynamic configuration to measure plant height in the field. In the static configuration, the camera was mounted on a partially covered wooden frame and oriented towards nadir to acquire depth images of potted cotton plants. Regions of interest of plants were manually selected in the depth images to calculate plant height. In the dynamic configuration, the Kinect-v2 camera was installed inside a partially covered metal-frame that was attached to a high-clearance tractor equipped with real time kinematic GPS. A six-step algorithm was developed to measure the maximum and average heights of individual plots by using the depth images acquired by the system. System performance was evaluated on 108 plots of cotton plants. Results showed that the Kinect-v2 camera could acquire valid depth images of cotton plants under field conditions, when a shaded environment was provided. The plot maximum and average heights calculated by the proposed algorithm were strongly correlated (adjusted R2 = 0.922–0.987) with those measured manually with accuracies of over 92%. The average processing time was 0.01 s to calculate the heights of a plot that typically has 34 depth images, indicating that the proposed algorithm was computationally efficient. Therefore, these results confirmed the ability of the HTP system with depth images to measure cotton plant height under field conditions accurately and rapidly. Furthermore, the imaging-based system has great potential for measuring more complicated geometric traits of plants, which can significantly advance field-based HTP system development in general.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
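The core calculation in this study is simple once the camera geometry is fixed: with a nadir-looking depth camera at a known height, plant height at each pixel is the camera height minus the measured depth. The sketch below condenses the paper's six-step plot algorithm to that core; the valid-depth gate is an assumed stand-in for the full filtering pipeline.

import numpy as np

def plot_height_from_depth(depth_m, camera_height_m, valid=(0.5, 3.0)):
    """Maximum and mean plant height of a plot from one nadir depth frame."""
    d = depth_m[(depth_m > valid[0]) & (depth_m < valid[1])]  # assumed gate
    heights = camera_height_m - d  # height above ground per pixel
    heights = heights[heights > 0]
    return float(heights.max()), float(heights.mean())

depth = np.random.uniform(1.0, 2.2, (424, 512))  # synthetic Kinect-v2 frame
print(plot_height_from_depth(depth, camera_height_m=2.3))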