Funding: This study was supported by the "Programme d'Investissement d'Avenir" PHENOME (ANR-11-INBS-012), with participation of FranceAgriMer and the "Fonds de Soutien à l'Obtention Végétale".
Abstract: Total above-ground biomass at harvest and ear density are two important traits that characterize wheat genotypes. Two experiments were carried out at two different sites where several genotypes were grown under contrasting irrigation and nitrogen treatments. A high-spatial-resolution RGB camera was used to capture the residual stems left standing after cutting by the combine harvester; it provided a ground spatial resolution better than 0.2 mm. A Faster Regional Convolutional Neural Network (Faster-RCNN) deep-learning model was first trained to identify the stem cross-sections. Results showed that the identification achieved precision and recall close to 95%. Further, the balance between precision and recall yielded accurate estimates of stem density, with a relative RMSE close to 7% and robustness across the two experimental sites. The estimated stem density was also compared with the ear density measured in the field with traditional methods. A very high correlation was found with almost no bias, indicating that stem density could be a good proxy for ear density. The heritability/repeatability evaluated over 16 genotypes in one of the two experiments was slightly higher (80%) than that of ear density (78%). The diameter of each stem was computed from the profile of gray values in the extracted stem cross-sections. Results show that stem diameters follow a gamma distribution within each microplot, with an average diameter close to 2.0 mm. Finally, the biovolume, computed as the product of average stem diameter, stem density, and plant height, is closely related to above-ground biomass at harvest, with a relative RMSE of 6%. Possible limitations of the findings and future applications are discussed.
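The per-microplot statistics described above can be sketched as follows: fitting a gamma distribution to stem diameters and combining the fitted mean with stem density and plant height into the biovolume proxy. This is a minimal illustration, not the authors' pipeline; the diameter sample is simulated, and the stem density and plant height values are illustrative assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical stem diameters (mm) for one microplot; the abstract reports
# that diameters follow a gamma distribution with a mean close to 2.0 mm.
diameters = rng.gamma(shape=16.0, scale=0.125, size=300)  # mean = 16 * 0.125 = 2.0

# Fit a gamma distribution (location fixed at 0) to the observed diameters.
shape, loc, scale = stats.gamma.fit(diameters, floc=0)
mean_diameter_mm = shape * scale

# Biovolume proxy as described: average stem diameter x stem density x plant height.
stem_density = 250.0   # stems per m^2 (illustrative value, not from the paper)
plant_height = 0.8     # m (illustrative value, not from the paper)
biovolume = mean_diameter_mm * 1e-3 * stem_density * plant_height

print(f"fitted mean diameter: {mean_diameter_mm:.2f} mm")
print(f"biovolume proxy: {biovolume:.4f}")
```

The `floc=0` constraint keeps the fit to the standard two-parameter gamma family, which is the natural choice for strictly positive diameters.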
Abstract: Multispectral observations from unmanned aerial vehicles (UAVs) are currently used in precision agriculture and crop phenotyping to monitor a series of traits characterizing vegetation status. However, the limited autonomy of UAVs makes flight completion difficult when sampling large areas. Increasing the throughput of data acquisition without degrading the ground sample distance (GSD) is therefore a critical issue to be solved. We propose a new image acquisition configuration based on the combination of two focal-length (f) optics: optics with f = 4.2 mm are added to the standard f = 8 mm optics of the multispectral camera (SS: single swath; DS: double swath, twice the standard one). Two flights were completed consecutively in 2018 over a maize field using the AIRPHEN multispectral camera at 52 m altitude. The DS flight plan was designed to achieve 80% overlap with the 4.2 mm optics, while the SS plan was designed to achieve 80% overlap with the 8 mm optics. As a result, the time required to cover the same area is halved for DS compared with SS. Georeferencing accuracy was improved for the DS configuration, particularly in the Z dimension, due to the larger view angles available with the short-focal-length optics. Application to plant height estimation demonstrates that the DS configuration provides results similar to the SS one. However, for both the DS and SS configurations, degrading the quality level used to generate the 3D point cloud significantly decreases the plant height estimates.
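The swath geometry behind the time saving follows from similar triangles: halving the focal length at fixed altitude roughly doubles the across-track footprint, so flight lines can be spaced about twice as far apart. The sketch below uses an assumed sensor width (the AIRPHEN sensor dimensions are not given in the abstract); only the altitude and focal lengths come from the text.

```python
def swath_width(altitude_m, sensor_width_mm, focal_length_mm):
    """Across-track ground footprint, by similar triangles."""
    return altitude_m * sensor_width_mm / focal_length_mm

def line_spacing(swath_m, side_overlap):
    """Distance between adjacent flight lines for a given side overlap."""
    return swath_m * (1.0 - side_overlap)

altitude = 52.0        # m, from the abstract
sensor_width = 6.66    # mm (assumed value, not from the paper)
overlap = 0.80         # 80% side overlap, from the abstract

swath_ss = swath_width(altitude, sensor_width, 8.0)   # single swath, f = 8 mm
swath_ds = swath_width(altitude, sensor_width, 4.2)   # double swath, f = 4.2 mm

print(f"SS swath: {swath_ss:.1f} m, line spacing: {line_spacing(swath_ss, overlap):.1f} m")
print(f"DS swath: {swath_ds:.1f} m, line spacing: {line_spacing(swath_ds, overlap):.1f} m")
# The 4.2 mm optics widen the swath by a factor of 8/4.2 (about 1.9), roughly
# halving the number of flight lines, while the co-mounted 8 mm images keep
# their original GSD.
```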
Funding: Global Wheat was directly supported by Analytics for the Australian Grains Industry (AAGI).
Abstract: Computer vision is increasingly used in farmers' fields and agricultural experiments to quantify important traits. Imaging setups with a sub-millimeter ground sampling distance enable the detection and tracking of plant features, including size, shape, and colour. Although today's AI-driven foundation models segment almost any object in an image, they still fail for complex plant canopies. To improve model performance, the Global Wheat dataset consortium assembled a diverse set of images from experiments around the globe. Following the head detection dataset (GWHD), the new dataset targets full semantic segmentation (GWFSS) of organs (leaves, stems, and spikes) covering all developmental stages. Images were collected by 11 institutions using a wide range of imaging setups. Two datasets are provided: (i) a set of 1,096 diverse images in which all organs were labelled at the pixel level, and (ii) a dataset of 52,078 images without annotations, available for additional training. The labelled set was used to train segmentation models based on DeepLabV3Plus and Segformer. Our Segformer model performed slightly better than DeepLabV3Plus, with an mIoU for leaves and spikes of ca. 90%. However, the precision for stems, at 54%, was rather lower. The major advantages over published models are: (i) the exclusion of weeds from the wheat canopy, and (ii) the detection of all wheat features, including necrotic and senescent tissues, and their separation from crop residues. This facilitates further development in classifying healthy vs. unhealthy tissue, addressing the increasing need for accurate quantification of senescence and diseases in wheat canopies.
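The mIoU metric used to compare the segmentation models above can be computed per class from predicted and reference label maps. The sketch below shows the standard definition on a toy example; the class layout (background, leaf, stem, spike) mirrors the organs in the abstract but the tiny label maps are invented for illustration.

```python
import numpy as np

def per_class_iou(pred, target, num_classes):
    """Intersection-over-union per class from two integer label maps."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        ious.append(inter / union if union > 0 else float("nan"))
    return ious

# Toy 4x4 label maps: 0 = background, 1 = leaf, 2 = stem, 3 = spike.
target = np.array([[0, 1, 1, 3],
                   [0, 1, 2, 3],
                   [0, 1, 2, 3],
                   [0, 0, 2, 3]])
pred   = np.array([[0, 1, 1, 3],
                   [0, 1, 1, 3],   # one stem pixel misclassified as leaf
                   [0, 1, 2, 3],
                   [0, 0, 2, 3]])

ious = per_class_iou(pred, target, num_classes=4)
miou = np.nanmean(ious)
print([f"{v:.2f}" for v in ious], f"mIoU = {miou:.2f}")
```

Averaging over classes rather than pixels is what makes a weak stem class visible in the score: thin, rare structures pull the mean down even when leaves and spikes score near 90%.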