Journal Articles
7 articles found
1. Easy domain adaptation method for filling the species gap in deep learning-based fruit detection (Cited: 7)
Authors: Wenli Zhang, Kaizhen Chen, Jiaqi Wang, Yun Shi, Wei Guo. Horticulture Research (SCIE), 2021, Issue 1, pp. 1730-1742 (13 pages)
Fruit detection and counting are essential tasks for horticulture research. With the development of computer vision technology, fruit detection techniques based on deep learning have been widely used in modern orchards. However, most deep learning-based fruit detection models are trained with fully supervised approaches, which means a model trained on one species may not transfer to another. The relevant training dataset must then be recreated and labeled, a time-consuming and labor-intensive procedure. This paper proposes a domain adaptation method that can transfer an existing model trained on one domain to a new domain without extra manual labeling. The method includes three main steps: (1) transform the source fruit images (with label information) into target fruit images (without label information) through a CycleGAN network; (2) automatically label the target fruit images by a pseudo-labeling process; (3) improve the labeling accuracy by a pseudo-label self-learning approach. Using a labeled orange image dataset as the source domain and unlabeled apple and tomato image datasets as the target domains, the performance of the proposed method was evaluated from the perspective of fruit detection. Without manual labeling of target-domain images, the mean average precision reached 87.5% for apple detection and 76.9% for tomato detection, showing that the proposed method can potentially fill the species gap in deep learning-based fruit detection.
Keywords: image, ORANGE, consuming
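The pseudo-labeling step described in this abstract can be illustrated with a minimal sketch: a detector's predictions on unlabeled target-domain images are filtered by confidence, and only the survivors are kept as training labels for the next round. The function name, threshold value, and data below are illustrative assumptions, not taken from the paper.

```python
def select_pseudo_labels(detections, conf_threshold=0.8):
    """Keep only high-confidence detections as pseudo-labels.

    Each detection is a (box, confidence) pair; boxes that clear the
    threshold are treated as ground truth in the next training round,
    which is the core of a pseudo-label self-learning loop.
    """
    return [box for box, conf in detections if conf >= conf_threshold]

# One filtering round (illustrative values): the low-confidence
# detection is discarded rather than propagated as a noisy label.
detections = [((10, 10, 50, 50), 0.95), ((60, 5, 90, 40), 0.55)]
pseudo = select_pseudo_labels(detections)
```

In a full self-learning loop, the model would be retrained on these pseudo-labels and the filtering repeated, gradually improving labeling accuracy on the new species.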
2. Integrating artificial intelligence and high-throughput phenotyping for crop improvement (Cited: 1)
Authors: Mansoor Sheikh, Farooq Iqra, Hamadani Ambreen, Kumar A Pravin, Manzoor Ikra, Yong Suk Chung. Journal of Integrative Agriculture (SCIE, CAS, CSCD), 2024, Issue 6, pp. 1787-1802 (16 pages)
Crop improvement is crucial for addressing the global challenges of food security and sustainable agriculture. Recent advancements in high-throughput phenotyping (HTP) technologies and artificial intelligence (AI) have revolutionized the field, enabling rapid and accurate assessment of crop traits on a large scale. The integration of AI and machine learning algorithms with HTP data has unlocked new opportunities for crop improvement. AI algorithms can analyze and interpret large datasets and extract meaningful patterns and correlations between phenotypic traits and genetic factors. These technologies have the potential to revolutionize plant breeding programs by providing breeders with efficient and accurate tools for trait selection, thereby reducing the time and cost required for variety development. However, further research and collaboration are needed to overcome the existing challenges and fully unlock the power of HTP and AI in crop improvement. By leveraging AI algorithms, researchers can efficiently analyze phenotypic data, uncover complex patterns, and establish predictive models that enable precise trait selection and crop breeding. The aim of this review is to explore the transformative potential of integrating HTP and AI in crop improvement, encompassing an in-depth analysis of recent advances and applications and highlighting the benefits and challenges associated with HTP and AI.
Keywords: artificial intelligence, crop improvement, data analysis, high-throughput phenotyping, machine learning, precision agriculture, trait selection
3. Characterization of peach tree crown by using high-resolution images from an unmanned aerial vehicle (Cited: 13)
Authors: Yue Mu, Yuichiro Fujii, Daisuke Takata, Bangyou Zheng, Koji Noshita, Kiyoshi Honda, Seishi Ninomiya, Wei Guo. Horticulture Research (SCIE), 2018, Issue 1, pp. 22-31 (10 pages)
In orchards, measuring crown characteristics is essential for monitoring the dynamics of tree growth and optimizing farm management. However, a rapid and reliable method is lacking for extracting the features of trees with an irregular crown shape, such as trained peach trees. Here, we propose an efficient method of segmenting individual trees and measuring the crown width and crown projection area (CPA) of peach trees with time-series information, based on gathered images. The images of peach trees were collected by unmanned aerial vehicles in an orchard in Okayama, Japan, and a digital surface model was then generated using Structure from Motion (SfM) and Multi-View Stereo (MVS) based software. After individual trees were identified through the use of an adaptive threshold and marker-controlled watershed segmentation in the digital surface model, the crown widths and CPA were calculated, and the accuracy was evaluated against manual delineation and field measurement, respectively. Taking manual delineation of 12 trees as reference, the root-mean-square errors of the proposed method were 0.08 m (R^2 = 0.99) and 0.15 m (R^2 = 0.93) for the two orthogonal crown widths, and 3.87 m^2 for CPA (R^2 = 0.89), while those taking field measurement of 44 trees as reference were 0.47 m (R^2 = 0.91), 0.51 m (R^2 = 0.74), and 4.96 m^2 (R^2 = 0.88). The change in growth rate of CPA showed that the peach trees grew faster from May to July than from July to September, with wide variation in relative growth rates among trees. Not only can this method save labour by replacing field measurement, but it can also allow farmers to monitor the growth of orchard trees dynamically.
Keywords: CROWN, TREE, WATERSHED
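The root-mean-square errors reported in this abstract compare extracted crown widths against manual delineation. As a minimal sketch of that evaluation metric, with made-up measurement values (not from the paper):

```python
import math

def rmse(estimates, references):
    """Root-mean-square error between estimated and reference values."""
    assert len(estimates) == len(references)
    n = len(estimates)
    return math.sqrt(sum((e - r) ** 2 for e, r in zip(estimates, references)) / n)

# Crown widths in metres: segmentation output vs. manual delineation
# (illustrative numbers only).
estimated = [3.1, 2.8, 4.0]
reference = [3.0, 3.0, 3.9]
error = rmse(estimated, reference)
```

The same computation applies to the CPA values, with areas in m^2 instead of widths in metres.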
4. Robust Surface Reconstruction of Plant Leaves from 3D Point Clouds (Cited: 6)
Authors: Ryuhei Ando, Yuko Ozasa, Wei Guo. Plant Phenomics (SCIE), 2021, Issue 1, pp. 28-42 (15 pages)
The automation of plant phenotyping using 3D imaging techniques is indispensable. However, conventional methods for reconstructing the leaf surface from 3D point clouds have a trade-off between the accuracy of leaf surface reconstruction and the method's robustness against noise and missing points. To mitigate this trade-off, we developed a leaf surface reconstruction method that reduces the effects of noise and missing points while maintaining surface reconstruction accuracy, by capturing two components of the leaf (the shape and the distortion of that shape) separately using leaf-specific properties. This separation simplifies leaf surface reconstruction compared with conventional methods while increasing the robustness against noise and missing points. To evaluate the proposed method, we reconstructed leaf surfaces from 3D point clouds of leaves acquired from two crop species (soybean and sugar beet) and compared the results with those of conventional methods. The results showed that the proposed method robustly reconstructed the leaf surfaces despite the noise and missing points for the two different leaf shapes. To evaluate the stability of the leaf surface reconstructions, we also calculated the leaf surface areas of the target leaves over 14 consecutive days. The results derived from the proposed method showed less variation and fewer outliers compared with the conventional methods.
Keywords: SHAPE, SEPARATION, SUGAR
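Once a leaf surface has been reconstructed as a triangle mesh, its area is the sum of the triangle areas, each half the magnitude of an edge cross product. This is a generic sketch of that computation, not the paper's implementation; the unit-square example is illustrative.

```python
def triangle_area(p, q, r):
    """Area of a 3D triangle: |PQ x PR| / 2."""
    ux, uy, uz = q[0] - p[0], q[1] - p[1], q[2] - p[2]
    vx, vy, vz = r[0] - p[0], r[1] - p[1], r[2] - p[2]
    # Cross product of the two edge vectors.
    cx, cy, cz = uy * vz - uz * vy, uz * vx - ux * vz, ux * vy - uy * vx
    return 0.5 * (cx * cx + cy * cy + cz * cz) ** 0.5

def mesh_area(vertices, faces):
    """Total surface area of a triangle mesh (vertex list + index triples)."""
    return sum(triangle_area(vertices[a], vertices[b], vertices[c])
               for a, b, c in faces)

# Unit square in the XY plane split into two triangles -> area 1.0
verts = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]
faces = [(0, 1, 2), (0, 2, 3)]
area = mesh_area(verts, faces)
```

Tracking this area over consecutive days, as in the abstract's 14-day stability evaluation, only requires repeating the computation on each day's reconstructed mesh.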
5. Estimates of Maize Plant Density from UAV RGB Images Using Faster-RCNN Detection Model: Impact of the Spatial Resolution (Cited: 13)
Authors: K. Velumani, R. Lopez-Lozano, S. Madec, W. Guo, J. Gillet, A. Comar, F. Baret. Plant Phenomics (SCIE), 2021, Issue 1, pp. 181-196 (16 pages)
Early-stage plant density is an essential trait that determines the fate of a genotype under given environmental conditions and management practices. The use of RGB images taken from UAVs may replace traditional visual counting in fields with improved throughput, accuracy, and access to plant localization. However, high-resolution images are required to detect the small plants present at the early stages. This study explores the impact of image ground sampling distance (GSD) on the performance of maize plant detection at the three-to-five-leaves stage using the Faster-RCNN object detection algorithm. Data collected at high resolution (GSD ≈ 0.3 cm) over six contrasting sites were used for model training. Two additional sites, with images acquired at both high and low (GSD ≈ 0.6 cm) resolutions, were used to evaluate model performance. Results show that Faster-RCNN achieved very good plant detection and counting performance (rRMSE = 0.08) when native high-resolution images are used for both training and validation. Similarly, good performance was observed (rRMSE = 0.11) when the model is trained on synthetic low-resolution images, obtained by downsampling the native high-resolution training images, and applied to the synthetic low-resolution validation images. Conversely, poor performance is obtained when the model is trained on one spatial resolution and applied to another. Training on a mix of high- and low-resolution images yields very good performance on the native high-resolution (rRMSE = 0.06) and synthetic low-resolution (rRMSE = 0.10) images. However, very low performance is still observed on the native low-resolution images (rRMSE = 0.48), mainly due to their poor quality. Finally, an advanced super-resolution method based on a GAN (generative adversarial network), which introduces additional textural information derived from the native high-resolution images, was applied to the native low-resolution validation images. Results show a significant improvement (rRMSE = 0.22) compared with a bicubic upsampling approach, while remaining far below the performance achieved on native high-resolution images.
Keywords: RCNN, FASTER, IMAGE
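The rRMSE values quoted throughout this abstract are relative root-mean-square errors: the RMSE of the plant counts normalized by the mean of the observed counts. A minimal sketch, with made-up per-plot counts (not from the study):

```python
import math

def rrmse(estimated, observed):
    """Relative RMSE: RMSE of the estimates divided by the observed mean."""
    n = len(observed)
    rmse = math.sqrt(sum((e - o) ** 2 for e, o in zip(estimated, observed)) / n)
    return rmse / (sum(observed) / n)

# Plants counted per plot: detector output vs. visual counts
# (illustrative numbers only).
machine_counts = [98, 105, 110]
human_counts = [100, 100, 112]
score = rrmse(machine_counts, human_counts)
```

Because it is dimensionless, rRMSE allows direct comparison across the high-resolution, downsampled, and super-resolved image sets reported above.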
6. Easy MPE: Extraction of Quality Microplot Images for UAV-Based High-Throughput Field Phenotyping (Cited: 4)
Authors: Léa Tresch, Yue Mu, Atsushi Itoh, Akito Kaga, Kazunori Taguchi, Masayuki Hirafuji, Seishi Ninomiya, Wei Guo. Plant Phenomics, 2019, Issue 1, pp. 30-38 (9 pages)
Microplot extraction (MPE) is a necessary image processing step in unmanned aerial vehicle- (UAV-) based research on breeding fields. At present, it is performed manually using ArcGIS, QGIS, or other GIS-based software, but achieving the desired accuracy is time-consuming. We therefore developed an intuitive, easy-to-use semiautomatic program for MPE, called Easy MPE, to enable researchers and others to access reliable plot data from UAV images of whole fields under variable field conditions. The program uses four major steps: (1) binary segmentation, (2) microplot extraction, (3) production of *.shp files to enable further file manipulation, and (4) projection of individual microplots generated from the orthomosaic back onto the raw aerial UAV images to preserve image quality. Crop rows were successfully identified in all trial fields. The performance of the proposed method was evaluated by calculating the intersection-over-union (IOU) ratio between microplots determined manually and by Easy MPE: the average IOU (± SD) of all trials was 91% (± 3%).
Keywords: enable, image, UNION
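The intersection-over-union ratio used to evaluate Easy MPE measures how well an extracted microplot overlaps its manually delineated counterpart. For axis-aligned rectangular plots it can be sketched as below; the coordinates are illustrative, and real microplots may be arbitrary polygons requiring a polygon-clipping routine instead.

```python
def iou(a, b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    # Width and height of the overlap region (zero if the boxes are disjoint).
    iw = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    ih = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = iw * ih
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

# Manually delineated plot vs. automatically extracted plot
# (illustrative coordinates only).
manual = (0.0, 0.0, 2.0, 2.0)
extracted = (1.0, 1.0, 3.0, 3.0)
overlap = iou(manual, extracted)
```

An IOU of 1.0 means a perfect match; the 91% average reported in the abstract indicates near-complete overlap across all trials.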
7. A Weakly Supervised Deep Learning Framework for Sorghum Head Detection and Counting (Cited: 23)
Authors: Sambuddha Ghosa, Bangyou Zheng, Scott Chapman, Andries B. Potgieter, David R. Jordan, Xuemin Wang, Asheesh K. Singh, Arti Singh, Masayuki Hirafuji, Seishi Ninomiya, Baskar Ganapathysubramanian, Soumik Sarkar, Wei Guo. Plant Phenomics, 2019, Issue 1, pp. 1-14 (14 pages)
The yield of cereal crops such as sorghum (Sorghum bicolor L. Moench) depends on the distribution of crop heads in varying branching arrangements. Therefore, counting the head number per unit area is critical for plant breeders to correlate with the genotypic variation in a specific breeding field. However, measuring such phenotypic traits manually is an extremely labor-intensive process that suffers from low efficiency and human error. Moreover, the process is almost infeasible for large-scale breeding plantations or experiments. Machine learning-based approaches, such as deep convolutional neural network (CNN) based object detectors, are promising tools for efficient object detection and counting. However, a significant limitation of such deep learning-based approaches is that they typically require a massive amount of hand-labeled images for training, which is still a tedious process. Here, we propose an active learning-inspired weakly supervised deep learning framework for sorghum head detection and counting from UAV-based images. We demonstrate that it is possible to significantly reduce human labeling effort without compromising final model performance (R^2 between human count and machine count is 0.88) by using a semitrained CNN model (i.e., trained with limited labeled data) to perform synthetic annotation. In addition, we visualize key features that the network learns. This improves trustworthiness by enabling users to better understand and trust the decisions of the trained deep learning model.
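The R^2 of 0.88 between human and machine counts in this abstract is the coefficient of determination, computed by treating the human counts as reference values. A minimal sketch of that metric, with made-up per-image counts (not from the paper):

```python
def r_squared(machine, human):
    """Coefficient of determination of machine counts against human counts."""
    n = len(human)
    mean_h = sum(human) / n
    # Residual sum of squares: machine counts vs. the human reference.
    ss_res = sum((h - m) ** 2 for h, m in zip(human, machine))
    # Total sum of squares: spread of the human counts around their mean.
    ss_tot = sum((h - mean_h) ** 2 for h in human)
    return 1.0 - ss_res / ss_tot

# Sorghum heads counted per image: detector vs. human annotator
# (illustrative numbers only).
machine_counts = [11, 19, 31]
human_counts = [10, 20, 30]
agreement = r_squared(machine_counts, human_counts)
```

A value near 1.0 indicates the machine counts track the human counts closely; the 0.88 in the abstract was achieved while substantially reducing manual labeling effort.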