Funding: supported by the National Key Research and Development Program of China (2016YFD0100101-18), the National Natural Science Foundation of China (31770397, 31701317), and the Fundamental Research Funds for the Central Universities (2662017PY058).
Abstract: Rice panicle phenotyping is required in rice breeding for high yield and grain quality. To fully evaluate spikelet and kernel traits without threshing and hulling, we developed an integrated rice panicle phenotyping system based on X-ray and RGB scanning, together with a corresponding image analysis pipeline. We compared five methods of counting spikelets and found that Faster R-CNN achieved both high accuracy (R² of 0.99) and high speed. Faster R-CNN was also applied to indica and japonica classification, achieving 91% accuracy. The proposed integrated panicle phenotyping method offers benefits for rice functional genetics and breeding.
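The R² of 0.99 reported above is the coefficient of determination between manually counted and automatically detected spikelet numbers. As a minimal illustration of how such a metric is computed (generic sketch, not the authors' code; the sample counts below are invented):

```python
import numpy as np

def r_squared(y_true, y_pred):
    """Coefficient of determination (R^2) between observed and predicted values."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    ss_res = np.sum((y_true - y_pred) ** 2)         # residual sum of squares
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)  # total sum of squares
    return 1.0 - ss_res / ss_tot

# Hypothetical manual vs. detector-predicted spikelet counts per panicle
manual = [120, 95, 143, 110, 88]
detected = [118, 97, 140, 112, 90]
print(round(r_squared(manual, detected), 3))  # → 0.987
```

An R² close to 1 indicates that detected counts explain nearly all the variance in the manual counts, which is why it is a natural accuracy measure for counting tasks.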
Funding: supported by the National Natural Science Foundation of China (U21A20205), Key Projects of the Natural Science Foundation of Hubei Province (2021CFA059), the Fundamental Research Funds for the Central Universities (2021ZKPY006), and cooperative funding between Huazhong Agricultural University and the Shenzhen Institute of Agricultural Genomics (SZYJY2021005, SZYJY2021007).
Abstract: Self-occlusions are common in rice canopy images and strongly affect the accuracy of calculated panicle traits. Such interference can be largely eliminated if panicles are phenotyped in 3D, but research on 3D panicle phenotyping has been limited. Because existing 3D modeling techniques do not focus on specified parts of a target object, an efficient method for panicle modeling of large numbers of rice plants is lacking. This paper presents an automatic and nondestructive method for 3D panicle modeling. The proposed method integrates shoot rice reconstruction with shape from silhouette, 2D panicle segmentation with a deep convolutional neural network, and 3D panicle segmentation with ray tracing and supervoxel clustering. A multiview imaging system was built to acquire image sequences of rice canopies at a throughput of approximately 4 min per rice plant. The execution time of panicle modeling per rice plant using 90 images was approximately 26 min. The outputs of the algorithm for a single rice plant are a shoot rice model, a surface shoot rice model, a panicle model, and a surface panicle model, all represented by lists of spatial coordinates. Efficiency and performance were evaluated against the classical structure-from-motion algorithm. The results demonstrate that the proposed method reliably recovers the 3D shapes of rice panicles from multiview images and is readily adaptable to rice plants of diverse accessions and growth stages. The proposed algorithm is superior to structure from motion in texture preservation and computational efficiency. The sample images and an implementation of the algorithm are available online. This automatic, cost-efficient, and nondestructive method of 3D panicle modeling may be applied to high-throughput 3D phenotyping of large rice populations.
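The shape-from-silhouette step carves a voxel grid against binary plant silhouettes from multiple views: a voxel survives only if it projects inside every silhouette. A minimal sketch of the idea, using synthetic axis-aligned orthographic views rather than the calibrated multiview cameras of the actual system:

```python
import numpy as np

def carve_voxels(grid_shape, silhouettes):
    """Minimal shape-from-silhouette: keep a voxel only if its orthographic
    projection falls inside every binary silhouette.

    silhouettes: dict mapping projection axis (0, 1, or 2) to a 2D boolean
    mask whose shape matches the grid with that axis removed.
    """
    occupied = np.ones(grid_shape, dtype=bool)
    for axis, mask in silhouettes.items():
        # Extrude the 2D mask along its projection axis and intersect.
        occupied &= np.expand_dims(mask, axis=axis)
    return occupied

# Synthetic example: a 2x2x2 "plant" block inside an 8^3 volume,
# observed from three axis-aligned views.
true_obj = np.zeros((8, 8, 8), dtype=bool)
true_obj[3:5, 3:5, 3:5] = True
sils = {axis: true_obj.any(axis=axis) for axis in range(3)}

carved = carve_voxels((8, 8, 8), sils)
print(carved.sum())  # 8 voxels survive; the visual hull equals the cube here
```

With real perspective cameras each voxel center is projected through the calibrated camera matrix instead of along an axis, and concavities invisible in every silhouette remain filled, which is why the paper refines the hull with 2D segmentation, ray tracing, and supervoxel clustering.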
Funding: this work was supported by the National Key R&D Program of China (2023ZD04073), Sanya Yazhou Bay Science and Technology City (SCKJ-JYRC-2023-25), the National Natural Science Foundation of China (32360116), and the Research Project of the Collaborative Innovation Center of Hainan University (XTCX2022NYB01).
Abstract: Identification of fruit phenotypes is critical for understanding complex genetic traits. Computed tomography (CT) imaging enables the noninvasive acquisition of three-dimensional images of fruit interiors, providing a robust data foundation for phenotypic analysis. Accurate segmentation of internal fruit tissues is essential, as it directly influences the accuracy and reliability of the results, yet current methods are not optimized for the unique features of plant fruit images. This study introduces XFruitSeg, a general deep learning model for segmenting plant fruit CT images. The model uses a U-shaped encoder-decoder architecture and integrates multitask learning. A large-kernel convolutional network, RepLKNet, expands the receptive field for feature extraction. Multiscale skip connections and a deep supervision mechanism improve the model's capacity to learn features of various sizes, and a contour feature learning branch specifically targets the boundaries between tissues. An optimized composite loss function enhances the model's robustness on imbalanced classes. Additionally, a dataset named XrayFruitData was established, containing high-resolution images of twelve plant fruit varieties, with accurate annotations of orange, mangosteen, and durian fruits for model evaluation. Compared with four mainstream advanced models, XFruitSeg achieved superior segmentation performance on the orange, mangosteen, and durian datasets, with mean Dice coefficients of 95.21%, 93.24%, and 94.70% and mean intersection over union (mIoU) scores of 91.09%, 87.91%, and 90.35%, respectively. Extensive ablation experiments demonstrate the effectiveness of each component. The proposed XFruitSeg model is therefore beneficial for high-precision analysis of internal fruit phenotypic traits.
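The Dice coefficient and mIoU quoted above are standard overlap metrics between predicted and ground-truth label maps. A minimal sketch of how they are computed (generic metric code, not part of XFruitSeg; the toy label maps are invented):

```python
import numpy as np

def dice_and_iou(pred, target):
    """Per-class Dice coefficient and IoU for integer label maps."""
    classes = np.union1d(np.unique(pred), np.unique(target))
    dice, iou = {}, {}
    for c in classes:
        p, t = (pred == c), (target == c)
        inter = np.logical_and(p, t).sum()
        dice[c] = 2 * inter / (p.sum() + t.sum())   # 2|P∩T| / (|P|+|T|)
        iou[c] = inter / np.logical_or(p, t).sum()  # |P∩T| / |P∪T|
    return dice, iou

# Toy 4x4 label maps with classes 0 (background) and 1 (tissue)
pred   = np.array([[0,0,1,1], [0,1,1,1], [0,1,1,0], [0,0,0,0]])
target = np.array([[0,0,1,1], [0,1,1,0], [0,1,1,0], [0,0,0,0]])
d, i = dice_and_iou(pred, target)
print(round(d[1], 3), round(i[1], 3))  # → 0.923 0.857
```

Averaging the per-class scores over all classes and images yields the mean Dice and mIoU figures of the kind reported in the abstract; Dice is always at least as large as IoU for the same mask, which matches the ordering of the reported numbers.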