The blessing of Depth Anything: An almost unsupervised approach to crop segmentation with depth-informed pseudo labeling
Authors: Songliang Cao, Binghui Xu, Wei Zhou, Letian Zhou, Jiafei Zhang, Yuhui Zheng, Weijuan Hu, Zhiguo Han, Hao Lu. Plant Phenomics, 2025, Issue 1, pp. 49–65 (17 pages).
We present Depth-Informed Crop Segmentation (DepthCropSeg), an almost unsupervised crop segmentation approach that requires no manual pixel-level annotations. Crop segmentation is a fundamental vision task in agriculture, benefiting a number of downstream applications such as crop growth monitoring and yield estimation. Over the past decade, image-based crop segmentation approaches have shifted from classic color-based paradigms to recent deep learning-based ones. The latter, however, rely heavily on large amounts of data with high-quality manual annotation, so considerable human labor and time are spent. In this work, we leverage Depth Anything V2, a vision foundation model, to produce high-quality pseudo crop masks for training segmentation models. We compile a dataset of 17,199 images from six public plant segmentation sources, generating pseudo masks from depth maps after normalization and thresholding. After coarse-to-fine manual screening, 1,378 images with reliable masks are selected. We compare four semantic segmentation models and enhance the top-performing one with depth-informed two-stage self-training and depth-informed post-processing. To evaluate the feasibility and robustness of DepthCropSeg, we benchmark segmentation performance on 10 public crop segmentation test sets and a self-collected dataset covering in-field, laboratory, and unmanned aerial vehicle (UAV) scenarios. Experimental results show that DepthCropSeg achieves crop segmentation performance comparable to a fully supervised model trained with manually annotated data (86.91 vs. 87.10). For the first time, we demonstrate almost unsupervised, close-to-full-supervision crop segmentation.
Keywords: Crop segmentation; Plant phenotyping; Depth Anything; Segment Anything; Efficient labeling
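The core pseudo-labeling step described in the abstract — turning a depth map into a binary crop mask via normalization and thresholding — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, the min-max normalization scheme, the 0.5 cut-off, and the assumption that crops appear closer to the camera (larger relative depth values) are all illustrative assumptions, since the abstract does not specify these details.

```python
import numpy as np

def depth_to_pseudo_mask(depth: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Turn a raw depth map into a binary crop/background pseudo mask.

    Illustrative sketch of "normalization and thresholding":
    - min-max normalize the depth map to [0, 1]
    - threshold it to separate foreground (crop) from background

    `threshold=0.5` and "larger depth value = closer = crop" are
    assumptions for demonstration, not values from the paper.
    """
    d = depth.astype(np.float32)
    # Min-max normalization; epsilon guards against a constant depth map.
    d = (d - d.min()) / (d.max() - d.min() + 1e-8)
    # Pixels above the threshold are labeled crop (1), the rest background (0).
    return (d > threshold).astype(np.uint8)
```

In the paper's pipeline, masks produced this way from Depth Anything V2 depth maps are then manually screened coarse-to-fine before being used to train the segmentation models.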