Abstract: Monitoring and identifying the cuttings returned to the surface during drilling is a key means of sensing formation changes, detecting cavings in time, and mitigating the risk of wellbore instability, so fast, objective, automated cuttings identification matters for drilling safety and efficiency. At present, cuttings identification relies mainly on manual expert judgment, which is subjective, time-consuming, and labor-intensive. Based on cuttings images collected in the field, this paper proposes a cuttings identification model that combines the Segment Anything Model 2 (SAM2) with KMeans clustering to achieve accurate segmentation and automatic clustering of cuttings particles. An interactive selection function is also designed that lets engineers quickly pick out target cuttings, markedly improving visualization and identification efficiency. Experiments show that SAM2 performs well on cuttings image segmentation, improving segmentation accuracy by 3%-6% over existing mainstream methods. On field cuttings images from well SX in the Weiyuan structure, Sichuan, the model's clustering identification accuracy reaches 83.9%, in close agreement with manual annotation. Applied to a typical well interval, the model identified four main cuttings classes whose proportions differed little from manual classification. The results show that the proposed method can effectively separate cuttings of different particle sizes and reasonably predict the proportion of each lithology, helping engineers quickly determine formation lithology and improving the objectivity and real-time performance of drilling-process monitoring.
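The pipeline this abstract describes (SAM2 segmentation followed by KMeans clustering of per-particle features) can be sketched as below. This is a minimal illustration, not the authors' code: the checkpoint and config names, the image path, the cluster count of 4 (matching the class count reported above), and the choice of area-plus-mean-color features are all assumptions, and it presumes the reference sam2 package and scikit-learn are installed.

```python
# Sketch: segment cuttings with SAM2's automatic mask generator, then
# cluster one feature vector per particle with KMeans.
import numpy as np
import cv2
from sklearn.cluster import KMeans
from sam2.build_sam import build_sam2
from sam2.automatic_mask_generator import SAM2AutomaticMaskGenerator

# Config/checkpoint paths are assumptions; defaults expect a CUDA device.
model = build_sam2("configs/sam2.1/sam2.1_hiera_l.yaml",
                   "checkpoints/sam2.1_hiera_large.pt")
generator = SAM2AutomaticMaskGenerator(model)

image = cv2.cvtColor(cv2.imread("cuttings.jpg"), cv2.COLOR_BGR2RGB)
masks = generator.generate(image)  # list of dicts with boolean "segmentation"

# One feature vector per particle: mask area plus mean color inside the mask.
feats = []
for m in masks:
    seg = m["segmentation"]
    mean_rgb = image[seg].mean(axis=0)      # average color of the particle
    feats.append([m["area"], *mean_rgb])
feats = np.array(feats, dtype=np.float32)

# Normalize each feature column, then cluster into 4 lithology groups.
feats = (feats - feats.mean(0)) / (feats.std(0) + 1e-8)
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(feats)
print({k: int((labels == k).sum()) for k in range(4)})  # particles per class
```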
Abstract: This study investigates the application and performance of the Segment Anything Model 2 (SAM2) in the challenging task of video camouflaged object segmentation (VCOS). VCOS involves detecting objects that blend seamlessly into their surroundings in videos because of similar colors and textures and poor lighting conditions; compared to objects in normal scenes, camouflaged objects are much more difficult to detect. SAM2, a video foundation model, has shown potential in various tasks, but its effectiveness in dynamic camouflaged scenarios remains under-explored. We present a comprehensive study of SAM2's ability in VCOS. First, we assess SAM2's performance on camouflaged video datasets using different models and prompts (click, box, and mask). Second, we explore the integration of SAM2 with existing multimodal large language models (MLLMs) and VCOS methods. Third, we specifically adapt SAM2 by fine-tuning it on a video camouflage dataset. Our comprehensive experiments demonstrate that SAM2 has excellent zero-shot ability to detect camouflaged objects in videos, and that this ability can be further improved by adjusting SAM2's parameters specifically for VCOS.
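As a concrete illustration of the click-prompt setting evaluated here, the sketch below runs SAM2's video predictor zero-shot with a single positive click on the first frame and propagates the mask through the clip. It assumes the reference sam2 package API; the checkpoint names, frame directory, and click coordinates are placeholders.

```python
# Sketch: zero-shot VCOS with SAM2's video predictor and one click prompt.
import numpy as np
import torch
from sam2.build_sam import build_sam2_video_predictor

predictor = build_sam2_video_predictor(
    "configs/sam2.1/sam2.1_hiera_l.yaml",      # assumed config/checkpoint
    "checkpoints/sam2.1_hiera_large.pt",
)

with torch.inference_mode():
    state = predictor.init_state(video_path="camouflage_frames/")  # JPEG frames
    # Click prompt: one foreground point (label 1) on the camouflaged object.
    predictor.add_new_points_or_box(
        inference_state=state, frame_idx=0, obj_id=1,
        points=np.array([[420, 260]], dtype=np.float32),
        labels=np.array([1], dtype=np.int32),
    )
    # Propagate the mask through the whole clip, frame by frame.
    masks = {}
    for frame_idx, obj_ids, mask_logits in predictor.propagate_in_video(state):
        masks[frame_idx] = (mask_logits[0, 0] > 0).cpu().numpy()  # binary mask
```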
Abstract: To overcome the shortcomings of conventional characterization of the cement paste flow-spread process, namely manual data processing, cumbersome operation, and long processing times, a high-precision computer-vision method for characterizing the flow-spread process is proposed. With the Segment Anything Model 2 (SAM2) at its core, the method uses a You Only Look Once v11 (YOLOv11) model to determine prompt points and reduces error through a dual geometric correction combining perspective transformation and light-refraction correction. The results show that the computed final spread agrees closely with experimental measurements (mean absolute error < 1 mm), while curves of spread and spread rate over time are obtained as well. Analyzing this dynamic process information supports a more complete characterization of paste flow behavior and provides a data basis for inverting the paste's rheological properties.
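Of the two geometric corrections, the perspective transformation is standard and easy to illustrate. The sketch below maps four reference marks on the base plate to a metric top-down view with OpenCV and then measures an equivalent-circle spread diameter from the corrected segmentation mask; all coordinates, the plate size, and the output resolution are invented for illustration, and the refraction correction is omitted.

```python
# Sketch: perspective correction of a SAM2 spread mask, then diameter in mm.
import numpy as np
import cv2

# Pixel corners of an assumed 300 mm x 300 mm calibration square seen at an angle.
src = np.float32([[412, 238], [1485, 251], [1562, 1043], [337, 1021]])
mm_per_px = 0.25                               # assumed output resolution
side = int(300 / mm_per_px)
dst = np.float32([[0, 0], [side, 0], [side, side], [0, side]])

H = cv2.getPerspectiveTransform(src, dst)      # homography to the plate plane

mask = cv2.imread("paste_mask.png", cv2.IMREAD_GRAYSCALE)  # segmentation output
warped = cv2.warpPerspective(mask, H, (side, side))

# Equivalent-circle diameter of the paste footprint, in millimetres.
area_px = int((warped > 127).sum())
diameter_mm = 2.0 * np.sqrt(area_px / np.pi) * mm_per_px
print(f"spread = {diameter_mm:.1f} mm")
```

Repeating this per video frame yields the spread-versus-time curve the abstract mentions; its numerical derivative gives the spread rate.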
Funding: Supported by the National Key Research and Development Program of China (Grant No. 2023YFD1901003) and the Strategic Priority Research Program of the Chinese Academy of Sciences (Grant No. XDA28120402).
Abstract: Advanced plant phenotyping technologies are vital for trait improvement and accelerating intelligent breeding. Due to the species diversity of plants, existing methods heavily rely on large-scale, high-precision manually annotated data. For self-occluded objects at the grain level, unsupervised methods often prove ineffective. This study proposes IPENS, an interactive unsupervised multi-target point cloud extraction method. It utilizes radiance field information to lift 2D masks, segmented by SAM2 (Segment Anything Model 2), into 3D space for target point cloud extraction. A multi-target collaborative optimization strategy addresses the challenge of segmenting multiple targets from a single interaction. On a rice dataset, IPENS achieves a grain-level segmentation mean Intersection over Union (mIoU) of 63.72%. For phenotypic trait estimation, it achieves a grain voxel volume coefficient of determination R² = 0.7697 (Root Mean Square Error, RMSE = 0.0025), leaf surface area R² = 0.84 (RMSE = 18.93), and leaf length and width prediction accuracies of R² = 0.97 and R² = 0.87 (RMSE = 1.49 and 0.21). On a wheat dataset, IPENS further improves segmentation performance to an mIoU of 89.68%, with exceptional phenotypic estimation results: panicle voxel volume R² = 0.9956 (RMSE = 0.0055), leaf surface area R² = 1.00 (RMSE = 0.67), and leaf length and width predictions reaching R² = 0.99 and R² = 0.92 (RMSE = 0.23 and 0.15). Without requiring annotated data, IPENS rapidly extracts grain-level point clouds for multiple targets within 3 min using single-round image interactions. These features make IPENS a high-quality, non-invasive phenotypic extraction solution for rice and wheat, offering significant potential to enhance intelligent breeding.
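IPENS lifts 2D SAM2 masks into 3D using radiance-field information. As a simplified stand-in for that step, the sketch below extracts a target point cloud by multi-view voting: a point is kept if its projection lands inside the mask in enough calibrated views. The function name, inputs, and voting rule are illustrative assumptions, not the IPENS implementation.

```python
# Sketch: lift per-view 2D masks to a 3D point cloud by multi-view voting.
import numpy as np

def lift_masks(points, cams, masks, min_votes):
    """points: (N,3) candidate cloud; cams: list of 3x4 projection matrices
    P = K[R|t]; masks: list of (H,W) boolean SAM2 masks, one per view."""
    votes = np.zeros(len(points), dtype=int)
    pts_h = np.hstack([points, np.ones((len(points), 1))])  # homogeneous coords
    for P, mask in zip(cams, masks):
        uvw = pts_h @ P.T
        uv = uvw[:, :2] / uvw[:, 2:3]                        # pixel coordinates
        u = uv[:, 0].round().astype(int)
        v = uv[:, 1].round().astype(int)
        inside = (u >= 0) & (u < mask.shape[1]) & (v >= 0) & (v < mask.shape[0])
        hit = np.zeros(len(points), dtype=bool)
        hit[inside] = mask[v[inside], u[inside]]             # mask lookup
        votes += hit
    return points[votes >= min_votes]  # points consistently inside the masks
```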
Abstract: Because conventional rolling-bearing fault-diagnosis methods struggle to classify faults accurately and efficiently, a rolling-bearing fault classification method combining the symmetrized dot pattern (SDP) with an improved SAM-MobileNetv2 is proposed. First, the bearing vibration signal is converted by the SDP algorithm into a two-dimensional image rich in feature information. The image is then fed into the improved SAM-MobileNetv2 network, which extracts and classifies the fault features. In the improved network, the adaptive activation function ACON (Activate or Not) replaces the ReLU6 activation in SAM-MobileNetv2 to improve classification performance. Finally, the model is compared against several other network models. Experiments show that the model classifies rolling-bearing faults accurately and efficiently, achieving 99.5% accuracy on the Case Western Reserve University bearing fault dataset and 97.2% on the University of Ottawa bearing fault dataset.
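The ACON-for-ReLU6 substitution this abstract describes can be sketched in PyTorch as below. The ACON-C form follows Ma et al.'s "Activate or Not" paper; applying it to torchvision's stock MobileNetV2 (rather than the authors' SAM-MobileNetv2) is an assumption for illustration.

```python
# Sketch: ACON-C activation, swapped in for every ReLU6 in MobileNetV2.
import torch
import torch.nn as nn
from torchvision.models import mobilenet_v2

class AconC(nn.Module):
    """ACON-C: (p1 - p2) * x * sigmoid(beta * (p1 - p2) * x) + p2 * x,
    with learnable per-channel p1, p2, and beta."""
    def __init__(self, channels: int):
        super().__init__()
        self.p1 = nn.Parameter(torch.randn(1, channels, 1, 1))
        self.p2 = nn.Parameter(torch.randn(1, channels, 1, 1))
        self.beta = nn.Parameter(torch.ones(1, channels, 1, 1))

    def forward(self, x):
        d = (self.p1 - self.p2) * x
        return d * torch.sigmoid(self.beta * d) + self.p2 * x

# Replace each ReLU6 with ACON-C, taking the channel count from the
# BatchNorm layer that precedes the activation in each conv block.
net = mobilenet_v2()
for module in net.modules():
    last_ch = None
    for name, child in module.named_children():
        if isinstance(child, nn.BatchNorm2d):
            last_ch = child.num_features
        elif isinstance(child, nn.ReLU6) and last_ch is not None:
            setattr(module, name, AconC(last_ch))
```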