Abstract: To address the difficulty that traditional rolling-bearing fault diagnosis methods have in classifying faults accurately and efficiently, a rolling-bearing fault classification method combining the Symmetrized Dot Pattern (SDP) with an improved SAM-MobileNetv2 is proposed. First, the bearing vibration signal is converted by the SDP algorithm into a two-dimensional image rich in feature information. The image is then fed into the improved SAM-MobileNetv2 network, which extracts and classifies the fault features. In the improved network, the adaptive activation function ACON (Activate or Not) replaces the ReLU6 activation of SAM-MobileNetv2 to improve classification performance. Finally, the model is compared with several other network models. Experimental results show that the model classifies rolling-bearing faults accurately and efficiently, achieving 99.5% accuracy on the Case Western Reserve University bearing fault data and 97.2% on the University of Ottawa bearing fault data.
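The abstract does not give the SDP equations, but the transform it refers to is well known: each signal sample becomes a polar point whose radius comes from the current amplitude and whose angle comes from a lagged amplitude, mirrored around several symmetry axes. A minimal sketch in NumPy, with the function name and the default lag, rotation, and amplification angles chosen for illustration only:

```python
import numpy as np

def sdp_transform(x, lag=1, theta_deg=60.0, zeta_deg=50.0, n_mirrors=6):
    """Map a 1-D vibration signal to Symmetrized Dot Pattern polar points.

    Sample i yields radius r(i) from its normalised amplitude and a pair of
    mirrored angles (one per symmetry axis) derived from the lagged sample
    x[i + lag]. Returns an (M, 2) array of (radius, angle_rad) points.
    """
    x = np.asarray(x, dtype=float)
    x_min, x_max = x.min(), x.max()
    norm = (x - x_min) / (x_max - x_min)       # normalised amplitude in [0, 1]
    r = norm[:-lag]                            # radius from the current sample
    ang = norm[lag:] * np.deg2rad(zeta_deg)    # angular offset from lagged sample
    pts = []
    for k in range(n_mirrors):
        axis = np.deg2rad(k * 360.0 / n_mirrors + theta_deg)
        pts.append(np.stack([r, axis + ang], axis=1))   # petal on one side
        pts.append(np.stack([r, axis - ang], axis=1))   # mirrored petal
    return np.concatenate(pts, axis=0)
```

Rendering these points as a polar scatter plot produces the snowflake-like image that the paper feeds into the classification network.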
Abstract: To overcome the shortcomings of traditional characterization of the cement-paste flow-spreading process, namely manual data processing, cumbersome operation, and long processing times, a high-precision, computer-vision-based method for characterizing the flow-spreading process is proposed. With the Segment Anything Model 2 (SAM2) at its core, the method uses a You Only Look Once v11 (YOLOv11) model to determine prompt points, and reduces error through a dual geometric correction combining perspective transformation and light-refraction compensation. Results show that the computed final spread agrees closely with experimental measurements (mean absolute error < 1 mm), while also yielding curves of spread and spreading rate over time. Analysis of this dynamic process information supports a more complete characterization of paste flow behavior and provides a data basis for inverting the paste's rheological properties.
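The abstract names perspective transformation as one of the two correction steps but does not detail it. The standard ingredient is a four-point homography mapping image pixels to metric table coordinates (the same computation OpenCV's `getPerspectiveTransform` performs); a self-contained sketch using only NumPy, with illustrative function names:

```python
import numpy as np

def homography_from_points(src, dst):
    """Estimate the 3x3 perspective matrix H mapping four source pixels to
    four destination (metric) points, so dst ~ H @ src in homogeneous
    coordinates. With exactly four correspondences the 8x8 linear system
    is determined and solved directly (fixing H[2, 2] = 1)."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(h, 1.0).reshape(3, 3)

def apply_homography(H, pt):
    """Map one (x, y) point through H, including the perspective divide."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return np.array([x / w, y / w])
```

Once the four corners of a reference square on the test plate are located in the image, every contour point of the spreading paste can be mapped into millimetres before the spread diameter is measured.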
Abstract: This study investigates the application and performance of the Segment Anything Model 2 (SAM2) in the challenging task of video camouflaged object segmentation (VCOS). VCOS involves detecting objects that blend seamlessly into their surroundings in videos due to similar colors and textures and poor lighting conditions. Compared to objects in normal scenes, camouflaged objects are much more difficult to detect. SAM2, a video foundation model, has shown potential in various tasks, but its effectiveness in dynamic camouflaged scenarios remains under-explored. This study presents a comprehensive study of SAM2's ability in VCOS. First, we assess SAM2's performance on camouflaged video datasets using different models and prompts (click, box, and mask). Second, we explore the integration of SAM2 with existing multimodal large language models (MLLMs) and VCOS methods. Third, we adapt SAM2 specifically by fine-tuning it on a video camouflage dataset. Our comprehensive experiments demonstrate that SAM2 has excellent zero-shot ability to detect camouflaged objects in videos, and we show that this ability can be further improved by adjusting SAM2's parameters specifically for VCOS.