Abstract: Existing wildlife recognition methods rely mainly on static datasets and struggle to accommodate dynamic species migration and the recognition of newly added categories, resulting in low monitoring efficiency. To address this problem, we propose a multi-granularity prompt-driven method for wildlife recognition (MGP-WILD). A cloud-side large language model generates hierarchical semantic descriptions (coarse-grained biological taxonomy plus fine-grained morphological features), while edge nodes collaboratively maintain a dynamic knowledge table. Specifically, MGP-WILD uses the large language model to generate multi-granularity text prompts; compared with conventional single-granularity prompting, this multi-granularity semantic description generation achieves deep fusion of coarse- and fine-grained features and, combined with the cross-modal alignment capability of vision-language models, enables accurate zero-shot recognition. Experimental results show that the method yields substantial improvements on multiple datasets and exhibits strong adaptability, particularly in open-set recognition tasks. The system has been successfully deployed for wildlife habitat protection in Qinghai, where an animal image dataset of real-world scenes was constructed, providing an innovative technical paradigm for biodiversity conservation in ecologically fragile regions. The code and part of the dataset will be released on GitHub.
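As a rough illustration of the prompt-driven zero-shot pipeline described above, the sketch below scores a camera-trap image against multi-granularity species prompts with an off-the-shelf CLIP model via Hugging Face transformers. The prompt texts, the image path, and the simple averaging of coarse- and fine-grained scores are placeholders for illustration; in MGP-WILD the prompts are generated by a cloud-side LLM and the fusion scheme is the paper's own.

```python
# Minimal sketch: multi-granularity prompts scored with CLIP for zero-shot recognition.
# Prompt texts, species list, image path, and score averaging are illustrative assumptions.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Hypothetical multi-granularity prompts for two candidate species.
prompts = {
    "snow leopard": [
        "a photo of a large felid, order Carnivora",               # coarse: taxonomy
        "a big cat with pale grey fur and dark rosettes",          # fine: morphology
    ],
    "Tibetan antelope": [
        "a photo of a bovid, order Artiodactyla",
        "a slender antelope with long straight horns and a light brown coat",
    ],
}

image = Image.open("camera_trap_frame.jpg")  # placeholder path

scores = {}
with torch.no_grad():
    for species, texts in prompts.items():
        inputs = processor(text=texts, images=image, return_tensors="pt", padding=True)
        out = model(**inputs)
        # logits_per_image: (1, num_prompts); average across granularities.
        scores[species] = out.logits_per_image.mean().item()

print(max(scores, key=scores.get))
```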
Abstract: In multimodal learning, Vision-Language Models (VLMs) have become a critical research focus, enabling the integration of textual and visual data. These models have shown significant promise across various natural language processing tasks, such as visual question answering, and computer vision applications, including image captioning and image-text retrieval, highlighting their adaptability to complex, multimodal datasets. In this work, we review the landscape of Bootstrapping Language-Image Pre-training (BLIP) and other VLM techniques. A comparative analysis is conducted to assess VLMs' strengths, limitations, and applicability across tasks, while examining challenges such as scalability, data quality, and fine-tuning complexity. The work concludes by outlining potential future directions in VLM research, focusing on enhancing model interpretability, addressing ethical implications, and advancing multimodal integration in real-world applications.
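As a minimal illustration of one capability surveyed above (image captioning with BLIP), the sketch below uses the public Salesforce checkpoint through Hugging Face transformers. The image path is a placeholder, and the snippet is not tied to any particular work reviewed here.

```python
# Minimal BLIP image-captioning sketch using the public Hugging Face checkpoint.
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base")

image = Image.open("example.jpg").convert("RGB")  # placeholder path
inputs = processor(images=image, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=30)
print(processor.decode(out[0], skip_special_tokens=True))
```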
Abstract: The nanohardness of Hastelloy N alloy irradiated with xenon ions at room temperature was measured using the continuous stiffness measurement mode of a nanoindenter. The results show that the nanohardness of the irradiated samples is consistently higher than that of the unirradiated samples, and that it saturates over the dose range of 0.5-3.0 dpa. Based on the Nix-Gao model, the indentation size effects of the unirradiated and irradiated samples were separated, and the measured nanohardness was simulated with a VLM (volume law of mixture) model. Because the plastically affected zone comes to contain both the irradiation-damaged layer and the substrate as the indentation depth increases, an "interface parameter" (χ) was introduced into the VLM model to correct the deformation of the substrate; the improved model reproduces the nanoindentation results more closely.
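For reference, the standard Nix-Gao indentation size effect relation and a generic volume-law-of-mixtures form are sketched below, where H_0 is the depth-independent (bulk) hardness, h* a characteristic length, h the indentation depth, and f_irr the volume fraction of the plastic zone lying within the irradiated layer. The χ-modified substrate term of the improved model described in the abstract is not reproduced here.

```latex
% Nix-Gao indentation size effect (standard form)
\[
  \frac{H}{H_0} = \sqrt{1 + \frac{h^{*}}{h}}
\]
% Generic volume law of mixtures for a damaged layer on a substrate
% (illustrative form; the paper's chi-corrected substrate term is omitted)
\[
  H_{\mathrm{VLM}}(h) = f_{\mathrm{irr}}(h)\, H_{\mathrm{irr}}(h)
    + \bigl[ 1 - f_{\mathrm{irr}}(h) \bigr]\, H_{\mathrm{sub}}(h)
\]
```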
Abstract: We present a novel framework, CLIP-SP, and a novel adaptive prompt method to leverage pre-trained knowledge from CLIP for scene parsing. Our approach addresses the limitations of DenseCLIP, which demonstrates the superior image segmentation provided by CLIP pre-trained models over ImageNet pre-trained models, but struggles with rough pixel-text score maps for complex scene parsing. We argue that, because they contain all textual information in a dataset, the pixel-text score maps, i.e., dense prompts, are inevitably mixed with noise. To overcome this challenge, we propose a two-step method. First, we extract visual and language features and perform multi-label classification to identify the most likely categories in the input images. Second, based on the top-k categories and confidence scores, our method generates scene tokens, which can be treated as adaptive prompts for implicit modeling of scenes, and incorporates them into the visual features fed into the decoder for segmentation. Our method imposes a constraint on prompts and suppresses the probability of irrelevant categories appearing in the scene parsing results. Our method achieves competitive performance, limited by the available visual-language pre-trained models. Our CLIP-SP performs 1.14% better (in terms of mIoU) than DenseCLIP on ADE20K, using a ResNet-50 backbone.
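A schematic sketch of the two-step adaptive prompting described above follows; the module names, tensor shapes, and confidence-weighted fusion are illustrative assumptions and do not reproduce the official CLIP-SP implementation.

```python
# Schematic two-step adaptive prompting: multi-label classification, then
# top-k scene tokens prepended to the visual tokens fed to the decoder.
# Shapes, modules, and the weighting scheme are assumptions for illustration.
import torch
import torch.nn as nn

class AdaptivePromptHead(nn.Module):
    def __init__(self, num_classes: int, dim: int = 512, top_k: int = 5):
        super().__init__()
        self.top_k = top_k
        self.classifier = nn.Linear(dim, num_classes)       # multi-label head
        self.class_embed = nn.Embedding(num_classes, dim)   # one token per class

    def forward(self, image_feat: torch.Tensor, visual_tokens: torch.Tensor):
        # Step 1: multi-label classification on the pooled image feature.
        probs = torch.sigmoid(self.classifier(image_feat))   # (B, C)
        conf, idx = probs.topk(self.top_k, dim=-1)            # (B, k)

        # Step 2: build scene tokens from the top-k classes, weighted by
        # confidence, and prepend them to the visual tokens for the decoder.
        scene_tokens = self.class_embed(idx) * conf.unsqueeze(-1)  # (B, k, D)
        return torch.cat([scene_tokens, visual_tokens], dim=1)

head = AdaptivePromptHead(num_classes=150)      # e.g. ADE20K has 150 classes
image_feat = torch.randn(2, 512)                # pooled CLIP image feature
visual_tokens = torch.randn(2, 196, 512)        # dense visual tokens
decoder_input = head(image_feat, visual_tokens)
print(decoder_input.shape)                      # torch.Size([2, 201, 512])
```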