Abstract: Panoramic images, offering a 360-degree view, are essential in virtual reality (VR) and augmented reality (AR), enhancing realism with high-quality textures. However, acquiring complete and high-quality panoramic textures is challenging. This paper introduces a method using generative adversarial networks (GANs) and the contrastive language-image pretraining (CLIP) model to restore and control texture in panoramic images. The GAN model captures complex structures and maintains consistency, while CLIP enables fine-grained texture control via semantic text-image associations. GAN inversion optimizes latent codes for precise texture details. The resulting low dynamic range (LDR) images are converted to high dynamic range (HDR) using the Blender engine for seamless texture blending. Experimental results demonstrate the effectiveness and flexibility of this method in panoramic texture restoration and generation.
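A minimal sketch of the CLIP-guided GAN inversion step described above, assuming a pretrained panorama generator `G` (a hypothetical placeholder, not the paper's model) and the openai `clip` package; the mask, prompt, loss weights, and optimizer settings are illustrative assumptions:

```python
import torch
import torch.nn.functional as F
import clip  # openai CLIP package

device = "cuda" if torch.cuda.is_available() else "cpu"
clip_model, _ = clip.load("ViT-B/32", device=device)
clip_model = clip_model.float()  # keep everything in fp32 for simplicity

# CLIP's standard input normalization constants.
CLIP_MEAN = torch.tensor([0.48145466, 0.4578275, 0.40821073]).view(1, 3, 1, 1).to(device)
CLIP_STD = torch.tensor([0.26862954, 0.26130258, 0.27577711]).view(1, 3, 1, 1).to(device)

def invert_with_clip(G, target, mask, prompt, steps=300, lr=0.05, clip_weight=0.5):
    """Optimize a latent code so G(w) matches the observed pixels and the text prompt.

    G:      hypothetical pretrained generator, latent (1, G.latent_dim) -> image in [-1, 1]
    target: observed panorama (1, 3, H, W) in [-1, 1]
    mask:   1 where pixels are known, 0 where texture must be restored
    """
    with torch.no_grad():
        tokens = clip.tokenize([prompt]).to(device)
        text_feat = F.normalize(clip_model.encode_text(tokens), dim=-1)

    w = torch.randn(1, G.latent_dim, device=device, requires_grad=True)  # latent code
    opt = torch.optim.Adam([w], lr=lr)

    for _ in range(steps):
        img = G(w)
        rec = F.l1_loss(img * mask, target * mask)              # fidelity on observed pixels
        clip_in = F.interpolate((img + 1) / 2, size=224, mode="bilinear")
        clip_in = (clip_in - CLIP_MEAN) / CLIP_STD
        img_feat = F.normalize(clip_model.encode_image(clip_in), dim=-1)
        clip_loss = 1 - (img_feat * text_feat).sum()            # text-image agreement
        loss = rec + clip_weight * clip_loss
        opt.zero_grad()
        loss.backward()
        opt.step()
    return w.detach()
```

The optimized latent would then be decoded once more to an LDR panorama before the HDR conversion step mentioned in the abstract.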
Abstract: Global scientific and technological innovation is currently developing rapidly and becoming highly integrated. Accurately identifying disruptive technology topics to drive comprehensive innovation has become a key driver of scientific and technological progress and economic growth. However, traditional methods for identifying disruptive technology topics rely mainly on single-modality data and therefore have clear limitations. This paper builds fused feature vectors of news text and images based on the CLIP (contrastive language-image pre-training) and LDAGV (linear discriminant analysis & global vectors for word representation) models, and applies iterative k-means clustering combined with three disruptive-technology topic indicators for screening, achieving both the fusion of multimodal information and the accurate identification of topics. Taking the new energy field as an example, the feasibility and effectiveness of the model for identifying disruptive technology topics are verified. Compared with single-modality models, the multimodal information fusion model shows a clear advantage in identifying disruptive technology topics.
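A minimal sketch of the fuse-then-cluster idea, assuming CLIP image embeddings and LDAGV-style text vectors have already been extracted; the concatenation-based fusion, the cluster count, the indicator placeholders, and the scikit-learn calls are illustrative assumptions, not the paper's pipeline:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import normalize

def fuse_and_cluster(clip_image_feats, text_feats, n_topics=20, random_state=0):
    """Concatenate L2-normalized image and text features and cluster them into topics.

    clip_image_feats: (N, D_img) CLIP embeddings of news images
    text_feats:       (N, D_txt) text vectors (e.g. topic weights plus GloVe averages)
    """
    fused = np.hstack([normalize(clip_image_feats), normalize(text_feats)])
    km = KMeans(n_clusters=n_topics, n_init=10, random_state=random_state).fit(fused)
    return km.labels_, km.cluster_centers_

# Hypothetical screening step: keep only clusters whose three indicator scores
# (stand-ins for the paper's disruptive-technology indicators) pass their thresholds.
def screen_topics(indicator_scores, thresholds=(0.5, 0.5, 0.5)):
    return [topic for topic, scores in indicator_scores.items()
            if all(s >= t for s, t in zip(scores, thresholds))]
```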
Abstract: Video-text retrieval (VTR) is an essential task in multimodal learning, aiming to bridge the semantic gap between visual and textual data. Effective video frame sampling plays a crucial role in improving retrieval performance, as it determines the quality of the visual content representation. Traditional sampling methods, such as uniform sampling and optical flow-based techniques, often fail to capture the full semantic range of videos, leading to redundancy and inefficiency. In this work, we propose CLIP4Video-Sampling, a global semantics-guided multi-granularity frame sampling strategy for video-text retrieval, designed to optimize both computational efficiency and retrieval accuracy. By integrating multi-scale global and local temporal sampling and leveraging the feature extraction capabilities of the CLIP (Contrastive Language-Image Pre-training) model, our method significantly outperforms existing approaches in both zero-shot and fine-tuned video-text retrieval on popular datasets. CLIP4Video-Sampling reduces redundancy, ensures keyframe coverage, and serves as an adaptable pre-processing module for multimodal models.
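The abstract does not spell out the sampling algorithm itself; the sketch below illustrates the general idea with one assumed strategy: embed coarsely sampled frames with CLIP, then greedily keep frames that are far apart in feature space, so near-duplicate frames are dropped while distinct keyframes stay covered. This is an illustration, not the CLIP4Video-Sampling algorithm.

```python
import torch
import torch.nn.functional as F

def select_keyframes(frame_feats: torch.Tensor, k: int) -> list:
    """Pick k diverse, representative frames.

    frame_feats: (N, D) CLIP image features of N coarsely sampled candidate frames.
    Returns sorted indices of the selected frames.
    """
    feats = F.normalize(frame_feats, dim=-1)
    # Start from the frame closest to the global (mean) video feature.
    global_feat = F.normalize(feats.mean(dim=0, keepdim=True), dim=-1)
    chosen = [int((feats @ global_feat.T).squeeze(1).argmax())]
    for _ in range(k - 1):
        sims = feats @ feats[chosen].T            # (N, |chosen|) cosine similarities
        novelty = 1 - sims.max(dim=1).values      # how unlike the chosen set each frame is
        novelty[chosen] = -1                      # never re-pick a selected frame
        chosen.append(int(novelty.argmax()))
    return sorted(chosen)
```

In practice the selected indices would feed the downstream visual encoder; the farthest-point rule is only one way to trade redundancy against coverage.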
Abstract: We modified a three-dimensional cerebral aneurysm model for surgical simulation and educational demonstration. Novel models were made showing perforating arteries arising around the aneurysm. Because information about perforating arteries is difficult to obtain from individual radiological data sets, the perforators are reproduced from established anatomical knowledge rather than from patient-specific data. Owing to their fragility, the perforating arteries are attached to the model using hard materials. Hollow models, in turn, are useful for practicing clip application, and we made one for practicing the application of fenestrated clips to paraclinoid internal carotid artery aneurysms. Situating aneurysm models within the fissure of a brain model simulates the real surgical field and is helpful for educational demonstrations.
Abstract: Video-text retrieval, a multimodal retrieval technique widely used in everyday applications, has attracted increasing attention from researchers. Recently, most video-text work improves cross-modal retrieval between text and video by exploiting the vision-language matching relations learned by large-scale pre-trained models. However, these methods ignore the fact that both video and text data are composed of individual events. If the fine-grained similarity relations between video events and text events could be captured, the model could compute more accurate semantic similarity between text and video and thereby further improve cross-modal retrieval. We therefore propose a CLIP-based multi-event representation generation method for video-text retrieval (CLIPMERG). First, the video encoder (ViT) and text encoder (Transformer) of the large-scale image-text pre-trained model CLIP convert the video and text into a sequence of video frame tokens and a sequence of word tokens, respectively. Next, a video event generator (and, correspondingly, a text event generator) converts the frame token sequence (word token sequence) into k video event representations (k text event representations). Finally, the semantic similarity between video and text is defined by mining the fine-grained relations between the video and text event representations. Experimental results on three widely used public video-text retrieval datasets, MSR-VTT, DiDeMo, and LSMDC, show that the proposed CLIPMERG outperforms existing video-text retrieval methods.
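A minimal sketch of the two pieces described above: an event generator that maps a token sequence to k event representations, and a fine-grained video-text similarity computed over those events. The query-based cross-attention and the max-then-mean matching rule are illustrative assumptions, not the CLIPMERG architecture itself.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EventGenerator(nn.Module):
    """Map a token sequence (B, L, D) to k event representations (B, k, D)."""

    def __init__(self, dim: int, num_events: int, num_heads: int = 8):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(num_events, dim) * 0.02)  # learnable event queries
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        q = self.queries.unsqueeze(0).expand(tokens.size(0), -1, -1)
        events, _ = self.attn(q, tokens, tokens)   # cross-attend event queries to tokens
        return F.normalize(events, dim=-1)

def fine_grained_similarity(video_events: torch.Tensor, text_events: torch.Tensor) -> torch.Tensor:
    """For each text event take its best-matching video event, then average over text events.

    video_events: (B_v, k, D), text_events: (B_t, k, D); returns a (B_v, B_t) similarity matrix.
    """
    sim = torch.einsum("vkd,tld->vtkl", video_events, text_events)  # all event-to-event similarities
    return sim.max(dim=2).values.mean(dim=2)        # max over video events, mean over text events
```

The resulting matrix could be used directly as retrieval logits during contrastive training, one design choice among several the abstract leaves open.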