Abstract: Panoramic images, offering a 360-degree view, are essential in virtual reality (VR) and augmented reality (AR), enhancing realism with high-quality textures. However, acquiring complete and high-quality panoramic textures is challenging. This paper introduces a method using generative adversarial networks (GANs) and the contrastive language-image pre-training (CLIP) model to restore and control texture in panoramic images. The GAN model captures complex structures and maintains consistency, while CLIP enables fine-grained texture control via semantic text-image associations. GAN inversion optimizes latent codes for precise texture details. The resulting low dynamic range (LDR) images are converted to high dynamic range (HDR) using the Blender engine for seamless texture blending. Experimental results demonstrate the effectiveness and flexibility of this method in panoramic texture restoration and generation.
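A minimal sketch of the latent-code optimization step this abstract describes. The GAN generator and CLIP encoders are collapsed into a single fixed random linear map, and the CLIP-guided objective is replaced by a simple reconstruction loss; all names and shapes are hypothetical stand-ins, not the paper's actual models.

```python
import numpy as np

def invert_latent(W, t, steps=2000, lr=0.02):
    """Toy GAN inversion: gradient descent on 0.5 * ||W z - t||^2.

    W is a stand-in for 'generator followed by image encoder';
    t is a stand-in for the target (e.g. text) embedding.
    """
    z = np.zeros(W.shape[1])           # latent code, optimized directly
    for _ in range(steps):
        z -= lr * (W.T @ (W @ z - t))  # gradient of the squared error
    return z

rng = np.random.default_rng(0)
W = rng.normal(size=(16, 8))  # hypothetical "generator + encoder" map
t = rng.normal(size=16)       # hypothetical target embedding
z = invert_latent(W, t)
```

With a linear map the optimum is the least-squares solution, so the gradient loop can be checked against `np.linalg.lstsq`; the real method would instead backpropagate a CLIP similarity loss through a pretrained generator.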
Abstract: Global scientific and technological innovation is currently advancing rapidly and converging across fields. Accurately identifying disruptive technology topics to drive comprehensive innovation has become a key driver of scientific and technological development and economic growth. However, traditional methods for identifying disruptive technology topics rely mainly on single-modality data and are therefore limited. Based on the CLIP (contrastive language-image pre-training) and LDAGV (linear discriminant analysis & global vectors for word representation) models, this paper constructs fused feature vectors from news text and images, then applies iterative k-means clustering combined with three disruptive-technology topic indicators for screening, achieving multimodal information fusion and precise topic identification. Taking the new-energy field as an example, we verify the feasibility and effectiveness of the model for disruptive technology topic identification. Compared with other single-modality models, the multimodal information fusion model has clear advantages in identifying disruptive technology topics.
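A sketch of the fusion-and-clustering step described above. The "text" and "image" feature vectors are random synthetic stand-ins for the CLIP/LDAGV outputs, concatenated into one fused vector per news item and grouped with a small hand-rolled k-means; only the pipeline shape is illustrated, not the paper's actual features or indicator-based screening.

```python
import numpy as np

def kmeans(X, k, iters=20):
    """Plain Lloyd's k-means with deterministic farthest-point init."""
    centers = [X[0]]
    for _ in range(k - 1):  # seed each new center far from the others
        d = np.min([np.linalg.norm(X - c, axis=1) for c in centers], axis=0)
        centers.append(X[d.argmax()])
    centers = np.array(centers)
    for _ in range(iters):  # standard assign / re-center iterations
        labels = np.linalg.norm(X[:, None] - centers[None], axis=2).argmin(1)
        centers = np.array([X[labels == j].mean(0) for j in range(k)])
    return labels

rng = np.random.default_rng(0)
# three synthetic "topics", 20 news items each, 4-dim features per modality
text_feats = np.vstack([rng.normal(m, 0.1, size=(20, 4)) for m in (0, 1, 2)])
img_feats = np.vstack([rng.normal(m, 0.1, size=(20, 4)) for m in (0, 1, 2)])
fused = np.concatenate([text_feats, img_feats], axis=1)  # early fusion
labels = kmeans(fused, k=3)
```

Concatenation is the simplest fusion choice; the paper's indicator-based filtering would then run on the resulting clusters.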
Abstract: We modified a three-dimensional cerebral aneurysm model for surgical simulation and educational demonstration. We made novel models showing perforating arteries arising around the aneurysm. Information about perforating arteries is difficult to obtain from individual radiological data sets, so perforators are reproduced from established anatomical knowledge rather than patient-specific data. Because of their fragility, perforating arteries are attached to the model using hard materials. At the same time, hollow models are useful for practicing clip application. We made a model for practicing the application of fenestrated clips for paraclinoid internal carotid aneurysms. Situating aneurysm models in the fissure of a brain model simulates the real surgical field and is helpful for educational demonstrations.
Abstract: Video-text retrieval (VTR) is an essential task in multimodal learning, aiming to bridge the semantic gap between visual and textual data. Effective video frame sampling plays a crucial role in improving retrieval performance, as it determines the quality of the visual content representation. Traditional sampling methods, such as uniform sampling and optical flow-based techniques, often fail to capture the full semantic range of videos, leading to redundancy and inefficiencies. In this work, we propose CLIP4Video-Sampling, a global semantics-guided multi-granularity frame sampling strategy for video-text retrieval, designed to optimize both computational efficiency and retrieval accuracy. By integrating multi-scale global and local temporal sampling and leveraging the CLIP (Contrastive Language-Image Pre-training) model's powerful feature extraction capabilities, our method significantly outperforms existing approaches in both zero-shot and fine-tuned video-text retrieval tasks on popular datasets. CLIP4Video-Sampling reduces redundancy, ensures keyframe coverage, and serves as an adaptable pre-processing module for multimodal models.
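A toy version of the multi-granularity idea in this abstract: coarse uniform picks give global temporal coverage, then a greedy farthest-point pass over frame embeddings adds semantically distinctive frames. The embeddings are random stand-ins for CLIP features, and the function and parameter names are illustrative, not the paper's API.

```python
import numpy as np

def sample_frames(feats, n_coarse=4, n_fine=4):
    """Pick frames by combining uniform coverage with feature novelty."""
    T = len(feats)
    # global pass: evenly spaced indices over the whole clip
    chosen = set(np.linspace(0, T - 1, n_coarse, dtype=int))
    for _ in range(n_fine):  # semantic pass: greedy farthest-point picks
        # distance of every frame to its nearest already-chosen frame
        d = np.min([np.linalg.norm(feats - feats[i], axis=1)
                    for i in chosen], axis=0)
        chosen.add(int(d.argmax()))  # most "novel" frame next
    return sorted(chosen)

rng = np.random.default_rng(0)
feats = rng.normal(0, 0.01, size=(32, 8))  # near-duplicate frames
feats[17] += 5.0                           # one visually distinct frame
keyframes = sample_frames(feats)
```

The greedy pass is guaranteed to pick up the distinct frame (index 17 here) that uniform sampling alone would miss, which is the redundancy-reduction behavior the abstract claims for its CLIP-guided sampler.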