Journal Articles
57 articles found
Keypoints and Descriptors Based on Cross-Modality Information Fusion for Camera Localization
1
Authors: MA Shuo, GAO Yongbin, TIAN Fangzheng, LU Junxin, HUANG Bo, GU Jia, ZHOU Yilong — Wuhan University Journal of Natural Sciences (CAS, CSCD), 2021, No. 2, pp. 128-136
To address the problem that traditional keypoint detection methods are susceptible to complex backgrounds and local image similarity, resulting in inaccurate descriptor matching and bias in visual localization, keypoints and descriptors based on cross-modality fusion are proposed and applied to camera motion estimation. A convolutional neural network detects keypoint positions and generates the corresponding descriptors, with pyramid convolution used to extract multi-scale features in the network. The problem of local image similarity is solved by capturing local and global feature information and fusing the geometric positions of the keypoints into the descriptors. In our experiments, repeatability improves by 3.7% and homography estimation by 1.6%. To demonstrate the practicability of the method, the visual odometry component of a simultaneous localization and mapping system is constructed, and our method achieves 35% higher positioning accuracy than the traditional approach.
Keywords: keypoints, descriptors, cross-modality information, global feature, visual odometry
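The fusion of keypoint geometry into descriptors that this abstract describes can be illustrated with a toy matcher. This is a minimal sketch, not the paper's network: the function names, the position weight, and the use of mutual nearest-neighbour matching are all illustrative assumptions.

```python
import numpy as np

def fuse_position(desc, kpts, image_size, weight=0.1):
    """Append normalized keypoint coordinates to each descriptor.

    desc: (N, D) appearance descriptors; kpts: (N, 2) pixel coordinates.
    `weight` balances appearance vs. geometry (an illustrative default).
    """
    pos = kpts / np.asarray(image_size, dtype=float)  # scale to [0, 1]
    return np.concatenate([desc, weight * pos], axis=1)

def mutual_match(desc_a, desc_b):
    """Mutual nearest-neighbour matching under L2 distance."""
    d = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=2)
    ab = d.argmin(axis=1)  # best match in B for each A
    ba = d.argmin(axis=0)  # best match in A for each B
    return [(i, j) for i, j in enumerate(ab) if ba[j] == i]
```

Appending geometry makes two visually similar patches in different image regions map to different fused descriptors, which is the intuition behind resolving local similarity.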
Review of Visible-Infrared Cross-Modality Person Re-Identification
2
Authors: Yinyin Zhang — Journal of New Media, 2023, No. 1, pp. 23-31
Person re-identification (ReID) is a sub-problem of image retrieval: a technology that uses computer vision to identify a specific pedestrian in a collection of images or videos captured across different surveillance devices. At present, most ReID methods handle matching between visible-spectrum images, but as security monitoring systems continue to improve, more and more infrared cameras are used for surveillance at night or in dim light. Because of the imaging differences between infrared and RGB cameras, there is a large visual gap between cross-modality images, so traditional ReID methods are difficult to apply in this scenario. In view of this, studying pedestrian matching between the visible and infrared modalities is particularly crucial. Visible-infrared person re-identification (VI-ReID) was first proposed in 2017 and has since attracted growing attention, with many advanced methods emerging.
Keywords: Person re-identification, cross-modality
Cross-modality transformations in biological microscopy enabled by deep learning
3
Authors: Dana Hassan, Jesús Domínguez, Benjamin Midtvedt, Henrik Klein Moberg, Jesús Pineda, Christoph Langhammer, Giovanni Volpe, Antoni Homs Corber, Caroline B. Adiels — Advanced Photonics (CSCD), 2024, No. 6, pp. 13-33
Recent advancements in deep learning (DL) have propelled the virtual transformation of microscopy images across optical modalities, enabling multimodal imaging analysis that was hitherto impossible. Despite these strides, the integration of such algorithms into scientists' daily routines and clinical trials remains limited, largely due to a lack of recognition within their respective fields and the plethora of available transformation methods. To address this, we present a structured overview of cross-modality transformations, encompassing applications, data sets, and implementations, aimed at unifying this evolving field. Our review focuses on DL solutions for two key applications: contrast enhancement of targeted features within images, and resolution enhancement. We recognize cross-modality transformations as a valuable resource both for biologists seeking a deeper understanding of the field and for technology developers aiming to better grasp sample limitations and potential applications. Notably, they enable high-contrast, high-specificity imaging akin to fluorescence microscopy without the need for laborious, costly, and disruptive physical staining procedures. In addition, they facilitate imaging with properties that would typically require costly or complex physical modifications, such as superresolution capabilities. By consolidating the current state of research in this review, we aim to catalyze further investigation and development, ultimately bringing the potential of cross-modality transformations into the hands of researchers and clinicians alike.
Keywords: cross-modality transformations, virtual staining, superresolution, deep learning, fluorescence, bright-field, phase contrast
Cross-Modal Simplex Center Learning for Speech-Face Association
4
Authors: Qiming Ma, Fanliang Bu, Rong Wang, Lingbin Bu, Yifan Wang, Zhiyuan Li — Computers, Materials & Continua, 2025, No. 3, pp. 5169-5184
Speech-face association aims to achieve identity matching between facial images and voice segments by aligning cross-modal features. Existing research primarily focuses on learning shared-space representations and computing one-to-one similarities between cross-modal sample pairs to establish their correlation. However, these approaches do not fully account for intra-class variations between the modalities or the many-to-many relationships among cross-modal samples, which are crucial for robust association modeling. To address these challenges, we propose a novel framework that leverages global information to align voice and face embeddings while effectively correlating the identity information embedded in both modalities. First, we jointly pre-train face recognition and speaker recognition networks to encode discriminative features from facial images and voice segments. This shared pre-training step ensures the extraction of complementary identity information across modalities. Subsequently, we introduce a cross-modal simplex center loss, which aligns samples with identity centers located at the vertices of a regular simplex inscribed on a hypersphere. This design enforces an equidistant and balanced distribution of identity embeddings, reducing intra-class variations. Furthermore, we employ an improved triplet center loss that emphasizes hard sample mining and optimizes inter-class separability, enhancing the model's ability to generalize to challenging scenarios. Extensive experiments validate the effectiveness of our framework, demonstrating superior performance across various speech-face association tasks, including matching, verification, and retrieval. Notably, in the challenging gender-constrained matching task, our method achieves a remarkable accuracy of 79.22%, significantly outperforming existing approaches. These results highlight the potential of the proposed framework to advance the state of the art in cross-modal identity association.
Keywords: Speech-face association, cross-modal learning, cross-modal matching, cross-modal retrieval
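The "regular simplex inscribed on a hypersphere" construction mentioned in this abstract is a standard one and is easy to verify numerically. A minimal sketch (not the authors' code; the loss is a plain squared-distance pull toward each identity's center):

```python
import numpy as np

def simplex_centers(n_ids, dim):
    """Unit-norm identity centers at the vertices of a regular simplex.

    Take the standard basis of R^n, center it at the origin, and normalize:
    all pairwise distances between the n vertices come out equal.
    """
    assert dim >= n_ids - 1, "a regular (n-1)-simplex needs at least n-1 dims"
    v = np.eye(n_ids) - 1.0 / n_ids          # subtract the centroid
    v /= np.linalg.norm(v, axis=1, keepdims=True)
    centers = np.zeros((n_ids, dim))
    centers[:, :n_ids] = v                   # embed into the target dimension
    return centers

def center_loss(emb, labels, centers):
    """Mean squared distance from L2-normalized embeddings to their centers."""
    emb = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    return np.mean(np.sum((emb - centers[labels]) ** 2, axis=1))
```

Because every pair of centers is equidistant, no identity is geometrically privileged, which is the "equidistant and balanced distribution" property the abstract refers to.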
Robust Audio-Visual Fusion for Emotion Recognition Based on Cross-Modal Learning under Noisy Conditions
5
Authors: A-Seong Moon, Seungyeon Jeong, Donghee Kim, Mohd Asyraf Zulkifley, Bong-Soo Sohn, Jaesung Lee — Computers, Materials & Continua, 2025, No. 11, pp. 2851-2872
Emotion recognition under uncontrolled and noisy environments presents persistent challenges in the design of emotionally responsive systems. The current study introduces an audio-visual recognition framework designed to address performance degradation caused by environmental interference, such as background noise, overlapping speech, and visual obstructions. The proposed framework employs a structured fusion approach, combining early-stage feature-level integration with decision-level coordination guided by temporal attention mechanisms. Audio data are transformed into mel-spectrogram representations, and visual data are represented as raw frame sequences. Spatial and temporal features are extracted through convolutional and transformer-based encoders, allowing the framework to capture complementary and hierarchical information from both sources. A cross-modal attention module enables selective emphasis on relevant signals while suppressing modality-specific noise. Performance is validated on a modified version of the AFEW dataset, in which controlled noise is introduced to emulate realistic conditions. The framework achieves higher classification accuracy than comparative baselines, confirming increased robustness under cross-modal disruption. This result demonstrates the suitability of the proposed method for deployment in practical emotion-aware technologies operating outside controlled environments. The study also contributes a systematic approach to fusion design and supports further exploration of resilient multimodal emotion analysis frameworks. The source code is publicly available at https://github.com/asmoon002/AVER (accessed on 18 August 2025).
Keywords: Multimodal learning, emotion recognition, cross-modal attention, robust representation learning
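The cross-modal attention module described above is, at its core, scaled dot-product attention where one modality supplies the queries and the other the keys and values. A minimal numpy sketch (dimensions and the audio-queries-video-keys pairing are illustrative assumptions, not the paper's exact design):

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)  # subtract max for stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def cross_modal_attention(queries, keys, values):
    """Scaled dot-product attention: one modality attends over the other.

    queries: (Tq, d), e.g. audio frames; keys/values: (Tk, d), e.g. video frames.
    Returns the attended features (Tq, d) and the attention weights (Tq, Tk).
    """
    d = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d)   # (Tq, Tk) relevance scores
    weights = softmax(scores, axis=-1)       # each query's weights sum to 1
    return weights @ values, weights
```

The softmax row normalization is what lets a clean audio frame down-weight noisy video frames (and vice versa), which is the noise-suppression behavior the abstract claims.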
MSCM-Net:Rail Surface Defect Detection Based on a Multi-Scale Cross-Modal Network
6
Authors: Xin Wen, Xiao Zheng, Yu He — Computers, Materials & Continua, 2025, No. 3, pp. 4371-4388
Detecting surface defects on unused rails is crucial for evaluating rail quality and durability to ensure the safety of rail transportation. However, existing detection methods often struggle with challenges such as complex defect morphology, texture similarity, and fuzzy edges, leading to poor accuracy and missed detections. To resolve these problems, we propose MSCM-Net (Multi-Scale Cross-Modal Network), a multiscale cross-modal framework for detecting rail surface defects. MSCM-Net introduces an attention mechanism to dynamically weight the fusion of RGB and depth maps, effectively capturing and enhancing features at different scales for each modality. To further enrich feature representation and improve edge detection in blurred areas, we propose a multi-scale void fusion module that integrates multi-scale feature information. To improve cross-modal feature fusion, we develop a cross-enhanced fusion module that transfers fused features between layers to incorporate interlayer information. We also introduce a multimodal feature integration module, which merges modality-specific features from separate decoders into a shared decoder, enhancing detection by leveraging richer complementary information. Finally, we validate MSCM-Net on the NEU RSDDS-AUG RGB-depth dataset, comparing it against 12 leading methods; the results show that MSCM-Net achieves superior performance on all metrics.
Keywords: Surface defect detection, multiscale framework, cross-modal fusion, edge detection
Fake News Detection Based on Cross-Modal Ambiguity Computation and Multi-Scale Feature Fusion
7
Authors: Jianxiang Cao, Jinyang Wu, Wenqian Shang, Chunhua Wang, Kang Song, Tong Yi, Jiajun Cai, Haibin Zhu — Computers, Materials & Continua, 2025, No. 5, pp. 2659-2675
With the rapid growth of social media, the spread of fake news has become a growing problem, misleading the public and causing significant harm. As social media content is often composed of both images and text, multimodal approaches to fake news detection have gained significant attention. To solve the problems of previous multimodal fake news detection algorithms, such as insufficient feature extraction and insufficient use of the semantic relations between modalities, this paper proposes the MFFFND-Co (Multimodal Feature Fusion Fake News Detection with Co-Attention Block) model. First, the model deeply explores textual content, image content, and frequency-domain features. Then, it employs a co-attention mechanism for cross-modal fusion. Additionally, a semantic consistency detection module is designed to quantify semantic deviations, thereby enhancing fake news detection performance. Verified experimentally on two commonly used datasets, Twitter and Weibo, the model achieved F1 scores of 90.0% and 94.0%, respectively, significantly outperforming the pre-modified MFFFND (Multimodal Feature Fusion Fake News Detection with Attention Block) model and surpassing other baseline models. This improves the accuracy of detecting fake information in artificial intelligence detection and engineering software detection.
Keywords: Fake news detection, multimodal, cross-modal ambiguity computation, multi-scale feature fusion
Cross-Modal Consistency with Aesthetic Similarity for Multimodal False Information Detection (Cited: 1)
8
Authors: Weijian Fan, Ziwei Shi — Computers, Materials & Continua (SCIE, EI), 2024, No. 5, pp. 2723-2741
With the explosive growth of false information on social media platforms, the automatic detection of multimodal false information has received increasing attention. Recent research has significantly contributed to multimodal information exchange and fusion, with many methods attempting to integrate unimodal features to generate multimodal news representations. However, they still fail to fully explore the hierarchical and complex semantic correlations between different modal contents, which severely limits their performance in detecting multimodal false information. This work proposes a two-stage detection framework for multimodal false information, called ASMFD, which is based on image aesthetic similarity and explores the consistency and inconsistency features of images and texts. Specifically, we first use the Contrastive Language-Image Pre-training (CLIP) model to learn the relationship between text and images through label awareness, and train an image aesthetic attribute scorer using an aesthetic attribute dataset. Then, we calculate the aesthetic similarity between the image and related images and use this similarity as a threshold to divide the multimodal correlation matrix into consistency and inconsistency matrices. Finally, a fusion module is designed to identify the features essential for detecting multimodal false information. In extensive experiments on four datasets, the performance of ASMFD is superior to state-of-the-art baseline methods.
Keywords: Social media, false information detection, image aesthetic assessment, cross-modal consistency
Multimodal Sentiment Analysis Based on a Cross-Modal Multihead Attention Mechanism (Cited: 1)
9
Authors: Lujuan Deng, Boyi Liu, Zuhe Li — Computers, Materials & Continua (SCIE, EI), 2024, No. 1, pp. 1157-1170
Multimodal sentiment analysis aims to understand people's emotions and opinions from diverse data. Concatenating or multiplying various modalities is the traditional fusion method for multimodal sentiment analysis, but it does not exploit the correlation information between modalities. To solve this problem, this paper proposes a model based on a multi-head attention mechanism. First, the original data are preprocessed, the feature representation is converted into a sequence of word vectors, and positional encoding is introduced to better capture the semantic and sequential information in the input sequence. Next, the encoded input sequence is fed into a transformer model for further processing and learning. At the transformer layer, a cross-modal attention block consisting of a pair of multi-head attention modules is employed to reflect the correlation between modalities. Finally, the processed results are passed to a feedforward neural network, and the emotional output is obtained through a classification layer. With this processing flow, the model can capture semantic information and contextual relationships and achieves good results on various natural language processing tasks. Our model was tested on the CMU Multimodal Opinion Sentiment and Emotion Intensity (CMU-MOSEI) dataset and the Multimodal EmotionLines Dataset (MELD), achieving an accuracy of 82.04% and an F1 score of 80.59% on the former.
Keywords: Emotion analysis, deep learning, cross-modal attention mechanism
Visible-Infrared Person Re-Identification via Quadratic Graph Matching and Block Reasoning
10
Authors: Junfeng Lin, Jialin Ma, Wei Chen, Hao Wang, Weiguo Ding, Mingyao Tang — Computers, Materials & Continua, 2025, No. 7, pp. 1013-1029
The cross-modal person re-identification task aims to match visible and infrared images of the same individual. The main challenges in this field arise from significant modality differences between individuals and the lack of high-quality cross-modal correspondence methods. Existing approaches often attempt to establish modality correspondence by extracting shared features across different modalities. However, these methods tend to focus on local information extraction and fail to fully leverage the global identity information in the cross-modal features, resulting in limited correspondence accuracy and suboptimal matching performance. To address this issue, we propose a quadratic graph matching method designed to overcome the challenges posed by modality differences through precise cross-modal relationship alignment. This method transforms the cross-modal correspondence problem into a graph matching task and minimizes the matching cost using a center search mechanism. Building on this approach, we further design a block reasoning module to uncover latent relationships between person identities and optimize the modality correspondence results. The block strategy not only improves the efficiency of updating gallery images but also enhances matching accuracy while reducing computational load. Experimental results demonstrate that our proposed method outperforms state-of-the-art methods on the SYSU-MM01, RegDB, and RGBNT201 datasets, achieving excellent matching accuracy and robustness, thereby validating its effectiveness in cross-modal person re-identification.
Keywords: Cross-modal person re-identification, modal correspondence, quadratic graph matching, block reasoning
An Overlapped Multihead Self-Attention-Based Feature Enhancement Approach for Ocular Disease Image Recognition
11
Authors: Peng Xiao, Haiyu Xu, Peng Xu, Zhiwei Guo, Amr Tolba, Osama Alfarraj — Computers, Materials & Continua, 2025, No. 11, pp. 2999-3022
Medical image analysis based on deep learning has become an important technical requirement in the field of smart healthcare. In view of the difficulty of jointly modeling local details and global features in multimodal ophthalmic image analysis, as well as the information redundancy in cross-modal data fusion, this paper proposes a multimodal fusion framework based on cross-modal collaboration and a weighted attention mechanism. For feature extraction, the framework collaboratively extracts local fine-grained features and global structural dependencies through a parallel dual-branch architecture, overcoming the limitation of traditional single-modality models to capturing either local or global information. For the fusion strategy, the framework designs a novel cross-modal dynamic fusion strategy, combining overlapping multi-head self-attention modules with a bidirectional feature alignment mechanism, addressing the bottlenecks of low feature-interaction efficiency and excessive attention-fusion computation in traditional parallel fusion. It further introduces cross-domain local integration, which enhances the representation of lesion areas through pixel-level feature recalibration and improves diagnostic robustness for complex cases. Experiments show that the framework exhibits excellent feature expression and generalization in cross-domain scenarios spanning ophthalmic medical images and natural images, providing a high-precision, low-redundancy fusion paradigm for multimodal medical image analysis and promoting the upgrade of intelligent diagnosis and treatment from single-modal static analysis to dynamic decision-making.
Keywords: Overlapping multi-head self-attention, deep learning, cross-modal dynamic fusion, multi-level fusion
Efficient Reconstruction of Spatial Features for Remote Sensing Image-Text Retrieval
12
Authors: ZHANG Weihang, CHEN Jialiang, ZHANG Wenkai, LI Xinming, GAO Xin, SUN Xian — Transactions of Nanjing University of Aeronautics and Astronautics, 2025, No. 1, pp. 101-111
Remote sensing cross-modal image-text retrieval (RSCIR) can flexibly and subjectively retrieve remote sensing images from query text, and has recently received increasing attention from researchers. However, as the parameter counts of visual-language pre-training models grow, direct transfer learning consumes a substantial amount of computational and storage resources. Moreover, recently proposed parameter-efficient transfer learning methods mainly focus on reconstructing channel features, ignoring the spatial features that are vital for modeling key entity relationships. To address these issues, we design an efficient transfer learning framework for RSCIR based on spatial feature efficient reconstruction (SPER). A concise and efficient spatial adapter is introduced to enhance the extraction of spatial relationships. The spatial adapter can spatially reconstruct the features in the backbone with few parameters while incorporating prior information from the channel dimension. We conduct quantitative and qualitative experiments on two commonly used RSCIR datasets. Compared with traditional methods, our approach achieves an improvement of 3%-11% in the sumR metric. Compared with methods that fine-tune all parameters, our method trains less than 1% of the parameters while maintaining about 96% of the overall performance.
Keywords: remote sensing cross-modal image-text retrieval (RSCIR), spatial features, channel features, contrastive learning, parameter-efficient transfer learning
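The "trains less than 1% of the parameters" claim is characteristic of adapter-style tuning, and the arithmetic is easy to check. A back-of-envelope sketch (the bottleneck-adapter shape, layer count, width, and backbone size are illustrative assumptions, not SPER's actual architecture):

```python
def adapter_params(d_model, bottleneck):
    """Parameters of one bottleneck adapter: down-projection, up-projection, biases."""
    down = d_model * bottleneck + bottleneck   # weight + bias
    up = bottleneck * d_model + d_model        # weight + bias
    return down + up

def trainable_fraction(n_layers, d_model, bottleneck, backbone_params):
    """Fraction of parameters updated when only per-layer adapters are trained."""
    trainable = n_layers * adapter_params(d_model, bottleneck)
    return trainable / backbone_params

# Example: a ViT-B/16-sized backbone (~86M parameters), 12 layers, width 768,
# one 32-dim bottleneck adapter per layer; the backbone itself stays frozen.
frac = trainable_fraction(n_layers=12, d_model=768, bottleneck=32,
                          backbone_params=86_000_000)
```

With these numbers the trainable fraction comes out around 0.7%, consistent with the order of magnitude such papers report.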
A Multi-Level Circulant Cross-Modal Transformer for Multimodal Speech Emotion Recognition (Cited: 1)
13
Authors: Peizhu Gong, Jin Liu, Zhongdai Wu, Bing Han, Y. Ken Wang, Huihua He — Computers, Materials & Continua (SCIE, EI), 2023, No. 2, pp. 4203-4220
Speech emotion recognition, an important component of human-computer interaction technology, has received increasing attention. Recent studies have treated emotion recognition of speech signals as a multimodal task, due to its inclusion of the semantic features of two different modalities, i.e., audio and text. However, existing methods often fail to effectively represent features and capture correlations. This paper presents a multi-level circulant cross-modal Transformer (MLCCT) for multimodal speech emotion recognition. The proposed model can be divided into three steps: feature extraction, interaction, and fusion. Self-supervised embedding models are introduced for feature extraction, which give a more powerful representation of the original data than spectrograms or audio features such as Mel-frequency cepstral coefficients (MFCCs) and low-level descriptors (LLDs). In particular, MLCCT contains two types of feature interaction processes: a bidirectional Long Short-term Memory (Bi-LSTM) with a circulant interaction mechanism is proposed for low-level features, while a two-stream residual cross-modal Transformer block is applied when high-level features are involved. Finally, we choose self-attention blocks for fusion and a fully connected layer to make predictions. To evaluate the performance of the proposed model, comprehensive experiments are conducted on three widely used benchmark datasets: IEMOCAP, MELD, and CMU-MOSEI. The competitive results verify the effectiveness of our approach.
Keywords: Speech emotion recognition, self-supervised embedding model, cross-modal transformer, self-attention
Mechanism of Cross-modal Information Influencing Taste (Cited: 1)
14
Authors: Pei Liang, Jia-yu Jiang, Qiang Liu, Su-lin Zhang, Hua-jing Yang — Current Medical Science (SCIE, CAS), 2020, No. 3, pp. 474-479
Studies on the integration of cross-modal information with taste perception have mostly been limited to the uni-modal level. Cross-modal sensory interaction and the neural networks of information processing and control have not been fully explored, and the mechanisms remain poorly understood. This mini review examines the impact of uni-modal and multi-modal information on taste perception from the perspective of cognitive status, such as emotion, expectation, and attention, and discusses the hypothesis that cognitive status is the key step by which the visual sense exerts influence on taste. This work may help researchers better understand the mechanism of cross-modal information processing and further develop neurally-based artificial intelligence (AI) systems.
Keywords: cross-modal information integration, cognitive status, taste perception
ViT2CMH: Vision Transformer Cross-Modal Hashing for Fine-Grained Vision-Text Retrieval (Cited: 1)
15
Authors: Mingyong Li, Qiqi Li, Zheng Jiang, Yan Ma — Computer Systems Science & Engineering (SCIE, EI), 2023, No. 8, pp. 1401-1414
In recent years, the development of deep learning has further improved hash retrieval technology. Most existing hashing methods use Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs) to process image and text information, respectively. This subjects images or texts to local constraints, and inherent label matching cannot capture fine-grained information, often leading to suboptimal results. Driven by the development of the transformer model, we propose a framework called ViT2CMH, based mainly on the Vision Transformer, to handle deep cross-modal hashing tasks instead of using CNNs or RNNs. Specifically, we use a BERT network to extract text features and use the Vision Transformer as the image network of the model. Finally, the features are transformed into hash codes for efficient and fast retrieval. We conduct extensive experiments on Microsoft COCO (MS-COCO) and Flickr30K, comparing against baseline hashing and image-text matching methods, showing that our method achieves better performance.
Keywords: Hash learning, cross-modal retrieval, fine-grained matching, transformer
CSMCCVA: Framework of cross-modal semantic mapping based on cognitive computing of visual and auditory sensations (Cited: 1)
16
Authors: LIU Yang, ZHENG Fengbin, ZUO Xianyu — High Technology Letters (EI, CAS), 2016, No. 1, pp. 90-98
Cross-modal semantic mapping and cross-media retrieval are key problems for multimedia search engines. This study analyzes the hierarchy, functionality, and structure of the visual and auditory sensations of the cognitive system, and establishes a brain-like cross-modal semantic mapping framework based on cognitive computing of visual and auditory sensations. The mechanisms of visual-auditory multisensory integration, selective attention in the thalamo-cortical pathway, emotional control in the limbic system, and memory enhancement in the hippocampus are considered in the framework. The algorithms of cross-modal semantic mapping are then given. Experimental results show that the framework can be effectively applied to cross-modal semantic mapping, and it also provides important insights for brain-like computing with non-von Neumann architectures.
Keywords: multimedia neural cognitive computing (MNCC), brain-like computing, cross-modal semantic mapping (CSM), selective attention, limbic system, multisensory integration, memory-enhancing mechanism
Use of sensory substitution devices as a model system for investigating cross-modal neuroplasticity in humans (Cited: 1)
17
Authors: Amy C. Nau, Matthew C. Murphy, Kevin C. Chan — Neural Regeneration Research (SCIE, CAS, CSCD), 2015, No. 11, pp. 1717-1719
Blindness provides an unparalleled opportunity to study plasticity of the nervous system in humans. Seminal work in this area examined the often dramatic modifications to the visual cortex that result when visual input is completely absent from birth or very early in life (Kupers and Ptito, 2014). More recent studies have explored what happens to the visual pathways in the context of acquired blindness. This is particularly relevant, as the majority of diseases that cause vision loss occur in the elderly.
Keywords: sensory substitution devices, cross-modal neuroplasticity, BOLD
TECMH:Transformer-Based Cross-Modal Hashing For Fine-Grained Image-Text Retrieval
18
作者 Qiqi Li Longfei Ma +2 位作者 Zheng Jiang Mingyong Li Bo Jin 《Computers, Materials & Continua》 SCIE EI 2023年第5期3713-3728,共16页
In recent years,cross-modal hash retrieval has become a popular research field because of its advantages of high efficiency and low storage.Cross-modal retrieval technology can be applied to search engines,crossmodalm... In recent years,cross-modal hash retrieval has become a popular research field because of its advantages of high efficiency and low storage.Cross-modal retrieval technology can be applied to search engines,crossmodalmedical processing,etc.The existing main method is to use amulti-label matching paradigm to finish the retrieval tasks.However,such methods do not use fine-grained information in the multi-modal data,which may lead to suboptimal results.To avoid cross-modal matching turning into label matching,this paper proposes an end-to-end fine-grained cross-modal hash retrieval method,which can focus more on the fine-grained semantic information of multi-modal data.First,the method refines the image features and no longer uses multiple labels to represent text features but uses BERT for processing.Second,this method uses the inference capabilities of the transformer encoder to generate global fine-grained features.Finally,in order to better judge the effect of the fine-grained model,this paper uses the datasets in the image text matching field instead of the traditional label-matching datasets.This article experiment on Microsoft COCO(MS-COCO)and Flickr30K datasets and compare it with the previous classicalmethods.The experimental results show that this method can obtain more advanced results in the cross-modal hash retrieval field. 展开更多
Keywords: deep learning; cross-modal retrieval; hash learning; transformer
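The abstract's core mechanism, mapping image and text embeddings into one shared binary Hamming space and ranking candidates by Hamming distance, can be sketched in plain Python. This is a minimal illustration, not the TECMH model: random-projection hashing stands in for the learned transformer encoder, and all feature vectors below are made up.

```python
import random

random.seed(0)

def hash_code(features, hyperplanes):
    # Sign-binarize a real-valued embedding into a binary code, the step a
    # cross-modal hashing model applies after its encoder.
    return [1 if sum(f * w for f, w in zip(features, row)) > 0 else 0
            for row in hyperplanes]

def hamming(a, b):
    # Retrieval ranks candidates by Hamming distance between codes.
    return sum(x != y for x, y in zip(a, b))

DIM, BITS = 8, 16
# One shared set of hyperplanes for both modalities, so image and text
# codes land in the same Hamming space (a stand-in for the trained model).
planes = [[random.gauss(0, 1) for _ in range(DIM)] for _ in range(BITS)]

image_emb = [0.9, -0.2, 0.4, 0.1, -0.7, 0.3, 0.5, -0.1]      # hypothetical image feature
caption_emb = [0.8, -0.1, 0.5, 0.2, -0.6, 0.2, 0.4, 0.0]     # caption of the same scene
unrelated_emb = [-0.9, 0.7, -0.4, -0.3, 0.8, -0.5, -0.2, 0.6]  # unrelated text

img_code = hash_code(image_emb, planes)
d_match = hamming(img_code, hash_code(caption_emb, planes))
d_other = hamming(img_code, hash_code(unrelated_emb, planes))
# The matching caption sits closer to the image in Hamming space.
```

Because comparing compact binary codes only needs XOR and popcount, this is where the "high efficiency and low storage" claimed in the abstract comes from.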
Cross-Modal Entity Resolution for Image and Text Integrating Global and Fine-Grained Joint Attention Mechanism
19
Authors: Zeng Zhixian, Cao Jianjun, Weng Nianfeng, Yuan Zhen, Yu Xu. Journal of Shanghai Jiaotong University (Science) (EI), 2023, No. 6, pp. 728-737 (10 pages)
To solve the problem that existing cross-modal entity resolution methods easily ignore the high-level semantic correlations between cross-modal data, we propose a novel cross-modal entity resolution method for image and text that integrates a global and fine-grained joint attention mechanism. First, we map the cross-modal data into a common embedding space using a feature extraction network. Then, we integrate a global joint attention mechanism with a fine-grained joint attention mechanism, enabling the model to learn both the global semantic characteristics and the local fine-grained semantic characteristics of the cross-modal data; this fully exploits the cross-modal semantic correlation and boosts the performance of cross-modal entity resolution. Experiments on the Flickr-30K and MS-COCO datasets show that the overall R@sum performance outperforms five state-of-the-art methods by 4.30% and 4.54%, respectively, fully demonstrating the superiority of the proposed method.
Keywords: cross-modal entity resolution; joint attention mechanism; deep learning; feature extraction; semantic correlation
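As a rough sketch of the fine-grained joint attention the method describes, the toy below lets a text token attend over image-region features with scaled dot-product attention, then fuses the result with a global pooled image feature. Everything here (the 2-D features, the averaging fusion) is a hypothetical simplification, not the paper's actual architecture.

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of scores.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attend(query, keys, values):
    # Scaled dot-product attention: weight each value by the similarity
    # of its key to the query.
    scale = math.sqrt(len(query))
    weights = softmax([sum(q * k for q, k in zip(query, key)) / scale
                       for key in keys])
    dim = len(values[0])
    return [sum(w * v[i] for w, v in zip(weights, values)) for i in range(dim)]

# Hypothetical toy features.
regions = [[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]]  # fine-grained image regions
token = [1.0, 0.1]                              # one text token embedding

# Global view: mean-pooled image feature; local view: token attends to regions.
global_img = [sum(r[i] for r in regions) / len(regions) for i in range(2)]
fine = attend(token, regions, regions)

# Joint fusion of the global and fine-grained views (simple averaging here).
fused = [(g + f) / 2 for g, f in zip(global_img, fine)]
```

The global branch captures scene-level semantics while the attended branch emphasizes the region most aligned with the token, which is the intuition behind combining the two attention mechanisms.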
Cross-Modal Hashing Retrieval Based on Deep Residual Network
20
Authors: Zhiyi Li, Xiaomian Xu, Du Zhang, Peng Zhang. Computer Systems Science & Engineering (SCIE, EI), 2021, No. 2, pp. 383-405 (23 pages)
In the era of big data rich in We-Media, single-mode retrieval systems can no longer meet people's demand for information retrieval. This paper proposes a new solution to the problem of feature extraction and unified mapping of different modalities: a Cross-Modal Hashing Retrieval algorithm based on a Deep Residual Network (CMHR-DRN). Model construction is divided into two stages. The first stage extracts features from the different modalities: a Deep Residual Network (DRN) extracts the image features, while TF-IDF combined with a fully connected network extracts the text features; the resulting image and text features serve as the input to the second stage. In the second stage, the image and text features are mapped by supervised learning into hash functions that project them into a common binary Hamming space. During the mapping, the distance measure of the original space and that of the common feature space are kept as consistent as possible to improve cross-modal retrieval accuracy. In training, adaptive moment estimation (Adam) computes an adaptive learning rate for each parameter, and stochastic gradient descent (SGD) minimizes the loss function. The whole training process is carried out on the Caffe deep learning framework. Experiments show that the proposed CMHR-DRN algorithm achieves better retrieval performance than the cross-modal algorithms CMFH, CMDN and CMSSH.
Keywords: deep residual network; cross-modal retrieval; hashing; cross-modal hashing retrieval based on deep residual network
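The text side of the pipeline above starts from TF-IDF features before the fully connected layers; a minimal stdlib version of that first step is sketched below. The documents and vocabulary are invented for illustration, and the DRN and hashing stages are omitted.

```python
import math
from collections import Counter

def tf_idf(docs):
    # Plain TF-IDF over tokenized documents: term frequency scaled by
    # inverse document frequency, log(N / df).
    n = len(docs)
    df = Counter(term for doc in docs for term in set(doc))
    vocab = sorted(df)
    vectors = []
    for doc in docs:
        tf = Counter(doc)
        vectors.append([(tf[t] / len(doc)) * math.log(n / df[t])
                        for t in vocab])
    return vocab, vectors

docs = [
    "a photo of a dog".split(),
    "a photo of a cat".split(),
    "a stock market report".split(),
]
vocab, vecs = tf_idf(docs)
# "a" appears in every document, so its IDF is log(3/3) = 0 and it gets
# zero weight; "dog" is specific to the first document and gets positive weight.
```

In the full algorithm these sparse text vectors would pass through the fully connected network, and the supervised hashing stage would then binarize both modalities into the shared Hamming space.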
在线阅读 下载PDF