Funding: This work was partially supported by the Science and Technology Project of Chongqing Education Commission of China (KJZD-K202200513), the National Natural Science Foundation of China (61370205), the Chongqing Normal University Fund (22XLB003), and the Chongqing Education Science Planning Project (2021-GX-320).
Abstract: In recent years, the development of deep learning has further improved hash retrieval technology. Most existing hashing methods use Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs) to process image and text information, respectively. This subjects images and texts to local constraints, and the inherent label matching cannot capture fine-grained information, often leading to suboptimal results. Motivated by the development of the transformer model, we propose a framework called ViT2CMH, based mainly on the Vision Transformer rather than CNNs or RNNs, to handle deep cross-modal hashing tasks. Specifically, we use a BERT network to extract text features and the Vision Transformer as the image network of the model. Finally, the features are transformed into hash codes for efficient and fast retrieval. We conduct extensive experiments on Microsoft COCO (MS-COCO) and Flickr30K, comparing against baselines from both hashing methods and image-text matching methods, and show that our method achieves better performance.
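To make the two-branch design above concrete, here is a minimal sketch of a ViT-plus-BERT hashing model in PyTorch. The 64-bit code length, the linear projection heads, the pretrained checkpoint names, and the tanh-then-sign binarization are illustrative assumptions, not details taken from the paper.

```python
# Minimal two-tower cross-modal hashing sketch: a Vision Transformer
# encodes images, BERT encodes text, and each branch is projected into
# a shared k-bit space. All hyperparameters here are assumptions.
import torch
import torch.nn as nn
from transformers import BertModel, ViTModel

class CrossModalHashSketch(nn.Module):
    def __init__(self, n_bits: int = 64):
        super().__init__()
        self.vit = ViTModel.from_pretrained("google/vit-base-patch16-224")
        self.bert = BertModel.from_pretrained("bert-base-uncased")
        self.img_head = nn.Linear(self.vit.config.hidden_size, n_bits)
        self.txt_head = nn.Linear(self.bert.config.hidden_size, n_bits)

    def forward(self, pixel_values, input_ids, attention_mask):
        img = self.vit(pixel_values=pixel_values).pooler_output
        txt = self.bert(input_ids=input_ids,
                        attention_mask=attention_mask).pooler_output
        # tanh keeps training differentiable; sign() then yields binary
        # codes at retrieval time: binary_code = torch.sign(img_code).
        img_code = torch.tanh(self.img_head(img))
        txt_code = torch.tanh(self.txt_head(txt))
        return img_code, txt_code
```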
Funding: This work was partially supported by the Chongqing Natural Science Foundation of China (Grant No. CSTB2022NSCQ-MSX1417), the Science and Technology Research Program of Chongqing Municipal Education Commission (Grant No. KJZD-K202200513), the Chongqing Normal University Fund (Grant No. 22XLB003), the Chongqing Education Science Planning Project (Grant No. 2021-GX-320), and the Humanities and Social Sciences Project of Chongqing Education Commission of China (Grant No. 22SKGH100).
Abstract: In recent years, cross-modal hash retrieval has become a popular research field because of its high efficiency and low storage cost. Cross-modal retrieval technology can be applied to search engines, cross-modal medical processing, etc. The prevailing approach uses a multi-label matching paradigm to finish the retrieval tasks. However, such methods do not exploit the fine-grained information in multi-modal data, which may lead to suboptimal results. To avoid cross-modal matching degenerating into label matching, this paper proposes an end-to-end fine-grained cross-modal hash retrieval method that focuses on the fine-grained semantic information of multi-modal data. First, the method refines the image features and, instead of representing text with multiple labels, processes it with BERT. Second, it uses the inference capability of the transformer encoder to generate global fine-grained features. Finally, to better assess the fine-grained model, this paper evaluates on datasets from the image-text matching field rather than traditional label-matching datasets. We experiment on the Microsoft COCO (MS-COCO) and Flickr30K datasets and compare with previous classical methods. The experimental results show that this method achieves stronger results in the cross-modal hash retrieval field.
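Whatever network produces the codes, the retrieval step itself reduces to Hamming-distance ranking over binary codes, which is what gives hashing its efficiency and low-storage advantages. A small self-contained illustration with random codes:

```python
# Hamming-distance retrieval over binary hash codes. The database size
# and 64-bit code length are arbitrary illustrative values.
import numpy as np

rng = np.random.default_rng(0)
db_codes = rng.integers(0, 2, size=(10_000, 64), dtype=np.uint8)  # image codes
query = rng.integers(0, 2, size=(64,), dtype=np.uint8)            # text code

# XOR marks differing bits; summing counts them (the Hamming distance).
dists = np.bitwise_xor(db_codes, query).sum(axis=1)
top_k = np.argsort(dists)[:5]
print("closest database items:", top_k, "distances:", dists[top_k])
```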
Funding: This work was supported by the Zhejiang Provincial Natural Science Foundation of China (No. LZ22F020002).
Abstract: In the Internet of Things (IoT) environment, user-service interaction data are often stored on multiple distributed platforms. In this situation, recommender systems need to integrate the distributed user-service interaction data across different platforms to make a comprehensive recommendation decision, during which user privacy may be disclosed. Moreover, as user-service interaction records accumulate over time, they significantly reduce recommendation efficiency. To tackle these issues, we propose a lightweight and privacy-preserving service recommendation approach named SerRec_L2H. In SerRec_L2H, we employ Learning to Hash (L2H) to encapsulate sensitive user-service interaction data into less-sensitive user indices, which facilitates efficiently identifying users with similar preferences for accurate recommendations. We then validate the feasibility of the proposed SerRec_L2H approach through extensive experiments on the popular WS-DREAM dataset. Comparative analysis with other competitive approaches demonstrates that our proposal surpasses them in recommendation accuracy and efficiency while protecting user privacy.
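The abstract does not name a concrete hash family, so the sketch below uses random-hyperplane (SimHash-style) hashing, a common Learning-to-Hash baseline, purely to illustrate how a user's raw interaction vector can be condensed into a short, less-sensitive index that can be shared across platforms.

```python
# Random-hyperplane hashing of user-service interaction vectors: users
# with similar preferences tend to land in the same bucket, so only the
# compact indices (not raw records) need to cross platform boundaries.
import numpy as np

rng = np.random.default_rng(42)
n_services, n_bits = 100, 16
planes = rng.standard_normal((n_bits, n_services))  # shared projection

def user_index(qos_vector: np.ndarray) -> int:
    """Map a user's service-interaction vector to an n_bits-bit index."""
    bits = planes @ qos_vector > 0
    return int("".join("1" if b else "0" for b in bits), 2)

alice = rng.random(n_services)
bob = alice + 0.05 * rng.standard_normal(n_services)  # similar tastes
carol = rng.random(n_services)                        # unrelated user

# alice and bob usually share most index bits, so candidate neighbors
# can be found by comparing indices instead of raw interaction records.
print(user_index(alice), user_index(bob), user_index(carol))
```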
Funding: This work was supported by the National Natural Science Foundation of China (No. 61862041).
Abstract: Existing speech retrieval systems are frequently confronted with expanding volumes of speech data. A dynamic index-updating strategy can add or remove speech data in a timely manner to meet users' real-time retrieval requirements. This study proposes an efficient method for retrieving encrypted speech using unsupervised deep hashing and a B+ tree dynamic index, which avoids privacy leakage of speech data and enhances retrieval accuracy and efficiency. The cloud's encrypted speech library is constructed with the multi-threaded Dijk-Gentry-Halevi-Vaikuntanathan (DGHV) Fully Homomorphic Encryption (FHE) scheme, which encrypts the original speech. In addition, a Residual Neural Network 18-Gated Recurrent Unit (ResNet18-GRU) network is employed to learn compact binary hash codes, which are stored in the designed B+ tree index table, establishing a one-to-one mapping between each binary hash code and the corresponding encrypted speech. External B+ tree index technology is applied to update the B+ tree index table dynamically, thereby satisfying users' need for real-time retrieval. Experimental results on THCHS-30 and TIMIT show that the retrieval accuracy of the proposed method exceeds 95.84%, higher than existing unsupervised hashing methods, and that retrieval efficiency is greatly improved over hash-index-table methods while the security of the speech data is effectively guaranteed.
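Python's standard library has no B+ tree, so the sketch below uses the third-party sortedcontainers.SortedDict as a stand-in for the ordered, dynamically updatable index described above; the key-value layout (a binary hash code mapping one-to-one to an encrypted-speech identifier) follows the abstract, while everything else is illustrative.

```python
# Dynamic hash-code index sketch: SortedDict (pip install
# sortedcontainers) stands in for the paper's B+ tree index table.
from sortedcontainers import SortedDict

index = SortedDict()

def add_speech(hash_code: str, encrypted_speech_id: str) -> None:
    """Insert a new code -> ciphertext mapping (dynamic index update)."""
    index[hash_code] = encrypted_speech_id

def remove_speech(hash_code: str) -> None:
    """Delete an entry when its speech clip is withdrawn."""
    index.pop(hash_code, None)

def retrieve(query_code: str):
    """Exact-match lookup; a real system would also scan nearby codes
    within a small Hamming radius."""
    return index.get(query_code)

add_speech("1011001110001101", "ciphertext_0001.bin")
add_speech("0110110010101011", "ciphertext_0002.bin")
print(retrieve("1011001110001101"))   # -> ciphertext_0001.bin
remove_speech("0110110010101011")     # real-time removal, as described
```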