Funding: Project supported by the National Natural Science Foundation of China (Grant No. 61173143), the Special Public Sector Research Program of China (Grant No. GYHY201206030), and the Deanship of Scientific Research at King Saud University through research group No. RGP-VPP-264.
Abstract: In recent years, the nearest neighbor search (NNS) problem has arisen in a wide range of applications. Locality-sensitive hashing (LSH), a popular algorithm for approximate nearest neighbor search, has proved to be an efficient way to solve the NNS problem in high-dimensional, large-scale databases. Building on the p-stable LSH scheme, this paper introduces an improved algorithm called randomness-based locality-sensitive hashing (RLSH). The proposed algorithm modifies the query strategy: during a nearest neighbor query, it randomly selects a single hash table into which the query point is projected, instead of mapping the query point into all hash tables, and reconstructs the candidate set from which the nearest neighbors are found. This strategy ensures that RLSH spends less time searching for nearest neighbors than the p-stable LSH algorithm while keeping recall high. Moreover, the strategy is shown to increase the diversity of the candidate points even with fewer hash tables. Experiments are carried out on a synthetic dataset and an open dataset. The results show that our method requires less time and less space than p-stable LSH while achieving the same recall.
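For illustration, here is a minimal sketch (not the authors' code) of the query-strategy difference described above, using p-stable (Gaussian) projections; the table count `L`, hash width `w`, and key quantization are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def build_tables(data, L=10, k=4, w=4.0):
    """Build L hash tables of k p-stable (Gaussian) projections each."""
    tables = []
    for _ in range(L):
        a = rng.normal(size=(k, data.shape[1]))      # p-stable projection vectors
        b = rng.uniform(0, w, size=k)                # random offsets
        keys = np.floor((data @ a.T + b) / w).astype(int)
        buckets = {}
        for idx, key in enumerate(map(tuple, keys)):
            buckets.setdefault(key, []).append(idx)
        tables.append((a, b, buckets))
    return tables

def query(tables, data, q, w=4.0, single_table=True):
    """p-stable LSH probes every table; the RLSH-style variant probes one random table."""
    chosen = [tables[rng.integers(len(tables))]] if single_table else tables
    candidates = set()
    for a, b, buckets in chosen:
        key = tuple(np.floor((a @ q + b) / w).astype(int))
        candidates.update(buckets.get(key, []))
    if not candidates:
        return None
    cand = np.array(sorted(candidates))
    return cand[np.argmin(np.linalg.norm(data[cand] - q, axis=1))]

data = rng.normal(size=(1000, 32))
tables = build_tables(data)
print(query(tables, data, data[42]))                 # typically returns 42
```

Probing one table keeps the candidate set small, which is where the time saving in the abstract comes from; the trade-off is managed by how the candidate set is reconstructed.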
Abstract: This paper introduces a novel trust-aware hybrid recommendation framework that combines locality-sensitive hashing (LSH) with trust information from social networks, aiming to provide efficient and effective recommendations. Unlike traditional recommender systems, which often overlook the influence of user trust, the proposed approach incorporates trust metrics to better approximate user preferences. LSH, with its intrinsic advantages in handling high-dimensional data and in computational efficiency, is applied to expedite the search for similar items or users. We adapt LSH to form trust-aware buckets that encapsulate both trust and similarity information. These enhancements mitigate the sparsity and scalability issues commonly found in existing recommender systems. Experimental results on a real-world dataset confirm the superiority of our approach in terms of recommendation quality and computational performance. The paper further discusses potential applications and future directions for trust-aware hybrid recommendation with LSH.
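As a rough illustration of the trust-aware bucketing idea (a sketch under our own assumptions, not the paper's formulation), candidates retrieved from cosine-LSH buckets can be re-ranked by a weighted blend of rating similarity and a trust score; the weight `alpha` and the toy trust matrix are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

def lsh_buckets(vectors, n_planes=8):
    """Random-hyperplane (cosine) LSH: each user falls into one sign-pattern bucket."""
    planes = rng.normal(size=(n_planes, vectors.shape[1]))
    codes = (vectors @ planes.T > 0).astype(int)
    buckets = {}
    for idx, code in enumerate(map(tuple, codes)):
        buckets.setdefault(code, []).append(idx)
    return planes, buckets

def recommend_neighbors(user, ratings, trust, planes, buckets, alpha=0.7):
    """Score same-bucket users by alpha * cosine similarity + (1 - alpha) * trust."""
    code = tuple((ratings[user] @ planes.T > 0).astype(int))
    candidates = [u for u in buckets.get(code, []) if u != user]
    def score(u):
        sim = ratings[user] @ ratings[u] / (
            np.linalg.norm(ratings[user]) * np.linalg.norm(ratings[u]) + 1e-9)
        return alpha * sim + (1 - alpha) * trust[user, u]
    return sorted(candidates, key=score, reverse=True)

ratings = rng.random((200, 50))          # user-item rating matrix (toy data)
trust = rng.random((200, 200))           # pairwise trust scores in [0, 1] (toy data)
planes, buckets = lsh_buckets(ratings)
print(recommend_neighbors(0, ratings, trust, planes, buckets)[:5])
```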
Funding: Supported by the Guangdong Innovative Research Team Program (No. 201001N0104744201) and the State Key Program of the National Natural Science Foundation of China (No. 51437006).
Abstract: With the growing penetration of wind power in power systems, more accurate prediction of wind speed and wind power is required for real-time scheduling and operation. In this paper, a novel forecast model for short-term prediction of wind speed and wind power is proposed, based on singular spectrum analysis (SSA) and locality-sensitive hashing (LSH). To deal with the high volatility of the original time series, SSA is applied to decompose it into two components: the mean trend, which represents the mean tendency of the original time series, and the fluctuation component, which reveals its stochastic characteristics. Both components are reconstructed in a phase space to obtain mean trend segments and fluctuation component segments. LSH is then used to select segments similar to the mean trend segments, which are employed in local forecasting to enhance the accuracy and efficiency of prediction. Finally, support vector regression is adopted for prediction, where the training input is the synthesis of the similar mean trend segments and the corresponding fluctuation component segments. Simulation studies are conducted on wind speed and wind power time series from four databases, and the results demonstrate that the proposed model is more accurate and stable than the compared models.
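The following is a minimal sketch of the segment-selection-plus-regression step (our own simplification, not the paper's full SSA pipeline): lagged segments of a series are hashed with random hyperplanes, segments sharing the query's bucket train an SVR, and the next value is predicted. The window length, bit count, and SVR parameters are illustrative.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(2)

def make_segments(series, window=24):
    """Lagged segments (inputs) and the value that follows each segment (target)."""
    X = np.array([series[i:i + window] for i in range(len(series) - window)])
    return X, series[window:]

def hash_codes(X, planes):
    return (X @ planes.T > 0).astype(int)

series = np.sin(np.linspace(0, 60, 2000)) + 0.1 * rng.normal(size=2000)  # toy "wind speed"
X, y = make_segments(series)
planes = rng.normal(size=(10, X.shape[1]))
codes = hash_codes(X, planes)

query = series[-24:]                                  # most recent segment
q_code = hash_codes(query[None, :], planes)[0]
mask = (codes == q_code).all(axis=1)                  # segments in the same LSH bucket
if mask.sum() < 5:                                    # fall back to Hamming-nearest segments
    mask = (codes != q_code).sum(axis=1) <= 2

model = SVR(C=10.0, epsilon=0.01).fit(X[mask], y[mask])   # local model on similar segments only
print("next-step forecast:", model.predict(query[None, :])[0])
```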
Funding: Supported by the National Natural Science Foundation of China (No. 61862041).
Abstract: Medical institutions frequently utilize cloud servers to store digital medical imaging data, aiming to lower both storage and computational costs. Nevertheless, the reliability of cloud servers as third-party providers is not always guaranteed. To safeguard against the exposure and misuse of personal privacy information, and to achieve secure and efficient retrieval, a secure medical image retrieval method based on a multi-attention mechanism and triplet deep hashing, abbreviated MATDH, is proposed in this paper. Specifically, the method first applies contrast-limited adaptive histogram equalization, adapted to color images, to enhance chest X-ray images. Next, a designed multi-attention mechanism focuses on important local features during the feature extraction stage. Moreover, a triplet loss function is used to learn discriminative hash codes, yielding a compact and efficient triplet deep hashing model. Finally, upsampling restores the original resolution of the images during retrieval, thereby enabling more accurate matching. To ensure the security of medical image data, a lightweight image encryption method based on frequency-domain encryption is designed to encrypt the chest X-ray images. The experimental findings indicate that, in comparison with various advanced image retrieval techniques, the proposed approach improves the precision of feature extraction and retrieval on the COVIDx dataset. It also offers enhanced protection for the confidentiality of medical images stored in cloud settings and demonstrates strong practicality.
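To make the triplet-hashing idea concrete, here is a minimal PyTorch sketch (our own illustration, not the MATDH architecture): a small encoder maps image features to continuous codes trained with a triplet margin loss, and binary hash codes are taken as the sign at retrieval time. The encoder, code length, and margin are assumptions.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

class HashEncoder(nn.Module):
    """Toy encoder producing continuous codes in (-1, 1); sign() gives binary hash bits."""
    def __init__(self, in_dim=1024, code_bits=48):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(),
                                 nn.Linear(256, code_bits), nn.Tanh())

    def forward(self, x):
        return self.net(x)

encoder = HashEncoder()
criterion = nn.TripletMarginLoss(margin=1.0)
optimizer = torch.optim.Adam(encoder.parameters(), lr=1e-3)

# Toy batch of precomputed image features: anchor and positive share a class, negative does not.
anchor, positive, negative = (torch.randn(32, 1024) for _ in range(3))

for _ in range(10):
    optimizer.zero_grad()
    loss = criterion(encoder(anchor), encoder(positive), encoder(negative))
    loss.backward()
    optimizer.step()

hash_codes = torch.sign(encoder(anchor)).detach()     # +1/-1 bits used for retrieval
print(hash_codes.shape, loss.item())
```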
Funding: Supported by the National Natural Science Foundation of China (Grant No. 60502039), the Shanghai Rising-Star Program (Grant No. 06QA14022), and the Key Project of Shanghai Municipality for Basic Research (Grant No. 04JC14037).
Abstract: The easy generation, storage, transmission and reproduction of digital images have caused serious abuse and security problems. Assurance of rightful ownership, integrity, and authenticity is a major concern to academia as well as industry. At the same time, efficient search over huge collections of images has become a great challenge. Image hashing is a technique suitable for image authentication and content-based image retrieval (CBIR). In this article, we review some representative image hashing techniques proposed in recent years, with emphasis on how to meet the conflicting requirements of perceptual robustness and security. Following a brief introduction to some earlier methods, we focus on a typical two-stage structure and some geometric-distortion-resilient techniques. We then introduce two image hashing approaches developed in our own research, and reveal security problems in some existing methods due to the absence of secret keys at certain stages of image feature extraction, or to the availability of a large quantity of images, keys, or the hash function to the adversary. More research effort is needed to develop truly robust and secure image hashing techniques.
Abstract: Data encoded as symmetric positive definite (SPD) matrices have increased steeply over the past decade. The set of SPD matrices forms a Riemannian manifold that constitutes a convex cone in the vector space of matrices, sometimes called the SPD manifold. One of the fundamental problems in applications of the SPD manifold is to find the nearest neighbor of a queried SPD matrix. Hashing is a popular method for nearest neighbor search; however, hashing cannot be directly applied to the SPD manifold because of its non-Euclidean intrinsic geometry. Inspired by the kernel trick, a new hashing scheme for the SPD manifold, based on random projection and quantization in an expanded data space, is proposed in this paper. Experimental results on large-scale near-duplicate image detection show the effectiveness and efficiency of the proposed method.
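As a rough illustration only (not the paper's kernel-trick construction), one common way to hash SPD matrices is to embed each matrix in a Euclidean space with the matrix logarithm and then apply random-projection sign hashing; the log-Euclidean mapping and code length here are our own assumptions.

```python
import numpy as np
from scipy.linalg import logm

rng = np.random.default_rng(3)

def random_spd(d=8):
    """Generate a random SPD matrix A = B @ B.T + d * I."""
    B = rng.normal(size=(d, d))
    return B @ B.T + d * np.eye(d)

def spd_to_vector(S):
    """Log-Euclidean embedding: matrix log, then the upper triangle as a flat vector."""
    L = logm(S).real
    return L[np.triu_indices_from(L)]

def hash_spd(S, planes):
    """Random-hyperplane sign hash of the embedded SPD matrix."""
    v = spd_to_vector(S)
    return (planes @ v > 0).astype(np.uint8)

dim = len(spd_to_vector(random_spd()))
planes = rng.normal(size=(32, dim))                   # 32-bit codes

A = random_spd()
A_noisy = A + 0.01 * np.eye(8)                        # a near-duplicate of A
B = random_spd()                                      # an unrelated matrix

print("A vs near-duplicate:", np.sum(hash_spd(A, planes) != hash_spd(A_noisy, planes)))
print("A vs unrelated:     ", np.sum(hash_spd(A, planes) != hash_spd(B, planes)))
```

The Hamming distance between codes of the near-duplicate pair should be much smaller than that of the unrelated pair, which is what makes such codes useful for near-duplicate detection.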
Funding: This work is partially supported by the National Natural Science Foundation of China (Nos. 61562007, 61762017, 61702332), the National Key R&D Plan of China (2018YFB1003701), the Guangxi "Bagui Scholar" Teams for Innovation and Research, the Guangxi Natural Science Foundation (Nos. 2017GXNSFAA198222, 2015GXNSFDA139040), the Project of Guangxi Science and Technology (No. GuiKeAD17195062), the Project of the Guangxi Key Lab of Multi-source Information Mining & Security (Nos. 16-A-02-02, 15-A-02-02), the Guangxi Collaborative Innovation Center of Multi-source Information Integration and Intelligent Processing, and the Innovation Project of Guangxi Graduate Education (No. XYCSZ 2018076).
Abstract: Image hashing is a useful multimedia technology for many applications, such as image authentication, image retrieval, image copy detection and image forensics. In this paper, we propose a robust image hashing scheme based on random Gabor filtering and the discrete wavelet transform (DWT). Specifically, robust and secure image features are first extracted from the normalized image by Gabor filtering and a chaotic map called the skew tent map, and are then compressed via a single-level 2-D DWT. The image hash is finally obtained by concatenating the DWT coefficients of the LL sub-band. Extensive experiments on open image datasets are carried out, and the results illustrate that our hashing is robust, discriminative and secure. Receiver operating characteristic (ROC) curve comparisons show that our hashing outperforms some popular image hashing algorithms in classification performance between robustness and discrimination.
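Below is a simplified sketch of such a pipeline (our own approximation, not the authors' exact construction): a skew tent map seeded by a secret key drives the choice of Gabor filter parameters, and the filtered image is compressed with a single-level 2-D DWT whose LL sub-band forms the hash. The parameter ranges and the final binarization are assumptions.

```python
import numpy as np
import pywt
from skimage.filters import gabor
from skimage.transform import resize

def skew_tent(x, p=0.4, n=8):
    """Skew tent map iterates used as key-dependent pseudo-random numbers."""
    out = []
    for _ in range(n):
        x = x / p if x < p else (1.0 - x) / (1.0 - p)
        out.append(x)
    return np.array(out)

def image_hash(img, key=0.37, size=128):
    img = resize(img, (size, size), anti_aliasing=True)        # normalization step
    r = skew_tent(key)
    freq = 0.05 + 0.2 * r[0]                                   # key-dependent Gabor frequency
    theta = np.pi * r[1]                                       # key-dependent orientation
    real, _ = gabor(img, frequency=freq, theta=theta)
    ll, _ = pywt.dwt2(real, "haar")                            # single-level 2-D DWT, keep LL
    ll = resize(ll, (8, 8), anti_aliasing=True)                # coarse, fixed-length summary
    return (ll > np.median(ll)).astype(np.uint8).ravel()       # binarize into a 64-bit hash

rng = np.random.default_rng(4)
img = rng.random((256, 256))
print(image_hash(img))
```

Keying the Gabor parameters with the chaotic map is what makes the hash hard to forge without the secret key, while the LL sub-band keeps it stable under mild content-preserving distortions.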
Funding: This work is supported by the National Natural Science Foundation of China (No. 61772561), the Key Research & Development Plan of Hunan Province (No. 2018NK2012), the Science Research Projects of Hunan Provincial Education Department (Nos. 18A174, 18C0262), and the Science & Technology Innovation Platform and Talent Plan of Hunan Province (2017TP1022).
Abstract: Hashing reduces data storage and improves the efficiency of the learning system, so it is increasingly widely used in image retrieval. Multi-view data describes image content more comprehensively than the single view used in traditional methods, but how to use hashing to combine multi-view data for image retrieval is still a challenge. In this paper, a multi-view fusion hashing method based on RKCCA (Random Kernel Canonical Correlation Analysis) is proposed. To describe image content more accurately, we use DenseNet deep convolutional network features to construct multiple views, in combination with GIST features or BoW_SIFT (Bag-of-Words model + SIFT) features. The algorithm uses the RKCCA method to fuse the multi-view features into association features and applies them to image retrieval. It generates binary hash codes with minimal distortion error by designing quantization regularization terms. Extensive experiments on benchmark datasets show that this method is superior to other multi-view hashing methods.
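As a simplified stand-in for the fusion step (plain CCA instead of the paper's random-kernel variant, so this is only a sketch), two feature views can be projected into a shared correlated space and sign-quantized into binary codes; the feature dimensions and code length are illustrative.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(5)

n, code_bits = 500, 16
latent = rng.normal(size=(n, code_bits))              # shared structure behind both views
view_cnn = latent @ rng.normal(size=(code_bits, 512)) + 0.1 * rng.normal(size=(n, 512))
view_gist = latent @ rng.normal(size=(code_bits, 128)) + 0.1 * rng.normal(size=(n, 128))

cca = CCA(n_components=code_bits).fit(view_cnn, view_gist)
z_cnn, z_gist = cca.transform(view_cnn, view_gist)

fused = 0.5 * (z_cnn + z_gist)                        # simple fusion of the correlated projections
codes = (fused > 0).astype(np.uint8)                  # sign quantization into binary hash codes

def hamming(a, b):
    return np.sum(a != b, axis=1)

query = codes[0]
print("closest items to item 0:", np.argsort(hamming(codes, query))[:5])
```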
Abstract: In this paper, we propose a new online system that can quickly detect malicious spam emails and adapt to changes in email contents and in the Uniform Resource Locator (URL) links leading to malicious websites, by updating the system daily. We introduce an autonomous function for a server to generate training examples, in which double-bounce emails are automatically collected and their class labels are assigned by SPIKE, a crawler-type software that analyzes website maliciousness. In general, since spammers use botnets to spread numerous malicious emails within a short time, such distributed spam emails often have the same or similar contents; therefore, it is not necessary for all spam emails to be learned. To adapt to new malicious campaigns quickly, only new types of spam emails should be selected for learning, and this can be realized by introducing an active learning scheme into the classifier model. For this purpose, we adopt the Resource Allocating Network with Locality Sensitive Hashing (RAN-LSH) as a classifier model with a data selection function. In RAN-LSH, spam emails that are the same as or similar to already-learned ones are quickly searched for in a hash table built with locality-sensitive hashing (LSH), and matched similar emails that fall in well-learned regions are discarded without being used as training data. To analyze email contents, we adopt the Bag-of-Words (BoW) approach and generate feature vectors whose attributes are transformed based on the normalized term frequency-inverse document frequency (TF-IDF). We use a dataset of double-bounce spam emails collected at the National Institute of Information and Communications Technology (NICT) in Japan from March 1st, 2013 to May 10th, 2013 to evaluate the performance of the proposed system. The results confirm that the proposed spam email detection system is capable of detecting spam with a high detection rate.
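The feature-and-lookup step can be illustrated with a small sketch (not the RAN-LSH implementation itself): TF-IDF vectors for email texts are hashed with random hyperplanes, and a new email whose bucket already contains a learned email is skipped; the bit count and toy corpus are assumptions.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

rng = np.random.default_rng(6)

learned = ["cheap watches buy now", "your account has been suspended click here",
           "meeting agenda for monday"]
incoming = ["cheap watches buy now limited offer",   # near-duplicate of a learned email
            "congratulations you won a prize"]       # genuinely new content

vectorizer = TfidfVectorizer()
X_learned = vectorizer.fit_transform(learned).toarray()
planes = rng.normal(size=(16, X_learned.shape[1]))   # 16-bit random-hyperplane codes

def code(v):
    return tuple((planes @ v > 0).astype(int))

table = {code(v) for v in X_learned}                 # hash table of already-learned emails

for text in incoming:
    v = vectorizer.transform([text]).toarray()[0]
    action = "skip (already well-learned)" if code(v) in table else "select for learning"
    print(f"{text!r} -> {action}")
```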
Funding: This work was partially supported by the Science and Technology Project of Chongqing Education Commission of China (KJZD-K202200513), the National Natural Science Foundation of China (61370205), the Chongqing Normal University Fund (22XLB003), and the Chongqing Education Science Planning Project (2021-GX-320).
Abstract: In recent years, the development of deep learning has further improved hash retrieval technology. Most existing hashing methods use Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs) to process image and text information, respectively. This subjects images or texts to local constraints, and inherent label matching cannot capture fine-grained information, often leading to suboptimal results. Driven by the development of the transformer model, we propose a framework called ViT2CMH, based mainly on the Vision Transformer, to handle deep cross-modal hashing tasks instead of relying on CNNs or RNNs. Specifically, we use a BERT network to extract text features and use the Vision Transformer as the image network of the model. Finally, the features are transformed into hash codes for efficient and fast retrieval. We conduct extensive experiments on Microsoft COCO (MS-COCO) and Flickr30K, comparing against baseline hashing methods and image-text matching methods, and the results show that our method achieves better performance.
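The final step, turning modality features into hash codes, can be sketched as follows (a generic cross-modal illustration, not the ViT2CMH architecture): each modality's feature vector passes through a small projection head ending in tanh, matched image-text pairs are pulled together during training, and sign() produces the binary codes at retrieval time. The feature dimensions and loss are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
code_bits = 32

# One projection head per modality; the inputs stand in for ViT / BERT [CLS] features.
img_head = nn.Sequential(nn.Linear(768, code_bits), nn.Tanh())
txt_head = nn.Sequential(nn.Linear(768, code_bits), nn.Tanh())
optimizer = torch.optim.Adam(list(img_head.parameters()) + list(txt_head.parameters()), lr=1e-3)

img_feats = torch.randn(16, 768)                      # toy image features (paired by index)
txt_feats = torch.randn(16, 768)                      # toy text features

for _ in range(50):
    optimizer.zero_grad()
    u, v = img_head(img_feats), txt_head(txt_feats)
    sim = u @ v.T / code_bits                         # pairwise code similarity
    loss = F.mse_loss(sim, torch.eye(16))             # pull matched pairs together, push others apart
    loss.backward()
    optimizer.step()

img_codes = torch.sign(img_head(img_feats)).detach()  # binary codes for the image database
txt_codes = torch.sign(txt_head(txt_feats)).detach()  # binary codes for text queries
print((img_codes[0] == txt_codes[0]).float().mean().item())  # bit agreement of a matched pair
```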
Abstract: Steganography is a technique for hiding secret messages while sending and receiving communications through a cover item. From ancient times to the present, the security of secret or vital information has always been a significant problem, and the development of secure communication methods that keep data transmissions readable only by the intended recipient has always been an area of interest. Therefore, several approaches, including steganography, have been developed by researchers over time to enable safe data transit. In this review, we discuss image steganography based on the Discrete Cosine Transform (DCT) algorithm, among others, as well as image steganography based on multiple cryptographic and hashing algorithms, such as the Rivest-Shamir-Adleman (RSA) method, the Blowfish technique, and the hash-least significant bit (LSB) approach. A novel method of hiding information in images is also developed, with minimal variance in image bits, making the method secure and effective. A cryptography mechanism is used in this strategy: before the data are encoded and embedded into a carrier image, the method verifies that they have been encrypted. Since embedded text in photos usually conveys crucial signals about the content, this review employs hash table encryption on the message before hiding it within the picture, providing a more secure method of data transport; if the message is ever intercepted by a third party, there are several ways to prevent it from being read. A second level of security is implemented by encrypting and decrypting the steganography images using different hashing algorithms.
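To illustrate the LSB embedding idea referenced above (a generic sketch, not the review's specific scheme), a message's bits can be written into the least significant bit of each pixel; in practice the message would be encrypted first, as the review emphasizes, and the pixel order could be keyed.

```python
import numpy as np

def embed_lsb(cover, message: bytes):
    """Write message bits (4-byte length prefix + payload) into the pixels' least significant bits."""
    payload = len(message).to_bytes(4, "big") + message
    bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
    flat = cover.flatten()
    if bits.size > flat.size:
        raise ValueError("cover image too small for message")
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits
    return flat.reshape(cover.shape)

def extract_lsb(stego):
    flat = stego.flatten() & 1
    length = int.from_bytes(np.packbits(flat[:32]).tobytes(), "big")
    bits = flat[32:32 + 8 * length]
    return np.packbits(bits).tobytes()

rng = np.random.default_rng(7)
cover = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
stego = embed_lsb(cover.copy(), b"secret (encrypt me before embedding)")
print(extract_lsb(stego))
```

Because only the lowest bit of each pixel changes, the visual difference between the cover and stego images is negligible, which is what the "minimal variance in image bits" requirement refers to.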
Funding: Supported by the National Natural Science Foundation of China (61373100), the Open Funding Project of the State Key Laboratory of Virtual Reality Technology and Systems (BUAA-VR-16KF-13, BUAA-VR-17KF-14, BUAA-VR-17KF-15), and the Research Project of the Shanxi Scholarship Council of China (2016-038).
Abstract: Lung medical image retrieval based on content similarity plays an important role in the computer-aided diagnosis of lung cancer. In recent years, binary hashing has become a hot topic in this field due to its compact storage and fast query speed. Traditional hashing methods often rely on high-dimensional hand-crafted features, which might not be optimally suited to lung nodule images. Also, different hashing bits contribute differently to image retrieval, so treating all hashing bits equally affects retrieval accuracy. Hence, a retrieval method for lung nodule images based on convolutional neural networks and hashing is proposed. First, a pre-trained and fine-tuned convolutional neural network is employed to learn multilevel semantic features of the lung nodules, and principal component analysis is utilized to remove redundant information while preserving informative semantic features. Second, the proposed method uses nine sign labels of the lung nodules in the training set and combines them with the semantic features to construct hashing functions. Finally, the returned lung nodule images are ranked with a query-adaptive search method based on weighted Hamming distance. Extensive experiments and evaluations on the dataset demonstrate that the proposed method significantly improves the expression ability of lung nodule images, which further validates its effectiveness.
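The final ranking step can be sketched as follows (an illustration under our own assumptions, not the paper's weighting scheme): each hash bit carries a weight, here a hypothetical per-bit reliability, and database items are ranked by the weighted Hamming distance to the query code.

```python
import numpy as np

rng = np.random.default_rng(8)

n_items, code_bits = 1000, 48
db_codes = rng.integers(0, 2, size=(n_items, code_bits), dtype=np.uint8)
bit_weights = rng.random(code_bits)                   # hypothetical per-bit reliability weights

def weighted_hamming(query, codes, weights):
    """Weighted Hamming distance: sum of weights over the bits that disagree."""
    return (codes != query) @ weights

query = db_codes[7].copy()
query[:3] ^= 1                                        # flip a few bits to simulate a near match

dist = weighted_hamming(query, db_codes, bit_weights)
ranking = np.argsort(dist)
print("top-5 items:", ranking[:5], "-> item 7 should rank near the top")
```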