k-anonymity is an important mechanism for protecting query privacy in location-based services (LBS). Prior work has pointed out that existing k-anonymity mechanisms cannot effectively protect the privacy of continuous queries. This paper proposes a continuous-query issuing model that combines an interval model and a continuity model of query issuing times, and, for the two k-anonymity algorithms under this model, Clique Cloaking and Non-clique Cloaking, presents a continuous-query attack algorithm for each. Under these attack algorithms, the cardinality of the anonymity set is no longer a suitable measure of query anonymity, so an entropy-based measure, AD (anonymity degree), is proposed. Experimental results show that, for highly continuous queries, the attack algorithms re-identify user identities with a very high success rate, and that AD reflects query anonymity better than the cardinality of the anonymity set.
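As a minimal sketch of the entropy-based anonymity degree (AD) just described (our own illustration; the function name and the toy distributions are assumptions, not the paper's code):

```python
import math

def anonymity_degree(probs):
    """Entropy-based anonymity degree (AD): probs[i] is the attacker's
    estimated probability that user i in the anonymity set issued the query."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A uniform anonymity set of k = 8 users attains the maximum AD = log2(8).
print(anonymity_degree([1 / 8] * 8))                    # 3.0
# A continuous-query attack that concentrates suspicion on one user lowers
# AD even though the cardinality of the anonymity set is unchanged.
print(anonymity_degree([0.9] + [0.1 / 7] * 7) < 3.0)    # True
```

This is why AD, unlike set cardinality, can reflect how an attack redistributes suspicion within the anonymity set.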
Funding: This work is supported by the National Key R&D Program of China (2017YFB0802703); Research on the Education Mode for Complicate Skill Students in New Media with Cross Specialty Integration (22150117092); the Major Scientific and Technological Special Project of Guizhou Province (20183001); and the Open Foundation of Guizhou Provincial Key Laboratory of Public Big Data (2018BDKFJJ014, 2018BDKFJJ019, 2018BDKFJJ022).
Abstract: Deep learning networks are widely used in various systems that require classification. However, deep learning networks are vulnerable to adversarial attacks, and the study of adversarial attacks plays an important role in defense. Black-box attacks require less knowledge about target models than white-box attacks do, which means black-box attacks are easier to launch and more valuable. However, state-of-the-art black-box attacks still suffer from low success rates and large visual distances between the generated adversarial images and the original images. This paper proposes a fast black-box attack based on the cross-correlation (FBACC) method. The attack is carried out in two stages. In the first stage, an adversarial image that is misclassified as the target label is generated by gradient-descent learning; at this point the image may look very different from the original one. Then, in the second stage, the visual quality is progressively improved under the condition that the image remains misclassified. By using the cross-correlation method, the error in smooth regions is ignored and the number of iterations is reduced. Compared with previously proposed black-box adversarial attack methods, FBACC achieves a better fooling rate with fewer iterations. When attacking LeNet5 and AlexNet individually, the fooling rates are 100% and 89.56%, respectively; when attacking both at the same time, the fooling rate is 69.78%. The FBACC method also provides a new adversarial attack method for the study of defense against adversarial attacks.
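The two-stage structure described above can be sketched roughly as follows. This is not the authors' implementation: the toy linear "model", the finite-difference gradient estimate, and the coordinate-restoring second stage are all our own simplifications.

```python
def argmax_label(scores):
    return max(range(len(scores)), key=lambda c: scores[c])

def two_stage_attack(scores_fn, x0, target, step=0.2, iters=100, eps=1e-3):
    """Sketch of the two-stage scheme: stage 1 takes (numerically estimated)
    gradient steps until the input is classified as `target`; stage 2 restores
    coordinates toward the original image whenever the misclassification survives."""
    def margin(z):                         # target score minus best other score
        s = scores_fn(z)
        return s[target] - max(v for c, v in enumerate(s) if c != target)

    x = list(x0)
    for _ in range(iters):                 # stage 1: reach the target label
        if argmax_label(scores_fn(x)) == target:
            break
        for i in range(len(x)):
            bumped = list(x)
            bumped[i] += eps
            g = (margin(bumped) - margin(x)) / eps
            x[i] += step if g > 0 else -step if g < 0 else 0.0
    for i in range(len(x)):                # stage 2: improve visual quality
        candidate = list(x)
        candidate[i] = x0[i]
        if argmax_label(scores_fn(candidate)) == target:
            x = candidate
    return x

# Toy two-class linear "black box" standing in for a real classifier.
W = [[1.0, 0.0], [0.0, 1.0]]
scores = lambda z: [sum(w_i * z_i for w_i, z_i in zip(w, z)) for w in W]
adv = two_stage_attack(scores, [1.0, 0.0], target=1)
print(argmax_label(scores(adv)))           # 1 -> misclassified as the target label
```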
Funding: This work was funded by the Natural Science Foundation of Jiangsu Province (Program BK20240699) and the National Natural Science Foundation of China (Program 62402228).
Abstract: This article proposes an innovative adversarial attack method, AMA (Adaptive Multimodal Attack), which introduces an adaptive feedback mechanism that dynamically adjusts the perturbation strength. Specifically, AMA adjusts the perturbation amplitude based on task complexity and optimizes the perturbation direction in real time based on the gradient direction, so as to improve attack efficiency. Experimental results demonstrate that AMA raises attack success rates from approximately 78.95% to 89.56% on visual question answering and from 78.82% to 84.96% on visual reasoning tasks across representative vision-language benchmarks. These findings demonstrate AMA's superior attack efficiency, reveal the vulnerability of current vision-language models to carefully crafted adversarial examples, and underscore the need to enhance their robustness.
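The adaptive-feedback idea, adjusting the perturbation amplitude while following the gradient direction, can be illustrated with a hypothetical simplification (the growth/shrink schedule and the toy attack loss below are our assumptions, not AMA's actual rule):

```python
def adaptive_step_attack(loss_grad, x0, eps0=0.1, grow=1.5, shrink=0.5, iters=20):
    """Toy adaptive-amplitude attack: the step size eps is enlarged while the
    attack loss keeps rising and damped after an overshoot; the perturbation
    direction follows the sign of the current gradient."""
    x, eps = list(x0), eps0
    prev_loss, grad = loss_grad(x)
    for _ in range(iters):
        step = [eps * (1 if g > 0 else -1 if g < 0 else 0) for g in grad]
        cand = [xi + si for xi, si in zip(x, step)]
        loss, g_new = loss_grad(cand)
        if loss > prev_loss:               # progress: accept and push harder
            x, prev_loss, grad, eps = cand, loss, g_new, eps * grow
        else:                              # overshoot: damp the amplitude
            eps *= shrink
    return x

# Toy attack loss: squared distance from the origin (any increase = "success").
loss_grad = lambda z: (sum(v * v for v in z), [2 * v for v in z])
adv = adaptive_step_attack(loss_grad, [1.0, -1.0])
print(sum(v * v for v in adv) > 2.0)       # True -> the attack loss was driven up
```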
Funding: This work was supported in part by the National Natural Science Foundation of China (NSFC) (92167106, 62103362, and 61833014) and the Natural Science Foundation of Zhejiang Province (LR18F030001).
Abstract: Recently developed fault classification methods for industrial processes are mainly data-driven. Notably, models based on deep neural networks have significantly improved fault classification accuracy owing to the inclusion of a large number of data patterns. However, these data-driven models are vulnerable to adversarial attacks: small perturbations of the samples can cause the models to produce incorrect fault predictions. Several recent studies have demonstrated the vulnerability of machine learning methods and the existence of adversarial samples. This paper proposes a black-box attack method with an extreme constraint for safety-critical industrial fault classification systems: only one variable may be perturbed to craft an adversarial sample. Moreover, to hide the adversarial samples in the visualization space, a Jacobian matrix is used to guide the selection of the perturbed variable, making the adversarial samples in the dimension-reduced space invisible to the human eye. Using the one-variable attack (OVA) method, we explore the vulnerability of industrial variables and fault types, which helps in understanding the geometric characteristics of fault classification systems. Based on the attack method, a corresponding adversarial training defense method is also proposed, which efficiently defends against OVAs and improves the prediction accuracy of the classifiers. In experiments, the proposed method was tested on two datasets, the Tennessee–Eastman process (TEP) and steel plates (SP). We explore the vulnerability of and correlation within variables and faults, and verify the effectiveness of OVAs and the defenses for various classifiers and datasets. For industrial fault classification systems, the attack success rate of our method is close to (on TEP) or even higher than (on SP) that of the current most effective first-order white-box attack method, which requires perturbation of all variables.
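The one-variable constraint can be sketched as follows; this is a toy numerical version of the idea, not the paper's procedure (the finite-difference Jacobian estimate and the two-direction probe are our own simplifications):

```python
def one_variable_attack(scores_fn, x0, true_label, delta=1.0, eps=1e-3):
    """Sketch of an OVA-style attack: a finite-difference Jacobian estimate
    picks the single most sensitive input variable, and only that variable
    is perturbed, in whichever direction lowers the true-class score."""
    base = scores_fn(x0)[true_label]
    sens = []
    for i in range(len(x0)):               # per-variable sensitivity (|Jacobian|)
        bumped = list(x0)
        bumped[i] += eps
        sens.append(abs(scores_fn(bumped)[true_label] - base) / eps)
    i_star = max(range(len(x0)), key=lambda i: sens[i])
    adv = list(x0)
    for sign in (+1, -1):                  # probe both directions on that variable
        cand = list(x0)
        cand[i_star] += sign * delta
        if scores_fn(cand)[true_label] < scores_fn(adv)[true_label]:
            adv = cand
    return i_star, adv

# Toy process model: the score of fault class 0 depends 3x more on variable 1.
scores = lambda z: [z[0] + 3 * z[1], 0.0]
i_star, adv = one_variable_attack(scores, [1.0, 1.0], true_label=0)
print(i_star, adv)                         # 1 [1.0, 0.0] -> only one variable moved
```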
Funding: Supported by the National Natural Science Foundation of China under Grant Nos. U1636106, 61572053, 61472048, 61602019, and 61502016; the Beijing Natural Science Foundation under Grant Nos. 4152038 and 4162005; the Basic Research Fund of Beijing University of Technology (No. X4007999201501); and the Scientific Research Common Program of Beijing Municipal Commission of Education under Grant No. KM201510005016.
Abstract: In order to protect the privacy of both the query user and the database, several QKD-based quantum private query (QPQ) protocols have been proposed. Unfortunately, some of them cannot perfectly resist internal attacks from the database, while others ensure better user privacy only at the cost of reduced database privacy. In this paper, a novel two-way QPQ protocol is proposed to ensure the privacy of both sides of the communication. In our protocol, the user prepares the initial quantum states and derives the key bits by comparing each initial quantum state with the outcome state returned by the database under the ctrl or shift mode, instead of announcing two non-orthogonal qubits as in other protocols, which may leak part of the secret information. In this way, not only is the privacy of the database ensured, but user privacy is also strengthened. Furthermore, our protocol achieves loss tolerance and cheat sensitivity, and resists the joint-measurement (JM) attack.
Funding: Supported by the National Key Research and Development Program of China (No. 2022ZD0210500), the National Natural Science Foundation of China (Nos. 61972067, U21A20491, and 62103437), and the Dalian Outstanding Youth Science Foundation (No. 2022RJ01).
Abstract: Deep neural networks, especially face recognition models, have been shown to be vulnerable to adversarial examples. However, existing attack methods for face recognition systems either cannot attack black-box models, are not universal, have cumbersome deployment processes, or lack camouflage and are easily detected by the human eye. In this paper, we propose an adversarial pattern generation method for face recognition and achieve universal black-box attacks by pasting the pattern on the frame of goggles. To achieve visual camouflage, we use a generative adversarial network (GAN). The scale of the GAN's generative network is increased to balance the performance conflict between concealment and adversarial behavior, a perceptual loss function based on VGG19 is used to constrain the color style and enhance the GAN's learning ability, and a fine-grained meta-learning adversarial attack strategy is used to carry out black-box attacks. Extensive visualization results demonstrate that, compared with existing methods, the proposed method can generate samples that are both camouflaged and adversarial. Meanwhile, extensive quantitative experiments show that the generated samples achieve a high attack success rate against black-box models.
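The VGG19-based perceptual loss mentioned above is, generically, a feature-matching loss between the two images' intermediate activations. A toy stand-in (our own; the trivial "layers" below replace real VGG19 activations) looks like:

```python
def perceptual_loss(features, x, y):
    """Feature-matching (perceptual) loss: mean squared distance between the
    feature maps of x and y, summed over the chosen feature extractors."""
    loss = 0.0
    for f in features:
        fx, fy = f(x), f(y)
        loss += sum((a - b) ** 2 for a, b in zip(fx, fy)) / len(fx)
    return loss

# Two toy "layers" (in real use: intermediate VGG19 activations).
feats = [lambda v: [sum(v)], lambda v: [vi * 2 for vi in v]]
print(perceptual_loss(feats, [1.0, 2.0], [1.0, 2.0]))   # 0.0 for identical inputs
```

Minimizing such a loss pulls the generated pattern toward the color style of a reference image while the adversarial loss pushes it toward misclassification.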
Funding: This work was supported in part by the Natural Science Foundation of China under Grant 62372395, in part by the Research Foundation of the Education Bureau of Hunan Province under Grant No. 24A0105, and in part by the Postgraduate Scientific Research Innovation Project of Hunan Province (Grant No. CX20230546).
Abstract: In recent years, universal adversarial perturbation (UAP) has attracted the attention of many researchers due to its good generalization. However, in order to generate an appropriate UAP, current methods usually require either access to the original dataset or the meticulous construction of optimization functions and proxy datasets. In this paper, we aim to eliminate any dependency on proxy datasets and explore a method for generating a UAP from a single image. After revisiting research on UAPs, we found that the key to generating a UAP lies in the accumulation of individual adversarial perturbation (IAP) gradients, which prompted us to study how to accumulate gradients from an IAP. We designed a simple and effective process to generate a UAP consisting of only three steps: preprocessing, generating an IAP, and scaling the perturbations. Through our proposed process, any IAP generated on an image can be turned into a UAP with comparable performance, indicating that UAPs can be generated free of data. Extensive experiments on various classifiers and attack approaches demonstrate the superiority of our method in efficiency and aggressiveness.
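One way to read the final "scaling the perturbations" step, under our own assumptions rather than the authors' definition, is as projecting the accumulated IAP gradient onto the L_inf ball used for the UAP:

```python
def iap_to_uap(iap_grad, eps=10 / 255):
    """Scale an accumulated IAP gradient so its largest component sits on the
    boundary of the L_inf ball of radius eps (toy 1-D version)."""
    m = max(abs(g) for g in iap_grad) or 1.0   # guard against an all-zero gradient
    return [eps * g / m for g in iap_grad]

uap = iap_to_uap([0.5, -0.25, 0.1])
print(max(abs(v) for v in uap) <= 10 / 255 + 1e-12)   # True: inside the ball
```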
Funding: This work was supported by the Intelligent Policing Key Laboratory of Sichuan Province (No. ZNJW2022KFZD002) and by the Scientific and Technological Research Program of Chongqing Municipal Education Commission (Grant Nos. KJQN202302403 and KJQN202303111).
Abstract: Transfer-based adversarial attacks (TAAs) can deceive a victim model even without prior knowledge. This is achieved by leveraging a property of adversarial examples: when generated from a surrogate model, they retain their effectiveness when applied to other models, owing to their good transferability. However, adversarial examples often overfit the particular architecture and feature representation of the source model, so their effectiveness decreases in black-box transfer attacks on different target models. To solve this problem, this study proposes an approach based on a regularized constrained feature layer (RCFL). The proposed method first uses regularization constraints to attenuate the low-frequency components of the initial examples. Perturbations are then added to a pre-specified layer of the source model using back-propagation, in order to modify the original adversarial examples. Afterward, a regularized loss function is used to enhance black-box transferability across different target models. The proposed method is finally tested on the ImageNet, CIFAR-100, and Stanford Cars datasets with various target models. The obtained results demonstrate that it achieves a significantly higher transfer-based adversarial attack success rate than baseline techniques.
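One possible reading of the first RCFL step, attenuating low-frequency components, is sketched below in 1-D (an assumption on our part; the paper presumably operates on 2-D images with its own filter):

```python
def attenuate_low_freq(x, alpha=0.5, k=3):
    """Split a signal into a low-frequency part (length-k moving average) and a
    residual, then attenuate only the low-frequency part by a factor alpha."""
    low = []
    for i in range(len(x)):
        window = x[max(0, i - k // 2): i + k // 2 + 1]
        low.append(sum(window) / len(window))
    return [alpha * l + (xi - l) for xi, l in zip(x, low)]

# A constant (purely low-frequency) signal is scaled by alpha.
print(attenuate_low_freq([1.0, 1.0, 1.0, 1.0]))   # [0.5, 0.5, 0.5, 0.5]
```

Suppressing the low-frequency content concentrates the subsequent feature-layer perturbation on components that transfer better across architectures.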
基金supported by the National Key Research and Development Program of China(Grant No.2022YFC3801700)the National Natural Science Foundation of China(Grant Nos.62472052,62272073,and 62171418)+1 种基金the Sichuan Science and Technology Program(Grant No.2023JDRC0017)the Xinjiang Production and Construction Corps Key Laboratory of Computing Intelligence and Network Information Security(Grant No.CZ002702-3)。
Abstract: Quantum private query (QPQ) protocols based on quantum key distribution (QKD) have gained significant attention due to their practical implementation advantages. However, joint-measurement attacks pose a serious threat to their security, especially in real-world multi-query scenarios. Most existing QKD-based QPQ protocols become highly vulnerable when users make repeated database queries: attackers can exploit strategies such as minimum error discrimination (MED) and unambiguous state discrimination (USD) to completely break database security. This work thoroughly analyzes joint-measurement attacks in multi-round QPQ systems and demonstrates that these attacks make current protocols practically unusable. To address this critical issue, we propose an effective defense method based on classical post-processing. Our solution not only reveals fundamental flaws in existing approaches but also provides a reliable way to build secure QPQ systems. These findings enable the development of robust protocols that can withstand real-world usage patterns, moving QPQ technology from theory toward practical application.
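To see why MED is powerful here, the standard Helstrom bound gives an attacker's optimal success probability when distinguishing two pure carrier states. A small numeric sketch (textbook formula, not taken from the paper):

```python
import math

def med_success_prob(overlap, p0=0.5, p1=0.5):
    """Helstrom bound: best success probability of minimum error discrimination
    between two pure states whose inner-product magnitude is `overlap`,
    prepared with prior probabilities p0 and p1."""
    return 0.5 * (1 + math.sqrt(1 - 4 * p0 * p1 * overlap ** 2))

# For conjugate-basis states with overlap 1/sqrt(2) (BB84-like carriers),
# the attacker guesses the raw key bit far better than chance.
print(round(med_success_prob(1 / math.sqrt(2)), 3))   # 0.854
```

Repeated queries let such per-state advantages compound, which is why the classical post-processing defense targets the multi-round setting.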
Abstract: Location privacy protection and the query service quality of location-based services (LBS) are inherently in conflict, and under continuous queries in real road-network environments, location privacy protection must take more constraints into account. How to effectively protect user location privacy during continuous queries over road networks while still obtaining accurate place-of-interest (POI) query results is a current research focus. Using the idea of dummy locations, this paper proposes a continuous-query algorithm for road networks that uses road intersections as anchors, which protects location privacy while obtaining accurate K-nearest-neighbor (KNN) query results. Based on injecting dummy queries and constructing query anonymity groups, a trajectory privacy protection method is proposed that resists query-content correlation attacks and movement-pattern inference attacks, and the analysis discusses how to balance location privacy protection against query service quality. Performance analysis and experiments show that the method provides strong location privacy protection for continuous queries, with good timeliness and balanced communication overhead.