1 Introduction

Although deep neural networks (DNNs) have made groundbreaking progress in various machine learning domains, their practical deployment is still significantly impeded by adversarial vulnerability [1]. Adversarial training, the primary approach to enhancing the adversarial robustness of DNNs, augments the training set with adversarial examples and applies an adversarial regularization loss to improve robustness [2]. However, finding models that achieve a reasonable trade-off between accuracy and robustness remains an open challenge. In this paper, we propose the adoption of global probability constraints to stabilize model decision-making. Our contributions can be summarized as follows.
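The adversarial training recipe described above — generating adversarial examples and mixing their loss into the training objective — can be sketched for a toy logistic model. This is a generic FGSM-style illustration, not the paper's proposed method; the attack choice, the model, and the weighting `lam` on the adversarial term are all assumptions for exposition.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_example(x, y, w, eps):
    """Craft an FGSM adversarial example for a logistic model.

    x: input vector; y: label in {-1, +1}; w: weight vector;
    eps: L-inf perturbation budget.
    Loss is -log sigmoid(y * w.x); the attack steps along the sign
    of the input gradient to maximize that loss.
    """
    margin = y * np.dot(w, x)
    grad_x = -y * w * sigmoid(-margin)   # d(loss)/d(x)
    return x + eps * np.sign(grad_x)

def adv_train_step(x, y, w, eps, lr, lam):
    """One SGD step on clean loss plus lam-weighted adversarial loss."""
    x_adv = fgsm_example(x, y, w, eps)
    for xi, scale in ((x, 1.0), (x_adv, lam)):
        margin = y * np.dot(w, xi)
        grad_w = -y * xi * sigmoid(-margin)  # d(loss)/d(w)
        w = w - lr * scale * grad_w
    return w
```

By construction, the FGSM example should never decrease the loss within the budget `eps`; setting `lam` trades clean accuracy against robustness, which is exactly the trade-off the abstract identifies as unresolved.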
Funding: partially supported by the National Natural Science Foundation of China (Grant Nos. 61772284 and 62476137), the Jiangsu Province Excellent Postdoctoral Program, and the Natural Science Research Start-up Foundation of Recruiting Talents of Nanjing University of Posts and Telecommunications (No. NY223213).