Funding: supported in part by the Natural Science Foundation of the Anhui Higher Education Institutions of China (Nos. 2023AH040149 and 2022AH050310), the Anhui Provincial Natural Science Foundation (No. 2208085MF168), the Science and Technology Innovation Program of Maanshan, China (No. 2021a120009), and the National Natural Science Foundation of China (Nos. 52205548, 62206006, and 62306007).
Abstract: Deep neural networks are commonly used in computer vision tasks, but they are vulnerable to adversarial samples, which can sharply reduce recognition accuracy. Although traditional algorithms for crafting adversarial samples attack classification models effectively, their performance degrades against object detection models, whose structures are more complex. To address this issue, in this paper we first analyze the multi-scale feature extraction mechanism of object detection models, and then propose a novel adversarial sample generation algorithm for attacking detection models, built from an object feature-wise attention module and a perturbation extraction module. Specifically, in the first module, based on the multi-scale feature maps, we compute the noise distribution over the object region, which narrows the range of the perturbation and improves the stealthiness of the adversarial samples. In the second module, we feed this noise distribution into a generative adversarial network to generate an adversarial perturbation with strong attack transferability. In this way, the proposed approach better confuses the judgment of detection models. Experiments on the DroneVehicle dataset show that our method is computationally efficient and attacks detection models effectively under both qualitative and quantitative analysis.
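To make the two-module pipeline concrete, the following is a minimal PyTorch sketch of the idea as described in the abstract: an attention module that collapses multi-scale detector features into a spatial "noise distribution" concentrated on object regions, and a generator that turns this map into a bounded, region-masked perturbation. All class names, layer choices, tensor shapes, and the epsilon bound are illustrative assumptions, not the paper's actual architecture.

```python
# Hypothetical sketch of the abstract's pipeline; not the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ObjectFeatureAttention(nn.Module):
    """Assumed object feature-wise attention: maps each multi-scale feature
    map to a spatial mask and averages them into one noise-distribution map,
    so the perturbation can be restricted to object regions."""
    def __init__(self, in_channels):
        super().__init__()
        self.proj = nn.Conv2d(in_channels, 1, kernel_size=1)

    def forward(self, feature_maps, image_size):
        # feature_maps: list of (B, C, H_i, W_i) tensors from a detector backbone
        masks = []
        for fm in feature_maps:
            m = torch.sigmoid(self.proj(fm))                  # (B, 1, H_i, W_i)
            m = F.interpolate(m, size=image_size, mode="bilinear",
                              align_corners=False)            # upsample to image size
            masks.append(m)
        return torch.stack(masks, dim=0).mean(dim=0)          # (B, 1, H, W)

class PerturbationGenerator(nn.Module):
    """Assumed GAN-style generator: consumes the image and the noise map and
    emits an eps-bounded perturbation, masked to object regions."""
    def __init__(self, eps=8 / 255):
        super().__init__()
        self.eps = eps  # assumed L-infinity bound; the paper's value is not given here
        self.net = nn.Sequential(
            nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, image, noise_map):
        x = torch.cat([image, noise_map], dim=1)              # (B, 4, H, W)
        delta = self.eps * self.net(x)                        # perturbation in [-eps, eps]
        # Apply the mask so the perturbation stays inside object regions.
        return (image + noise_map * delta).clamp(0, 1)

if __name__ == "__main__":
    # Toy forward pass: random tensors stand in for the detector's features.
    img = torch.rand(1, 3, 256, 256)
    feats = [torch.rand(1, 64, s, s) for s in (64, 32, 16)]
    attn = ObjectFeatureAttention(in_channels=64)
    gen = PerturbationGenerator()
    mask = attn(feats, image_size=img.shape[-2:])
    adv = gen(img, mask)
    print(adv.shape)  # torch.Size([1, 3, 256, 256])
```

In a full attack, the generator would be trained adversarially against the detector's losses so that the perturbation suppresses detections; that training loop is omitted here because the abstract does not specify it.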