Abstract: In recent years, with the rapid advancement of artificial intelligence, object detection algorithms have made significant strides in accuracy and computational efficiency. Notably, research on and applications of Anchor-Free models have opened new avenues for real-time target detection in optical remote sensing images (ORSIs). However, in the realm of adversarial attacks, developing adversarial techniques tailored to Anchor-Free models remains challenging. Adversarial examples generated from Anchor-Based models often exhibit poor transferability to these newer architectures, and the growing diversity of Anchor-Free models poses additional hurdles to achieving robust transferability of adversarial attacks. This study presents an improved cross-conv-block feature fusion You Only Look Once (YOLO) architecture, engineered to extract more comprehensive semantic features during backpropagation. To address the asymmetry between densely distributed objects in ORSIs and the corresponding detector outputs, a novel dense bounding box attack strategy is proposed, which incorporates a dense target bounding box loss into the adversarial loss function. Furthermore, by integrating translation-invariant (TI) and momentum-iteration (MI) adversarial methodologies, the proposed framework significantly improves the transferability of adversarial attacks. Experimental results demonstrate that our method achieves superior adversarial attack performance, with adversarial transferability rates (ATR) of 67.53% on the NWPU VHR-10 dataset and 90.71% on the HRSC2016 dataset. Compared with ensemble and cascaded adversarial attack approaches, our method generates adversarial examples in an average of 0.64 s, an approximately 14.5% improvement in efficiency under equivalent conditions.
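The abstract does not include implementation details, but the core idea it describes (an adversarial loss summed over dense predicted boxes, gradients smoothed with a translation-invariant kernel, and updates stabilized by momentum iteration) can be illustrated with a minimal PyTorch-style sketch. The detector interface returning per-box confidence scores, the step count, the kernel size, and the perturbation budget below are assumptions made for illustration, not the authors' actual implementation.

```python
import torch
import torch.nn.functional as F

def gaussian_kernel(size=7, sigma=3.0):
    """2-D Gaussian kernel used for translation-invariant (TI) gradient smoothing."""
    coords = torch.arange(size, dtype=torch.float32) - (size - 1) / 2
    g = torch.exp(-(coords ** 2) / (2 * sigma ** 2))
    k = torch.outer(g, g)
    return (k / k.sum()).expand(3, 1, size, size)  # one depthwise kernel per RGB channel

def mi_ti_dense_bbox_attack(detector, image, eps=8 / 255, steps=10, mu=1.0):
    """MI-FGSM-style attack sketch: the loss sums the confidence of every dense
    predicted box, gradients are smoothed with a TI kernel, and a momentum term
    stabilizes the update direction. `image` is assumed to be a (1, 3, H, W)
    tensor in [0, 1]; `detector` is a hypothetical model returning per-box scores."""
    kernel = gaussian_kernel().to(image.device)
    alpha = eps / steps
    adv, momentum = image.clone().detach(), torch.zeros_like(image)
    for _ in range(steps):
        adv.requires_grad_(True)
        scores = detector(adv)             # hypothetical: confidence of each dense box
        loss = scores.sum()                # suppressing this confidence hides the objects
        grad = torch.autograd.grad(loss, adv)[0]
        # TI: depthwise convolution of the gradient with the Gaussian kernel.
        grad = F.conv2d(grad, kernel, padding=kernel.shape[-1] // 2, groups=3)
        # MI: accumulate a normalized momentum of past gradients.
        momentum = mu * momentum + grad / grad.abs().mean().clamp(min=1e-12)
        adv = adv.detach() - alpha * momentum.sign()   # descend to reduce box confidence
        adv = image + (adv - image).clamp(-eps, eps)   # project back into the eps-ball
        adv = adv.clamp(0, 1)
    return adv
```

The cross-conv-block feature fusion backbone and the exact weighting of the dense bounding box loss are specific to the paper and are not reproduced in this sketch.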
Funding: supported in part by the Natural Science Foundation of the Anhui Higher Education Institutions of China (Nos. 2023AH040149 and 2022AH050310), the Anhui Provincial Natural Science Foundation (No. 2208085MF168), the Science and Technology Innovation Program of Maanshan, China (No. 2021a120009), and the National Natural Science Foundation of China (Nos. 52205548, 62206006, and 62306007).
Abstract: Deep neural networks are widely used in computer vision tasks, but they are vulnerable to adversarial samples, which result in poor recognition accuracy. Although traditional algorithms for crafting adversarial samples are effective against classification models, their attack performance degrades when facing object detection models with more complex structures. To better address this issue, this paper first analyzes the multi-scale feature extraction mechanism of object detection models and then proposes a novel adversarial sample generation algorithm for attacking detection models, built from an object feature-wise attention module and a perturbation extraction module. In the first module, based on the multi-scale feature maps, the noise distribution in the object region is computed to narrow the range of the perturbation and improve the stealthiness of the adversarial samples. In the second module, the noise distribution is fed into a generative adversarial network to produce adversarial perturbations with strong attack transferability. As a result, the proposed approach can better confuse the judgment of detection models. Experiments on the DroneVehicle dataset show that the method is computationally efficient and attacks detection models effectively, as measured by both qualitative and quantitative analysis.
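The paper's exact modules are not reproduced here; the sketch below only illustrates, under assumptions, how an object feature-wise attention mask derived from multi-scale feature maps could confine a generator's perturbation to object regions. The mask construction, the toy generator, and the perturbation budget are hypothetical; the actual method trains its generator within a GAN framework against the detector, which is omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ObjectRegionMask(nn.Module):
    """Collapse multi-scale detector feature maps into a per-pixel object-attention
    map used to confine the perturbation to object regions (illustrative only)."""
    def forward(self, feature_maps, image_size):
        maps = []
        for fmap in feature_maps:                       # each: (B, C, h, w)
            att = fmap.abs().mean(dim=1, keepdim=True)  # channel-wise activation energy
            att = F.interpolate(att, size=image_size, mode="bilinear", align_corners=False)
            maps.append(att)
        mask = torch.stack(maps).mean(dim=0)            # average across scales
        mask = (mask - mask.amin()) / (mask.amax() - mask.amin() + 1e-8)
        return mask                                     # (B, 1, H, W) in [0, 1]

class PerturbationGenerator(nn.Module):
    """Toy stand-in for the GAN generator: maps the masked image to a bounded
    perturbation that is non-zero only inside object regions."""
    def __init__(self, eps=8 / 255):
        super().__init__()
        self.eps = eps
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, image, mask):
        return self.eps * self.net(image * mask) * mask
```

A perturbed image would then be obtained as (image + generator(image, mask)).clamp(0, 1); adversarial training of the generator against the detector and a discriminator, as the abstract describes, is not shown.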