In recent years, with the rapid advancement of artificial intelligence, object detection algorithms have made significant strides in accuracy and computational efficiency. Notably, research on and applications of Anchor-Free models have opened new avenues for real-time target detection in optical remote sensing images (ORSIs). However, in the realm of adversarial attacks, developing adversarial techniques tailored to Anchor-Free models remains challenging. Adversarial examples generated from Anchor-Based models often exhibit poor transferability to these newer architectures, and the growing diversity of Anchor-Free models poses additional hurdles to achieving robust transferability of adversarial attacks. This study presents an improved cross-conv-block feature fusion You Only Look Once (YOLO) architecture, engineered to extract more comprehensive semantic features during backpropagation. To address the asymmetry between densely distributed objects in ORSIs and the corresponding detector outputs, a novel dense bounding box attack strategy is proposed, which incorporates a dense target bounding box loss into the adversarial loss function. Furthermore, by integrating translation-invariant (TI) and momentum-iteration (MI) adversarial methodologies, the proposed framework significantly improves the transferability of adversarial attacks. Experimental results demonstrate that our method achieves superior adversarial attack performance, with adversarial transferability rates (ATR) of 67.53% on the NWPU VHR-10 dataset and 90.71% on the HRSC2016 dataset. Compared with ensemble and cascaded adversarial attack approaches, our method generates adversarial examples in an average of 0.64 s, an approximately 14.5% improvement in efficiency under equivalent conditions.
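The combination of translation-invariant (TI) and momentum-iteration (MI) methods referenced in the abstract can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the toy `grad_fn`, the Gaussian kernel size, and all function names here are assumptions used only to show the generic TI-MI update (smooth the gradient with a kernel, accumulate it into a momentum term, take a signed step, and project back into the epsilon-ball).

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.0):
    """Normalized 2D Gaussian kernel used by the TI step."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return k / k.sum()

def smooth(grad, kernel):
    """TI step: convolve the gradient with the kernel so the
    perturbation transfers across small spatial shifts."""
    pad = kernel.shape[0] // 2
    g = np.pad(grad, pad, mode="edge")
    out = np.empty_like(grad)
    h, w = grad.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(
                g[i:i + kernel.shape[0], j:j + kernel.shape[1]] * kernel)
    return out

def ti_mi_attack(x, grad_fn, eps=0.1, steps=10, mu=1.0):
    """Iterative signed-gradient attack with momentum (MI) and
    translation-invariant gradient smoothing (TI)."""
    alpha = eps / steps          # per-step budget
    g = np.zeros_like(x)         # accumulated momentum
    kernel = gaussian_kernel()
    x_adv = x.copy()
    for _ in range(steps):
        grad = smooth(grad_fn(x_adv), kernel)               # TI
        g = mu * g + grad / (np.abs(grad).sum() + 1e-12)    # MI
        x_adv = x_adv + alpha * np.sign(g)
        x_adv = np.clip(x_adv, x - eps, x + eps)  # stay in eps-ball
    return x_adv
```

In the paper's setting, `grad_fn` would be the gradient of the dense bounding box adversarial loss with respect to the input image; here any differentiable surrogate demonstrates the update rule.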
Deep-learning-based object detection algorithms are now widely deployed, yet a recent line of research has shown that existing detectors are vulnerable to adversarial attacks that cause them to fail. However, few studies focus on the transferability of adversarial attacks in autonomous driving scenarios, and even fewer address the stealthiness of such attacks. To address these gaps, this work treats the optimization of adversarial examples as analogous to training a machine learning model and designs algorithmic modules that improve attack transferability. By combining style transfer with neural rendering techniques, it proposes and implements a transferable and stealthy attack (TSA) method. Specifically, the adversarial example is first tiled repeatedly and combined with a mask to generate the final texture, which is applied to the entire vehicle surface. To simulate realistic environmental conditions, a physical transformation function embeds the rendered camouflaged vehicle into photorealistic scenes. Finally, the adversarial example is optimized with a purpose-built loss function. Simulation experiments show that the TSA method surpasses existing methods in attack transferability while remaining visually inconspicuous. Physical-domain experiments further demonstrate that TSA maintains effective attack performance in the real world.
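The texture-generation step described above (tile the adversarial patch, then blend it onto the vehicle surface through a mask) can be sketched as follows. This is a simplified illustration, not the TSA implementation: the function names are hypothetical, and the neural rendering and physical transformation stages are omitted.

```python
import numpy as np

def tile_texture(patch, out_h, out_w):
    """Repeat a small adversarial patch until it covers the
    requested texture size, then crop to exact dimensions."""
    reps_h = -(-out_h // patch.shape[0])  # ceil division
    reps_w = -(-out_w // patch.shape[1])
    tiled = np.tile(patch, (reps_h, reps_w))
    return tiled[:out_h, :out_w]

def apply_mask(base_texture, adv_texture, mask):
    """Blend the tiled adversarial texture onto the base vehicle
    texture: mask == 1 where camouflage is painted, 0 elsewhere."""
    return mask * adv_texture + (1 - mask) * base_texture
```

In the full pipeline, the blended texture would then be passed to a differentiable renderer so that gradients from the detection loss can flow back to the patch pixels; only the texture assembly is shown here.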