
Efficient Adversarial Training for AUC Optimization (Cited by: 1)
Abstract: The Area Under the ROC Curve (AUC) is widely recognized as an essential metric for evaluating classification performance, particularly in imbalanced data scenarios, due to its insensitivity to the underlying data distribution. Motivated by this property, AUC-oriented Adversarial Training (AT), abbreviated as AdAUC, has recently gained prominence as an effective paradigm for defending against adversarial attacks in real-world long-tail security challenges. The core idea of AdAUC is to improve model robustness against adversarial perturbations by optimizing a minimax framework with AUC-inspired AT objectives. To achieve this, existing AdAUC methods typically rely on a squared surrogate loss to approximate and reformulate the pairwise AUC adversarial loss as an instance-wise stochastic saddle point problem (SPP). This transformation alleviates the computational bottleneck arising from pairwise comparisons in AdAUC. However, despite its advantages, this approach has several limitations. First, given that different surrogate optimization methods often lead to varying AUC performance, the current square-based surrogate AdAUC paradigm may lack the flexibility needed to accommodate the diverse robustness requirements of real-world applications. In addition, akin to the traditional AT paradigm, improving adversarial robustness in terms of AUC typically comes at the cost of degraded AUC performance on clean data, an issue commonly referred to as the clean-robustness trade-off in the AT community. Unfortunately, this trade-off between standard AUC and robust AUC remains an open challenge in the current literature, with limited exploration of effective solutions, thereby restricting the practical applicability of AdAUC. To address these issues, this paper systematically investigates a more generalized and efficient AdAUC framework. Specifically, we introduce a novel approach called Normalized Score-based AdAUC (NSAdAUC), which takes a fundamentally different route from existing methods. Instead of relying on specific surrogate loss functions, NSAdAUC directly perturbs the model's predicted scores across different samples to attack the AUC metric, under relatively mild conditions. This direct perturbation strategy allows for a more flexible and effective AT process, free from the constraints of traditional surrogate losses. Taking a step further, we provide a theoretical analysis that decomposes the robust AUC error into two key components: the standard AUC error and the boundary AUC error. This decomposition offers deeper insight into the fundamental trade-offs of AdAUC and serves as a guiding principle for designing more balanced training strategies. Building upon these insights, we propose a Ranking-aware Adversarial Regularization algorithm (RARAdAUC), explicitly designed to balance standard and robust AUC performance. More concretely, RARAdAUC introduces a ranking-based regularization term to mitigate the negative impact of AdAUC on clean data while still enhancing adversarial robustness. Finally, to evaluate the effectiveness of the proposed methods, we conduct extensive experiments on five benchmark datasets with long-tail distributions. The experimental results demonstrate that NSAdAUC and RARAdAUC consistently outperform existing AdAUC approaches under a variety of adversarial attacks. In particular, NSAdAUC achieves an average improvement of 0.94% in standard AUC and 5.69% in robust AUC, while RARAdAUC yields improvements of 5.52% in standard AUC and 5.41% in robust AUC. Our study not only provides new insights into the adversarial robustness of AdAUC but also paves the way for future research into balancing standard and robust performance in adversarial settings.
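The pairwise-to-instance-wise reformulation the abstract refers to can be illustrated with a minimal sketch. This is not the authors' implementation, and the function names are hypothetical; it only shows the algebraic identity behind the trick: a squared surrogate loss averaged over all positive/negative score pairs collapses into a closed-form expression in per-class means and variances, which is what makes instance-wise stochastic (saddle point) optimization feasible.

```python
import numpy as np

def pairwise_auc_sq_loss(pos_scores, neg_scores, margin=1.0):
    """Squared surrogate for 1 - AUC: the mean over every
    (positive, negative) pair of (margin - (s_pos - s_neg))^2.
    Naively this costs O(n_pos * n_neg) per evaluation."""
    diff = pos_scores[:, None] - neg_scores[None, :]  # all pairwise gaps
    return np.mean((margin - diff) ** 2)

def instancewise_auc_sq_loss(pos_scores, neg_scores, margin=1.0):
    """Equivalent instance-wise form. Since the pair average of
    (margin - s_pos + s_neg)^2 depends only on per-class first and
    second moments, it equals
        (margin - (mean_pos - mean_neg))^2 + var_pos + var_neg,
    so each sample contributes independently and mini-batch
    stochastic updates become possible."""
    a, b = pos_scores.mean(), neg_scores.mean()
    return (margin - (a - b)) ** 2 + pos_scores.var() + neg_scores.var()
```

Under this identity, an SPP-style solver never enumerates pairs: it maintains running estimates of the class means and optimizes the instance-wise objective from random mini-batches, which is the computational advantage the abstract attributes to the squared-surrogate AdAUC framework.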
Authors: BAO Shi-Long, XU Qian-Qian, YANG Zhi-Yong, HUA Cong, HAN Bo-Yu, CAO Xiao-Chun, HUANG Qing-Ming (School of Computer Science and Technology, University of Chinese Academy of Sciences, Beijing 101408; Key Laboratory of Intelligent Information Processing, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190; School of Cyber Science and Technology, Shenzhen Campus, Sun Yat-sen University, Shenzhen, Guangdong 518107)
Source: Chinese Journal of Computers (Peking University Core Journal), 2025, No. 7, pp. 1551-1571 (21 pages)
Funding: National Science and Technology Major Project on New Generation Artificial Intelligence (2018AAA0102000); National Natural Science Foundation of China (62236008, 62441232, U21B2038, U23B2051, 62122075, 62206264, 92370102); Excellent Member Program of the Youth Innovation Promotion Association, Chinese Academy of Sciences; Strategic Priority Research Program of the Chinese Academy of Sciences (XDB06801201); National Postdoctoral Researcher Support Program (GZB20240729).
Keywords: AUC optimization; adversarial training; adversarial robustness; long-tail learning; machine learning

Co-cited references: 6

Citing articles: 1
