Funding: Supported by the National Natural Science Foundation of China (Grant Nos. U20B2049, U21B2018, and 62302344).
Abstract: Adversarial training is widely considered the most effective defense against adversarial attacks. However, recent studies have shown that adversarially trained models exhibit a large discrepancy in class-wise robustness, which raises two issues: first, the overall robustness of a model is limited by its weakest class; second, the unequal protection raises ethical concerns, since certain societal demographic groups may receive weaker defenses. Despite these issues, solutions that address the discrepancy remain largely underexplored. In this paper, we move beyond existing methods that operate only at the class level. Our investigation reveals that hard examples, identified by their higher cross-entropy values, provide more fine-grained information about the discrepancy. Furthermore, we find that enhancing the diversity of these hard examples effectively reduces the robustness gap between classes. Motivated by these observations, we propose Fair Adversarial Training (FairAT) to mitigate the discrepancy in class-wise robustness. Extensive experiments on various benchmark datasets and adversarial attacks demonstrate that FairAT outperforms state-of-the-art methods in terms of both overall robustness and fairness. For a WRN-28-10 model trained on CIFAR-10, FairAT improves the average and worst-class robustness by 2.13% and 4.50%, respectively.
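The abstract only sketches the mechanism, so the following is a minimal illustrative sketch of the idea it describes: flag hard examples by their per-example cross-entropy and apply extra augmentation to them within standard PGD-based adversarial training. This is not the authors' implementation; the function names (pgd_attack, fair_at_step), the user-supplied augment callable, and all hyperparameters are assumptions made for illustration only.

```python
# Sketch (not the authors' code) of the idea in the abstract: during adversarial
# training, flag "hard" examples by their cross-entropy loss and augment them to
# increase their diversity. All names and hyperparameters are illustrative.

import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Standard PGD inner maximization for adversarial training."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()

def fair_at_step(model, optimizer, x, y, augment, hard_quantile=0.8):
    """One training step: augment the hardest examples (highest cross-entropy),
    then run standard adversarial training on the batch."""
    model.eval()
    with torch.no_grad():
        ce = F.cross_entropy(model(x), y, reduction="none")  # per-example loss
    hard_mask = ce >= ce.quantile(hard_quantile)             # identify hard examples
    x = x.clone()
    x[hard_mask] = augment(x[hard_mask])                     # diversify hard examples

    x_adv = pgd_attack(model, x, y)                          # inner maximization
    model.train()
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)                  # outer minimization
    loss.backward()
    optimizer.step()
    return loss.item()
```

In this sketch, `augment` stands in for any diversity-enhancing transformation of the hard examples (e.g., a random-augmentation pipeline); the quantile threshold is likewise an assumed way of operationalizing "hard examples with higher cross-entropy values."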