Journal Articles
4 articles found
1. Ensuring User Privacy and Model Security via Machine Unlearning: A Review
Authors: Yonghao Tang, Zhiping Cai, Qiang Liu, Tongqing Zhou, Qiang Ni. Computers, Materials & Continua (SCIE, EI), 2023, Issue 11, pp. 2645-2656 (12 pages)
As an emerging discipline, machine learning has been widely used in artificial intelligence, education, meteorology, and other fields. Training machine learning models requires large amounts of real-world data, which inevitably involves user privacy. Moreover, by polluting the training data, a malicious adversary can poison the model and thus compromise its security. Data providers expect the model trainer to prove the confidentiality of the model to them, and the trainer may be required to withdraw their data when that trust collapses. Likewise, trainers hope to forget injected data and regain security when crafted poisoned data is discovered after model training. We therefore focus on forgetting systems, a process we call machine unlearning, capable of forgetting specific data entirely and efficiently. In this paper, we present the first comprehensive survey of this realm. We summarize and categorize existing machine unlearning methods based on their characteristics and analyze the relation between machine unlearning and relevant fields (e.g., inference attacks and data poisoning attacks). Finally, we briefly conclude with existing research directions.
Keywords: machine learning, machine unlearning, privacy protection, trusted data deletion
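The survey's notion of "forgetting specific data entirely" has a natural gold standard: exact unlearning, i.e., retraining from scratch on only the retained data. The tiny closed-form model below is purely my illustration (not from the paper) of why the retrained model provably carries no trace of the forgotten record, and why this baseline is too expensive for large models, motivating efficient unlearning methods.

```python
# Illustrative sketch: exact unlearning by retraining on retained data.
# The "model" is deliberately trivial (the mean of the training data),
# so the effect of removing a record is exactly visible.

def fit_mean_model(data):
    """Train a toy one-parameter model: the mean of the data."""
    return sum(data) / len(data)

def exact_unlearn(data, forget_set):
    """Retrain from scratch on the data minus the records to forget."""
    retained = [x for x in data if x not in forget_set]
    return fit_mean_model(retained)

data = [1.0, 2.0, 3.0, 100.0]        # 100.0 is a user's private record
model = fit_mean_model(data)          # 26.5 -- heavily influenced by it
unlearned = exact_unlearn(data, {100.0})
print(model, unlearned)               # 26.5 2.0 -- identical to a model
                                      # never trained on the record
```

For a mean this retraining is free; for a deep network it means repeating the whole training run, which is precisely the cost that the surveyed unlearning methods try to avoid.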
2. The gains do not make up for the losses: a comprehensive evaluation for safety alignment of large language models via machine unlearning
Authors: Weixiang ZHAO, Yulin HU, Xingyu SUI, Zhuojun LI, Yang DENG, Yanyan ZHAO, Bing QIN, Wanxiang CHE. Frontiers of Computer Science, 2026, Issue 2, pp. 125-149 (25 pages)
Machine Unlearning (MU) has emerged as a promising technique for aligning large language models (LLMs) with safety requirements by steering them to forget specific harmful content. Despite the significant progress in previous studies, we argue that the current evaluation criteria, which focus solely on safety, are impractical and biased, raising concerns about the true effectiveness of MU techniques. To address this, we propose to comprehensively evaluate LLMs after MU from three aspects: safety, over-safety, and general utility. Specifically, we first construct a novel benchmark, MUBENCH, with 18 related datasets, where safety is measured with both vanilla harmful inputs and 10 types of jailbreak attacks. Furthermore, we examine whether MU introduces side effects, focusing on over-safety and utility loss. Extensive experiments are performed on 3 popular LLMs with 7 recent MU methods. The results highlight a challenging trilemma in safety alignment without side effects, indicating that there is still considerable room for further exploration. MUBENCH serves as a comprehensive benchmark, fostering future research on MU for the safety alignment of LLMs.
Keywords: machine unlearning, safety alignment, large language models
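The safety/over-safety distinction the abstract draws can be made concrete with a toy pair of metrics: refusal rate on harmful prompts (higher is safer) versus refusal rate on benign prompts (higher means over-safety). The refusal heuristic and metric names below are my assumptions for illustration, not the MUBENCH definitions.

```python
# Toy sketch of a two-sided safety evaluation: the same refusal detector
# is applied to harmful and benign prompts, exposing over-refusal.
# (Assumed heuristic; real benchmarks use far more robust judges.)

REFUSAL_MARKERS = ("i cannot", "i can't", "sorry")

def is_refusal(response: str) -> bool:
    """Crude marker-based refusal detector on the response prefix."""
    return response.lower().startswith(REFUSAL_MARKERS)

def evaluate(responses_to_harmful, responses_to_benign):
    safety = sum(map(is_refusal, responses_to_harmful)) / len(responses_to_harmful)
    over_safety = sum(map(is_refusal, responses_to_benign)) / len(responses_to_benign)
    return {"safety": safety, "over_safety": over_safety}

scores = evaluate(
    ["I cannot help with that.", "Here is how to ..."],          # harmful
    ["Sorry, I can't discuss cooking.", "Pasta takes 10 minutes."],  # benign
)
print(scores)  # {'safety': 0.5, 'over_safety': 0.5}
```

An unlearning method that maximizes the first number while inflating the second illustrates the "gains do not make up for the losses" trade-off in the title.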
3. An overview of machine unlearning
Authors: Chunxiao Li, Haipeng Jiang, Jiankang Chen, Yu Zhao, Shuxuan Fu, Fangming Jing, Yu Guo. High-Confidence Computing, 2025, Issue 2, pp. 122-130 (9 pages)
Nowadays, machine learning is widely used in various applications. Training a model requires huge amounts of data, which can pose a threat to user privacy. With the growing concern for privacy, the "Right to be Forgotten" has been proposed, meaning that users have the right to request that their personal information be removed from machine learning models. The emergence of machine unlearning is a response to this need. Implementing machine unlearning is not easy, because simply deleting samples from a database does not allow the model to "forget" the data. Therefore, this paper summarises the definition of machine unlearning, its formulation, process, deletion requests, design requirements and validation, algorithms, applications, and future perspectives, in the hope that it will help future researchers in machine unlearning.
Keywords: machine unlearning, unlearning definition, unlearning requirements and validation, unlearning algorithms
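The abstract's key observation, that deleting a sample from the database does not make the trained model forget it, is easy to demonstrate: the parameters were fixed at training time and still encode the record after deletion. The toy one-parameter model below is my illustration of this point, not from the paper.

```python
# Sketch: deleting a record from storage leaves the trained model
# unchanged; only retraining (or an unlearning update) changes it.

def train(data):
    return sum(data) / len(data)   # toy one-parameter model (the mean)

data = [2.0, 4.0, 90.0]            # 90.0 is the record to be forgotten
model = train(data)                # parameter = 32.0

del data[2]                        # "delete the sample from the database"
print(model)                       # still 32.0 -- the model is unchanged

retrained = train(data)            # forgetting requires updating the
print(retrained)                   # model itself: now 3.0
```

This is why unlearning must operate on the model (retraining, parameter updates, or influence removal), not merely on the dataset.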
4. Defending Federated Learning System from Poisoning Attacks via Efficient Unlearning
Authors: Long Cai, Ke Gu, Jiaqi Lei. Computers, Materials & Continua, 2025, Issue 4, pp. 239-258 (20 pages)
Large-scale neural network-based federated learning (FL) has gained public recognition for its effective capabilities in distributed training. Nonetheless, the open system architecture inherent to federated learning systems raises concerns about their vulnerability to potential attacks. Poisoning attacks have become a major menace to federated learning on account of their concealed nature and potent destructive force. By altering the local model during routine machine learning training, attackers can easily contaminate the global model. Traditional detection and aggregation solutions mitigate certain threats, but they are still insufficient to completely eliminate the influence of attackers. Therefore, federated unlearning, which can remove unreliable models while maintaining the accuracy of the global model, has become a solution. Unfortunately, some existing federated unlearning approaches are difficult to apply to large neural network models because of their high computational expense. Hence, we propose SlideFU, an efficient anti-poisoning federated unlearning framework. The primary concept of SlideFU is to employ a sliding window to construct the training process, where all operations are confined within the window. We design a malicious detection scheme based on principal component analysis (PCA), which calculates trust factors between compressed models in a low-cost way to eliminate unreliable models. After confirming that the global model is under attack, the system activates the federated unlearning process and calibrates the gradients based on the update direction of the calibration gradients. Experiments on two public datasets demonstrate that our scheme can recover a robust model with extremely high efficiency.
Keywords: federated learning, malicious client detection, model recovery, machine unlearning
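The detection step the abstract describes, compressing client models and computing pairwise trust factors to flag unreliable ones, can be sketched as follows. This is my reconstruction under stated assumptions: the data, the use of two principal components, and the mean-cosine-similarity trust score are illustrative choices, not SlideFU's actual pipeline or parameters.

```python
import numpy as np

# Sketch: PCA-compress simulated client updates, score each client by
# its mean cosine similarity to the others, and flag the outlier.
rng = np.random.default_rng(0)

# Flattened model updates: 5 honest clients near a common direction,
# 1 poisoned client pushing the opposite way.
honest = rng.normal(loc=1.0, scale=0.1, size=(5, 20))
poisoned = rng.normal(loc=-1.0, scale=0.1, size=(1, 20))
updates = np.vstack([honest, poisoned])

# "PCA compression": project centered updates onto top 2 components.
centered = updates - updates.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
compressed = centered @ vt[:2].T

# Trust factor: mean cosine similarity to the other compressed models.
unit = compressed / np.linalg.norm(compressed, axis=1, keepdims=True)
cos = unit @ unit.T
trust = (cos.sum(axis=1) - 1.0) / (len(updates) - 1)

suspect = int(np.argmin(trust))
print(trust.round(2), "-> flag client", suspect)   # client 5 is flagged
```

Working in the compressed space is what keeps the pairwise comparisons cheap relative to the full parameter dimension, which is the low-cost property the abstract emphasizes.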