Journal Articles: 3 results found
1. Ensuring User Privacy and Model Security via Machine Unlearning: A Review
Authors: Yonghao Tang, Zhiping Cai, Qiang Liu, Tongqing Zhou, Qiang Ni
Journal: Computers, Materials & Continua (SCIE, EI), 2023, No. 11, pp. 2645-2656 (12 pages)
Abstract: As an emerging discipline, machine learning has been widely used in artificial intelligence, education, meteorology and other fields. In the training of machine learning models, trainers need to use a large amount of practical data, which inevitably involves user privacy. Besides, by polluting the training data, a malicious adversary can poison the model, thus compromising model security. The data provider hopes that the model trainer can prove to them the confidentiality of the model. The trainer will be required to withdraw data when this trust collapses. In the meantime, trainers hope to forget the injected data to regain security when they find crafted poisoned data after model training. Therefore, we focus on forgetting systems, whose process we call machine unlearning, capable of forgetting specific data entirely and efficiently. In this paper, we present the first comprehensive survey of this realm. We summarize and categorize existing machine unlearning methods based on their characteristics and analyze the relation between machine unlearning and relevant fields (e.g., inference attacks and data poisoning attacks). Finally, we briefly conclude the existing research directions.
Keywords: machine learning; machine unlearning; privacy protection; trusted data deletion
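The "forgetting specific data entirely" that this survey studies is easiest to see against the naive exact-unlearning baseline: retrain from scratch on the dataset minus the revoked records. The sketch below is only an illustration of that baseline, not any method from the paper; the scikit-learn model, the placeholder data, and the variable names are all assumptions.

```python
# Naive exact unlearning: retrain from scratch without the revoked records.
# Illustrative sketch only; surveyed methods aim to avoid this full-retrain cost.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))          # placeholder training features
y = (X[:, 0] > 0).astype(int)            # placeholder labels

model = LogisticRegression().fit(X, y)   # original model, trained on all data

revoked = {3, 17, 256}                   # indices a user asked to be forgotten
keep = np.array([i for i in range(len(X)) if i not in revoked])

# The retrained model provably contains no influence from the revoked rows,
# but it costs a full training run -- the inefficiency unlearning methods attack.
unlearned_model = LogisticRegression().fit(X[keep], y[keep])
```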
2. An overview of machine unlearning
Authors: Chunxiao Li, Haipeng Jiang, Jiankang Chen, Yu Zhao, Shuxuan Fu, Fangming Jing, Yu Guo
Journal: High-Confidence Computing, 2025, No. 2, pp. 122-130 (9 pages)
Abstract: Nowadays, machine learning is widely used in various applications. Training a model requires huge amounts of data, but this can pose a threat to user privacy. With the growing concern for privacy, the "Right to be Forgotten" has been proposed, which means that users have the right to request that their personal information be removed from machine learning models. The emergence of machine unlearning is a response to this need. Implementing machine unlearning is not easy, because simply deleting samples from a database does not allow the model to "forget" the data. Therefore, this paper summarises the definition of machine unlearning, its formulation, process, deletion requests, design requirements and validation, algorithms, applications, and future perspectives, in the hope that it will help future researchers in machine unlearning.
Keywords: machine unlearning; unlearning definition; unlearning requirements and validation; unlearning algorithms
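The "validation" item this overview lists usually asks whether a model still behaves as if it had seen the deleted points. One common proxy is comparing per-sample loss on deleted data against loss on unseen holdout data: memorized points tend to have suspiciously low loss. A rough sketch of that heuristic follows; the function names, the tolerance, and the check itself are illustrative assumptions, not the paper's protocol.

```python
# Membership-style check: after unlearning, deleted points should look
# statistically like unseen holdout points rather than memorized training data.
import numpy as np
from sklearn.metrics import log_loss

def per_sample_loss(model, X, y):
    """Per-example cross-entropy; unusually low values hint at memorization."""
    probs = model.predict_proba(X)
    return np.array([log_loss([yi], [pi], labels=model.classes_)
                     for yi, pi in zip(y, probs)])

def appears_forgotten(model, X_deleted, y_deleted, X_holdout, y_holdout, tol=0.1):
    """Heuristic: mean deleted-point loss should be close to mean holdout loss."""
    gap = per_sample_loss(model, X_deleted, y_deleted).mean() \
        - per_sample_loss(model, X_holdout, y_holdout).mean()
    return abs(gap) < tol
```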
3. Defending Federated Learning System from Poisoning Attacks via Efficient Unlearning
Authors: Long Cai, Ke Gu, Jiaqi Lei
Journal: Computers, Materials & Continua, 2025, No. 4, pp. 239-258 (20 pages)
Abstract: Large-scale neural-network-based federated learning (FL) has gained public recognition for its effective capabilities in distributed training. Nonetheless, the open system architecture inherent to federated learning systems raises concerns regarding their vulnerability to potential attacks. Poisoning attacks have become a major menace to federated learning on account of their concealed nature and potent destructive force. By altering the local model during routine machine learning training, attackers can easily contaminate the global model. Traditional detection and aggregation solutions mitigate certain threats, but they are still insufficient to completely eliminate the influence generated by attackers. Therefore, federated unlearning, which can remove unreliable models while maintaining the accuracy of the global model, has become a solution. Unfortunately, some existing federated unlearning approaches are difficult to apply to large neural network models because of their high computational expense. Hence, we propose SlideFU, an efficient anti-poisoning-attack federated unlearning framework. The primary concept of SlideFU is to employ a sliding window to construct the training process, where all operations are confined within the window. We design a malicious-detection scheme based on principal component analysis (PCA), which calculates trust factors between compressed models in a low-cost way to eliminate unreliable models. After confirming that the global model is under attack, the system activates the federated unlearning process and calibrates the gradients based on the update direction of the calibration gradients. Experiments on two public datasets demonstrate that our scheme can recover a robust model with extremely high efficiency.
Keywords: federated learning; malicious client detection; model recovery; machine unlearning
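The PCA-based detection step the abstract describes can be pictured as compressing each client's flattened update and scoring mutual agreement in the low-dimensional space. The sketch below is only an illustration of that idea, not SlideFU's actual code: the shapes, the cosine-similarity trust factor, and the outlier threshold are all assumptions.

```python
# Illustration of PCA-based trust scoring for client updates: compress each
# flattened update, then use average cosine similarity in the compressed
# space as a trust factor; low-trust clients are flagged as suspicious.
import numpy as np
from sklearn.decomposition import PCA

def trust_factors(updates: np.ndarray, n_components: int = 5) -> np.ndarray:
    """updates: (n_clients, n_params) matrix; assumes n_clients > n_components."""
    z = PCA(n_components=n_components).fit_transform(updates)  # cheap compressed view
    z = z / (np.linalg.norm(z, axis=1, keepdims=True) + 1e-12)
    sim = z @ z.T                        # pairwise cosine similarities
    np.fill_diagonal(sim, 0.0)
    return sim.mean(axis=1)              # average agreement with the other clients

def flag_suspicious(updates: np.ndarray, k: float = 2.0) -> np.ndarray:
    """Flag clients whose trust factor falls k standard deviations below the mean."""
    t = trust_factors(updates)
    return t < t.mean() - k * t.std()
```

Scoring in the PCA-compressed space rather than on raw parameter vectors is what keeps the comparison cheap for large models, which matches the low-cost motivation given in the abstract.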