Funding: Supported in part by the National Social Science Foundation of China under Grant 20BTQ058, and in part by the Natural Science Foundation of Hunan Province under Grant 2023JJ50033.
Abstract: Large-scale neural network-based federated learning (FL) has gained public recognition for its effectiveness in distributed training. Nonetheless, the open system architecture inherent to federated learning systems raises concerns about their vulnerability to attack. Poisoning attacks have become a major menace to federated learning because of their stealth and destructive power: by altering local models during routine training, attackers can easily contaminate the global model. Traditional detection and aggregation solutions mitigate certain threats, but they remain insufficient to eliminate the attackers' influence completely. Federated unlearning, which can remove unreliable models while maintaining the accuracy of the global model, has therefore become a solution. Unfortunately, some existing federated unlearning approaches are difficult to apply to large neural network models because of their high computational cost. Hence, we propose SlideFU, an efficient federated unlearning framework against poisoning attacks. The primary concept of SlideFU is to employ a sliding window to structure the training process, with all operations confined within the window. We design a malicious-model detection scheme based on principal component analysis (PCA), which calculates trust factors between compressed models at low cost to eliminate unreliable models. After confirming that the global model is under attack, the system activates the federated unlearning process, calibrating gradients according to the update direction of the calibration gradients. Experiments on two public datasets demonstrate that our scheme can recover a robust model with extremely high efficiency.
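The abstract does not give the exact trust-factor formula; the following is a minimal sketch of one plausible realisation, assuming a client's trust factor is its mean cosine similarity to the other clients' PCA-compressed updates (the compression dimension and threshold are illustrative assumptions, not the paper's values):

```python
import numpy as np
from sklearn.decomposition import PCA

def detect_unreliable(client_updates, n_components=8, trust_threshold=0.0):
    """Flag client updates whose PCA-compressed representation disagrees
    with the majority (a sketch, not the SlideFU algorithm itself).

    client_updates: array of shape (n_clients, n_params), each row a
    flattened local model update.
    """
    client_updates = np.asarray(client_updates)
    # Compress high-dimensional updates so the comparison stays cheap.
    k = min(n_components, len(client_updates) - 1, client_updates.shape[1])
    compressed = PCA(n_components=k).fit_transform(client_updates)

    # Cosine similarity between every pair of compressed updates.
    norms = np.linalg.norm(compressed, axis=1, keepdims=True)
    unit = compressed / np.clip(norms, 1e-12, None)
    sim = unit @ unit.T

    # Trust factor of a client: mean similarity to all other clients
    # (assumed definition; the paper's exact formula may differ).
    n = len(client_updates)
    trust = (sim.sum(axis=1) - 1.0) / (n - 1)

    benign = trust >= trust_threshold  # illustrative threshold
    return benign, trust
```

Aggregation would then proceed only over updates flagged as benign; computing similarities in the compressed space is what keeps the detection low-cost.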
Funding: Supported by the National Natural Science Foundation of China (62102035) and the National Key Research and Development Program of China (2022ZD0115901).
Abstract: Nowadays, machine learning is widely used in various applications. Training a model requires huge amounts of data, which can pose a threat to user privacy. With growing concern for privacy, the "Right to be Forgotten" has been proposed, meaning that users have the right to request that their personal information be removed from machine learning models. Machine unlearning emerged in response to this need. Implementing machine unlearning is not easy, because simply deleting samples from a database does not make the model "forget" the data. This paper therefore summarises the machine unlearning formulation, process, deletion requests, design requirements, validation, algorithms, applications, and future perspectives, in the hope of helping future researchers in machine unlearning.
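To make that central point concrete, here is a small self-contained illustration (toy data and a scikit-learn logistic regression as a stand-in model) of why deleting rows from the database leaves the model unchanged, alongside the retrain-from-scratch baseline that faster unlearning methods try to approximate:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy data standing in for a real training set (illustrative only).
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))
y = (X[:, 0] + 0.1 * rng.normal(size=1000) > 0).astype(int)

original = LogisticRegression().fit(X, y)

# Deleting rows from the database afterwards does not touch the model:
# its weights were computed from, and still encode, the removed samples.
retain = np.arange(100, 1000)  # pretend samples 0..99 were deleted

# Exact-unlearning baseline: retrain from scratch on the retain set.
# Correct by construction, but its cost on large models is what
# motivates the approximate unlearning algorithms surveyed here.
retrained = LogisticRegression().fit(X[retain], y[retain])

# The weights differ, confirming the original model still reflects
# the "deleted" data.
print(np.linalg.norm(original.coef_ - retrained.coef_))
```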
Funding: Supported by the National Key Research and Development Program of China (2020YFC2003404), the National Natural Science Foundation of China (Nos. 62072465, 62172155, 62102425, 62102429), the Science and Technology Innovation Program of Hunan Province (Nos. 2022RC3061, 2021RC2071), and the Natural Science Foundation of Hunan Province (No. 2022JJ40564).
Abstract: As an emerging discipline, machine learning has been widely used in artificial intelligence, education, meteorology, and other fields. Training machine learning models requires large amounts of practical data, which inevitably involves user privacy. Moreover, by polluting the training data, a malicious adversary can poison the model and thus compromise model security. Data providers hope that model trainers can prove the confidentiality of the model to them, and trainers will be required to withdraw data when that trust collapses. Meanwhile, trainers hope to forget injected data and regain security when they discover crafted poisoned data after model training. We therefore focus on forgetting systems, whose process we call machine unlearning, capable of forgetting specific data entirely and efficiently. In this paper, we present the first comprehensive survey of this realm. We summarize and categorize existing machine unlearning methods based on their characteristics and analyze the relation between machine unlearning and relevant fields (e.g., inference attacks and data poisoning attacks). Finally, we briefly review existing research directions.
Abstract: The aim of the article is to explore the relation among capitalism, the creative economy, and the end of rest in Gustavo Vinagre's movie Unlearning to Sleep. The main argument is that, in the context of the imperatives of the inhumane temporalities of the 24/7 society, sleep and rest may represent an inevitable and anomalous resistance to the demands of the capitalist order in which the creative economy is immersed, as exposed in the movie.
Funding: Supported by the National Natural Science Foundation of China (Grant No. 62177033) and sponsored by the Huawei Innovation Research Program.
Abstract: 1 Introduction. Large Language Models (LLMs) possess massive parameters and are trained on vast datasets, demonstrating exceptional proficiency in various tasks. The remarkable advancements in LLMs have also inspired the exploration of LLMs as recommenders (LLMRec), whose effectiveness stems from the extensive open-world knowledge and reasoning ability of LLMs [1]. LLMRec obtains its recommendation ability through instruction tuning on user interaction data. In many cases, however, it is also crucial for LLMRec to forget specific user data, which is referred to as recommendation unlearning [2], as shown in Fig. 1.
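As a hedged illustration of the data side of this setup: the sketch below builds an instruction-tuning corpus from interaction records and applies the simplest retraining-style unlearning baseline, dropping a forget set of users before re-tuning. The field names and prompt template are hypothetical, not the format used in the paper.

```python
def to_instruction(example):
    """Turn one user interaction into an instruction-tuning sample
    (prompt template is an illustrative assumption)."""
    return {
        "instruction": "Given the user's history, recommend the next item.",
        "input": ", ".join(example["history"]),
        "output": example["next_item"],
    }

def unlearn_corpus(interactions, forget_user_ids):
    """Data-level exact-unlearning baseline: drop every interaction of
    the users who requested deletion; the LLM recommender is then
    instruction-tuned again on what remains."""
    forget = set(forget_user_ids)
    return [to_instruction(ex) for ex in interactions
            if ex["user_id"] not in forget]

interactions = [
    {"user_id": "u1", "history": ["item_3", "item_7"], "next_item": "item_9"},
    {"user_id": "u2", "history": ["item_1"], "next_item": "item_4"},
]
corpus = unlearn_corpus(interactions, forget_user_ids=["u1"])
print(corpus)  # only u2's interaction remains for re-tuning
```

Re-tuning from scratch on the retained corpus is correct but expensive for LLM-scale recommenders, which is precisely the cost that dedicated recommendation-unlearning methods aim to avoid.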
Abstract: The present work is a reflection on several theories and approaches to learning and their relevance to the training of social agents. The reflection emerges from training practice and from the observation that what it means to learn, and how learning happens, is not at the center of discussion in Latin American universities, which emphasize what to teach rather than how to teach.