Journal Articles
1,032 articles found
1. DPIL-Traj: Differential Privacy Trajectory Generation Framework with Imitation Learning
Authors: Huaxiong Liao, Xiangxuan Zhong, Xueqi Chen, Yirui Huang, Yuwei Lin, Jing Zhang, Bruce Gu. Computers, Materials & Continua, 2026, Issue 1, pp. 1530-1550 (21 pages)
The generation of synthetic trajectories has become essential in various fields for analyzing complex movement patterns. However, the use of real-world trajectory data poses significant privacy risks, such as location re-identification and correlation attacks. To address these challenges, privacy-preserving trajectory generation methods are critical for applications relying on sensitive location data. This paper introduces DPIL-Traj, an advanced framework designed to generate synthetic trajectories while achieving a superior balance between data utility and privacy preservation. Firstly, the framework incorporates Differential Privacy Clustering, which anonymizes trajectory data by applying differential privacy techniques that add noise, ensuring the protection of sensitive user information. Secondly, Imitation Learning is used to replicate decision-making behaviors observed in real-world trajectories; by learning from expert trajectories, this component generates synthetic data that closely mimics real-world decision-making processes while optimizing the quality of the generated trajectories. Finally, Markov-based Trajectory Generation is employed to capture and maintain the inherent temporal dynamics of movement patterns. Extensive experiments on the GeoLife trajectory dataset show that DPIL-Traj improves utility performance by an average of 19.85% and privacy performance by an average of 12.51% compared to state-of-the-art approaches. Ablation studies further reveal that DP clustering effectively safeguards privacy, imitation learning enhances utility under noise, and the Markov module strengthens temporal coherence.
Keywords: privacy-preserving trajectory generation, differential privacy, imitation learning, Markov chain
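The DP clustering stage above anonymizes trajectories by adding calibrated noise. As a minimal sketch of the underlying primitive (not the authors' implementation), Laplace noise with scale sensitivity/ε can be applied to each coordinate of a trajectory; the `sensitivity` bound here is an assumed placeholder that a real system would have to derive for the statistics it releases:

```python
import numpy as np

def laplace_perturb_trajectory(traj, epsilon, sensitivity=1.0, seed=None):
    """Perturb each (lat, lon) point of a trajectory with Laplace noise.

    Noise scale = sensitivity / epsilon, the standard calibration for
    epsilon-DP under an L1 sensitivity bound. Illustrative only.
    """
    rng = np.random.default_rng(seed)
    traj = np.asarray(traj, dtype=float)
    scale = sensitivity / epsilon
    return traj + rng.laplace(loc=0.0, scale=scale, size=traj.shape)

noisy = laplace_perturb_trajectory([[39.90, 116.40], [39.91, 116.41]], epsilon=1.0, seed=0)
```

A smaller ε yields a larger noise scale and hence stronger (but less accurate) protection.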
2. Layer-Level Adaptive Gradient Perturbation Protecting Deep Learning Based on Differential Privacy
Authors: Zhang Xiangfei, Zhang Qingchen, Jiang Liming. CAAI Transactions on Intelligence Technology, 2025, Issue 3, pp. 929-944 (16 pages)
Deep learning's widespread dependence on large datasets raises privacy concerns due to the potential presence of sensitive information. Differential privacy stands out as a crucial method for preserving privacy, garnering significant interest for its ability to offer robust and verifiable privacy safeguards during data training. However, classic differentially private learning introduces the same level of noise into the gradients across training iterations, which affects the trade-off between model utility and privacy guarantees. To address this issue, this paper proposes an adaptive differential privacy mechanism that dynamically adjusts the privacy budget at the layer level as training progresses to resist membership inference attacks. Specifically, an equal privacy budget is initially allocated to each layer. Subsequently, as training advances, the privacy budget for layers closer to the output is reduced (adding more noise), while the budget for layers closer to the input is increased; the adjustment magnitude is determined automatically from the iteration count. This dynamic allocation provides a simple process for adjusting privacy budgets, alleviating the burden on users to tweak parameters and ensuring that privacy preservation strategies align with training progress. Extensive experiments on five well-known datasets indicate that the proposed method outperforms competing methods in terms of accuracy and resilience against membership inference attacks.
Keywords: deep learning, differential privacy, information security, privacy protection
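The layer-level schedule described above can be sketched as follows. The linear shift rule and the `shift` parameter are illustrative assumptions, since the abstract only specifies that output-side budgets shrink and input-side budgets grow as the iteration count advances:

```python
def layer_budgets(total_eps, n_layers, t, T, shift=0.5):
    """Allocate per-layer privacy budgets at iteration t of T.

    Starts uniform; as t grows, budget is shifted linearly from layers
    near the output (high index) to layers near the input (low index).
    Budgets stay positive (for shift < 1) and sum to total_eps.
    """
    frac = shift * (t / T)  # grows with training progress
    # weight for layer i: input-side layers gain, output-side layers lose
    weights = [1.0 + frac * (1.0 - 2.0 * i / (n_layers - 1)) for i in range(n_layers)]
    s = sum(weights)
    return [total_eps * w / s for w in weights]
```

At t = 0 the allocation is uniform; at t = T the input layer holds (1 + shift)x the output layer's budget.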
3. Differential Privacy Federated Learning Based on Adaptive Adjustment
Authors: Yanjin Cheng, Wenmin Li, Sujuan Qin, Tengfei Tu. Computers, Materials & Continua, 2025, Issue 3, pp. 4777-4795 (19 pages)
Federated learning effectively alleviates the privacy and security issues raised by the development of artificial intelligence through a distributed training architecture. Existing research has shown that attackers can compromise user privacy and security by stealing model parameters, so differential privacy is applied in federated learning to further counter such malicious behavior. However, the noise addition and update clipping mechanisms in differential privacy jointly limit the further development of federated learning in privacy protection and performance optimization. We therefore propose an adaptively adjusted differential privacy federated learning method. First, a dynamic adaptive privacy budget allocation strategy flexibly adjusts the privacy budget within a given range based on each client's data volume and training requirements, reducing privacy budget loss and the magnitude of model noise. Second, a longitudinal clipping differential privacy strategy, based on differences in the factors that affect parameter updates, uses sparse methods to trim local updates, reducing the impact of the clipping step on model accuracy. Together, the two strategies ensure user privacy while reducing the effect of differential privacy on model accuracy. Extensive experiments on benchmark datasets show that the proposed method performs well in terms of both performance and privacy protection.
Keywords: federated learning, privacy protection, differential privacy, deep learning
4. Differential Privacy-Enabled TextCNN for MOOCs Fake Review Detection
Authors: Caiyun Chen. Journal of Electronic Research and Application, 2025, Issue 1, pp. 191-201 (11 pages)
The rapid development and widespread adoption of massive open online courses (MOOCs) have had a significant impact on China's education curriculum. However, fake reviews and ratings on these platforms seriously undermine the authenticity of course evaluations and user trust, requiring effective anomaly detection techniques for screening. The textual characteristics of MOOC reviews, such as varying lengths and diverse emotional tendencies, complicate text analysis, and traditional rule-based methods are often inadequate for such unstructured data. We propose a Differential Privacy-Enabled Text Convolutional Neural Network (DP-TextCNN) framework that aims to identify outliers in MOOC course reviews and ratings with high precision while protecting user privacy. The framework leverages the strengths of Convolutional Neural Networks (CNNs) in text feature extraction and combines them with differential privacy techniques, balancing data privacy and model performance by introducing controlled random noise during data preprocessing. By embedding differential privacy into the model training process, the framework preserves privacy when handling sensitive data while maintaining high recognition accuracy. Experimental results show that DP-TextCNN achieves over 95% accuracy in identifying fake reviews on the dataset; this outcome not only verifies the applicability of differential privacy techniques in TextCNN but also underscores their potential for handling sensitive educational data. Additionally, we analyze the specific impact of differential privacy parameters on framework performance, offering theoretical support and empirical analysis for striking an optimal balance between privacy protection and framework efficiency.
Keywords: DP-TextCNN, differential privacy, fake reviews, MOOCs
5. Differential Privacy Integrated Federated Learning for Power Systems: An Explainability-Driven Approach
Authors: Zekun Liu, Junwei Ma, Xin Gong, Xiu Liu, Bingbing Liu, Long An. Computers, Materials & Continua, 2025, Issue 10, pp. 983-999 (17 pages)
With the ongoing digitalization and intelligence of power systems, there is an increasing reliance on large-scale data-driven intelligent technologies for tasks such as scheduling optimization and load forecasting. Nevertheless, power data often contains sensitive information, making it a critical industry challenge to use this data efficiently while ensuring privacy. Traditional Federated Learning (FL) methods can mitigate data leakage by training models locally instead of transmitting raw data. Despite this, FL still has privacy concerns, especially gradient leakage, which might expose users' sensitive information; integrating Differential Privacy (DP) techniques is therefore essential for stronger privacy protection. Even so, the noise from DP may reduce the performance of federated learning models. To address this challenge, this paper presents an explainability-driven power data privacy federated learning framework. It incorporates DP technology and, based on model explainability, adaptively adjusts privacy budget allocation and model aggregation, thus balancing privacy protection and model performance. The key innovations are as follows: (1) we propose an explainability-driven power data privacy federated learning framework; (2) we detail a privacy budget allocation strategy that assigns budgets per training round by gradient effectiveness and at model granularity by layer importance; (3) we design a weighted aggregation strategy that considers SHAP values and model accuracy for quality knowledge sharing; (4) experiments show the proposed framework outperforms traditional methods in balancing privacy protection and model performance in power load forecasting tasks.
Keywords: power data, federated learning, differential privacy, explainability
6. Defending against Backdoor Attacks in Federated Learning by Using Differential Privacy and OOD Data Attributes
Authors: Qingyu Tan, Yan Li, Byeong-Seok Shin. Computer Modeling in Engineering & Sciences, 2025, Issue 5, pp. 2417-2428 (12 pages)
Federated Learning (FL) is a practical solution that leverages distributed data across devices without centralized data storage, enabling multiple participants to jointly train models while preserving data privacy and avoiding direct data sharing. Despite its privacy-preserving advantages, FL remains vulnerable to backdoor attacks, where malicious participants introduce backdoors into local models that are then propagated to the global model through the aggregation process. While existing differential privacy defenses have demonstrated effectiveness against backdoor attacks in FL, they often significantly degrade the performance of the aggregated model on benign tasks. To address this limitation, we propose a novel backdoor defense mechanism based on differential privacy. Our approach first utilizes the inherent out-of-distribution characteristics of backdoor samples to identify and exclude malicious model updates that deviate significantly from benign models. By filtering out clearly backdoor-infected models before applying differential privacy, our method reduces the required noise level, enhancing model robustness while preserving performance. Experimental evaluations on the CIFAR10 and FEMNIST datasets demonstrate that our method limits backdoor accuracy to below 15% across various backdoor scenarios while maintaining high main-task accuracy.
Keywords: federated learning, backdoor attacks, differential privacy, out-of-distribution data
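The filter-before-DP idea can be sketched as a simple outlier filter followed by noisy averaging. Note that the paper's actual detector uses out-of-distribution attributes of backdoor samples; the distance-based rule below is a stand-in for illustration only:

```python
import numpy as np

def filter_then_dp_aggregate(updates, z=2.0, sigma=0.01, rng=None):
    """Drop client updates whose L2 distance from the coordinate-wise
    median update is an outlier (> mean + z*std of distances), then
    average the survivors and add Gaussian noise. Because clearly
    malicious updates are removed first, a smaller sigma can suffice.
    """
    rng = np.random.default_rng(rng)
    U = np.asarray(updates, dtype=float)
    med = np.median(U, axis=0)                 # robust reference point
    d = np.linalg.norm(U - med, axis=1)
    keep = d <= d.mean() + z * d.std()
    agg = U[keep].mean(axis=0)
    return agg + rng.normal(0.0, sigma, size=agg.shape)
```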
7. DDLP: Dynamic Location Data Publishing with Differential Privacy in Mobile Crowdsensing
Authors: Li Wen, Ma Xuebin, Wang Xu. China Communications, 2025, Issue 5, pp. 238-255 (18 pages)
Mobile crowdsensing (MCS) has become an effective paradigm for urban sensing. However, mobile users participating in sensing tasks risk location privacy leakage when uploading their actual sensing locations. Most location privacy protection studies in MCS do not consider the temporal correlations between locations, so they are vulnerable to various inference attacks and suffer from low data availability. To solve these problems, this paper proposes DDLP, a dynamic differential location privacy data publishing framework that protects privacy while publishing locations continuously. Firstly, Markov transition matrices are established from historical trajectories at different times, and a protection location set is generated from the current location at each timestamp. Moreover, the exponential mechanism of differential privacy perturbs the true location via a designed utility function. Finally, experiments on a real-world trajectory dataset show that our method not only provides strong privacy guarantees but also outperforms existing methods in terms of data availability and computational efficiency.
Keywords: data publishing, differential privacy, mobile crowdsensing
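The perturbation step above relies on the standard exponential mechanism, which samples a candidate from the protection set with probability proportional to exp(ε·u/(2·Δu)). A generic sketch follows; the utility function passed in is arbitrary, whereas the paper designs its own utility over the protection location set:

```python
import math
import random

def exponential_mechanism(candidates, utility, epsilon, sensitivity=1.0, rng=random):
    """Pick one candidate with probability proportional to
    exp(epsilon * utility(c) / (2 * sensitivity)) -- the standard
    exponential mechanism of differential privacy.
    """
    scores = [epsilon * utility(c) / (2.0 * sensitivity) for c in candidates]
    m = max(scores)                            # subtract max for numerical stability
    weights = [math.exp(s - m) for s in scores]
    total = sum(weights)
    r = rng.random() * total
    acc = 0.0
    for c, w in zip(candidates, weights):
        acc += w
        if r <= acc:
            return c
    return candidates[-1]
```

With a large ε the highest-utility candidate dominates; with a small ε the choice approaches uniform.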
8. A Differential Privacy Federated Learning Scheme Based on Adaptive Gaussian Noise
Authors: Sanxiu Jiao, Lecai Cai, Xinjie Wang, Kui Cheng, Xiang Gao. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, Issue 2, pp. 1679-1694 (16 pages)
As a distributed machine learning method, federated learning (FL) has the advantage of naturally protecting data privacy: it keeps data local and trains local models on local data. Federated learning thus effectively addresses the data-island and privacy protection problems in artificial intelligence. However, existing research shows that attackers may still steal user information by analyzing the parameters exchanged during federated training and the aggregated parameters on the server side. To solve this problem, differential privacy (DP) techniques are widely used for privacy protection in federated learning, but adding Gaussian noise perturbations to the data degrades model learning performance. To address these issues, this paper proposes DPFL-AGN, a differential privacy federated learning scheme based on adaptive Gaussian noise. To protect the data privacy and security of the training process, adaptive Gaussian noise is added during training to hide the real parameters uploaded by each client. In addition, an adaptive noise reduction method is proposed: as the model converges, the Gaussian noise in the later stages of training is reduced adaptively. A series of simulation experiments on the real MNIST and CIFAR-10 datasets shows that the DPFL-AGN algorithm performs better than the other algorithms compared.
Keywords: differential privacy, federated learning, deep learning, data privacy
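The adaptive-noise idea can be sketched as a clipped client update plus Gaussian noise whose scale decays across rounds. The exponential decay schedule and `decay` rate below are assumptions for illustration, not the DPFL-AGN paper's exact rule:

```python
import numpy as np

def adaptive_gaussian_noise(grad, clip_norm, sigma0, round_idx, decay=0.05, rng=None):
    """Clip a client update to L2 norm clip_norm, then add Gaussian
    noise whose scale shrinks over federated rounds, mimicking the idea
    of reducing noise as the model converges.
    """
    rng = np.random.default_rng(rng)
    grad = np.asarray(grad, dtype=float)
    norm = np.linalg.norm(grad)
    clipped = grad * min(1.0, clip_norm / max(norm, 1e-12))
    sigma = sigma0 * np.exp(-decay * round_idx)   # noise shrinks with rounds
    return clipped + rng.normal(0.0, sigma * clip_norm, size=grad.shape)
```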
9. KSKV: Key-Strategy for Key-Value Data Collection with Local Differential Privacy
Authors: Dan Zhao, Yang You, Chuanwen Luo, Ting Chen, Yang Liu. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, Issue 6, pp. 3063-3083 (21 pages)
In recent years, the research field of data collection under local differential privacy (LDP) has expanded its focus from elementary data types to more complex structural data, such as set-value and graph data. However, our comprehensive review of the literature reveals that few studies engage with key-value data collection, which simultaneously collects the frequency of each key and the mean of the values associated with it. Additionally, existing allocations of the privacy budget between key frequencies and value means do not yield an optimal utility tradeoff. Recognizing the importance of accurate key-frequency and mean estimation for key-value data collection, this paper presents a novel framework: the Key-Strategy Framework for Key-Value Data Collection under LDP. Initially, the Key-Strategy Unary Encoding (KS-UE) strategy is proposed within non-interactive frameworks to allocate the privacy budget for precise key frequencies; subsequently, the Key-Strategy Generalized Randomized Response (KS-GRR) strategy is introduced for interactive frameworks to collect frequent keys more efficiently through group-and-iteration methods. Both strategies are adapted to scenarios in which users possess either a single or multiple key-value pairs. Theoretically, we demonstrate that the variance of KS-UE is lower than that of existing methods. These claims are substantiated through extensive experiments on real-world datasets, confirming the effectiveness and efficiency of the KS-UE and KS-GRR strategies.
Keywords: key-value, local differential privacy, frequency estimation, mean estimation, data perturbation
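KS-GRR builds on generalized randomized response (GRR), the standard LDP primitive for categorical frequency estimation. A minimal sketch of plain GRR with its unbiased estimator (not the key-strategy variants themselves):

```python
import math
import random

def grr_perturb(value, domain, epsilon, rng=random):
    """Generalized randomized response: report the true value with
    probability p = e^eps / (e^eps + k - 1), else a uniform other value."""
    k = len(domain)
    p = math.exp(epsilon) / (math.exp(epsilon) + k - 1)
    if rng.random() < p:
        return value
    return rng.choice([v for v in domain if v != value])

def grr_estimate(reports, domain, epsilon):
    """Debias the observed report frequencies into unbiased estimates."""
    k, n = len(domain), len(reports)
    p = math.exp(epsilon) / (math.exp(epsilon) + k - 1)
    q = (1.0 - p) / (k - 1)                  # prob. of reporting a specific other value
    counts = {v: 0 for v in domain}
    for r in reports:
        counts[r] += 1
    return {v: (counts[v] / n - q) / (p - q) for v in domain}
```

The debiased estimates always sum to 1 exactly, which follows algebraically from the choice of p and q.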
10. Blockchain-Enabled Federated Learning with Differential Privacy for Internet of Vehicles
Authors: Chi Cui, Haiping Du, Zhijuan Jia, Yuchu He, Lipeng Wang. Computers, Materials & Continua (SCIE, EI), 2024, Issue 10, pp. 1581-1593 (13 pages)
The rapid evolution of artificial intelligence (AI) technologies has significantly propelled the advancement of the Internet of Vehicles (IoV). With AI support, represented by machine learning technology, vehicles gain the capability to make intelligent decisions. As a distributed learning paradigm, federated learning (FL) has emerged as a preferred solution in IoV: compared to traditional centralized machine learning, it reduces communication overhead and improves privacy protection. Despite these benefits, FL still faces security and privacy concerns, such as poisoning attacks and inference attacks, prompting exploration of blockchain integration to enhance its security posture. This paper introduces a novel blockchain-enabled federated learning (BCFL) scheme with differential privacy (DP) tailored for IoV. To meet the performance-demanding IoV environment, the proposed methodology integrates a consortium blockchain with Practical Byzantine Fault Tolerance (PBFT) consensus, which offers superior efficiency over conventional public blockchains. In addition, the proposed approach utilizes the Differentially Private Stochastic Gradient Descent (DP-SGD) algorithm in the local training process of FL for enhanced privacy protection. Experimental results indicate that blockchain integration elevates the security of FL, effectively safeguarding it against poisoning attacks, while the additional overhead remains moderate enough to meet the efficiency criteria of IoV. Furthermore, by incorporating DP, the proposed approach attains an (ε, δ)-privacy guarantee while maintaining an acceptable level of model accuracy, effectively mitigating the threat of inference attacks on private information.
Keywords: blockchain, federated learning, differential privacy, Internet of Vehicles
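The local training step above uses DP-SGD, whose core per-batch update (clip per-example gradients, average, add Gaussian noise) can be sketched as below. This is the textbook form, not the paper's exact configuration; production code should use a vetted library such as Opacus:

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, lr, clip_norm, noise_mult, rng=None):
    """One DP-SGD update: clip each per-example gradient to clip_norm,
    average, and add Gaussian noise scaled by noise_mult * clip_norm / batch.
    """
    rng = np.random.default_rng(rng)
    grads = np.asarray(per_example_grads, dtype=float)   # shape (batch, dim)
    norms = np.linalg.norm(grads, axis=1, keepdims=True)
    clipped = grads * np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    mean_grad = clipped.mean(axis=0)
    noise = rng.normal(0.0, noise_mult * clip_norm / len(grads), size=mean_grad.shape)
    return np.asarray(params, dtype=float) - lr * (mean_grad + noise)
```

With noise_mult = 0 and gradients below the clip bound, this reduces to plain mini-batch SGD.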
11. eDPRF: An Efficient Differentially Private Random Forest Training Algorithm [Cited by 2]
Authors: 王树兰, 邱瑶, 赵陈斌, 邹家须, 王彩芬. Journal of Software (软件学报), 2025, Issue 7, pp. 2929-2946 (18 pages)
Differential privacy, with its strong privacy guarantees, has been applied to the random forest algorithm to address its privacy leakage problem; however, applying differential privacy directly causes a severe drop in classification accuracy. To balance privacy protection against model accuracy, this paper proposes eDPRF (efficient differential privacy random forest), an efficient differentially private random forest training algorithm. Specifically, the algorithm designs a decision-tree construction method that introduces the permute-and-flip mechanism to query outputs efficiently, and further designs corresponding utility functions to output split features and labels accurately, effectively improving the tree model's ability to learn from data under perturbation. Meanwhile, a privacy budget allocation strategy is designed based on the composition theorem: training subsets are obtained via sampling without replacement, and internal budgets are adjusted differentially to increase the query budget of tree nodes. Finally, theoretical analysis and experimental evaluation show that, under the same privacy budget, the algorithm achieves better classification accuracy than comparable algorithms.
Keywords: random forest, differential privacy, privacy budget, permute-and-flip, perturbation method
12. Whispered Tuning: Data Privacy Preservation in Fine-Tuning LLMs through Differential Privacy
Authors: Tanmay Singh, Harshvardhan Aditya, Vijay K. Madisetti, Arshdeep Bahga. Journal of Software Engineering and Applications, 2024, Issue 1, pp. 1-22 (22 pages)
The proliferation of Large Language Models (LLMs) across various sectors has underscored the urgency of addressing potential privacy breaches. Vulnerabilities such as prompt injection attacks and other adversarial tactics could make these models inadvertently disclose their training data; such disclosures could compromise personally identifiable information, posing significant privacy risks. In this paper, we propose Whispered Tuning, a novel multi-faceted approach to address privacy leaks in large language models. We integrate a PII redaction model, differential privacy techniques, and an output filter into the LLM fine-tuning process to enhance confidentiality. Additionally, we introduce novel ideas such as the Epsilon Dial for adjustable privacy budgeting with differentiated training phases per data-handler role. Through empirical validation, including attacks on non-private models, we demonstrate the robustness of our proposed solution, SecureNLP, in safeguarding privacy without compromising utility. This methodology significantly fortifies LLMs against privacy infringements, enabling responsible adoption across sectors.
Keywords: NLP, differential privacy, adversarial attacks, informed decisions
13. Privacy Distributed Constrained Optimization Over Time-Varying Unbalanced Networks and Its Application in Federated Learning
Authors: Mengli Wei, Wenwu Yu, Duxin Chen, Mingyu Kang, Guang Cheng. IEEE/CAA Journal of Automatica Sinica, 2025, Issue 2, pp. 335-346 (12 pages)
This paper investigates a class of constrained distributed zeroth-order optimization (ZOO) problems over time-varying unbalanced graphs while ensuring privacy preservation among individual agents. Recent progress has addressed these concerns only separately, and solutions offering theoretical guarantees for both privacy protection and constrained ZOO over time-varying unbalanced graphs are still lacking. We hereby propose a novel algorithm, the differential privacy (DP) distributed push-sum based zeroth-order constrained optimization algorithm (DP-ZOCOA). Operating over time-varying unbalanced graphs, DP-ZOCOA obviates the need for supplemental sub-optimization computations, reducing overhead compared to distributed primal-dual methods. DP-ZOCOA is specifically tailored to constrained ZOO problems over time-varying unbalanced graphs, guaranteeing convergence to the optimal solution while robustly preserving privacy. Moreover, we provide rigorous proofs of convergence and privacy for DP-ZOCOA, underscoring its efficacy in attaining optimal convergence without constraints. To enhance its applicability, we incorporate DP-ZOCOA into the federated learning framework and formulate a decentralized zeroth-order constrained federated learning algorithm (ZOCOA-FL) to address challenges stemming from the time-varying imbalance of the communication topology. Finally, the performance and effectiveness of the proposed algorithms are thoroughly evaluated through simulations on distributed least squares (DLS) and decentralized federated learning (DFL) tasks.
Keywords: constrained distributed optimization, decentralized federated learning (DFL), differential privacy (DP), time-varying unbalanced graphs, zeroth-order gradient
14. SDLDP: A Local Differential Privacy Framework Supporting Data Sensitivity Levels
Authors: 陈亚青, 叶宇桐, 张敏, 舒波文. Journal of Cyber Security (信息安全学报), 2025, Issue 4, pp. 40-53 (14 pages)
In today's big-data era, the volume of data generated in daily life is unprecedentedly large. Analysis and application of user data strongly support development across industries, but also raise public concern about privacy leakage. The local differential privacy (LDP) model is commonly used in statistical tasks to protect users' private data by adding random noise to true values, reducing the risk of privacy leakage. However, the high utility of LDP depends on large-scale data and relatively large privacy budgets, and a better balance between privacy and utility remains to be found. Exploiting the fact that different data values naturally carry different sensitivity levels, this paper proposes SDLDP, an LDP framework supporting data sensitivity levels. By providing different degrees of privacy protection for different values and specifically reducing the LDP noise added to low-sensitivity data, it achieves higher data utility. Furthermore, two mechanisms based on this framework are proposed: SDGRR and SDPM. SDGRR optimizes GRR, the classic discrete perturbation mechanism of LDP, and is suitable for frequency estimation tasks. SDPM optimizes PM, the continuous perturbation mechanism of LDP, and, after post-processing with the EM algorithm, can efficiently estimate the data mean. Experimental results show that, compared with the original LDP mechanisms, both proposed mechanisms significantly improve the accuracy of frequency and mean estimation.
Keywords: local differential privacy, privacy protection, mean estimation, frequency estimation, EM algorithm
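SDPM builds on the Piecewise Mechanism (PM), the standard continuous-value LDP perturbation for mean estimation. A sketch of plain PM (without the sensitivity-level extension): each input x in [-1, 1] is mapped to an unbiased noisy value in [-C, C], with a high-probability "center" interval around a scaled x and low-probability tails:

```python
import math
import random

def piecewise_mechanism(x, epsilon, rng=random):
    """Piecewise Mechanism for one value x in [-1, 1].

    With probability e^(eps/2) / (e^(eps/2) + 1), output is uniform on
    the center piece [l, r]; otherwise uniform on the tails
    [-C, l) U (r, C]. The output is an unbiased estimator of x.
    """
    e2 = math.exp(epsilon / 2.0)
    C = (e2 + 1.0) / (e2 - 1.0)
    l = (C + 1.0) / 2.0 * x - (C - 1.0) / 2.0
    r = l + C - 1.0
    if rng.random() < e2 / (e2 + 1.0):
        return rng.uniform(l, r)               # high-probability center piece
    left_len, right_len = l + C, C - r         # lengths of the two tail pieces
    u = rng.random() * (left_len + right_len)
    return -C + u if u < left_len else r + (u - left_len)
```

Averaging many perturbed reports recovers the population mean; EM post-processing, as in SDPM, can refine the estimate further.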
15. Support Vector Machine Classification Algorithms Based on DP and KPCA
Authors: 李培, 刘海忠. Wireless Internet Technology (无线互联科技), 2025, Issue 11, pp. 70-74 (5 pages)
To analyze high-dimensional data while protecting privacy, this paper combines kernel principal component analysis (KPCA) with differential privacy (DP) and uses a support vector machine (SVM) to measure the utility of the processed data, proposing two algorithms: DPKPCA-SVM and KPCADP-SVM. The feasibility of both algorithms is proved theoretically, and their performance is evaluated experimentally in terms of Accuracy, RMSE, and MAPE. Specifically, differential privacy mechanisms are introduced at different stages of the KPCA-SVM pipeline, and both algorithms satisfy pure differential privacy while enabling fast classification. Experimental results show that DPKPCA-SVM, while guaranteeing data privacy, offers good utility on different datasets in terms of Accuracy, RMSE, MAPE, and data processing speed.
Keywords: kernel principal component analysis, differential privacy, support vector machine, privacy protection
16. LFDP: A Differential Privacy Robustness Enhancement Method Incorporating Low-Frequency Information
Authors: 王豪, 许强, 张清华, 李开菊. Journal of Cyber Security (信息安全学报), 2025, Issue 1, pp. 47-60 (14 pages)
Machine learning models are widely used in image processing, autonomous driving, and natural language processing thanks to their high prediction and classification accuracy and their applicability across scenarios. However, they are vulnerable to adversarial example attacks, under which prediction and classification accuracy drop sharply. Data augmentation methods, which alter or perturb original images, give models stronger generalization and, while protecting privacy, enhance robustness against adversarial attacks; they are currently one of the mainstream approaches to robustness enhancement. However, DP-based robustness enhancement suffers from the problem that the added high-frequency noise can easily be filtered out, weakening the robustness gain. To address this, drawing on signal processing, this paper explains from a frequency-domain perspective why differential privacy can enhance model robustness and proves its effectiveness theoretically. A high-frequency noise filter, HFNF, is designed that can filter out the high-frequency Gaussian noise added by differential privacy, degrading its robustness benefit, and the cause of this flaw in DP-based robustness enhancement is analyzed theoretically. A general robustness enhancement algorithm, LFDP, is then proposed that incorporates low-frequency information: by adding generated high- and low-frequency noise to different frequency bands of an image, the model's robustness is preserved even under high-frequency filtering attacks, remedying the shortcoming of purely high-frequency Gaussian noise. The robustness and error bounds of the proposed scheme are analyzed theoretically and tested on real datasets. Experiments show that, compared with DP robustness enhancement that adds only high-frequency noise, LFDP achieves better robustness without increasing the noise scale.
Keywords: machine learning, robustness, differential privacy, low-frequency noise
17. Towards Realizing Dynamic Statistical Publishing and Privacy Protection of Location-Based Data: An Adaptive Sampling and Grid Clustering Approach
Authors: Yan Yan, Sun Zichao, Adnan Mahmood, Zhang Yue, Quan Z. Sheng. China Communications, 2025, Issue 7, pp. 234-256 (23 pages)
To realize dynamic statistical publishing and protection of location-based data privacy, this paper proposes a differential privacy publishing algorithm based on adaptive sampling and grid clustering and adjustment. A PID control strategy is combined with the difference in data variation to dynamically adjust the data publishing intervals. The spatial-temporal correlations of adjacent snapshots are utilized to design the grid clustering and adjustment algorithm, which saves execution time in the publishing process. The budget distribution and budget absorption strategies are improved to form a sliding-window-based differential privacy statistical publishing algorithm, which realizes continuous statistical publishing and privacy protection and improves the accuracy of published data. Experiments and analysis on large datasets of actual locations show that the proposed privacy protection algorithm is superior to existing algorithms in terms of the accuracy of adaptive sampling times, the availability of published data, and the execution efficiency of the publishing method.
Keywords: adaptive sampling, differential privacy, dynamic statistical publishing, grid clustering, privacy protection, sliding windows
18. A Privacy-Preserving Graph Neural Network Framework with Attention Mechanism for Computational Offloading in the Internet of Vehicles
Authors: Aishwarya Rajasekar, Vetriselvi Vetrian. Computer Modeling in Engineering & Sciences, 2025, Issue 4, pp. 225-254 (30 pages)
The integration of technologies like artificial intelligence, 6G, and vehicular ad-hoc networks holds great potential to meet the communication demands of the Internet of Vehicles and drive the advancement of vehicle applications. However, these advancements also generate a surge in data processing requirements, necessitating the offloading of vehicular tasks to edge servers due to the limited computational capacity of vehicles. Despite recent advancements, the robustness and scalability of existing approaches with respect to the number of vehicles, the number of edge servers and their resources, as well as privacy, remain a concern. In this paper, a lightweight offloading strategy is proposed that leverages ubiquitous connectivity through the Space-Air-Ground Integrated Vehicular Network architecture while ensuring privacy preservation. The Internet of Vehicles (IoV) environment is first modeled as a graph, with vehicles and base stations as nodes and their communication links as edges. Secondly, vehicular applications are offloaded to suitable servers based on latency using an attention-based heterogeneous graph neural network (HetGNN) algorithm. Subsequently, a differentially private stochastic gradient descent training mechanism is employed for privacy-preserving training and offloading inference. Finally, simulation results demonstrate that the proposed HetGNN method performs well, with an inference time of 0.321 s, which is 42.68%, 63.93%, 30.22%, and 76.04% less than the baseline methods Deep Deterministic Policy Gradient, Deep Q-Learning, Deep Neural Network, and Genetic Algorithm, respectively.
Keywords: Internet of Vehicles, vehicular ad-hoc networks (VANET), multi-access edge computing, task offloading, graph neural networks, differential privacy
19. FedLDP: An Efficient Federated Learning Method under Local Differential Privacy
Authors: 成梦圆, 李艳辉, 吕天赐, 赵玉鑫, 黄臣. Computer and Modernization (计算机与现代化), 2025, Issue 9, pp. 109-118 (10 pages)
As a distributed machine learning framework, federated learning allows users to collaboratively train a model by sharing model parameters without exposing raw data. However, model parameters may still contain a great deal of privacy-sensitive information, and sharing them directly risks leaking user privacy. Local differential privacy (LDP) can resist attackers with arbitrary background knowledge and provides more comprehensive protection for private information, but the high dimensionality of parameters and the many training rounds in federated learning make applying LDP challenging. This paper therefore proposes FedLDP, a federated learning algorithm satisfying local differential privacy. The algorithm uses a dimension-selection strategy (EMDS) to pick out the important parameter dimensions for global aggregation; perturbs the selected dimensions with the Laplace mechanism; and, to improve learning efficiency and overall performance, designs an incremental privacy-budget allocation strategy that adjusts the budget allocation across iterations and optimizes the training process. Theoretical analysis proves that FedLDP satisfies ε-local differential privacy. Experimental results on MNIST and Fashion-MNIST show that, under the same privacy constraints, FedLDP improves model accuracy by 5.07 and 3.01 percentage points respectively, outperforming existing methods.
Keywords: incremental privacy budget allocation, differential privacy, dimension selection, federated learning
20. An SDP Histogram Publishing Algorithm Based on OLH and Dummy Data [Cited by 1]
Authors: 曹来成, 陈丽. Application Research of Computers (计算机应用研究), 2024, Issue 12, pp. 3829-3833 (5 pages)
Histogram publishing under centralized and local differential privacy has been studied extensively. To balance users' privacy requirements against publishing error, this paper proposes OD-HP (histogram publishing based on optimized local hash and dummy points), a histogram publishing algorithm under the shuffle differential privacy model. The algorithm encodes and perturbs user data with the optimized local hashing (OLH) mechanism, addressing the large error caused by large value domains. To resist collusion between the shuffler and the collector, dummy data is added to the perturbed data; the shuffler uniformly shuffles the perturbed and dummy data at random, the collector publishes the histogram, and the EM algorithm is finally used to refine the shuffled data. The privacy and utility of OD-HP are analyzed theoretically and validated on real datasets. Experimental results show that OD-HP effectively reduces publishing error while guaranteeing data privacy.
Keywords: shuffle differential privacy, histogram publishing, dummy data, mean squared error