Journal Articles
1,086 articles found
1. DPIL-Traj: Differential Privacy Trajectory Generation Framework with Imitation Learning
Authors: Huaxiong Liao, Xiangxuan Zhong, Xueqi Chen, Yirui Huang, Yuwei Lin, Jing Zhang, Bruce Gu. Computers, Materials & Continua, 2026, No. 1, pp. 1530-1550 (21 pages)
The generation of synthetic trajectories has become essential in various fields for analyzing complex movement patterns. However, the use of real-world trajectory data poses significant privacy risks, such as location re-identification and correlation attacks. To address these challenges, privacy-preserving trajectory generation methods are critical for applications relying on sensitive location data. This paper introduces DPIL-Traj, an advanced framework designed to generate synthetic trajectories while achieving a superior balance between data utility and privacy preservation. Firstly, the framework incorporates differential privacy clustering, which anonymizes trajectory data by applying differential privacy techniques that add noise, ensuring the protection of sensitive user information. Secondly, imitation learning is used to replicate decision-making behaviors observed in real-world trajectories: by learning from expert trajectories, this component generates synthetic data that closely mimics real-world decision-making processes while optimizing the quality of the generated trajectories. Finally, Markov-based trajectory generation is employed to capture and maintain the inherent temporal dynamics of movement patterns. Extensive experiments on the GeoLife trajectory dataset show that DPIL-Traj improves utility performance by an average of 19.85% and privacy performance by an average of 12.51% compared to state-of-the-art approaches. Ablation studies further reveal that DP clustering effectively safeguards privacy, imitation learning enhances utility under noise, and the Markov module strengthens temporal coherence.
Keywords: privacy-preserving trajectory generation; differential privacy; imitation learning; Markov chain
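The abstract above names Markov-based trajectory generation as the final stage of DPIL-Traj but gives no details. As a hedged illustration of the general technique (not the paper's implementation; the function names and the first-order model are assumptions), a minimal Markov-chain generator over discretized locations might look like:

```python
import random

def build_transition_matrix(trajectories):
    """Count first-order transitions between discretized locations,
    then normalize the counts into transition probabilities."""
    counts = {}
    for traj in trajectories:
        for a, b in zip(traj, traj[1:]):
            counts.setdefault(a, {}).setdefault(b, 0)
            counts[a][b] += 1
    return {a: {b: c / sum(nxt.values()) for b, c in nxt.items()}
            for a, nxt in counts.items()}

def generate_trajectory(matrix, start, length, rng=random.Random(0)):
    """Sample a synthetic trajectory by walking the Markov chain."""
    traj = [start]
    for _ in range(length - 1):
        nxt = matrix.get(traj[-1])
        if not nxt:  # dead end: no observed outgoing transition
            break
        states, probs = zip(*nxt.items())
        traj.append(rng.choices(states, weights=probs)[0])
    return traj
```

A first-order chain only preserves pairwise transition statistics; DPIL-Traj combines it with imitation learning precisely because such a chain alone cannot capture longer-range decision patterns.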
2. FedDPL: Federated Dynamic Prototype Learning for Privacy-Preserving Malware Analysis across Heterogeneous Clients
Authors: Danping Niu, Yuan Ping, Chun Guo, Xiaojun Wang, Bin Hao. Computers, Materials & Continua, 2026, No. 3, pp. 1989-2014 (26 pages)
With the increasing complexity of malware attack techniques, traditional detection methods face significant challenges, such as privacy preservation, data heterogeneity, and missing category information. To address these issues, we propose Federated Dynamic Prototype Learning (FedDPL) for malware classification by integrating Federated Learning with a specifically designed K-means. Under the Federated Learning framework, model training occurs locally without data sharing, effectively protecting user data privacy and preventing the leakage of sensitive information. Furthermore, to tackle the challenges of data heterogeneity and the lack of category information, FedDPL introduces a dynamic prototype learning mechanism, which adaptively adjusts the clustering prototypes in both position and number. Thus, the dependency on predefined category numbers in typical K-means and its variants can be significantly reduced, resulting in improved clustering performance. In theory, this provides more accurate detection of malicious behavior. Experimental results confirm that FedDPL excels in malware classification tasks, demonstrating superior accuracy, robustness, and privacy protection.
Keywords: malware classification; data heterogeneity; federated learning; clustering; differential privacy
3. DP-Fed6G: An adaptive differential privacy-empowered federated learning framework for 6G networks
Authors: Miao Du, Peng Yang, Yinqiu Liu, Xiaoming He, Mingkai Chen. Digital Communications and Networks, 2025, No. 6, pp. 1994-2002 (9 pages)
The advent of 6G networks is poised to drive a new era of intelligent, privacy-preserving distributed learning by leveraging advanced communication and AI-driven edge intelligence. Federated Learning (FL) has emerged as a promising paradigm to enable collaborative model training without exposing raw data. However, its deployment in 6G networks faces significant obstacles, including vulnerability to inference attacks, the complexities of heterogeneous and dynamic network environments, and the inherent trade-off between privacy protection and model performance. In response to these challenges, we introduce DP-Fed6G, a novel FL framework that integrates differential privacy (DP) to fortify data security while ensuring high-quality learning outcomes. Specifically, DP-Fed6G employs an adaptive noise injection strategy that dynamically adjusts privacy protection levels based on real-time 6G network conditions and device heterogeneity, ensuring robust data security while maximizing model performance and optimizing the trade-off between privacy and utility. Extensive experiments on three real-world healthcare datasets demonstrate that DP-Fed6G consistently outperforms existing baselines (DP-FedSGD and DP-FedAvg), achieving up to 10.3% higher test accuracy under the same privacy budget. The proposed framework thus provides a practical solution for secure and privacy-preserving AI in 6G, supporting intelligent decision-making in privacy-sensitive applications.
Keywords: differential privacy; federated learning; 6G; Gaussian noise
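DP-Fed6G's adaptive noise injection is described only at a high level in the abstract. The sketch below shows the standard Gaussian mechanism such schemes typically build on (clip the client update, then add calibrated noise); the function names are illustrative assumptions, and the adaptive part would simply vary `epsilon` per round based on network conditions and device heterogeneity:

```python
import math
import random

def gaussian_sigma(clip_norm, epsilon, delta):
    """Gaussian-mechanism noise std for an update with L2 sensitivity clip_norm."""
    return clip_norm * math.sqrt(2.0 * math.log(1.25 / delta)) / epsilon

def perturb_update(update, clip_norm, epsilon, delta, rng=random.Random(0)):
    """Clip a client update in L2 norm, then add calibrated Gaussian noise."""
    norm = math.sqrt(sum(v * v for v in update))
    factor = min(1.0, clip_norm / norm) if norm > 0 else 1.0
    sigma = gaussian_sigma(clip_norm, epsilon, delta)
    return [v * factor + rng.gauss(0.0, sigma) for v in update]
```

An adaptive variant would raise `epsilon` (less noise) for rounds with stable, trusted links and lower it otherwise, which is the trade-off knob the paper tunes.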
4. An LDP-Based Iterative Adaptive Partitioning Method for Key-Value Data Collection
Authors: 孙庆毅, 李晓会, 兰洁, 贾旭, 李波. Computer Engineering and Design (《计算机工程与设计》, PKU Core), 2026, No. 1, pp. 154-164 (11 pages)
To address the limitations of existing key-value data collection mechanisms in estimation accuracy, this paper proposes an iterative adaptive partitioning method for key-value data collection based on local differential privacy. Accurate estimation is achieved through a two-phase design. In the first phase, the value domain of all key-value pairs is preliminarily partitioned locally, then encoded and perturbed, and the server aggregates the reports to produce estimates. In the second phase, the partition intervals are adaptively refined based on the first-phase estimates, and the first-phase steps are executed iteratively, with an error threshold controlling the number of iterations, to obtain accurate results. Theoretical analysis shows that the algorithm satisfies local differential privacy and yields unbiased estimates. Experimental results demonstrate the effectiveness and practicality of the algorithm.
Keywords: local differential privacy; key-value data; adaptive partitioning; frequency estimation; mean estimation; data collection; privacy protection
5. Differential Privacy Federated Learning Based on Adaptive Adjustment
Authors: Yanjin Cheng, Wenmin Li, Sujuan Qin, Tengfei Tu. Computers, Materials & Continua, 2025, No. 3, pp. 4777-4795 (19 pages)
Federated learning effectively alleviates privacy and security issues raised by the development of artificial intelligence through a distributed training architecture. Existing research has shown that attackers can compromise user privacy and security by stealing model parameters. Therefore, differential privacy is applied in federated learning to further address malicious behavior. However, the noise addition and update-clipping mechanisms in differential privacy jointly limit the further development of federated learning in privacy protection and performance optimization. We therefore propose an adaptively adjusted differential privacy federated learning method. First, a dynamic adaptive privacy budget allocation strategy is proposed, which flexibly adjusts the privacy budget within a given range based on the client's data volume and training requirements, thereby alleviating the loss of privacy budget and the magnitude of model noise. Second, a longitudinal clipping differential privacy strategy is proposed, which, based on the differences in factors that affect parameter updates, uses sparsification to trim local updates, thereby reducing the impact of the privacy pruning step on model accuracy. The two strategies work together to ensure user privacy while reducing the effect of differential privacy on model accuracy. To evaluate the effectiveness of our method, we conducted extensive experiments on benchmark datasets, and the results show that our proposed method performs well in terms of both performance and privacy protection.
Keywords: federated learning; privacy protection; differential privacy; deep learning
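The abstract says the budget is adjusted "within a given range based on the client's data volume and training requirements" without giving the rule. A minimal sketch, assuming a simple linear map from data volume to the allowed range (the paper's actual strategy may differ):

```python
def allocate_budgets(data_volumes, eps_min, eps_max):
    """Map each client's data volume linearly into [eps_min, eps_max]:
    clients with more data get a larger budget (i.e., less relative noise)."""
    lo, hi = min(data_volumes), max(data_volumes)
    if hi == lo:  # all clients equal: split the range down the middle
        return [(eps_min + eps_max) / 2.0] * len(data_volumes)
    span = eps_max - eps_min
    return [eps_min + (v - lo) / (hi - lo) * span for v in data_volumes]
```

Bounding the budget to a fixed range is what keeps any single client from either drowning in noise or leaking too much, which matches the "within a given range" constraint in the abstract.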
6. Layer-Level Adaptive Gradient Perturbation Protecting Deep Learning Based on Differential Privacy
Authors: Zhang Xiangfei, Zhang Qingchen, Jiang Liming. CAAI Transactions on Intelligence Technology, 2025, No. 3, pp. 929-944 (16 pages)
Deep learning's widespread dependence on large datasets raises privacy concerns due to the potential presence of sensitive information. Differential privacy stands out as a crucial method for preserving privacy, garnering significant interest for its ability to offer robust and verifiable privacy safeguards during data training. However, classic differentially private learning introduces the same level of noise into the gradients across training iterations, which affects the trade-off between model utility and privacy guarantees. To address this issue, an adaptive differential privacy mechanism is proposed in this paper, which dynamically adjusts the privacy budget at the layer level as training progresses to resist membership inference attacks. Specifically, an equal privacy budget is initially allocated to each layer. Subsequently, as training advances, the privacy budget for layers closer to the output is reduced (adding more noise), while the budget for layers closer to the input is increased. The adjustment magnitude depends on the training iterations and is automatically determined from the iteration count. This dynamic allocation provides a simple process for adjusting privacy budgets, alleviating the burden on users to tweak parameters and ensuring that privacy preservation strategies align with training progress. Extensive experiments on five well-known datasets indicate that the proposed method outperforms competing methods in terms of accuracy and resilience against membership inference attacks.
Keywords: deep learning; differential privacy; information security; privacy protection
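The layer-level schedule described above (equal budgets initially, then budget shifting from output-side to input-side layers as training advances) can be sketched as follows; the linear tilt and the `shift` parameter are illustrative assumptions, not the paper's exact rule:

```python
def layer_budgets(total_eps, n_layers, step, total_steps, shift=0.5):
    """Split total_eps across layers: equal at step 0, then progressively
    move budget from output-side layers (more noise) to input-side layers
    (less noise). Index 0 is the layer closest to the input; n_layers >= 2."""
    base = total_eps / n_layers
    progress = step / total_steps  # 0.0 at the start, 1.0 at the end
    budgets = []
    for i in range(n_layers):
        # Linear tilt that sums to zero across layers, preserving total_eps.
        tilt = shift * progress * (1.0 - 2.0 * i / (n_layers - 1))
        budgets.append(base * (1.0 + tilt))
    return budgets
```

Keeping the tilt zero-sum means the total privacy cost per iteration is unchanged; only its distribution across layers evolves with the iteration count, as the abstract describes.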
7. Differential Privacy-Enabled TextCNN for MOOCs Fake Review Detection
Authors: Caiyun Chen. Journal of Electronic Research and Application, 2025, No. 1, pp. 191-201 (11 pages)
The rapid development and widespread adoption of massive open online courses (MOOCs) have had a significant impact on China's education curriculum. However, the problem of fake reviews and ratings on these platforms has seriously affected the authenticity of course evaluations and user trust, requiring effective anomaly detection techniques for screening. The textual characteristics of MOOCs reviews, such as varying lengths and diverse emotional tendencies, bring complexity to text analysis, and traditional rule-based analysis methods are often inadequate for such unstructured data. We propose a Differential Privacy-Enabled Text Convolutional Neural Network (DP-TextCNN) framework, aiming to achieve high-precision identification of outliers in MOOCs course reviews and ratings while protecting user privacy. The framework leverages the advantages of Convolutional Neural Networks (CNNs) in text feature extraction and combines them with differential privacy techniques, balancing data privacy protection with model performance by introducing controlled random noise during the data preprocessing stage. By embedding differential privacy into the model training process, we ensure the privacy security of the framework when handling sensitive data while maintaining high recognition accuracy. Experimental results indicate that the DP-TextCNN framework achieves an accuracy of over 95% in identifying fake reviews on the dataset. This outcome not only verifies the applicability of differential privacy techniques in TextCNN but also underscores its potential in handling sensitive educational data. Additionally, we analyze the specific impact of differential privacy parameters on framework performance, offering theoretical support and empirical analysis to strike an optimal balance between privacy protection and framework efficiency.
Keywords: DP-TextCNN; differential privacy; fake review; MOOCs
8. Differential Privacy Integrated Federated Learning for Power Systems: An Explainability-Driven Approach
Authors: Zekun Liu, Junwei Ma, Xin Gong, Xiu Liu, Bingbing Liu, Long An. Computers, Materials & Continua, 2025, No. 10, pp. 983-999 (17 pages)
With the ongoing digitalization and intelligence of power systems, there is an increasing reliance on large-scale data-driven intelligent technologies for tasks such as scheduling optimization and load forecasting. Nevertheless, power data often contains sensitive information, making it a critical industry challenge to efficiently utilize this data while ensuring privacy. Traditional Federated Learning (FL) methods can mitigate data leakage by training models locally instead of transmitting raw data. Despite this, FL still has privacy concerns, especially gradient leakage, which might expose users' sensitive information. Therefore, integrating Differential Privacy (DP) techniques is essential for stronger privacy protection. Even so, the noise from DP may reduce the performance of federated learning models. To address this challenge, this paper presents an explainability-driven power data privacy federated learning framework. It incorporates DP technology and, based on model explainability, adaptively adjusts privacy budget allocation and model aggregation, thus balancing privacy protection and model performance. The key innovations of this paper are as follows: (1) We propose an explainability-driven power data privacy federated learning framework. (2) We detail a privacy budget allocation strategy: assigning budgets per training round by gradient effectiveness and at model granularity by layer importance. (3) We design a weighted aggregation strategy that considers SHAP values and model accuracy for quality knowledge sharing. (4) Experiments show the proposed framework outperforms traditional methods in balancing privacy protection and model performance in power load forecasting tasks.
Keywords: power data; federated learning; differential privacy; explainability
9. Defending against Backdoor Attacks in Federated Learning by Using Differential Privacy and OOD Data Attributes
Authors: Qingyu Tan, Yan Li, Byeong-Seok Shin. Computer Modeling in Engineering & Sciences, 2025, No. 5, pp. 2417-2428 (12 pages)
Federated Learning (FL) is a practical solution that leverages distributed data across devices without the need for centralized data storage, enabling multiple participants to jointly train models while preserving data privacy and avoiding direct data sharing. Despite its privacy-preserving advantages, FL remains vulnerable to backdoor attacks, where malicious participants introduce backdoors into local models that are then propagated to the global model through the aggregation process. While existing differential privacy defenses have demonstrated effectiveness against backdoor attacks in FL, they often incur significant degradation in the performance of the aggregated models on benign tasks. To address this limitation, we propose a novel backdoor defense mechanism based on differential privacy. Our approach first utilizes the inherent out-of-distribution characteristics of backdoor samples to identify and exclude malicious model updates that significantly deviate from benign models. By filtering out models that are clearly backdoor-infected before applying differential privacy, our method reduces the required noise level for differential privacy, thereby enhancing model robustness while preserving performance. Experimental evaluations on the CIFAR10 and FEMNIST datasets demonstrate that our method effectively limits the backdoor accuracy to below 15% across various backdoor scenarios while maintaining high main-task accuracy.
Keywords: federated learning; backdoor attacks; differential privacy; out-of-distribution data
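The paper identifies backdoored updates via their out-of-distribution characteristics before applying DP noise. As a simplified stand-in (plain L2-norm outlier screening rather than the paper's OOD test; all names are illustrative), the filter-then-aggregate step might look like:

```python
import math

def filter_and_average(updates, z_thresh=2.0):
    """Drop client updates whose L2 norm deviates from the mean norm by more
    than z_thresh standard deviations (a crude proxy for backdoor screening),
    then average the surviving updates coordinate-wise."""
    norms = [math.sqrt(sum(v * v for v in u)) for u in updates]
    mean = sum(norms) / len(norms)
    std = math.sqrt(sum((n - mean) ** 2 for n in norms) / len(norms))
    kept = [u for u, n in zip(updates, norms)
            if std == 0.0 or abs(n - mean) <= z_thresh * std]
    dim = len(updates[0])
    return [sum(u[d] for u in kept) / len(kept) for d in range(dim)]
```

The point of filtering first, as the abstract argues, is that the remaining updates need far less DP noise to mask any residual backdoor influence.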
10. DDLP: Dynamic Location Data Publishing with Differential Privacy in Mobile Crowdsensing
Authors: Li Wen, Ma Xuebin, Wang Xu. China Communications, 2025, No. 5, pp. 238-255 (18 pages)
Mobile crowdsensing (MCS) has become an effective paradigm to facilitate urban sensing. However, mobile users participating in sensing tasks face the risk of location privacy leakage when uploading their actual sensing location data. Most location privacy protection studies in mobile crowdsensing do not consider the temporal correlations between locations, so they are vulnerable to various inference attacks and suffer from low data availability. To solve these problems, this paper proposes a dynamic differential location privacy data publishing framework (DDLP) that protects privacy while publishing locations continuously. Firstly, the corresponding Markov transition matrices are established according to different times of historical trajectories, and a protection location set is generated based on the current location at each timestamp. Moreover, the exponential mechanism of differential privacy is used to perturb the true location via a designed utility function. Finally, experiments on a real-world trajectory dataset show that our method not only provides strong privacy guarantees but also outperforms existing methods in terms of data availability and computational efficiency.
Keywords: data publishing; differential privacy; mobile crowdsensing
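DDLP perturbs the true location with the exponential mechanism over a protection location set. A generic sketch of the mechanism itself (the utility function passed in would be the paper's designed one; here it is simply an argument, and the names are illustrative):

```python
import math
import random

def exponential_mechanism(candidates, utility, epsilon, sensitivity=1.0,
                          rng=random.Random(0)):
    """Select a candidate with probability proportional to
    exp(epsilon * utility(c) / (2 * sensitivity))."""
    scores = [math.exp(epsilon * utility(c) / (2.0 * sensitivity))
              for c in candidates]
    total = sum(scores)
    r = rng.random() * total
    acc = 0.0
    for c, s in zip(candidates, scores):
        acc += s
        if r <= acc:
            return c
    return candidates[-1]  # guard against floating-point underflow
```

Higher-utility candidates (e.g., locations closer to the true one) are exponentially more likely to be released, which is how the mechanism trades data availability against privacy.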
11. Privacy-Preserving Personnel Detection in Substations via Federated Learning with Dynamic Noise Adaptation
Authors: Yuewei Tian, Yang Su, Yujia Wang, Lisa Guo, Xuyang Wu, Lei Cao, Fang Ren. Computers, Materials & Continua, 2026, No. 3, pp. 894-915 (22 pages)
This study addresses the risk of privacy leakage during the transmission and sharing of multimodal data in smart grid substations by proposing a three-tier privacy-preserving architecture based on asynchronous federated learning. The framework integrates blockchain technology, the InterPlanetary File System (IPFS) for distributed storage, and a dynamic differential privacy mechanism to achieve collaborative security across the storage, service, and federated coordination layers. It accommodates both multimodal data classification and object detection tasks, enabling the identification and localization of key targets and abnormal behaviors in substation scenarios while ensuring privacy protection. This effectively mitigates the single-point failures and model leakage issues inherent in centralized architectures. A dynamically adjustable differential privacy mechanism is introduced to allocate privacy budgets according to client contribution levels and upload frequencies, achieving a personalized balance between model performance and privacy protection. Multi-dimensional experimental evaluations, including classification accuracy, F1-score, encryption latency, and aggregation latency, verify the security and efficiency of the proposed architecture. The improved CNN model achieves 72.34% accuracy and an F1-score of 0.72 in object detection and classification tasks on infrared surveillance imagery, effectively identifying typical risk events such as missing safety helmets and unauthorized intrusion, while maintaining an aggregation latency of only 1.58 s and a query latency of 80.79 ms. Compared with traditional static differential privacy and centralized approaches, the proposed method demonstrates significant advantages in accuracy, latency, and security, providing a new technical paradigm for efficient, secure data sharing, object detection, and privacy preservation in smart grid substations.
Keywords: substation; privacy preservation; asynchronous federated learning; CNN; differential privacy
12. A Differentially Private HADPK-means++ Clustering Algorithm
Authors: 徐富国, 李磊, 陈涛. Fujian Computer (《福建电脑》), 2026, No. 2, pp. 7-15 (9 pages)
To address the problem that noise accumulation during iterations of differentially private k-means clustering causes cluster centers to drift and degrades clustering usability, this paper proposes a high-availability differentially private algorithm, HADPK-means++. The method selects initial cluster centers via reverse-order sorting to improve their quality, introduces a new similarity measure combining intra-cluster and inter-cluster similarity to optimize sample partitioning, and exploits the transformation invariance of differential privacy to correct noised cluster centers so that they do not drift outside the valid data range. Experiments on several real datasets, including Iris and Wine, show that under the same privacy budget the algorithm outperforms mainstream differentially private k-means algorithms in both F-measure and normalized mutual information (NMI). HADPK-means++ effectively suppresses cluster-center drift and improves the usability and robustness of clustering.
Keywords: clustering; k-means algorithm; differential privacy
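The abstract does not include the algorithm itself; as a generic illustration of differentially private k-means (not HADPK-means++, and with sensitivity calibration simplified by assuming coordinates normalized to [0, 1]), one noisy iteration can be sketched as:

```python
import math
import random

def laplace(scale, rng):
    """Sample Laplace(0, scale) noise via the inverse-CDF transform."""
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def dp_kmeans_step(points, centers, epsilon, rng=random.Random(0)):
    """One k-means iteration with Laplace noise added to the per-cluster
    coordinate sums and counts before recomputing the centers."""
    k, dim = len(centers), len(points[0])
    sums = [[0.0] * dim for _ in range(k)]
    counts = [0.0] * k
    for p in points:
        j = min(range(k),
                key=lambda i: sum((a - b) ** 2 for a, b in zip(p, centers[i])))
        counts[j] += 1
        for d in range(dim):
            sums[j][d] += p[d]
    scale = 2.0 / epsilon  # budget split between noisy sums and noisy counts
    return [[(sums[j][d] + laplace(scale, rng)) /
             max(1.0, counts[j] + laplace(scale, rng))
             for d in range(dim)]
            for j in range(k)]
```

The noisy count in the denominator is exactly where drift accumulates over iterations; HADPK-means++'s correction step targets that drift.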
13. A Method for Time-Series Location Data Publication Based on Differential Privacy (cited 4 times)
Authors: Kang Haiyan, Zhang Shuxuan, Jia Qianqian. Wuhan University Journal of Natural Sciences (CAS, CSCD), 2019, No. 2, pp. 107-115 (9 pages)
In the age of information sharing, logistics information sharing also faces the risk of privacy leakage. To address the privacy leakage of time-series location information in the field of logistics, this paper proposes a method based on differential privacy for time-series location data publication. Firstly, it constructs time-related public regions of interest (PROI) using an optimized clustering algorithm, and uses the centroid of each region as the public interest point (PIP) representing its location. Secondly, from the PIPs we construct a location search tree (LST), a commonly used index structure for spatial data, to preserve the inherent relations among location data. Thirdly, we add Laplace noise to the nodes of the LST, which requires adding noise fewer times than perturbing the original dataset directly and thus preserves data availability. Finally, experiments show that this method not only ensures the security of sequential location data publishing, but also offers better data availability than the general differential privacy method, achieving a good balance between security and availability.
Keywords: sequential location data publishing; region of interest; location search tree; differential privacy
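The Laplace-noise step on LST nodes can be illustrated generically; the sketch below assumes count queries with sensitivity 1 and an even budget split across tree levels (the paper's exact allocation is not given in the abstract, and the names are illustrative):

```python
import math
import random

def laplace_noise(scale, rng):
    """Sample Laplace(0, scale) noise via the inverse-CDF transform."""
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def noisy_tree_counts(level_counts, epsilon, rng=random.Random(0)):
    """Add Laplace noise to the per-node counts of a location search tree,
    splitting the total budget evenly across levels (sensitivity 1 per level)."""
    eps_level = epsilon / len(level_counts)
    return [[c + laplace_noise(1.0 / eps_level, rng) for c in level]
            for level in level_counts]
```

Noising a tree of aggregated counts touches each record in only one node per level, which is why this needs fewer noise additions than perturbing the raw dataset directly.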
14. A Privacy-Preserving Mechanism Based on Local Differential Privacy in Edge Computing (cited 11 times)
Authors: Mengnan Bi, Yingjie Wang, Zhipeng Cai, Xiangrong Tong. China Communications (SCIE, CSCD), 2020, No. 9, pp. 50-65 (16 pages)
With the development of the Internet of Things (IoT), the delay caused by network transmission has led to low data-processing efficiency. At the same time, the limited computing power and available energy of IoT terminal devices are important bottlenecks that restrict the application of blockchain, but edge computing can solve this problem. The emergence of edge computing can effectively reduce the delay of data transmission and improve data-processing capacity. However, user data in edge computing is usually stored and processed by honest-but-curious authorized entities, which leads to the leakage of users' private information. To solve these problems, this paper proposes a location data collection method that satisfies local differential privacy to protect users' privacy. A Voronoi diagram constructed by the Delaunay method is used to divide the road-network space and determine the Voronoi grid region where each edge node is located. A random disturbance mechanism that satisfies local differential privacy is utilized to perturb the original location data in each Voronoi grid. In addition, the effectiveness of the proposed privacy-preserving mechanism is verified through comparison experiments. Compared with existing privacy-preserving methods, the proposed mechanism not only better meets users' privacy needs but also achieves higher data availability.
Keywords: IoT; edge computing; local differential privacy; Voronoi diagram; privacy-preserving
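The "random disturbance mechanism" within each Voronoi grid is not specified in the abstract; a standard choice satisfying epsilon-LDP over a finite set of grid cells is k-ary randomized response, sketched here (names are illustrative):

```python
import math
import random

def k_rr(true_value, domain, epsilon, rng=random.Random(0)):
    """k-ary randomized response: report the true cell with probability
    e^eps / (e^eps + k - 1), otherwise a uniformly random other cell."""
    k = len(domain)
    p_true = math.exp(epsilon) / (math.exp(epsilon) + k - 1)
    if rng.random() < p_true:
        return true_value
    others = [v for v in domain if v != true_value]
    return rng.choice(others)
```

Because every cell has nonzero reporting probability with a bounded ratio, the honest-but-curious aggregator cannot confidently invert any single report, yet cell frequencies remain estimable in aggregate.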
15. An efficient data aggregation scheme with local differential privacy in smart grid (cited 7 times)
Authors: Na Gai, Kaiping Xue, Bin Zhu, Jiayu Yang, Jianqing Liu, Debiao He. Digital Communications and Networks (SCIE, CSCD), 2022, No. 3, pp. 333-342 (10 pages)
By integrating the traditional power grid with information and communication technology, the smart grid achieves dependable, efficient, and flexible grid data processing. The smart meters deployed on the user side of the smart grid collect the users' power usage data on a regular basis and upload it to the control center to complete smart grid data acquisition. The control center can evaluate the supply and demand of the power grid through aggregated data from users and then dynamically adjust the power supply, price, etc. However, since the grid data collected from users may disclose their electricity usage habits and daily activities, privacy has become a critical concern in smart grid data aggregation. Most existing privacy-preserving data collection schemes for the smart grid adopt homomorphic encryption or randomization techniques, which are either impractical because of high computation overhead or unrealistic in requiring a trusted third party.
Keywords: local differential privacy; data aggregation; smart grid; privacy preserving
16. Safeguarding cross-silo federated learning with local differential privacy (cited 8 times)
Authors: Chen Wang, Xinkui Wu, Gaoyang Liu, Tianping Deng, Kai Peng, Shaohua Wan. Digital Communications and Networks (SCIE, CSCD), 2022, No. 4, pp. 446-454 (9 pages)
Federated Learning (FL) is a new computing paradigm in privacy-preserving Machine Learning (ML), where the ML model is trained in a decentralized manner by the clients, preventing the server from directly accessing privacy-sensitive client data. Unfortunately, recent advances have shown potential risks of user-level privacy breaches under the cross-silo FL framework. In this paper, we propose to address the issue with a three-plane framework that secures cross-silo FL by taking advantage of the Local Differential Privacy (LDP) mechanism. The key insight is that LDP can provide strong data privacy protection while still retaining user data statistics to preserve high utility. Experimental results on three real-world datasets demonstrate the effectiveness of our framework.
Keywords: federated learning; cross-silo; local differential privacy; perturbation
17. Privacy Protection Algorithm for the Internet of Vehicles Based on Local Differential Privacy and Game Model (cited 5 times)
Authors: Wenxi Han, Mingzhi Cheng, Min Lei, Hanwen Xu, Yu Yang, Lei Qian. Computers, Materials & Continua (SCIE, EI), 2020, No. 8, pp. 1025-1038 (14 pages)
In recent years, with the continuous advancement of the intelligence of the Internet of Vehicles (IoV), the problem of privacy leakage in the IoV has become increasingly prominent, and privacy protection in the IoV has become a research focus. This paper analyzes the advantages and disadvantages of existing location privacy protection system structures and algorithms, proposes a privacy protection system structure based on an untrusted data collection server, and designs a vehicle location acquisition algorithm based on local differential privacy and a game model. The algorithm first meshes the road-network space. Then, a dynamic game model is introduced into the user location privacy protection model and the attacker location semantic inference model, thereby minimizing the possibility of exposing the regional semantic privacy of the k-location set while maximizing the availability of the service. On this basis, a statistical method is designed that satisfies local differential privacy of k-location sets and obtains an unbiased estimate of traffic density in different regions. Finally, this paper verifies the algorithm on a dataset of mobile vehicles in Shanghai. The experimental results show that the algorithm can guarantee the user's location privacy and location semantic privacy while satisfying service quality requirements, providing better privacy protection and service for IoV users.
Keywords: Internet of Vehicles; privacy protection; local differential privacy; location semantic inference attack; game theory
18. Differential Privacy Preserving Dynamic Data Release Scheme Based on Jensen-Shannon Divergence (cited 3 times)
Authors: Ying Cai, Yu Zhang, Jingjing Qu, Wenjin Li. China Communications (SCIE, CSCD), 2022, No. 6, pp. 11-21 (11 pages)
Health monitoring data or data about infectious diseases such as COVID-19 may need to be constantly updated and dynamically released, but they may contain users' sensitive information. Thus, how to preserve users' privacy before release is critically important yet challenging. Differential Privacy (DP) is well known to provide effective privacy protection, and dynamic DP-preserving data release schemes were designed to publish histograms that meet the DP guarantee. Unfortunately, such schemes may result in high cumulative errors and lower data availability. To address this problem, in this paper we apply Jensen-Shannon (JS) divergence to design an OPTICS (Ordering Points To Identify the Clustering Structure) based scheme. It uses JS divergence to measure the difference between the updated dataset at the current release time and the private dataset at the previous release time. Only when the difference exceeds a threshold do we apply OPTICS to publish DP-protected datasets. Our experimental results show that the absolute errors and average relative errors are significantly lower than those of existing works.
Keywords: differential privacy; dynamic data release; Jensen-Shannon divergence
19. A Dynamic Social Network Data Publishing Algorithm Based on Differential Privacy (cited 2 times)
Authors: Zhenpeng Liu, Yawei Dong, Xuan Zhao, Bin Zhang. Journal of Information Security, 2017, No. 4, pp. 328-338 (11 pages)
Social networks contain interactions between social members, which constitute the structure and attributes of the network. These interactive relationships contain a great deal of personal privacy information, so the direct release of social network data would cause privacy disclosure. Aiming at the dynamic characteristics of social network data release, a new dynamic social network data publishing method based on differential privacy, named DDPA (Dynamic Differential Privacy Algorithm), is proposed; it satisfies differential privacy and improves on privacy protection algorithms for static social network data publishing. DDPA adds Laplace-distributed noise to network edge weights, identifies the edge-weight information that changes as the number of iterations increases, and allocates additional privacy budget to it. Experiments on real datasets show that DDPA satisfies users' privacy requirements in social networks, reduces the execution time brought by iterations, and reduces the information loss rate of the graph structure.
Keywords: dynamic social network; data publishing; differential privacy
20. Frequent Itemset Mining of User's Multi-Attribute under Local Differential Privacy (cited 2 times)
Authors: Haijiang Liu, Lianwei Cui, Xuebin Ma, Celimuge Wu. Computers, Materials & Continua (SCIE, EI), 2020, No. 10, pp. 369-385 (17 pages)
Frequent itemset mining is an essential problem in data mining and plays a key role in many data mining applications. However, users' personal privacy can be leaked during the mining process. In recent years, applying local differential privacy protection models to mine frequent itemsets has proven a relatively reliable and secure protection method. Local differential privacy means that users first perturb their original data and then send the perturbed data to the aggregator, preventing the aggregator from revealing users' private information. We propose a novel framework that implements frequent itemset mining under local differential privacy and is applicable to users' multiple attributes. The main techniques include bitmap encoding, which converts a user's original data into a binary string; a method for choosing the best perturbation algorithm for varying user attributes; and the frequent pattern tree (FP-tree) algorithm for mining frequent itemsets. Finally, we incorporate the threshold random response (TRR) algorithm into the framework and compare it with existing algorithms, demonstrating that the TRR algorithm has higher accuracy for mining frequent itemsets.
Keywords: local differential privacy; frequent itemset mining; user multi-attribute
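The TRR algorithm itself is not detailed in the abstract; the sketch below shows only the bitmap-encoding step plus a basic per-bit randomized response as a hedged stand-in (TRR is presumably more refined, and note that per-bit perturbation composes privacy cost across all bits):

```python
import math
import random

def bitmap_encode(user_items, item_domain):
    """Encode a user's item set as a bit vector over the global item domain."""
    return [1 if item in user_items else 0 for item in item_domain]

def perturb_bits(bits, epsilon, rng=random.Random(0)):
    """Basic randomized response per bit: keep each bit with probability
    e^eps / (1 + e^eps), flip it otherwise."""
    p_keep = math.exp(epsilon) / (1.0 + math.exp(epsilon))
    return [b if rng.random() < p_keep else 1 - b for b in bits]
```

The aggregator would then debias the observed bit frequencies before feeding them to FP-tree mining; that debiasing step is omitted here.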