In the competitive retail industry of the digital era,data-driven insights into gender-specific customer behavior are essential.They support the optimization of store performance,layout design,product placement,and ta...In the competitive retail industry of the digital era,data-driven insights into gender-specific customer behavior are essential.They support the optimization of store performance,layout design,product placement,and targeted marketing.However,existing computer vision solutions often rely on facial recognition to gather such insights,raising significant privacy and ethical concerns.To address these issues,this paper presents a privacypreserving customer analytics system through two key strategies.First,we deploy a deep learning framework using YOLOv9s,trained on the RCA-TVGender dataset.Cameras are positioned perpendicular to observation areas to reduce facial visibility while maintaining accurate gender classification.Second,we apply AES-128 encryption to customer position data,ensuring secure access and regulatory compliance.Our system achieved overall performance,with 81.5%mAP@50,77.7%precision,and 75.7%recall.Moreover,a 90-min observational study confirmed the system’s ability to generate privacy-protected heatmaps revealing distinct behavioral patterns between male and female customers.For instance,women spent more time in certain areas and showed interest in different products.These results confirm the system’s effectiveness in enabling personalized layout and marketing strategies without compromising privacy.展开更多
The generation of synthetic trajectories has become essential in various fields for analyzing complex movement patterns.However,the use of real-world trajectory data poses significant privacy risks,such as location re...The generation of synthetic trajectories has become essential in various fields for analyzing complex movement patterns.However,the use of real-world trajectory data poses significant privacy risks,such as location reidentification and correlation attacks.To address these challenges,privacy-preserving trajectory generation methods are critical for applications relying on sensitive location data.This paper introduces DPIL-Traj,an advanced framework designed to generate synthetic trajectories while achieving a superior balance between data utility and privacy preservation.Firstly,the framework incorporates Differential Privacy Clustering,which anonymizes trajectory data by applying differential privacy techniques that add noise,ensuring the protection of sensitive user information.Secondly,Imitation Learning is used to replicate decision-making behaviors observed in real-world trajectories.By learning from expert trajectories,this component generates synthetic data that closely mimics real-world decision-making processes while optimizing the quality of the generated trajectories.Finally,Markov-based Trajectory Generation is employed to capture and maintain the inherent temporal dynamics of movement patterns.Extensive experiments conducted on the GeoLife trajectory dataset show that DPIL-Traj improves utility performance by an average of 19.85%,and in terms of privacy performance by an average of 12.51%,compared to state-of-the-art approaches.Ablation studies further reveal that DP clustering effectively safeguards privacy,imitation learning enhances utility under noise,and the Markov module strengthens temporal coherence.展开更多
The rapid adoption of machine learning in sensitive domains,such as healthcare,finance,and government services,has heightened the need for robust,privacy-preserving techniques.Traditional machine learning approaches l...The rapid adoption of machine learning in sensitive domains,such as healthcare,finance,and government services,has heightened the need for robust,privacy-preserving techniques.Traditional machine learning approaches lack built-in privacy mechanisms,exposing sensitive data to risks,which motivates the development of Privacy-Preserving Machine Learning(PPML)methods.Despite significant advances in PPML,a comprehensive and focused exploration of Secure Multi-Party Computing(SMPC)within this context remains underdeveloped.This review aims to bridge this knowledge gap by systematically analyzing the role of SMPC in PPML,offering a structured overviewof current techniques,challenges,and future directions.Using a semi-systematicmapping studymethodology,this paper surveys recent literature spanning SMPC protocols,PPML frameworks,implementation approaches,threat models,and performance metrics.Emphasis is placed on identifying trends,technical limitations,and comparative strengths of leading SMPC-based methods.Our findings reveal thatwhile SMPCoffers strong cryptographic guarantees for privacy,challenges such as computational overhead,communication costs,and scalability persist.The paper also discusses critical vulnerabilities,practical deployment issues,and variations in protocol efficiency across use cases.展开更多
To realize dynamic statistical publishing and protection of location-based data privacy,this paper proposes a differential privacy publishing algorithm based on adaptive sampling and grid clustering and adjustment.The...To realize dynamic statistical publishing and protection of location-based data privacy,this paper proposes a differential privacy publishing algorithm based on adaptive sampling and grid clustering and adjustment.The PID control strategy is combined with the difference in data variation to realize the dynamic adjustment of the data publishing intervals.The spatial-temporal correlations of the adjacent snapshots are utilized to design the grid clustering and adjustment algorithm,which facilitates saving the execution time of the publishing process.The budget distribution and budget absorption strategies are improved to form the sliding window-based differential privacy statistical publishing algorithm,which realizes continuous statistical publishing and privacy protection and improves the accuracy of published data.Experiments and analysis on large datasets of actual locations show that the privacy protection algorithm proposed in this paper is superior to other existing algorithms in terms of the accuracy of adaptive sampling time,the availability of published data,and the execution efficiency of data publishing methods.展开更多
Federated learning effectively alleviates privacy and security issues raised by the development of artificial intelligence through a distributed training architecture.Existing research has shown that attackers can com...Federated learning effectively alleviates privacy and security issues raised by the development of artificial intelligence through a distributed training architecture.Existing research has shown that attackers can compromise user privacy and security by stealing model parameters.Therefore,differential privacy is applied in federated learning to further address malicious issues.However,the addition of noise and the update clipping mechanism in differential privacy jointly limit the further development of federated learning in privacy protection and performance optimization.Therefore,we propose an adaptive adjusted differential privacy federated learning method.First,a dynamic adaptive privacy budget allocation strategy is proposed,which flexibly adjusts the privacy budget within a given range based on the client’s data volume and training requirements,thereby alleviating the loss of privacy budget and the magnitude of model noise.Second,a longitudinal clipping differential privacy strategy is proposed,which based on the differences in factors that affect parameter updates,uses sparse methods to trim local updates,thereby reducing the impact of privacy pruning steps on model accuracy.The two strategies work together to ensure user privacy while the effect of differential privacy on model accuracy is reduced.To evaluate the effectiveness of our method,we conducted extensive experiments on benchmark datasets,and the results showed that our proposed method performed well in terms of performance and privacy protection.展开更多
Deep learning’s widespread dependence on large datasets raises privacy concerns due to the potential presence of sensitive information.Differential privacy stands out as a crucial method for preserving privacy,garner...Deep learning’s widespread dependence on large datasets raises privacy concerns due to the potential presence of sensitive information.Differential privacy stands out as a crucial method for preserving privacy,garnering significant interest for its ability to offer robust and verifiable privacy safeguards during data training.However,classic differentially private learning introduces the same level of noise into the gradients across training iterations,which affects the trade-off between model utility and privacy guarantees.To address this issue,an adaptive differential privacy mechanism was proposed in this paper,which dynamically adjusts the privacy budget at the layer-level as training progresses to resist member inference attacks.Specifically,an equal privacy budget is initially allocated to each layer.Subsequently,as training advances,the privacy budget for layers closer to the output is reduced(adding more noise),while the budget for layers closer to the input is increased.The adjustment magnitude depends on the training iterations and is automatically determined based on the iteration count.This dynamic allocation provides a simple process for adjusting privacy budgets,alleviating the burden on users to tweak parameters and ensuring that privacy preservation strategies align with training progress.Extensive experiments on five well-known datasets indicate that the proposed method outperforms competing methods in terms of accuracy and resilience against membership inference attacks.展开更多
With the rapid development of information technology and the continuous evolution of personalized ser- vices, huge amounts of data are accumulated by large internet companies in the process of serving users. Moreover,...With the rapid development of information technology and the continuous evolution of personalized ser- vices, huge amounts of data are accumulated by large internet companies in the process of serving users. Moreover, dynamic data interactions increase the intentional/unintentional persistence of private infor- mation in different information systems. However, problems such as the cask principle of preserving pri- vate information among different information systems and the dif culty of tracing the source of privacy violations are becoming increasingly serious. Therefore, existing privacy-preserving schemes cannot pro- vide systematic privacy preservation. In this paper, we examine the links of the information life-cycle, such as information collection, storage, processing, distribution, and destruction. We then propose a the- ory of privacy computing and a key technology system that includes a privacy computing framework, a formal de nition of privacy computing, four principles that should be followed in privacy computing, ffect algorithm design criteria, evaluation of the privacy-preserving effect, and a privacy computing language. Finally, we employ four application scenarios to describe the universal application of privacy computing, and discuss the prospect of future research trends. This work is expected to guide theoretical research on user privacy preservation within open environments.展开更多
In the realm of Intelligent Railway Transportation Systems,effective multi-party collaboration is crucial due to concerns over privacy and data silos.Vertical Federated Learning(VFL)has emerged as a promising approach...In the realm of Intelligent Railway Transportation Systems,effective multi-party collaboration is crucial due to concerns over privacy and data silos.Vertical Federated Learning(VFL)has emerged as a promising approach to facilitate such collaboration,allowing diverse entities to collectively enhance machine learning models without the need to share sensitive training data.However,existing works have highlighted VFL’s susceptibility to privacy inference attacks,where an honest but curious server could potentially reconstruct a client’s raw data from embeddings uploaded by the client.This vulnerability poses a significant threat to VFL-based intelligent railway transportation systems.In this paper,we introduce SensFL,a novel privacy-enhancing method to against privacy inference attacks in VFL.Specifically,SensFL integrates regularization of the sensitivity of embeddings to the original data into the model training process,effectively limiting the information contained in shared embeddings.By reducing the sensitivity of embeddings to the original data,SensFL can effectively resist reverse privacy attacks and prevent the reconstruction of the original data from the embeddings.Extensive experiments were conducted on four distinct datasets and three different models to demonstrate the efficacy of SensFL.Experiment results show that SensFL can effectively mitigate privacy inference attacks while maintaining the accuracy of the primary learning task.These results underscore SensFL’s potential to advance privacy protection technologies within VFL-based intelligent railway systems,addressing critical security concerns in collaborative learning environments.展开更多
The integration of artificial intelligence(AI)technology,particularly large language models(LLMs),has become essential across various sectors due to their advanced language comprehension and generation capabilities.De...The integration of artificial intelligence(AI)technology,particularly large language models(LLMs),has become essential across various sectors due to their advanced language comprehension and generation capabilities.Despite their transformative impact in fields such as machine translation and intelligent dialogue systems,LLMs face significant challenges.These challenges include safety,security,and privacy concerns that undermine their trustworthiness and effectiveness,such as hallucinations,backdoor attacks,and privacy leakage.Previous works often conflated safety issues with security concerns.In contrast,our study provides clearer and more reasonable definitions for safety,security,and privacy within the context of LLMs.Building on these definitions,we provide a comprehensive overview of the vulnerabilities and defense mechanisms related to safety,security,and privacy in LLMs.Additionally,we explore the unique research challenges posed by LLMs and suggest potential avenues for future research,aiming to enhance the robustness and reliability of LLMs in the face of emerging threats.展开更多
As smart grid technology rapidly advances,the vast amount of user data collected by smart meter presents significant challenges in data security and privacy protection.Current research emphasizes data security and use...As smart grid technology rapidly advances,the vast amount of user data collected by smart meter presents significant challenges in data security and privacy protection.Current research emphasizes data security and user privacy concerns within smart grids.However,existing methods struggle with efficiency and security when processing large-scale data.Balancing efficient data processing with stringent privacy protection during data aggregation in smart grids remains an urgent challenge.This paper proposes an AI-based multi-type data aggregation method designed to enhance aggregation efficiency and security by standardizing and normalizing various data modalities.The approach optimizes data preprocessing,integrates Long Short-Term Memory(LSTM)networks for handling time-series data,and employs homomorphic encryption to safeguard user privacy.It also explores the application of Boneh Lynn Shacham(BLS)signatures for user authentication.The proposed scheme’s efficiency,security,and privacy protection capabilities are validated through rigorous security proofs and experimental analysis.展开更多
Blockchain interoperability enables seamless communication and asset transfer across isolated permissioned blockchain systems,but it introduces significant security and privacy vulnerabilities.This review aims to syst...Blockchain interoperability enables seamless communication and asset transfer across isolated permissioned blockchain systems,but it introduces significant security and privacy vulnerabilities.This review aims to systematically assess the security and privacy landscape of interoperability protocols for permissioned blockchains,identifying key properties,attack vectors,and countermeasures.Using PRISMA 2020 guidelines,we analysed 56 peerreviewed studies published between 2020 and 2025,retrieved from Scopus,ScienceDirect,Web of Science,and IEEE Xplore.The review focused on interoperability protocols for permissioned blockchains with security and privacy analyses,including only English-language journal articles and conference proceedings.Risk of bias in the included studies was assessed using the MMAT.Methods for presenting and synthesizing results included descriptive analysis,bibliometric analysis,and content analysis,with findings organized into tables,charts,and comparative summaries.The review classifies interoperability protocols into relay,sidechain,notary scheme,HTLC,and hybrid types and identifies 18 security and privacy properties along with 31 known attack types.Relay-based protocols showed the broadest security coverage,while HTLC and notary schemes demonstrated significant security gaps.Notably,93% of studies examined fewer than four properties or attack types,indicating a fragmented research landscape.The review identifies underexplored areas such as ACID properties,decentralization,and cross-chain attack resilience.It further highlights effective countermeasures,including cryptographic techniques,trusted execution environments,zero-knowledge proofs,and decentralized identity schemes.The findings suggest that despite growing adoption,current interoperability protocols lack comprehensive security evaluations.More holistic research is needed to ensure the resilience,trustworthiness,and scalability of cross-chain operations in permissioned blockchain ecosystems.展开更多
In the domain of Electronic Medical Records(EMRs),emerging technologies are crucial to addressing longstanding concerns surrounding transaction security and patient privacy.This paper explores the integration of smart...In the domain of Electronic Medical Records(EMRs),emerging technologies are crucial to addressing longstanding concerns surrounding transaction security and patient privacy.This paper explores the integration of smart contracts and blockchain technology as a robust framework for securing sensitive healthcare data.By leveraging the decentralized and immutable nature of blockchain,the proposed approach ensures transparency,integrity,and traceability of EMR transactions,effectivelymitigating risks of unauthorized access and data tampering.Smart contracts further enhance this framework by enabling the automation and enforcement of secure transactions,eliminating reliance on intermediaries and reducing the potential for human error.This integration marks a paradigm shift in management and exchange of healthcare information,fostering a secure and privacy-preserving ecosystem for all stakeholders.The research also evaluates the practical implementation of blockchain and smart contracts within healthcare systems,examining their real-world effectiveness in enhancing transactional security,safeguarding patient privacy,and maintaining data integrity.Findings from the study contribute valuable insights to the growing body of work on digital healthcare innovation,underscoring the potential of these technologies to transform EMR systems with high accuracy and precision.As global healthcare systems continue to face the challenge of protecting sensitive patient data,the proposed framework offers a forward-looking,scalable,and effective solution aligned with the evolving digital healthcare landscape.展开更多
With the proliferation of online services and applications,adopting Single Sign-On(SSO)mechanisms has become increasingly prevalent.SSO enables users to authenticate once and gain access to multiple services,eliminati...With the proliferation of online services and applications,adopting Single Sign-On(SSO)mechanisms has become increasingly prevalent.SSO enables users to authenticate once and gain access to multiple services,eliminating the need to provide their credentials repeatedly.However,this convenience raises concerns about user security and privacy.The increasing reliance on SSO and its potential risks make it imperative to comprehensively review the various SSO security and privacy threats,identify gaps in existing systems,and explore effective mitigation solutions.This need motivated the first systematic literature review(SLR)of SSO security and privacy,conducted in this paper.The SLR is performed based on rigorous structured research methodology with specific inclusion/exclusion criteria and focuses specifically on the Web environment.Furthermore,it encompasses a meticulous examination and thematic synthesis of 88 relevant publications selected out of 2315 journal articles and conference/proceeding papers published between 2017 and 2024 from reputable academic databases.The SLR highlights critical security and privacy threats relating to SSO systems,reveals significant gaps in existing countermeasures,and emphasizes the need for more comprehensive protection mechanisms.The findings of this SLR will serve as an invaluable resource for scientists and developers interested in enhancing the security and privacy preservation of SSO and designing more efficient and robust SSO systems,thus contributing to the development of the authentication technologies field.展开更多
Distributed data fusion is essential for numerous applications,yet faces significant privacy security challenges.Federated learning(FL),as a distributed machine learning paradigm,offers enhanced data privacy protectio...Distributed data fusion is essential for numerous applications,yet faces significant privacy security challenges.Federated learning(FL),as a distributed machine learning paradigm,offers enhanced data privacy protection and has attracted widespread attention.Consequently,research increasingly focuses on developing more secure FL techniques.However,in real-world scenarios involving malicious entities,the accuracy of FL results is often compromised,particularly due to the threat of collusion between two servers.To address this challenge,this paper proposes an efficient and verifiable data aggregation protocol with enhanced privacy protection.After analyzing attack methods against prior schemes,we implement key improvements.Specifically,by incorporating cascaded random numbers and perturbation terms into gradients,we strengthen the privacy protection afforded by polynomial masking,effectively preventing information leakage.Furthermore,our protocol features an enhanced verification mechanism capable of detecting collusive behaviors between two servers.Accuracy testing on the MNIST and CIFAR-10 datasets demonstrates that our protocol maintains accuracy comparable to the Federated Averaging Algorithm.In scheme efficiency comparisons,while incurring only a marginal increase in verification overhead relative to the baseline scheme,our protocol achieves an average improvement of 93.13% in privacy protection and verification overhead compared to the state-of-the-art scheme.This result highlights its optimal balance between overall overhead and functionality.A current limitation is that the verificationmechanismcannot precisely pinpoint the source of anomalies within aggregated results when server-side malicious behavior occurs.Addressing this limitation will be a focus of future research.展开更多
With the deep integration of edge computing,5G and Artificial Intelligence ofThings(AIoT)technologies,the large-scale deployment of intelligent terminal devices has given rise to data silos and privacy security challe...With the deep integration of edge computing,5G and Artificial Intelligence ofThings(AIoT)technologies,the large-scale deployment of intelligent terminal devices has given rise to data silos and privacy security challenges in sensing-computing fusion scenarios.Traditional federated learning(FL)algorithms face significant limitations in practical applications due to client drift,model bias,and resource constraints under non-independent and identically distributed(Non-IID)data,as well as the computational overhead and utility loss caused by privacy-preserving techniques.To address these issues,this paper proposes an Efficient and Privacy-enhancing Clustering Federated Learning method(FedEPC).This method introduces a dual-round client selection mechanism to optimize training.First,the Sparsity-based Privacy-preserving Representation Extraction Module(SPRE)and Adaptive Isomorphic Devices Clustering Module(AIDC)cluster clients based on privacy-sensitive features.Second,the Context-aware Incluster Client Selection Module(CICS)dynamically selects representative devices for training,ensuring heterogeneous data distributions are fully represented.By conducting federated training within clusters and aggregating personalized models,FedEPC effectively mitigates weight divergence caused by data heterogeneity,reduces the impact of client drift and straggler issues.Experimental results demonstrate that FedEPC significantly improves test accuracy in highly Non-IID data scenarios compared to FedAvg and existing clustering FL methods.By ensuring privacy security,FedEPC provides an efficient and robust solution for FL in resource-constrained devices within sensing-computing fusion scenarios,offering both theoretical value and engineering practicality.展开更多
The processing of personal data gives a rise to many privacy concerns,and one of them is to ensure the transpar-ency of data processing to end users.Usually this information is communicated to them using privacy polic...The processing of personal data gives a rise to many privacy concerns,and one of them is to ensure the transpar-ency of data processing to end users.Usually this information is communicated to them using privacy policies.In this paper,the problem of user notifcation in case of data breaches and policy changes is addressed,besides an ontol-ogy-based approach to model them is proposed.To specify the ontology concepts and properties,the requirements and recommendations for the legislative regulations as well as existing privacy policies are evaluated.A set of SPARQL queries to validate the correctness and completeness of the proposed ontology are developed.The proposed approach is applied to evaluate the privacy policies designed by cloud computing providers and IoT device manu-facturers.The results of the analysis show that the transparency of user notifcation scenarios presented in the privacy policies is still very low,and the companies should reconsider the notifcation mechanisms and provide more detailed information in privacy policies.展开更多
Theproliferation of Internet of Things(IoT)devices introduces substantial security challenges.Currently,privacy constitutes a significant concern for individuals.While maintaining privacy within these systems is an es...Theproliferation of Internet of Things(IoT)devices introduces substantial security challenges.Currently,privacy constitutes a significant concern for individuals.While maintaining privacy within these systems is an essential characteristic,it often necessitates certain compromises,such as complexity and scalability,thereby complicating management efforts.The principal challenge lies in ensuring confidentiality while simultaneously preserving individuals’anonymity within the system.To address this,we present our proposed architecture for managing IoT devices using blockchain technology.Our proposed architecture works on and off blockchain and is integrated with dashcams and closed-circuit television(CCTV)security cameras.In this work,the videos recorded by the dashcams and CCTV security cameras are hashed through the InterPlanetary File System(IPFS)and this hash is stored in the blockchain.When the accessors want to access the video,they must pass through multiple authentications which include web token authentication and verifiable credentials,to mitigate the risk of malicious users.Our contributions include the proposition of the framework,which works on the single key for every new video,and a novel chaincode algorithm that incorporates verifiable credentials.Analyses are made to show the system’s throughput and latency through stress testing.Significant advantages of the proposed architecture are shown by comparing them to existing schemes.The proposed architecture features a robust design that significantly enhances the security of blockchain-enabled Internet of Things(IoT)deviceswhile effectively mitigating the risk of a single point of failure,which provides a reliable solution for security concerns in the IoT landscape.Our future endeavors will focus on scaling the system by integrating innovative methods to enhance security measures further.展开更多
Metaverse is a new emerging concept building up a virtual environment for the user using Virtual Reality(VR)and blockchain technology but introduces privacy risks.Now,a series of challenges arise in Metaverse security...Metaverse is a new emerging concept building up a virtual environment for the user using Virtual Reality(VR)and blockchain technology but introduces privacy risks.Now,a series of challenges arise in Metaverse security,including massive data traffic breaches,large-scale user tracking,analysis activities,unreliable Artificial Intelligence(AI)analysis results,and social engineering security for people.In this work,we concentrate on Decentraland and Sandbox,two well-known Metaverse applications in Web 3.0.Our experiments analyze,for the first time,the personal privacy data exposed by Metaverse applications and services from a combined perspective of network traffic and privacy policy.We develop a lightweight traffic processing approach suitable for the Web 3.0 environment,which does not rely on complex decryption or reverse engineering techniques.We propose a smart contract interaction traffic analysis method capable of retrieving user interactions with Metaverse applications and blockchain smart contracts.This method provides a new approach to de-anonymizing users'identities through Metaverse applications.Our system,METAseen,analyzes and compares network traffic with the privacy policies of Metaverse applications to identify controversial data collection practices.The consistency check experiment reveals that the data types exposed by Metaverse applications include Personal Identifiable Information(PII),device information,and Metaverse-related data.By comparing the data flows observed in the network traffic with assertions made in the privacy regulations of the Metaverse service provider,we discovered that far more than 49%of the Metaverse data flows needed to be disclosed appropriately.展开更多
【Objective】Medical imaging data has great value,but it contains a significant amount of sensitive information about patients.At present,laws and regulations regarding to the de-identification of medical imaging data...【Objective】Medical imaging data has great value,but it contains a significant amount of sensitive information about patients.At present,laws and regulations regarding to the de-identification of medical imaging data are not clearly defined around the world.This study aims to develop a tool that meets compliance-driven desensitization requirements tailored to diverse research needs.【Methods】To enhance the security of medical image data,we designed and implemented a DICOM format medical image de-identification system on the Windows operating system.【Results】Our custom de-identification system is adaptable to the legal standards of different countries and can accommodate specific research demands.The system offers both web-based online and desktop offline de-identification capabilities,enabling customization of de-identification rules and facilitating batch processing to improve efficiency.【Conclusions】This medical image de-identification system robustly strengthens the stewardship of sensitive medical data,aligning with data security protection requirements while facilitating the sharing and utilization of medical image data.This approach unlocks the intrinsic value inherent in such datasets.展开更多
文摘In the competitive retail industry of the digital era,data-driven insights into gender-specific customer behavior are essential.They support the optimization of store performance,layout design,product placement,and targeted marketing.However,existing computer vision solutions often rely on facial recognition to gather such insights,raising significant privacy and ethical concerns.To address these issues,this paper presents a privacypreserving customer analytics system through two key strategies.First,we deploy a deep learning framework using YOLOv9s,trained on the RCA-TVGender dataset.Cameras are positioned perpendicular to observation areas to reduce facial visibility while maintaining accurate gender classification.Second,we apply AES-128 encryption to customer position data,ensuring secure access and regulatory compliance.Our system achieved overall performance,with 81.5%mAP@50,77.7%precision,and 75.7%recall.Moreover,a 90-min observational study confirmed the system’s ability to generate privacy-protected heatmaps revealing distinct behavioral patterns between male and female customers.For instance,women spent more time in certain areas and showed interest in different products.These results confirm the system’s effectiveness in enabling personalized layout and marketing strategies without compromising privacy.
基金supported by the Natural Science Foundation of Fujian Province of China(2025J01380)National Natural Science Foundation of China(No.62471139)+3 种基金the Major Health Research Project of Fujian Province(2021ZD01001)Fujian Provincial Units Special Funds for Education and Research(2022639)Fujian University of Technology Research Start-up Fund(GY-S24002)Fujian Research and Training Grants for Young and Middle-aged Leaders in Healthcare(GY-H-24179).
文摘The generation of synthetic trajectories has become essential in various fields for analyzing complex movement patterns.However,the use of real-world trajectory data poses significant privacy risks,such as location reidentification and correlation attacks.To address these challenges,privacy-preserving trajectory generation methods are critical for applications relying on sensitive location data.This paper introduces DPIL-Traj,an advanced framework designed to generate synthetic trajectories while achieving a superior balance between data utility and privacy preservation.Firstly,the framework incorporates Differential Privacy Clustering,which anonymizes trajectory data by applying differential privacy techniques that add noise,ensuring the protection of sensitive user information.Secondly,Imitation Learning is used to replicate decision-making behaviors observed in real-world trajectories.By learning from expert trajectories,this component generates synthetic data that closely mimics real-world decision-making processes while optimizing the quality of the generated trajectories.Finally,Markov-based Trajectory Generation is employed to capture and maintain the inherent temporal dynamics of movement patterns.Extensive experiments conducted on the GeoLife trajectory dataset show that DPIL-Traj improves utility performance by an average of 19.85%,and in terms of privacy performance by an average of 12.51%,compared to state-of-the-art approaches.Ablation studies further reveal that DP clustering effectively safeguards privacy,imitation learning enhances utility under noise,and the Markov module strengthens temporal coherence.
文摘The rapid adoption of machine learning in sensitive domains,such as healthcare,finance,and government services,has heightened the need for robust,privacy-preserving techniques.Traditional machine learning approaches lack built-in privacy mechanisms,exposing sensitive data to risks,which motivates the development of Privacy-Preserving Machine Learning(PPML)methods.Despite significant advances in PPML,a comprehensive and focused exploration of Secure Multi-Party Computing(SMPC)within this context remains underdeveloped.This review aims to bridge this knowledge gap by systematically analyzing the role of SMPC in PPML,offering a structured overviewof current techniques,challenges,and future directions.Using a semi-systematicmapping studymethodology,this paper surveys recent literature spanning SMPC protocols,PPML frameworks,implementation approaches,threat models,and performance metrics.Emphasis is placed on identifying trends,technical limitations,and comparative strengths of leading SMPC-based methods.Our findings reveal thatwhile SMPCoffers strong cryptographic guarantees for privacy,challenges such as computational overhead,communication costs,and scalability persist.The paper also discusses critical vulnerabilities,practical deployment issues,and variations in protocol efficiency across use cases.
基金supported by National Nature Science Foundation of China(No.62361036)Nature Science Foundation of Gansu Province(No.22JR5RA279).
文摘To realize dynamic statistical publishing and protection of location-based data privacy,this paper proposes a differential privacy publishing algorithm based on adaptive sampling and grid clustering and adjustment.The PID control strategy is combined with the difference in data variation to realize the dynamic adjustment of the data publishing intervals.The spatial-temporal correlations of the adjacent snapshots are utilized to design the grid clustering and adjustment algorithm,which facilitates saving the execution time of the publishing process.The budget distribution and budget absorption strategies are improved to form the sliding window-based differential privacy statistical publishing algorithm,which realizes continuous statistical publishing and privacy protection and improves the accuracy of published data.Experiments and analysis on large datasets of actual locations show that the privacy protection algorithm proposed in this paper is superior to other existing algorithms in terms of the accuracy of adaptive sampling time,the availability of published data,and the execution efficiency of data publishing methods.
基金funded by the Science and Technology Project of State Grid Corporation of China(Research on the theory and method of multiparty encrypted computation in the edge fusion environment of power IoT,No.5700-202358592A-3-2-ZN)the National Natural Science Foundation of China(Grant Nos.62272056,62372048,62371069).
文摘Federated learning effectively alleviates privacy and security issues raised by the development of artificial intelligence through a distributed training architecture.Existing research has shown that attackers can compromise user privacy and security by stealing model parameters.Therefore,differential privacy is applied in federated learning to further address malicious issues.However,the addition of noise and the update clipping mechanism in differential privacy jointly limit the further development of federated learning in privacy protection and performance optimization.Therefore,we propose an adaptive adjusted differential privacy federated learning method.First,a dynamic adaptive privacy budget allocation strategy is proposed,which flexibly adjusts the privacy budget within a given range based on the client’s data volume and training requirements,thereby alleviating the loss of privacy budget and the magnitude of model noise.Second,a longitudinal clipping differential privacy strategy is proposed,which based on the differences in factors that affect parameter updates,uses sparse methods to trim local updates,thereby reducing the impact of privacy pruning steps on model accuracy.The two strategies work together to ensure user privacy while the effect of differential privacy on model accuracy is reduced.To evaluate the effectiveness of our method,we conducted extensive experiments on benchmark datasets,and the results showed that our proposed method performed well in terms of performance and privacy protection.
基金supported by the National Natural Science Foundation of China(Grant No.62462022)the Hainan Province Science and Technology Special Fund(Grants No.ZDYF2022GXJS229).
文摘Deep learning’s widespread dependence on large datasets raises privacy concerns due to the potential presence of sensitive information.Differential privacy stands out as a crucial method for preserving privacy,garnering significant interest for its ability to offer robust and verifiable privacy safeguards during data training.However,classic differentially private learning introduces the same level of noise into the gradients across training iterations,which affects the trade-off between model utility and privacy guarantees.To address this issue,an adaptive differential privacy mechanism was proposed in this paper,which dynamically adjusts the privacy budget at the layer-level as training progresses to resist member inference attacks.Specifically,an equal privacy budget is initially allocated to each layer.Subsequently,as training advances,the privacy budget for layers closer to the output is reduced(adding more noise),while the budget for layers closer to the input is increased.The adjustment magnitude depends on the training iterations and is automatically determined based on the iteration count.This dynamic allocation provides a simple process for adjusting privacy budgets,alleviating the burden on users to tweak parameters and ensuring that privacy preservation strategies align with training progress.Extensive experiments on five well-known datasets indicate that the proposed method outperforms competing methods in terms of accuracy and resilience against membership inference attacks.
文摘With the rapid development of information technology and the continuous evolution of personalized ser- vices, huge amounts of data are accumulated by large internet companies in the process of serving users. Moreover, dynamic data interactions increase the intentional/unintentional persistence of private infor- mation in different information systems. However, problems such as the cask principle of preserving pri- vate information among different information systems and the dif culty of tracing the source of privacy violations are becoming increasingly serious. Therefore, existing privacy-preserving schemes cannot pro- vide systematic privacy preservation. In this paper, we examine the links of the information life-cycle, such as information collection, storage, processing, distribution, and destruction. We then propose a the- ory of privacy computing and a key technology system that includes a privacy computing framework, a formal de nition of privacy computing, four principles that should be followed in privacy computing, ffect algorithm design criteria, evaluation of the privacy-preserving effect, and a privacy computing language. Finally, we employ four application scenarios to describe the universal application of privacy computing, and discuss the prospect of future research trends. This work is expected to guide theoretical research on user privacy preservation within open environments.
基金supported by Systematic Major Project of Shuohuang Railway Development Co.,Ltd.,National Energy Group(Grant Number:SHTL-23-31)Beijing Natural Science Foundation(U22B2027).
文摘In the realm of Intelligent Railway Transportation Systems,effective multi-party collaboration is crucial due to concerns over privacy and data silos.Vertical Federated Learning(VFL)has emerged as a promising approach to facilitate such collaboration,allowing diverse entities to collectively enhance machine learning models without the need to share sensitive training data.However,existing works have highlighted VFL’s susceptibility to privacy inference attacks,where an honest but curious server could potentially reconstruct a client’s raw data from embeddings uploaded by the client.This vulnerability poses a significant threat to VFL-based intelligent railway transportation systems.In this paper,we introduce SensFL,a novel privacy-enhancing method to against privacy inference attacks in VFL.Specifically,SensFL integrates regularization of the sensitivity of embeddings to the original data into the model training process,effectively limiting the information contained in shared embeddings.By reducing the sensitivity of embeddings to the original data,SensFL can effectively resist reverse privacy attacks and prevent the reconstruction of the original data from the embeddings.Extensive experiments were conducted on four distinct datasets and three different models to demonstrate the efficacy of SensFL.Experiment results show that SensFL can effectively mitigate privacy inference attacks while maintaining the accuracy of the primary learning task.These results underscore SensFL’s potential to advance privacy protection technologies within VFL-based intelligent railway systems,addressing critical security concerns in collaborative learning environments.
基金supported by the National Key R&D Program of China under Grant No.2022YFB3103500the National Natural Science Foundation of China under Grants No.62402087 and No.62020106013+3 种基金the Sichuan Science and Technology Program under Grant No.2023ZYD0142the Chengdu Science and Technology Program under Grant No.2023-XT00-00002-GXthe Fundamental Research Funds for Chinese Central Universities under Grants No.ZYGX2020ZB027 and No.Y030232063003002the Postdoctoral Innovation Talents Support Program under Grant No.BX20230060.
文摘The integration of artificial intelligence(AI)technology,particularly large language models(LLMs),has become essential across various sectors due to their advanced language comprehension and generation capabilities.Despite their transformative impact in fields such as machine translation and intelligent dialogue systems,LLMs face significant challenges.These challenges include safety,security,and privacy concerns that undermine their trustworthiness and effectiveness,such as hallucinations,backdoor attacks,and privacy leakage.Previous works often conflated safety issues with security concerns.In contrast,our study provides clearer and more reasonable definitions for safety,security,and privacy within the context of LLMs.Building on these definitions,we provide a comprehensive overview of the vulnerabilities and defense mechanisms related to safety,security,and privacy in LLMs.Additionally,we explore the unique research challenges posed by LLMs and suggest potential avenues for future research,aiming to enhance the robustness and reliability of LLMs in the face of emerging threats.
基金supported by the National Key R&D Program of China(No.2023YFB2703700)the National Natural Science Foundation of China(Nos.U21A20465,62302457,62402444,62172292)+4 种基金the Fundamental Research Funds of Zhejiang Sci-Tech University(Nos.23222092-Y,22222266-Y)the Program for Leading Innovative Research Team of Zhejiang Province(No.2023R01001)the Zhejiang Provincial Natural Science Foundation of China(Nos.LQ24F020008,LQ24F020012)the Foundation of State Key Laboratory of Public Big Data(No.[2022]417)the“Pioneer”and“Leading Goose”R&D Program of Zhejiang(No.2023C01119).
文摘As smart grid technology rapidly advances,the vast amount of user data collected by smart meter presents significant challenges in data security and privacy protection.Current research emphasizes data security and user privacy concerns within smart grids.However,existing methods struggle with efficiency and security when processing large-scale data.Balancing efficient data processing with stringent privacy protection during data aggregation in smart grids remains an urgent challenge.This paper proposes an AI-based multi-type data aggregation method designed to enhance aggregation efficiency and security by standardizing and normalizing various data modalities.The approach optimizes data preprocessing,integrates Long Short-Term Memory(LSTM)networks for handling time-series data,and employs homomorphic encryption to safeguard user privacy.It also explores the application of Boneh Lynn Shacham(BLS)signatures for user authentication.The proposed scheme’s efficiency,security,and privacy protection capabilities are validated through rigorous security proofs and experimental analysis.
基金supported by the International Scientific and Technological Cooperation Project of Huangpu and Development Districts in Guangzhou(2023GH17)the National Science and Technology Council in Taiwan under grant number NSTC-113-2224-E-027-001,Private Funding(PV009-2023)the KW IPPP(Research Maintenance Fee)Individual/Centre/Group(RMF1506-2021)at Universiti Malaya,Malaysia.
文摘Blockchain interoperability enables seamless communication and asset transfer across isolated permissioned blockchain systems,but it introduces significant security and privacy vulnerabilities.This review aims to systematically assess the security and privacy landscape of interoperability protocols for permissioned blockchains,identifying key properties,attack vectors,and countermeasures.Using PRISMA 2020 guidelines,we analysed 56 peerreviewed studies published between 2020 and 2025,retrieved from Scopus,ScienceDirect,Web of Science,and IEEE Xplore.The review focused on interoperability protocols for permissioned blockchains with security and privacy analyses,including only English-language journal articles and conference proceedings.Risk of bias in the included studies was assessed using the MMAT.Methods for presenting and synthesizing results included descriptive analysis,bibliometric analysis,and content analysis,with findings organized into tables,charts,and comparative summaries.The review classifies interoperability protocols into relay,sidechain,notary scheme,HTLC,and hybrid types and identifies 18 security and privacy properties along with 31 known attack types.Relay-based protocols showed the broadest security coverage,while HTLC and notary schemes demonstrated significant security gaps.Notably,93% of studies examined fewer than four properties or attack types,indicating a fragmented research landscape.The review identifies underexplored areas such as ACID properties,decentralization,and cross-chain attack resilience.It further highlights effective countermeasures,including cryptographic techniques,trusted execution environments,zero-knowledge proofs,and decentralized identity schemes.The findings suggest that despite growing adoption,current interoperability protocols lack comprehensive security evaluations.More holistic research is needed to ensure the resilience,trustworthiness,and scalability of cross-chain operations in permissioned blockchain ecosystems.
文摘In the domain of Electronic Medical Records(EMRs),emerging technologies are crucial to addressing longstanding concerns surrounding transaction security and patient privacy.This paper explores the integration of smart contracts and blockchain technology as a robust framework for securing sensitive healthcare data.By leveraging the decentralized and immutable nature of blockchain,the proposed approach ensures transparency,integrity,and traceability of EMR transactions,effectivelymitigating risks of unauthorized access and data tampering.Smart contracts further enhance this framework by enabling the automation and enforcement of secure transactions,eliminating reliance on intermediaries and reducing the potential for human error.This integration marks a paradigm shift in management and exchange of healthcare information,fostering a secure and privacy-preserving ecosystem for all stakeholders.The research also evaluates the practical implementation of blockchain and smart contracts within healthcare systems,examining their real-world effectiveness in enhancing transactional security,safeguarding patient privacy,and maintaining data integrity.Findings from the study contribute valuable insights to the growing body of work on digital healthcare innovation,underscoring the potential of these technologies to transform EMR systems with high accuracy and precision.As global healthcare systems continue to face the challenge of protecting sensitive patient data,the proposed framework offers a forward-looking,scalable,and effective solution aligned with the evolving digital healthcare landscape.
文摘With the proliferation of online services and applications,adopting Single Sign-On(SSO)mechanisms has become increasingly prevalent.SSO enables users to authenticate once and gain access to multiple services,eliminating the need to provide their credentials repeatedly.However,this convenience raises concerns about user security and privacy.The increasing reliance on SSO and its potential risks make it imperative to comprehensively review the various SSO security and privacy threats,identify gaps in existing systems,and explore effective mitigation solutions.This need motivated the first systematic literature review(SLR)of SSO security and privacy,conducted in this paper.The SLR is performed based on rigorous structured research methodology with specific inclusion/exclusion criteria and focuses specifically on the Web environment.Furthermore,it encompasses a meticulous examination and thematic synthesis of 88 relevant publications selected out of 2315 journal articles and conference/proceeding papers published between 2017 and 2024 from reputable academic databases.The SLR highlights critical security and privacy threats relating to SSO systems,reveals significant gaps in existing countermeasures,and emphasizes the need for more comprehensive protection mechanisms.The findings of this SLR will serve as an invaluable resource for scientists and developers interested in enhancing the security and privacy preservation of SSO and designing more efficient and robust SSO systems,thus contributing to the development of the authentication technologies field.
基金supported by National Key R&D Program of China(2023YFB3106100)National Natural Science Foundation of China(62102452,62172436)Natural Science Foundation of Shaanxi Province(2023-JCYB-584).
文摘Distributed data fusion is essential for numerous applications,yet faces significant privacy security challenges.Federated learning(FL),as a distributed machine learning paradigm,offers enhanced data privacy protection and has attracted widespread attention.Consequently,research increasingly focuses on developing more secure FL techniques.However,in real-world scenarios involving malicious entities,the accuracy of FL results is often compromised,particularly due to the threat of collusion between two servers.To address this challenge,this paper proposes an efficient and verifiable data aggregation protocol with enhanced privacy protection.After analyzing attack methods against prior schemes,we implement key improvements.Specifically,by incorporating cascaded random numbers and perturbation terms into gradients,we strengthen the privacy protection afforded by polynomial masking,effectively preventing information leakage.Furthermore,our protocol features an enhanced verification mechanism capable of detecting collusive behaviors between two servers.Accuracy testing on the MNIST and CIFAR-10 datasets demonstrates that our protocol maintains accuracy comparable to the Federated Averaging Algorithm.In scheme efficiency comparisons,while incurring only a marginal increase in verification overhead relative to the baseline scheme,our protocol achieves an average improvement of 93.13% in privacy protection and verification overhead compared to the state-of-the-art scheme.This result highlights its optimal balance between overall overhead and functionality.A current limitation is that the verificationmechanismcannot precisely pinpoint the source of anomalies within aggregated results when server-side malicious behavior occurs.Addressing this limitation will be a focus of future research.
基金funded by the State Grid Corporation Science and Technology Project“Research and Application of Key Technologies for Integrated Sensing and Computing for Intelligent Operation of Power Grid”(Grant No.5700-202318596A-3-2-ZN).
文摘With the deep integration of edge computing,5G and Artificial Intelligence ofThings(AIoT)technologies,the large-scale deployment of intelligent terminal devices has given rise to data silos and privacy security challenges in sensing-computing fusion scenarios.Traditional federated learning(FL)algorithms face significant limitations in practical applications due to client drift,model bias,and resource constraints under non-independent and identically distributed(Non-IID)data,as well as the computational overhead and utility loss caused by privacy-preserving techniques.To address these issues,this paper proposes an Efficient and Privacy-enhancing Clustering Federated Learning method(FedEPC).This method introduces a dual-round client selection mechanism to optimize training.First,the Sparsity-based Privacy-preserving Representation Extraction Module(SPRE)and Adaptive Isomorphic Devices Clustering Module(AIDC)cluster clients based on privacy-sensitive features.Second,the Context-aware Incluster Client Selection Module(CICS)dynamically selects representative devices for training,ensuring heterogeneous data distributions are fully represented.By conducting federated training within clusters and aggregating personalized models,FedEPC effectively mitigates weight divergence caused by data heterogeneity,reduces the impact of client drift and straggler issues.Experimental results demonstrate that FedEPC significantly improves test accuracy in highly Non-IID data scenarios compared to FedAvg and existing clustering FL methods.By ensuring privacy security,FedEPC provides an efficient and robust solution for FL in resource-constrained devices within sensing-computing fusion scenarios,offering both theoretical value and engineering practicality.
文摘The processing of personal data gives a rise to many privacy concerns,and one of them is to ensure the transpar-ency of data processing to end users.Usually this information is communicated to them using privacy policies.In this paper,the problem of user notifcation in case of data breaches and policy changes is addressed,besides an ontol-ogy-based approach to model them is proposed.To specify the ontology concepts and properties,the requirements and recommendations for the legislative regulations as well as existing privacy policies are evaluated.A set of SPARQL queries to validate the correctness and completeness of the proposed ontology are developed.The proposed approach is applied to evaluate the privacy policies designed by cloud computing providers and IoT device manu-facturers.The results of the analysis show that the transparency of user notifcation scenarios presented in the privacy policies is still very low,and the companies should reconsider the notifcation mechanisms and provide more detailed information in privacy policies.
基金supported by the Institute of Information&Communications Technology Planning&Evaluation(IITP)(Project Nos.RS-2024-00438551,30%,2022-11220701,30%,2021-0-01816,30%)the National Research Foundation of Korea(NRF)grant funded by the Korean Government(Project No.RS-2023-00208460,10%).
文摘Theproliferation of Internet of Things(IoT)devices introduces substantial security challenges.Currently,privacy constitutes a significant concern for individuals.While maintaining privacy within these systems is an essential characteristic,it often necessitates certain compromises,such as complexity and scalability,thereby complicating management efforts.The principal challenge lies in ensuring confidentiality while simultaneously preserving individuals’anonymity within the system.To address this,we present our proposed architecture for managing IoT devices using blockchain technology.Our proposed architecture works on and off blockchain and is integrated with dashcams and closed-circuit television(CCTV)security cameras.In this work,the videos recorded by the dashcams and CCTV security cameras are hashed through the InterPlanetary File System(IPFS)and this hash is stored in the blockchain.When the accessors want to access the video,they must pass through multiple authentications which include web token authentication and verifiable credentials,to mitigate the risk of malicious users.Our contributions include the proposition of the framework,which works on the single key for every new video,and a novel chaincode algorithm that incorporates verifiable credentials.Analyses are made to show the system’s throughput and latency through stress testing.Significant advantages of the proposed architecture are shown by comparing them to existing schemes.The proposed architecture features a robust design that significantly enhances the security of blockchain-enabled Internet of Things(IoT)deviceswhile effectively mitigating the risk of a single point of failure,which provides a reliable solution for security concerns in the IoT landscape.Our future endeavors will focus on scaling the system by integrating innovative methods to enhance security measures further.
基金supported by the National Key R&D Program of China (2021YFB2700200)the National Natural Science Foundation of China (U21B2021,61932014,61972018,62202027)+2 种基金Young Elite Scientists Sponsorship Program by CAST (2022QNRC001)Beijing Natural Science Foundation (M23016)Yunnan Key Laboratory of Blockchain Application Technology Open Project (202105AG070005,YNB202206)。
文摘Metaverse is a new emerging concept building up a virtual environment for the user using Virtual Reality(VR)and blockchain technology but introduces privacy risks.Now,a series of challenges arise in Metaverse security,including massive data traffic breaches,large-scale user tracking,analysis activities,unreliable Artificial Intelligence(AI)analysis results,and social engineering security for people.In this work,we concentrate on Decentraland and Sandbox,two well-known Metaverse applications in Web 3.0.Our experiments analyze,for the first time,the personal privacy data exposed by Metaverse applications and services from a combined perspective of network traffic and privacy policy.We develop a lightweight traffic processing approach suitable for the Web 3.0 environment,which does not rely on complex decryption or reverse engineering techniques.We propose a smart contract interaction traffic analysis method capable of retrieving user interactions with Metaverse applications and blockchain smart contracts.This method provides a new approach to de-anonymizing users'identities through Metaverse applications.Our system,METAseen,analyzes and compares network traffic with the privacy policies of Metaverse applications to identify controversial data collection practices.The consistency check experiment reveals that the data types exposed by Metaverse applications include Personal Identifiable Information(PII),device information,and Metaverse-related data.By comparing the data flows observed in the network traffic with assertions made in the privacy regulations of the Metaverse service provider,we discovered that far more than 49%of the Metaverse data flows needed to be disclosed appropriately.
基金CAMS Innovation Fund for Medical Sciences(CIFMS):“Construction of an Intelligent Management and Efficient Utilization Technology System for Big Data in Population Health Science.”(2021-I2M-1-057)Key Projects of the Innovation Fund of the National Clinical Research Center for Orthopedics and Sports Rehabilitation:“National Orthopedics and Sports Rehabilitation Real-World Research Platform System Construction”(23-NCRC-CXJJ-ZD4)。
文摘【Objective】Medical imaging data has great value,but it contains a significant amount of sensitive information about patients.At present,laws and regulations regarding to the de-identification of medical imaging data are not clearly defined around the world.This study aims to develop a tool that meets compliance-driven desensitization requirements tailored to diverse research needs.【Methods】To enhance the security of medical image data,we designed and implemented a DICOM format medical image de-identification system on the Windows operating system.【Results】Our custom de-identification system is adaptable to the legal standards of different countries and can accommodate specific research demands.The system offers both web-based online and desktop offline de-identification capabilities,enabling customization of de-identification rules and facilitating batch processing to improve efficiency.【Conclusions】This medical image de-identification system robustly strengthens the stewardship of sensitive medical data,aligning with data security protection requirements while facilitating the sharing and utilization of medical image data.This approach unlocks the intrinsic value inherent in such datasets.