In the realm of Intelligent Railway Transportation Systems, effective multi-party collaboration is crucial due to concerns over privacy and data silos. Vertical Federated Learning (VFL) has emerged as a promising approach to facilitate such collaboration, allowing diverse entities to collectively enhance machine learning models without the need to share sensitive training data. However, existing works have highlighted VFL's susceptibility to privacy inference attacks, where an honest-but-curious server could potentially reconstruct a client's raw data from embeddings uploaded by the client. This vulnerability poses a significant threat to VFL-based intelligent railway transportation systems. In this paper, we introduce SensFL, a novel privacy-enhancing method to defend against privacy inference attacks in VFL. Specifically, SensFL integrates regularization of the sensitivity of embeddings to the original data into the model training process, effectively limiting the information contained in shared embeddings. By reducing the sensitivity of embeddings to the original data, SensFL can effectively resist reverse privacy attacks and prevent the reconstruction of the original data from the embeddings. Extensive experiments were conducted on four distinct datasets and three different models to demonstrate the efficacy of SensFL. Experimental results show that SensFL can effectively mitigate privacy inference attacks while maintaining the accuracy of the primary learning task. These results underscore SensFL's potential to advance privacy protection technologies within VFL-based intelligent railway systems, addressing critical security concerns in collaborative learning environments.
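To make the sensitivity-regularization idea concrete, below is a minimal PyTorch sketch, not the authors' implementation: the penalty estimates how strongly the client-side embedding reacts to input perturbations via a Jacobian-vector product along a random probe direction, and the weighting factor `lam` is an assumed hyperparameter.

```python
# A minimal sketch of sensitivity regularization in the spirit of SensFL.
# The penalty form, the random-probe Jacobian estimate, and `lam` are
# illustrative assumptions, not the paper's exact formulation.
import torch

def sensitivity_penalty(encoder, x):
    """Penalize the sensitivity of embeddings to input perturbations,
    estimated with a Jacobian-vector product along a random direction."""
    x = x.clone().requires_grad_(True)
    z = encoder(x)                       # client-side embedding
    v = torch.randn_like(z)              # random probe direction
    (g,) = torch.autograd.grad((z * v).sum(), x, create_graph=True)
    return g.pow(2).sum(dim=tuple(range(1, g.dim()))).mean()

def train_step(encoder, head, opt, x, y, lam=0.1):
    opt.zero_grad()
    z = encoder(x)
    task_loss = torch.nn.functional.cross_entropy(head(z), y)
    loss = task_loss + lam * sensitivity_penalty(encoder, x)  # joint objective
    loss.backward()
    opt.step()
    return loss.item()
```

Because the penalty is differentiable (`create_graph=True`), it trains jointly with the task loss, trading a small amount of task accuracy for embeddings that reveal less about the raw input.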
The integration of artificial intelligence (AI) technology, particularly large language models (LLMs), has become essential across various sectors due to their advanced language comprehension and generation capabilities. Despite their transformative impact in fields such as machine translation and intelligent dialogue systems, LLMs face significant challenges. These challenges include safety, security, and privacy concerns that undermine their trustworthiness and effectiveness, such as hallucinations, backdoor attacks, and privacy leakage. Previous works often conflated safety issues with security concerns. In contrast, our study provides clearer and more reasonable definitions for safety, security, and privacy within the context of LLMs. Building on these definitions, we provide a comprehensive overview of the vulnerabilities and defense mechanisms related to safety, security, and privacy in LLMs. Additionally, we explore the unique research challenges posed by LLMs and suggest potential avenues for future research, aiming to enhance the robustness and reliability of LLMs in the face of emerging threats.
As smart grid technology rapidly advances, the vast amount of user data collected by smart meters presents significant challenges in data security and privacy protection. Current research emphasizes data security and user privacy concerns within smart grids. However, existing methods struggle with efficiency and security when processing large-scale data. Balancing efficient data processing with stringent privacy protection during data aggregation in smart grids remains an urgent challenge. This paper proposes an AI-based multi-type data aggregation method designed to enhance aggregation efficiency and security by standardizing and normalizing various data modalities. The approach optimizes data preprocessing, integrates Long Short-Term Memory (LSTM) networks for handling time-series data, and employs homomorphic encryption to safeguard user privacy. It also explores the application of Boneh-Lynn-Shacham (BLS) signatures for user authentication. The proposed scheme's efficiency, security, and privacy protection capabilities are validated through rigorous security proofs and experimental analysis.
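As an illustration of how additively homomorphic encryption enables private aggregation of meter readings, here is a minimal sketch using the `phe` (python-paillier) library as a stand-in; the paper's actual scheme, key management, and BLS-based authentication are not reproduced.

```python
# A minimal sketch of additively homomorphic aggregation of meter readings.
# Paillier via the `phe` library is an illustrative stand-in for the paper's
# encryption scheme; key distribution and authentication are omitted.
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair(n_length=2048)

# Each meter encrypts its reading locally; only ciphertexts leave the device.
readings = [31, 47, 12, 58]  # hypothetical consumption values
ciphertexts = [public_key.encrypt(r) for r in readings]

# The aggregator sums ciphertexts without learning any individual reading.
encrypted_total = sum(ciphertexts[1:], ciphertexts[0])

# Only the authority holding the private key can decrypt the aggregate.
assert private_key.decrypt(encrypted_total) == sum(readings)
```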
The rapid adoption of machine learning in sensitive domains, such as healthcare, finance, and government services, has heightened the need for robust, privacy-preserving techniques. Traditional machine learning approaches lack built-in privacy mechanisms, exposing sensitive data to risks, which motivates the development of Privacy-Preserving Machine Learning (PPML) methods. Despite significant advances in PPML, a comprehensive and focused exploration of Secure Multi-Party Computing (SMPC) within this context remains underdeveloped. This review aims to bridge this knowledge gap by systematically analyzing the role of SMPC in PPML, offering a structured overview of current techniques, challenges, and future directions. Using a semi-systematic mapping study methodology, this paper surveys recent literature spanning SMPC protocols, PPML frameworks, implementation approaches, threat models, and performance metrics. Emphasis is placed on identifying trends, technical limitations, and comparative strengths of leading SMPC-based methods. Our findings reveal that while SMPC offers strong cryptographic guarantees for privacy, challenges such as computational overhead, communication costs, and scalability persist. The paper also discusses critical vulnerabilities, practical deployment issues, and variations in protocol efficiency across use cases.
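For readers unfamiliar with the primitive underlying many of the surveyed protocols, the following is a minimal additive secret-sharing sketch, one of the basic SMPC building blocks; the modulus and party count are illustrative choices.

```python
# A minimal additive secret-sharing sketch: a basic SMPC building block.
# The prime modulus Q and the three-party setup are illustrative.
import secrets

Q = 2**61 - 1  # prime modulus for the share arithmetic

def share(secret, n_parties=3):
    """Split a secret into n additive shares that sum to it modulo Q."""
    shares = [secrets.randbelow(Q) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % Q)
    return shares

def reconstruct(shares):
    return sum(shares) % Q

# Two parties' inputs can be summed share-wise without revealing either input.
a_shares, b_shares = share(123), share(456)
sum_shares = [(a + b) % Q for a, b in zip(a_shares, b_shares)]
assert reconstruct(sum_shares) == 123 + 456
```

No single share reveals anything about the secret; only the combination of all shares reconstructs it, which is what lets computation proceed on shares held by mutually distrusting parties.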
Federated learning effectively alleviates privacy and security issues raised by the development of artificial intelligence through a distributed training architecture. Existing research has shown that attackers can compromise user privacy and security by stealing model parameters. Therefore, differential privacy is applied in federated learning to further address malicious issues. However, the addition of noise and the update clipping mechanism in differential privacy jointly limit the further development of federated learning in privacy protection and performance optimization. Therefore, we propose an adaptively adjusted differential privacy federated learning method. First, a dynamic adaptive privacy budget allocation strategy is proposed, which flexibly adjusts the privacy budget within a given range based on the client's data volume and training requirements, thereby alleviating the loss of privacy budget and the magnitude of model noise. Second, a longitudinal clipping differential privacy strategy is proposed, which, based on the differences in the factors that affect parameter updates, uses sparse methods to trim local updates, thereby reducing the impact of privacy pruning steps on model accuracy. The two strategies work together to ensure user privacy while reducing the effect of differential privacy on model accuracy. To evaluate the effectiveness of our method, we conducted extensive experiments on benchmark datasets, and the results showed that our proposed method performed well in terms of both performance and privacy protection.
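To illustrate what a data-volume-aware budget allocation could look like, here is a minimal sketch; the bounds `eps_min`/`eps_max` and the linear scaling rule are illustrative assumptions, not the paper's exact strategy, and the clipping component is not shown.

```python
# A minimal sketch of per-client privacy budget allocation within a fixed
# range, scaled by data volume. Bounds and scaling rule are assumptions.
def allocate_budgets(data_volumes, eps_min=0.5, eps_max=4.0):
    """Assign each client a per-round epsilon inside [eps_min, eps_max],
    giving clients with more data a larger share of the budget."""
    biggest = max(data_volumes)
    return [eps_min + (eps_max - eps_min) * v / biggest for v in data_volumes]

budgets = allocate_budgets([1200, 300, 800])
print(budgets)  # larger dataset -> larger epsilon -> less injected noise
```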
Deep learning's widespread dependence on large datasets raises privacy concerns due to the potential presence of sensitive information. Differential privacy stands out as a crucial method for preserving privacy, garnering significant interest for its ability to offer robust and verifiable privacy safeguards during data training. However, classic differentially private learning introduces the same level of noise into the gradients across training iterations, which affects the trade-off between model utility and privacy guarantees. To address this issue, an adaptive differential privacy mechanism is proposed in this paper, which dynamically adjusts the privacy budget at the layer level as training progresses to resist membership inference attacks. Specifically, an equal privacy budget is initially allocated to each layer. Subsequently, as training advances, the privacy budget for layers closer to the output is reduced (adding more noise), while the budget for layers closer to the input is increased. The adjustment magnitude depends on the training iterations and is automatically determined based on the iteration count. This dynamic allocation provides a simple process for adjusting privacy budgets, alleviating the burden on users to tweak parameters and ensuring that privacy preservation strategies align with training progress. Extensive experiments on five well-known datasets indicate that the proposed method outperforms competing methods in terms of accuracy and resilience against membership inference attacks.
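The following is a minimal sketch of such a layer-level schedule: as training proceeds, layers near the output receive a smaller epsilon (more noise) and layers near the input a larger one, with the total budget held fixed. The linear tilt and the Gaussian-mechanism sigma formula are illustrative assumptions, not the paper's exact rule.

```python
# A minimal sketch of iteration-driven, layer-level budget scheduling.
# The linear tilt and the classic Gaussian-mechanism sigma are assumptions.
import math

def layer_budgets(eps_total, n_layers, step, total_steps):
    progress = step / total_steps                        # 0 at start, 1 at end
    tilt = [1 + progress * (1 - 2 * i / (n_layers - 1))  # >1 near input, <1 near output
            for i in range(n_layers)]
    scale = eps_total / sum(tilt)
    return [t * scale for t in tilt]                     # per-layer epsilons, summing to eps_total

def gaussian_sigma(eps, delta=1e-5, clip_norm=1.0):
    """Noise scale of the standard Gaussian mechanism for a clipped gradient."""
    return clip_norm * math.sqrt(2 * math.log(1.25 / delta)) / eps

eps_per_layer = layer_budgets(eps_total=8.0, n_layers=4, step=50, total_steps=100)
sigmas = [gaussian_sigma(e) for e in eps_per_layer]  # more noise near the output
```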
To realize dynamic statistical publishing and protection of location-based data privacy, this paper proposes a differential privacy publishing algorithm based on adaptive sampling and grid clustering and adjustment. The PID control strategy is combined with the difference in data variation to realize dynamic adjustment of the data publishing intervals. The spatial-temporal correlations of adjacent snapshots are utilized to design the grid clustering and adjustment algorithm, which reduces the execution time of the publishing process. The budget distribution and budget absorption strategies are improved to form a sliding-window-based differential privacy statistical publishing algorithm, which realizes continuous statistical publishing and privacy protection and improves the accuracy of published data. Experiments and analysis on large datasets of actual locations show that the privacy protection algorithm proposed in this paper is superior to other existing algorithms in terms of the accuracy of adaptive sampling time, the availability of published data, and the execution efficiency of data publishing methods.
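To show how PID control can drive the publishing interval, here is a minimal sketch; the gains and the error definition (observed data change versus a target change threshold) are illustrative assumptions rather than the paper's tuned controller.

```python
# A minimal PID sketch for adapting the publishing interval to data variation.
# Gains, target threshold, and the clamping rule are illustrative assumptions.
class PIDIntervalController:
    def __init__(self, kp=0.8, ki=0.1, kd=0.2, target_change=0.05):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.target = target_change
        self.integral = 0.0
        self.prev_error = 0.0

    def next_interval(self, observed_change, current_interval):
        error = observed_change - self.target   # positive -> data changing fast
        self.integral += error
        derivative = error - self.prev_error
        self.prev_error = error
        adjustment = self.kp * error + self.ki * self.integral + self.kd * derivative
        # Faster change -> shorter interval; clamp to keep the interval sane.
        return max(1.0, current_interval - adjustment)

pid = PIDIntervalController()
interval = pid.next_interval(observed_change=0.12, current_interval=10.0)
```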
The widespread application of artificial intelligence (AI) technology in exams has significantly improved the efficiency and fairness of exams; it has also brought challenges of ethics and privacy protection. The article analyzes the fairness, transparency, and privacy protection issues caused by AI in exams and proposes strategic solutions. This article aims to provide guidance for the rational application of AI technology in exams, ensuring a balance between technological progress and ethical protection by strengthening laws and regulations, enhancing technological transparency, strengthening candidates' privacy rights, and improving the management measures of educational examination institutions.
Blockchain interoperability enables seamless communication and asset transfer across isolated permissioned blockchain systems, but it introduces significant security and privacy vulnerabilities. This review aims to systematically assess the security and privacy landscape of interoperability protocols for permissioned blockchains, identifying key properties, attack vectors, and countermeasures. Using PRISMA 2020 guidelines, we analysed 56 peer-reviewed studies published between 2020 and 2025, retrieved from Scopus, ScienceDirect, Web of Science, and IEEE Xplore. The review focused on interoperability protocols for permissioned blockchains with security and privacy analyses, including only English-language journal articles and conference proceedings. Risk of bias in the included studies was assessed using the Mixed Methods Appraisal Tool (MMAT). Methods for presenting and synthesizing results included descriptive analysis, bibliometric analysis, and content analysis, with findings organized into tables, charts, and comparative summaries. The review classifies interoperability protocols into relay, sidechain, notary scheme, HTLC, and hybrid types and identifies 18 security and privacy properties along with 31 known attack types. Relay-based protocols showed the broadest security coverage, while HTLC and notary schemes demonstrated significant security gaps. Notably, 93% of studies examined fewer than four properties or attack types, indicating a fragmented research landscape. The review identifies underexplored areas such as ACID properties, decentralization, and cross-chain attack resilience. It further highlights effective countermeasures, including cryptographic techniques, trusted execution environments, zero-knowledge proofs, and decentralized identity schemes. The findings suggest that despite growing adoption, current interoperability protocols lack comprehensive security evaluations. More holistic research is needed to ensure the resilience, trustworthiness, and scalability of cross-chain operations in permissioned blockchain ecosystems.
As the 5G architecture gains momentum, interest in 6G is growing. The proliferation of Internet of Things (IoT) devices, capable of capturing sensitive images, has increased the need for secure transmission and robust access control mechanisms. The vast amount of data generated by low-computing devices poses a challenge to traditional centralized access control, which relies on trusted third parties and complex computations, resulting in intricate interactions, higher hardware costs, and processing delays. To address these issues, this paper introduces a novel distributed access control approach that integrates a decentralized and lightweight encryption mechanism with image transmission. This method enhances data security and resource efficiency without imposing heavy computational and network burdens. In comparison to the best existing approach, it achieves a 7% improvement in accuracy, effectively addressing existing gaps in lightweight encryption and recognition performance.
With the deep integration of edge computing, 5G, and Artificial Intelligence of Things (AIoT) technologies, the large-scale deployment of intelligent terminal devices has given rise to data silos and privacy security challenges in sensing-computing fusion scenarios. Traditional federated learning (FL) algorithms face significant limitations in practical applications due to client drift, model bias, and resource constraints under non-independent and identically distributed (Non-IID) data, as well as the computational overhead and utility loss caused by privacy-preserving techniques. To address these issues, this paper proposes an Efficient and Privacy-enhancing Clustering Federated Learning method (FedEPC). This method introduces a dual-round client selection mechanism to optimize training. First, the Sparsity-based Privacy-preserving Representation Extraction Module (SPRE) and the Adaptive Isomorphic Devices Clustering Module (AIDC) cluster clients based on privacy-sensitive features. Second, the Context-aware In-cluster Client Selection Module (CICS) dynamically selects representative devices for training, ensuring heterogeneous data distributions are fully represented. By conducting federated training within clusters and aggregating personalized models, FedEPC effectively mitigates the weight divergence caused by data heterogeneity and reduces the impact of client drift and straggler issues. Experimental results demonstrate that FedEPC significantly improves test accuracy in highly Non-IID data scenarios compared to FedAvg and existing clustering FL methods. By ensuring privacy security, FedEPC provides an efficient and robust solution for FL on resource-constrained devices in sensing-computing fusion scenarios, offering both theoretical value and engineering practicality.
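To illustrate the shape of a dual-round cluster-then-select mechanism, here is a minimal sketch; the representation vectors, the KMeans choice, and the one-client-per-cluster rule are illustrative stand-ins for the SPRE/AIDC/CICS modules, not their actual algorithms.

```python
# A minimal sketch of cluster-then-select client sampling. The client
# representations and selection rule stand in for SPRE/AIDC/CICS.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
client_reprs = rng.normal(size=(20, 8))  # hypothetical per-client feature sketches

# Round 1: group clients with similar (privacy-preserving) representations.
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(client_reprs)

# Round 2: pick one representative client per cluster for this training round,
# so every data distribution is covered without training on all 20 clients.
selected = [int(rng.choice(np.flatnonzero(labels == c))) for c in range(4)]
print(selected)
```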
The rapid development and widespread adoption of massive open online courses (MOOCs) have had a significant impact on China's education curriculum. However, the problem of fake reviews and ratings on these platforms has seriously affected the authenticity of course evaluations and user trust, requiring effective anomaly detection techniques for screening. The textual characteristics of MOOC reviews, such as varying lengths and diverse emotional tendencies, bring complexity to text analysis. Traditional rule-based analysis methods are often inadequate in dealing with such unstructured data. We propose a Differential Privacy-Enabled Text Convolutional Neural Network (DP-TextCNN) framework, aiming to achieve high-precision identification of outliers in MOOC course reviews and ratings while protecting user privacy. This framework leverages the advantages of Convolutional Neural Networks (CNNs) in text feature extraction and combines them with differential privacy techniques. It balances data privacy protection with model performance by introducing controlled random noise during the data preprocessing stage. By embedding differential privacy into the model training process, we ensure the privacy security of the framework when handling sensitive data, while maintaining high recognition accuracy. Experimental results indicate that the DP-TextCNN framework achieves an accuracy of over 95% in identifying fake reviews on the dataset. This outcome not only verifies the applicability of differential privacy techniques in TextCNN but also underscores their potential in handling sensitive educational data. Additionally, we analyze the specific impact of differential privacy parameters on framework performance, offering theoretical support and empirical analysis to strike an optimal balance between privacy protection and framework efficiency.
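As a concrete illustration of noise injection at the preprocessing stage, here is a minimal sketch that adds calibrated Laplace noise to clipped text-embedding vectors; treating each embedding with bounded L1 sensitivity is an illustrative assumption, not the framework's exact calibration.

```python
# A minimal sketch of Laplace noise injection into text features during
# preprocessing. The L1-clipping and sensitivity bound are assumptions.
import numpy as np

def privatize_embeddings(emb, epsilon=1.0, sensitivity=1.0, seed=None):
    """Add Laplace(sensitivity/epsilon) noise to L1-clipped embedding vectors."""
    rng = np.random.default_rng(seed)
    norms = np.maximum(np.abs(emb).sum(axis=-1, keepdims=True), 1e-12)
    clipped = emb * np.minimum(1.0, sensitivity / norms)  # bound L1 norm
    noise = rng.laplace(scale=sensitivity / epsilon, size=emb.shape)
    return clipped + noise

emb = np.random.randn(32, 100, 128)  # a batch of token-embedding matrices
noisy = privatize_embeddings(emb, epsilon=2.0)  # fed to the TextCNN afterwards
```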
The proliferation of Internet of Things (IoT) devices introduces substantial security challenges. Currently, privacy constitutes a significant concern for individuals. While maintaining privacy within these systems is an essential characteristic, it often necessitates certain compromises, such as complexity and scalability, thereby complicating management efforts. The principal challenge lies in ensuring confidentiality while simultaneously preserving individuals' anonymity within the system. To address this, we present our proposed architecture for managing IoT devices using blockchain technology. Our proposed architecture works both on and off the blockchain and is integrated with dashcams and closed-circuit television (CCTV) security cameras. In this work, the videos recorded by the dashcams and CCTV security cameras are hashed through the InterPlanetary File System (IPFS), and this hash is stored in the blockchain. When accessors want to access a video, they must pass through multiple authentications, which include web token authentication and verifiable credentials, to mitigate the risk of malicious users. Our contributions include the proposed framework, which uses a single key for every new video, and a novel chaincode algorithm that incorporates verifiable credentials. Analyses are made to show the system's throughput and latency through stress testing. Significant advantages of the proposed architecture are shown by comparing it to existing schemes. The proposed architecture features a robust design that significantly enhances the security of blockchain-enabled IoT devices while effectively mitigating the risk of a single point of failure, providing a reliable solution for security concerns in the IoT landscape. Our future endeavors will focus on scaling the system by integrating innovative methods to further enhance security measures.
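To make the record-then-verify flow concrete, here is a toy sketch in which a SHA-256 content hash stands in for the IPFS content identifier and an in-memory dict stands in for the ledger; the per-video key, token checks, and chaincode logic are placeholders, not the paper's implementation.

```python
# A toy sketch of the record/verify flow. SHA-256 stands in for the IPFS CID,
# a dict stands in for the blockchain ledger, and credential checks are omitted.
import hashlib
import secrets

ledger = {}  # video_id -> {"cid": ..., "key_id": ...}

def register_video(video_id, video_bytes):
    cid = hashlib.sha256(video_bytes).hexdigest()  # stand-in for the IPFS CID
    key_id = secrets.token_hex(16)                 # fresh key per new video
    ledger[video_id] = {"cid": cid, "key_id": key_id}
    return cid

def verify_video(video_id, video_bytes):
    """Check that a presented video matches the hash recorded on-chain."""
    record = ledger.get(video_id)
    return (record is not None and
            record["cid"] == hashlib.sha256(video_bytes).hexdigest())

register_video("cam42-2024-06-01", b"...raw video bytes...")
assert verify_video("cam42-2024-06-01", b"...raw video bytes...")
```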
In the domain of Electronic Medical Records (EMRs), emerging technologies are crucial to addressing longstanding concerns surrounding transaction security and patient privacy. This paper explores the integration of smart contracts and blockchain technology as a robust framework for securing sensitive healthcare data. By leveraging the decentralized and immutable nature of blockchain, the proposed approach ensures transparency, integrity, and traceability of EMR transactions, effectively mitigating the risks of unauthorized access and data tampering. Smart contracts further enhance this framework by enabling the automation and enforcement of secure transactions, eliminating reliance on intermediaries and reducing the potential for human error. This integration marks a paradigm shift in the management and exchange of healthcare information, fostering a secure and privacy-preserving ecosystem for all stakeholders. The research also evaluates the practical implementation of blockchain and smart contracts within healthcare systems, examining their real-world effectiveness in enhancing transactional security, safeguarding patient privacy, and maintaining data integrity. Findings from the study contribute valuable insights to the growing body of work on digital healthcare innovation, underscoring the potential of these technologies to transform EMR systems with high accuracy and precision. As global healthcare systems continue to face the challenge of protecting sensitive patient data, the proposed framework offers a forward-looking, scalable, and effective solution aligned with the evolving digital healthcare landscape.
The Metaverse is an emerging concept that builds a virtual environment for users through Virtual Reality (VR) and blockchain technology, but it introduces privacy risks. A series of challenges now arise in Metaverse security, including massive data traffic breaches, large-scale user tracking and analysis activities, unreliable Artificial Intelligence (AI) analysis results, and social engineering threats to people. In this work, we concentrate on Decentraland and Sandbox, two well-known Metaverse applications in Web 3.0. Our experiments analyze, for the first time, the personal privacy data exposed by Metaverse applications and services from a combined perspective of network traffic and privacy policy. We develop a lightweight traffic processing approach suitable for the Web 3.0 environment, which does not rely on complex decryption or reverse engineering techniques. We propose a smart contract interaction traffic analysis method capable of retrieving user interactions with Metaverse applications and blockchain smart contracts. This method provides a new approach to de-anonymizing users' identities through Metaverse applications. Our system, METAseen, analyzes and compares network traffic with the privacy policies of Metaverse applications to identify controversial data collection practices. The consistency check experiment reveals that the data types exposed by Metaverse applications include Personally Identifiable Information (PII), device information, and Metaverse-related data. By comparing the data flows observed in the network traffic with the assertions made in the privacy policies of the Metaverse service providers, we discovered that more than 49% of the Metaverse data flows were not appropriately disclosed.
With the ongoing digitalization and intelligence of power systems, there is an increasing reliance on large-scale data-driven intelligent technologies for tasks such as scheduling optimization and load forecasting. Nevertheless, power data often contains sensitive information, making it a critical industry challenge to efficiently utilize this data while ensuring privacy. Traditional Federated Learning (FL) methods can mitigate data leakage by training models locally instead of transmitting raw data. Despite this, FL still has privacy concerns, especially gradient leakage, which might expose users' sensitive information. Therefore, integrating Differential Privacy (DP) techniques is essential for stronger privacy protection. Even so, the noise from DP may reduce the performance of federated learning models. To address this challenge, this paper presents an explainability-driven power data privacy federated learning framework. It incorporates DP technology and, based on model explainability, adaptively adjusts privacy budget allocation and model aggregation, thus balancing privacy protection and model performance. The key innovations of this paper are as follows: (1) We propose an explainability-driven power data privacy federated learning framework. (2) We detail a privacy budget allocation strategy: assigning budgets per training round by gradient effectiveness and at model granularity by layer importance. (3) We design a weighted aggregation strategy that considers the SHAP value and model accuracy for quality knowledge sharing, as sketched below. (4) Experiments show the proposed framework outperforms traditional methods in balancing privacy protection and model performance in power load forecasting tasks.
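The following is a minimal sketch of a quality-aware weighted aggregation in the spirit of innovation (3), combining a SHAP-derived importance score with validation accuracy; the 50/50 mixing rule and score normalization are illustrative assumptions.

```python
# A minimal sketch of SHAP-and-accuracy weighted federated aggregation.
# The mixing coefficient alpha and the normalization are assumptions.
import numpy as np

def aggregate(client_states, shap_scores, accuracies, alpha=0.5):
    """Weighted average of flattened client parameter vectors."""
    s = np.asarray(shap_scores, dtype=float)
    a = np.asarray(accuracies, dtype=float)
    w = alpha * s / s.sum() + (1 - alpha) * a / a.sum()  # blended client weights
    stacked = np.stack(client_states)                    # (n_clients, n_params)
    return (w[:, None] * stacked).sum(axis=0)

states = [np.random.randn(10) for _ in range(3)]  # hypothetical client models
global_state = aggregate(states, shap_scores=[0.2, 0.5, 0.3],
                         accuracies=[0.81, 0.88, 0.84])
```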
Distributed data fusion is essential for numerous applications, yet it faces significant privacy and security challenges. Federated learning (FL), as a distributed machine learning paradigm, offers enhanced data privacy protection and has attracted widespread attention. Consequently, research increasingly focuses on developing more secure FL techniques. However, in real-world scenarios involving malicious entities, the accuracy of FL results is often compromised, particularly due to the threat of collusion between two servers. To address this challenge, this paper proposes an efficient and verifiable data aggregation protocol with enhanced privacy protection. After analyzing attack methods against prior schemes, we implement key improvements. Specifically, by incorporating cascaded random numbers and perturbation terms into gradients, we strengthen the privacy protection afforded by polynomial masking, effectively preventing information leakage. Furthermore, our protocol features an enhanced verification mechanism capable of detecting collusive behaviors between two servers. Accuracy testing on the MNIST and CIFAR-10 datasets demonstrates that our protocol maintains accuracy comparable to the Federated Averaging algorithm. In efficiency comparisons, while incurring only a marginal increase in verification overhead relative to the baseline scheme, our protocol achieves an average improvement of 93.13% in privacy protection and verification overhead compared to the state-of-the-art scheme. This result highlights its optimal balance between overall overhead and functionality. A current limitation is that the verification mechanism cannot precisely pinpoint the source of anomalies within aggregated results when server-side malicious behavior occurs. Addressing this limitation will be a focus of future research.
Mobile crowdsensing (MCS) has become an effective paradigm for facilitating urban sensing. However, mobile users participating in sensing tasks face the risk of location privacy leakage when uploading their actual sensing location data. In mobile crowdsensing applications, most location privacy protection studies do not consider the temporal correlations between locations, so they are vulnerable to various inference attacks and suffer from low data availability. To solve these problems, this paper proposes a dynamic differential location privacy data publishing framework (DDLP) that protects privacy while publishing locations continuously. First, the corresponding Markov transition matrices are established according to different times of historical trajectories, and then a protection location set is generated based on the current location at each timestamp. Moreover, the exponential mechanism in differential privacy is used to perturb the true location through a designed utility function. Finally, experiments on a real-world trajectory dataset show that our method not only provides strong privacy guarantees but also outperforms existing methods in terms of data availability and computational efficiency.
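To illustrate the perturbation step, here is a minimal sketch of the exponential mechanism over a candidate protection set; the utility function (negative distance to the true location) and its sensitivity are illustrative assumptions, and building the candidate set from the Markov model is not shown.

```python
# A minimal sketch of the exponential mechanism over a protection location
# set. The distance-based utility and its sensitivity are assumptions.
import math
import random

def exponential_mechanism(true_loc, candidates, epsilon, sensitivity=1.0):
    """Sample a released location; nearer candidates get higher utility."""
    def utility(c):
        return -math.dist(true_loc, c)  # higher utility = closer to the truth
    weights = [math.exp(epsilon * utility(c) / (2 * sensitivity))
               for c in candidates]
    return random.choices(candidates, weights=weights, k=1)[0]

# Hypothetical candidate set, e.g., produced from the Markov transition model.
protection_set = [(0.0, 0.0), (0.5, 0.2), (1.0, 1.0)]
released = exponential_mechanism((0.1, 0.1), protection_set, epsilon=1.0)
```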
【Objective】Medical imaging data has great value, but it contains a significant amount of sensitive information about patients. At present, laws and regulations regarding the de-identification of medical imaging data are not clearly defined around the world. This study aims to develop a tool that meets compliance-driven desensitization requirements tailored to diverse research needs. 【Methods】To enhance the security of medical image data, we designed and implemented a DICOM-format medical image de-identification system on the Windows operating system. 【Results】Our custom de-identification system is adaptable to the legal standards of different countries and can accommodate specific research demands. The system offers both web-based online and desktop offline de-identification capabilities, enabling customization of de-identification rules and facilitating batch processing to improve efficiency. 【Conclusions】This medical image de-identification system robustly strengthens the stewardship of sensitive medical data, aligning with data security protection requirements while facilitating the sharing and utilization of medical image data. This approach unlocks the intrinsic value inherent in such datasets.
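To illustrate what rule-based DICOM de-identification looks like in code, here is a minimal sketch using the pydicom library; the tag list is a small illustrative subset, not the system's configurable rule set, and real compliance work would follow a vetted profile such as DICOM PS3.15 Annex E.

```python
# A minimal DICOM de-identification sketch with pydicom. The tag subset is
# illustrative; a production rule set would be far more comprehensive.
import pydicom

SENSITIVE_TAGS = ["PatientName", "PatientID", "PatientBirthDate",
                  "PatientAddress", "InstitutionName"]

def deidentify(path_in, path_out):
    ds = pydicom.dcmread(path_in)
    for tag in SENSITIVE_TAGS:
        if tag in ds:
            setattr(ds, tag, "")   # blank the element's value
    ds.remove_private_tags()       # drop vendor-specific private tags
    ds.save_as(path_out)

deidentify("study_0001.dcm", "study_0001_deid.dcm")
```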
This paper introduces a quantum-enhanced edge computing framework that synergizes quantum-inspired algorithms with advanced machine learning techniques to optimize real-time task offloading in edge computing environments. This approach not only significantly improves the system's real-time responsiveness and resource utilization efficiency but also addresses critical challenges in Internet of Things (IoT) ecosystems, such as high demand variability, resource allocation uncertainties, and data privacy concerns, through practical solutions. Initially, the framework employs an adaptive adjustment mechanism to dynamically manage task and resource states, complemented by online learning models for precise predictive analytics. Secondly, it accelerates the search for optimal solutions using Grover's algorithm while efficiently evaluating complex constraints through multi-controlled Toffoli gates, thereby markedly enhancing the practicality and robustness of the proposed solution. Furthermore, to bolster the system's adaptability and response speed in dynamic environments, an efficient monitoring mechanism and an event-driven architecture are incorporated, ensuring timely responses to environmental changes and maintaining synchronization between internal and external systems. Experimental evaluations confirm that the proposed algorithm demonstrates superior performance in complex application scenarios, characterized by faster convergence, enhanced stability, and superior data privacy protection, alongside notable reductions in latency and optimized resource utilization. This research paves the way for transformative advancements in edge computing and IoT technologies, driving smart edge computing towards unprecedented levels of intelligence and automation.
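For intuition about the Grover step, here is a minimal classical state-vector simulation of Grover search over a small set of offloading choices; the 3-qubit search space and the marked "optimal" index are illustrative, and the framework's oracle construction with multi-controlled Toffoli gates is not reproduced.

```python
# A minimal state-vector simulation of Grover amplitude amplification over
# 8 candidate offloading assignments. The marked index is illustrative.
import math
import numpy as np

n_qubits = 3
N = 2 ** n_qubits                 # 8 candidate offloading assignments
marked = 5                        # hypothetical optimal assignment

state = np.full(N, 1 / math.sqrt(N))              # uniform superposition
iterations = int(round(math.pi / 4 * math.sqrt(N)))  # ~optimal iteration count

for _ in range(iterations):
    state[marked] *= -1                    # oracle: flip the marked amplitude
    state = 2 * state.mean() - state       # diffusion: inversion about the mean

probabilities = state ** 2
print(probabilities.argmax(), probabilities[marked])  # -> 5, close to 1.0
```

After roughly (pi/4)*sqrt(N) iterations the marked state dominates the distribution, which is the quadratic speedup over linear search that the framework exploits.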
基金supported by Systematic Major Project of Shuohuang Railway Development Co.,Ltd.,National Energy Group(Grant Number:SHTL-23-31)Beijing Natural Science Foundation(U22B2027).
文摘In the realm of Intelligent Railway Transportation Systems,effective multi-party collaboration is crucial due to concerns over privacy and data silos.Vertical Federated Learning(VFL)has emerged as a promising approach to facilitate such collaboration,allowing diverse entities to collectively enhance machine learning models without the need to share sensitive training data.However,existing works have highlighted VFL’s susceptibility to privacy inference attacks,where an honest but curious server could potentially reconstruct a client’s raw data from embeddings uploaded by the client.This vulnerability poses a significant threat to VFL-based intelligent railway transportation systems.In this paper,we introduce SensFL,a novel privacy-enhancing method to against privacy inference attacks in VFL.Specifically,SensFL integrates regularization of the sensitivity of embeddings to the original data into the model training process,effectively limiting the information contained in shared embeddings.By reducing the sensitivity of embeddings to the original data,SensFL can effectively resist reverse privacy attacks and prevent the reconstruction of the original data from the embeddings.Extensive experiments were conducted on four distinct datasets and three different models to demonstrate the efficacy of SensFL.Experiment results show that SensFL can effectively mitigate privacy inference attacks while maintaining the accuracy of the primary learning task.These results underscore SensFL’s potential to advance privacy protection technologies within VFL-based intelligent railway systems,addressing critical security concerns in collaborative learning environments.
基金supported by the National Key R&D Program of China under Grant No.2022YFB3103500the National Natural Science Foundation of China under Grants No.62402087 and No.62020106013+3 种基金the Sichuan Science and Technology Program under Grant No.2023ZYD0142the Chengdu Science and Technology Program under Grant No.2023-XT00-00002-GXthe Fundamental Research Funds for Chinese Central Universities under Grants No.ZYGX2020ZB027 and No.Y030232063003002the Postdoctoral Innovation Talents Support Program under Grant No.BX20230060.
文摘The integration of artificial intelligence(AI)technology,particularly large language models(LLMs),has become essential across various sectors due to their advanced language comprehension and generation capabilities.Despite their transformative impact in fields such as machine translation and intelligent dialogue systems,LLMs face significant challenges.These challenges include safety,security,and privacy concerns that undermine their trustworthiness and effectiveness,such as hallucinations,backdoor attacks,and privacy leakage.Previous works often conflated safety issues with security concerns.In contrast,our study provides clearer and more reasonable definitions for safety,security,and privacy within the context of LLMs.Building on these definitions,we provide a comprehensive overview of the vulnerabilities and defense mechanisms related to safety,security,and privacy in LLMs.Additionally,we explore the unique research challenges posed by LLMs and suggest potential avenues for future research,aiming to enhance the robustness and reliability of LLMs in the face of emerging threats.
基金supported by the National Key R&D Program of China(No.2023YFB2703700)the National Natural Science Foundation of China(Nos.U21A20465,62302457,62402444,62172292)+4 种基金the Fundamental Research Funds of Zhejiang Sci-Tech University(Nos.23222092-Y,22222266-Y)the Program for Leading Innovative Research Team of Zhejiang Province(No.2023R01001)the Zhejiang Provincial Natural Science Foundation of China(Nos.LQ24F020008,LQ24F020012)the Foundation of State Key Laboratory of Public Big Data(No.[2022]417)the“Pioneer”and“Leading Goose”R&D Program of Zhejiang(No.2023C01119).
文摘As smart grid technology rapidly advances,the vast amount of user data collected by smart meter presents significant challenges in data security and privacy protection.Current research emphasizes data security and user privacy concerns within smart grids.However,existing methods struggle with efficiency and security when processing large-scale data.Balancing efficient data processing with stringent privacy protection during data aggregation in smart grids remains an urgent challenge.This paper proposes an AI-based multi-type data aggregation method designed to enhance aggregation efficiency and security by standardizing and normalizing various data modalities.The approach optimizes data preprocessing,integrates Long Short-Term Memory(LSTM)networks for handling time-series data,and employs homomorphic encryption to safeguard user privacy.It also explores the application of Boneh Lynn Shacham(BLS)signatures for user authentication.The proposed scheme’s efficiency,security,and privacy protection capabilities are validated through rigorous security proofs and experimental analysis.
文摘The rapid adoption of machine learning in sensitive domains,such as healthcare,finance,and government services,has heightened the need for robust,privacy-preserving techniques.Traditional machine learning approaches lack built-in privacy mechanisms,exposing sensitive data to risks,which motivates the development of Privacy-Preserving Machine Learning(PPML)methods.Despite significant advances in PPML,a comprehensive and focused exploration of Secure Multi-Party Computing(SMPC)within this context remains underdeveloped.This review aims to bridge this knowledge gap by systematically analyzing the role of SMPC in PPML,offering a structured overviewof current techniques,challenges,and future directions.Using a semi-systematicmapping studymethodology,this paper surveys recent literature spanning SMPC protocols,PPML frameworks,implementation approaches,threat models,and performance metrics.Emphasis is placed on identifying trends,technical limitations,and comparative strengths of leading SMPC-based methods.Our findings reveal thatwhile SMPCoffers strong cryptographic guarantees for privacy,challenges such as computational overhead,communication costs,and scalability persist.The paper also discusses critical vulnerabilities,practical deployment issues,and variations in protocol efficiency across use cases.
基金funded by the Science and Technology Project of State Grid Corporation of China(Research on the theory and method of multiparty encrypted computation in the edge fusion environment of power IoT,No.5700-202358592A-3-2-ZN)the National Natural Science Foundation of China(Grant Nos.62272056,62372048,62371069).
文摘Federated learning effectively alleviates privacy and security issues raised by the development of artificial intelligence through a distributed training architecture.Existing research has shown that attackers can compromise user privacy and security by stealing model parameters.Therefore,differential privacy is applied in federated learning to further address malicious issues.However,the addition of noise and the update clipping mechanism in differential privacy jointly limit the further development of federated learning in privacy protection and performance optimization.Therefore,we propose an adaptive adjusted differential privacy federated learning method.First,a dynamic adaptive privacy budget allocation strategy is proposed,which flexibly adjusts the privacy budget within a given range based on the client’s data volume and training requirements,thereby alleviating the loss of privacy budget and the magnitude of model noise.Second,a longitudinal clipping differential privacy strategy is proposed,which based on the differences in factors that affect parameter updates,uses sparse methods to trim local updates,thereby reducing the impact of privacy pruning steps on model accuracy.The two strategies work together to ensure user privacy while the effect of differential privacy on model accuracy is reduced.To evaluate the effectiveness of our method,we conducted extensive experiments on benchmark datasets,and the results showed that our proposed method performed well in terms of performance and privacy protection.
基金supported by the National Natural Science Foundation of China(Grant No.62462022)the Hainan Province Science and Technology Special Fund(Grants No.ZDYF2022GXJS229).
文摘Deep learning’s widespread dependence on large datasets raises privacy concerns due to the potential presence of sensitive information.Differential privacy stands out as a crucial method for preserving privacy,garnering significant interest for its ability to offer robust and verifiable privacy safeguards during data training.However,classic differentially private learning introduces the same level of noise into the gradients across training iterations,which affects the trade-off between model utility and privacy guarantees.To address this issue,an adaptive differential privacy mechanism was proposed in this paper,which dynamically adjusts the privacy budget at the layer-level as training progresses to resist member inference attacks.Specifically,an equal privacy budget is initially allocated to each layer.Subsequently,as training advances,the privacy budget for layers closer to the output is reduced(adding more noise),while the budget for layers closer to the input is increased.The adjustment magnitude depends on the training iterations and is automatically determined based on the iteration count.This dynamic allocation provides a simple process for adjusting privacy budgets,alleviating the burden on users to tweak parameters and ensuring that privacy preservation strategies align with training progress.Extensive experiments on five well-known datasets indicate that the proposed method outperforms competing methods in terms of accuracy and resilience against membership inference attacks.
基金supported by National Nature Science Foundation of China(No.62361036)Nature Science Foundation of Gansu Province(No.22JR5RA279).
文摘To realize dynamic statistical publishing and protection of location-based data privacy,this paper proposes a differential privacy publishing algorithm based on adaptive sampling and grid clustering and adjustment.The PID control strategy is combined with the difference in data variation to realize the dynamic adjustment of the data publishing intervals.The spatial-temporal correlations of the adjacent snapshots are utilized to design the grid clustering and adjustment algorithm,which facilitates saving the execution time of the publishing process.The budget distribution and budget absorption strategies are improved to form the sliding window-based differential privacy statistical publishing algorithm,which realizes continuous statistical publishing and privacy protection and improves the accuracy of published data.Experiments and analysis on large datasets of actual locations show that the privacy protection algorithm proposed in this paper is superior to other existing algorithms in terms of the accuracy of adaptive sampling time,the availability of published data,and the execution efficiency of data publishing methods.
文摘The widespread application of artificial intelligence(AI)technology in exams has significantly improved the efficiency and fairness of exams;it has also brought challenges of ethics and privacy protection.The article analyzes the fairness,transparency,and privacy protection issues caused by AI in exams and proposes strategic solutions.This article aims to provide guidance for the rational application of AI technology in exams,ensuring a balance between technological progress and ethical protection by strengthening laws and regulations,enhancing technological transparency,strengthening candidates’privacy rights,and improving the management measures of educational examination institutions.
基金supported by the International Scientific and Technological Cooperation Project of Huangpu and Development Districts in Guangzhou(2023GH17)the National Science and Technology Council in Taiwan under grant number NSTC-113-2224-E-027-001,Private Funding(PV009-2023)the KW IPPP(Research Maintenance Fee)Individual/Centre/Group(RMF1506-2021)at Universiti Malaya,Malaysia.
文摘Blockchain interoperability enables seamless communication and asset transfer across isolated permissioned blockchain systems,but it introduces significant security and privacy vulnerabilities.This review aims to systematically assess the security and privacy landscape of interoperability protocols for permissioned blockchains,identifying key properties,attack vectors,and countermeasures.Using PRISMA 2020 guidelines,we analysed 56 peerreviewed studies published between 2020 and 2025,retrieved from Scopus,ScienceDirect,Web of Science,and IEEE Xplore.The review focused on interoperability protocols for permissioned blockchains with security and privacy analyses,including only English-language journal articles and conference proceedings.Risk of bias in the included studies was assessed using the MMAT.Methods for presenting and synthesizing results included descriptive analysis,bibliometric analysis,and content analysis,with findings organized into tables,charts,and comparative summaries.The review classifies interoperability protocols into relay,sidechain,notary scheme,HTLC,and hybrid types and identifies 18 security and privacy properties along with 31 known attack types.Relay-based protocols showed the broadest security coverage,while HTLC and notary schemes demonstrated significant security gaps.Notably,93% of studies examined fewer than four properties or attack types,indicating a fragmented research landscape.The review identifies underexplored areas such as ACID properties,decentralization,and cross-chain attack resilience.It further highlights effective countermeasures,including cryptographic techniques,trusted execution environments,zero-knowledge proofs,and decentralized identity schemes.The findings suggest that despite growing adoption,current interoperability protocols lack comprehensive security evaluations.More holistic research is needed to ensure the resilience,trustworthiness,and scalability of cross-chain operations in permissioned blockchain ecosystems.
基金supported in part by the National Natural Science Foundation of China under Grants(62250410365,62071084)the Youth Program of Humanities and Social Sciences of the MoE(23YJCZH291)+1 种基金the Key Laboratory of Computing Power Network and Information Security,Ministry of Education(2023ZD02)Deanship of Research and Graduate Studies at King Khalid University for funding this work through Large Research Project under grant number RGP2/15/46.
文摘As the 5G architecture gains momentum,interest in 6G is growing.The proliferation of Internet of Things(IoT)devices,capable of capturing sensitive images,has increased the need for secure transmission and robust access control mechanisms.The vast amount of data generated by low-computing devices poses a challenge to traditional centralized access control,which relies on trusted third parties and complex computations,resulting in intricate interactions,higher hardware costs,and processing delays.To address these issues,this paper introduces a novel distributed access control approach that integrates a decentralized and lightweight encryption mechanism with image transmission.This method enhances data security and resource efficiency without imposing heavy computational and network burdens.In comparison to the best existing approach,it achieves a 7%improvement in accuracy,effectively addressing existing gaps in lightweight encryption and recognition performance.
基金funded by the State Grid Corporation Science and Technology Project“Research and Application of Key Technologies for Integrated Sensing and Computing for Intelligent Operation of Power Grid”(Grant No.5700-202318596A-3-2-ZN).
文摘With the deep integration of edge computing,5G and Artificial Intelligence ofThings(AIoT)technologies,the large-scale deployment of intelligent terminal devices has given rise to data silos and privacy security challenges in sensing-computing fusion scenarios.Traditional federated learning(FL)algorithms face significant limitations in practical applications due to client drift,model bias,and resource constraints under non-independent and identically distributed(Non-IID)data,as well as the computational overhead and utility loss caused by privacy-preserving techniques.To address these issues,this paper proposes an Efficient and Privacy-enhancing Clustering Federated Learning method(FedEPC).This method introduces a dual-round client selection mechanism to optimize training.First,the Sparsity-based Privacy-preserving Representation Extraction Module(SPRE)and Adaptive Isomorphic Devices Clustering Module(AIDC)cluster clients based on privacy-sensitive features.Second,the Context-aware Incluster Client Selection Module(CICS)dynamically selects representative devices for training,ensuring heterogeneous data distributions are fully represented.By conducting federated training within clusters and aggregating personalized models,FedEPC effectively mitigates weight divergence caused by data heterogeneity,reduces the impact of client drift and straggler issues.Experimental results demonstrate that FedEPC significantly improves test accuracy in highly Non-IID data scenarios compared to FedAvg and existing clustering FL methods.By ensuring privacy security,FedEPC provides an efficient and robust solution for FL in resource-constrained devices within sensing-computing fusion scenarios,offering both theoretical value and engineering practicality.
文摘The rapid development and widespread adoption of massive open online courses(MOOCs)have indeed had a significant impact on China’s education curriculum.However,the problem of fake reviews and ratings on the platform has seriously affected the authenticity of course evaluations and user trust,requiring effective anomaly detection techniques for screening.The textual characteristics of MOOCs reviews,such as varying lengths and diverse emotional tendencies,have brought complexity to text analysis.Traditional rule-based analysis methods are often inadequate in dealing with such unstructured data.We propose a Differential Privacy-Enabled Text Convolutional Neural Network(DP-TextCNN)framework,aiming to achieve high-precision identification of outliers in MOOCs course reviews and ratings while protecting user privacy.This framework leverages the advantages of Convolutional Neural Networks(CNN)in text feature extraction and combines differential privacy techniques.It balances data privacy protection with model performance by introducing controlled random noise during the data preprocessing stage.By embedding differential privacy into the model training process,we ensure the privacy security of the framework when handling sensitive data,while maintaining a high recognition accuracy.Experimental results indicate that the DP-TextCNN framework achieves an exceptional accuracy of over 95%in identifying fake reviews on the dataset,this outcome not only verifies the applicability of differential privacy techniques in TextCNN but also underscores its potential in handling sensitive educational data.Additionally,we analyze the specific impact of differential privacy parameters on framework performance,offering theoretical support and empirical analysis to strike an optimal balance between privacy protection and framework efficiency.
基金supported by the Institute of Information&Communications Technology Planning&Evaluation(IITP)(Project Nos.RS-2024-00438551,30%,2022-11220701,30%,2021-0-01816,30%)the National Research Foundation of Korea(NRF)grant funded by the Korean Government(Project No.RS-2023-00208460,10%).
文摘Theproliferation of Internet of Things(IoT)devices introduces substantial security challenges.Currently,privacy constitutes a significant concern for individuals.While maintaining privacy within these systems is an essential characteristic,it often necessitates certain compromises,such as complexity and scalability,thereby complicating management efforts.The principal challenge lies in ensuring confidentiality while simultaneously preserving individuals’anonymity within the system.To address this,we present our proposed architecture for managing IoT devices using blockchain technology.Our proposed architecture works on and off blockchain and is integrated with dashcams and closed-circuit television(CCTV)security cameras.In this work,the videos recorded by the dashcams and CCTV security cameras are hashed through the InterPlanetary File System(IPFS)and this hash is stored in the blockchain.When the accessors want to access the video,they must pass through multiple authentications which include web token authentication and verifiable credentials,to mitigate the risk of malicious users.Our contributions include the proposition of the framework,which works on the single key for every new video,and a novel chaincode algorithm that incorporates verifiable credentials.Analyses are made to show the system’s throughput and latency through stress testing.Significant advantages of the proposed architecture are shown by comparing them to existing schemes.The proposed architecture features a robust design that significantly enhances the security of blockchain-enabled Internet of Things(IoT)deviceswhile effectively mitigating the risk of a single point of failure,which provides a reliable solution for security concerns in the IoT landscape.Our future endeavors will focus on scaling the system by integrating innovative methods to enhance security measures further.
文摘In the domain of Electronic Medical Records(EMRs),emerging technologies are crucial to addressing longstanding concerns surrounding transaction security and patient privacy.This paper explores the integration of smart contracts and blockchain technology as a robust framework for securing sensitive healthcare data.By leveraging the decentralized and immutable nature of blockchain,the proposed approach ensures transparency,integrity,and traceability of EMR transactions,effectivelymitigating risks of unauthorized access and data tampering.Smart contracts further enhance this framework by enabling the automation and enforcement of secure transactions,eliminating reliance on intermediaries and reducing the potential for human error.This integration marks a paradigm shift in management and exchange of healthcare information,fostering a secure and privacy-preserving ecosystem for all stakeholders.The research also evaluates the practical implementation of blockchain and smart contracts within healthcare systems,examining their real-world effectiveness in enhancing transactional security,safeguarding patient privacy,and maintaining data integrity.Findings from the study contribute valuable insights to the growing body of work on digital healthcare innovation,underscoring the potential of these technologies to transform EMR systems with high accuracy and precision.As global healthcare systems continue to face the challenge of protecting sensitive patient data,the proposed framework offers a forward-looking,scalable,and effective solution aligned with the evolving digital healthcare landscape.
Funding: Supported by the National Key R&D Program of China (2021YFB2700200), the National Natural Science Foundation of China (U21B2021, 61932014, 61972018, 62202027), the Young Elite Scientists Sponsorship Program by CAST (2022QNRC001), the Beijing Natural Science Foundation (M23016), and the Yunnan Key Laboratory of Blockchain Application Technology Open Project (202105AG070005, YNB202206).
Abstract: The Metaverse is an emerging concept that builds a virtual environment for users through Virtual Reality (VR) and blockchain technology, but it introduces privacy risks. A series of challenges arise in Metaverse security, including massive data traffic breaches, large-scale user tracking and analysis activities, unreliable Artificial Intelligence (AI) analysis results, and social engineering attacks on users. In this work, we concentrate on Decentraland and Sandbox, two well-known Metaverse applications in Web 3.0. Our experiments analyze, for the first time, the personal privacy data exposed by Metaverse applications and services from a combined perspective of network traffic and privacy policy. We develop a lightweight traffic processing approach suitable for the Web 3.0 environment, which does not rely on complex decryption or reverse engineering techniques. We propose a smart contract interaction traffic analysis method capable of retrieving user interactions with Metaverse applications and blockchain smart contracts. This method provides a new approach to de-anonymizing users' identities through Metaverse applications. Our system, METAseen, analyzes and compares network traffic with the privacy policies of Metaverse applications to identify controversial data collection practices. The consistency check experiment reveals that the data types exposed by Metaverse applications include Personally Identifiable Information (PII), device information, and Metaverse-related data. By comparing the data flows observed in the network traffic with the assertions made in the privacy policies of the Metaverse service providers, we discovered that more than 49% of the Metaverse data flows were not disclosed appropriately.
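The consistency check at the heart of METAseen can be illustrated with a small sketch: compare the data types observed in traffic against those declared in the privacy policy and compute the undisclosed share. The data structures and example values below are invented for illustration and are not drawn from the paper's dataset.

```python
def undisclosed_share(observed_flows, declared_types):
    """Fraction of observed data flows whose type is absent from the policy.

    `observed_flows` is a list of (data_type, destination) pairs recovered
    from traffic; `declared_types` is the set of data types the privacy
    policy admits to collecting. Names and structure are illustrative.
    """
    undisclosed = [f for f in observed_flows if f[0] not in declared_types]
    return len(undisclosed) / len(observed_flows), undisclosed

# Hypothetical example in the spirit of the consistency check:
flows = [("wallet_address", "analytics.example"),
         ("device_model", "cdn.example"),
         ("avatar_position", "game.example"),
         ("email", "ads.example")]
policy = {"device_model", "avatar_position"}
share, leaks = undisclosed_share(flows, policy)
print(f"{share:.0%} of observed flows are not disclosed: {leaks}")
```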
Abstract: With the ongoing digitalization and intelligence of power systems, there is an increasing reliance on large-scale data-driven intelligent technologies for tasks such as scheduling optimization and load forecasting. Nevertheless, power data often contains sensitive information, making it a critical industry challenge to efficiently utilize this data while ensuring privacy. Traditional Federated Learning (FL) methods can mitigate data leakage by training models locally instead of transmitting raw data. Despite this, FL still has privacy concerns, especially gradient leakage, which might expose users' sensitive information. Therefore, integrating Differential Privacy (DP) techniques is essential for stronger privacy protection. Even so, the noise from DP may reduce the performance of federated learning models. To address this challenge, this paper presents an explainability-driven power data privacy federated learning framework. It incorporates DP technology and, based on model explainability, adaptively adjusts privacy budget allocation and model aggregation, thus balancing privacy protection and model performance. The key innovations of this paper are as follows: (1) we propose an explainability-driven power data privacy federated learning framework; (2) we detail a privacy budget allocation strategy, assigning budgets per training round by gradient effectiveness and at model granularity by layer importance; (3) we design a weighted aggregation strategy that considers SHAP values and model accuracy for quality knowledge sharing; (4) experiments show the proposed framework outperforms traditional methods in balancing privacy protection and model performance in power load forecasting tasks.
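A minimal sketch of the two explainability-driven steps follows, under the assumption that layer importance scores and SHAP-based quality scores are already available. The exact budget-splitting and weighting rules here are illustrative simplifications, not the paper's formulas.

```python
import numpy as np

def add_layerwise_dp_noise(gradients, layer_importance, total_epsilon,
                           sensitivity=1.0, delta=1e-5):
    """Split a privacy budget across layers by importance and add Gaussian noise.

    More important layers receive a larger epsilon (hence less noise);
    the proportional split is an assumed rule, not the paper's exact one.
    `gradients` is a list of NumPy arrays, one per layer.
    """
    weights = np.array(layer_importance, dtype=float)
    weights /= weights.sum()
    noisy = []
    for grad, w in zip(gradients, weights):
        eps = total_epsilon * w
        # Standard Gaussian-mechanism noise scale for (eps, delta)-DP.
        sigma = np.sqrt(2 * np.log(1.25 / delta)) * sensitivity / eps
        noisy.append(grad + np.random.normal(0.0, sigma, grad.shape))
    return noisy

def shap_weighted_aggregate(client_updates, shap_scores, accuracies):
    """Aggregate per-layer client updates weighted by SHAP score and accuracy."""
    quality = np.array(shap_scores) * np.array(accuracies)
    quality /= quality.sum()
    return [sum(q * upd[i] for q, upd in zip(quality, client_updates))
            for i in range(len(client_updates[0]))]
```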
Funding: Supported by the National Key R&D Program of China (2023YFB3106100), the National Natural Science Foundation of China (62102452, 62172436), and the Natural Science Foundation of Shaanxi Province (2023-JCYB-584).
Abstract: Distributed data fusion is essential for numerous applications, yet it faces significant privacy and security challenges. Federated learning (FL), as a distributed machine learning paradigm, offers enhanced data privacy protection and has attracted widespread attention. Consequently, research increasingly focuses on developing more secure FL techniques. However, in real-world scenarios involving malicious entities, the accuracy of FL results is often compromised, particularly due to the threat of collusion between two servers. To address this challenge, this paper proposes an efficient and verifiable data aggregation protocol with enhanced privacy protection. After analyzing attack methods against prior schemes, we implement key improvements. Specifically, by incorporating cascaded random numbers and perturbation terms into the gradients, we strengthen the privacy protection afforded by polynomial masking, effectively preventing information leakage. Furthermore, our protocol features an enhanced verification mechanism capable of detecting collusive behavior between the two servers. Accuracy testing on the MNIST and CIFAR-10 datasets demonstrates that our protocol maintains accuracy comparable to the Federated Averaging algorithm. In efficiency comparisons, while incurring only a marginal increase in verification overhead relative to the baseline scheme, our protocol achieves an average improvement of 93.13% in privacy protection and verification overhead compared to the state-of-the-art scheme. This result highlights its optimal balance between overall overhead and functionality. A current limitation is that the verification mechanism cannot precisely pinpoint the source of anomalies within aggregated results when server-side malicious behavior occurs. Addressing this limitation will be a focus of future research.
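The cancellation idea behind such masking schemes can be shown with a simplified pairwise-mask sketch: each pair of clients derives a shared mask that one adds and the other subtracts, so the server recovers only the sum of the true gradients. This stands in for, and does not reproduce, the paper's cascaded random numbers and polynomial masking.

```python
import numpy as np

def pairwise_mask(client_id, all_ids, gradient, shared_seeds):
    """Mask a gradient with pairwise random terms that cancel on aggregation.

    `shared_seeds[(i, j)]` is a seed agreed between clients i and j (i < j).
    This is a simplified stand-in for the paper's construction.
    """
    masked = gradient.copy()
    for other in all_ids:
        if other == client_id:
            continue
        pair = (min(client_id, other), max(client_id, other))
        rng = np.random.default_rng(shared_seeds[pair])
        mask = rng.normal(size=gradient.shape)
        # The lower-id client adds the mask; the higher-id client subtracts it.
        masked += mask if client_id < other else -mask
    return masked

# Masks cancel: the server learns only the sum of the true gradients.
ids = [0, 1, 2]
seeds = {(0, 1): 11, (0, 2): 22, (1, 2): 33}
grads = [np.ones(4) * (i + 1) for i in ids]
agg = sum(pairwise_mask(i, ids, grads[i], seeds) for i in ids)
assert np.allclose(agg, sum(grads))
```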
Funding: Supported by the Inner Mongolia Natural Science Foundation (Grant No. 2023MS06022), the University Youth Science and Technology Talent Development Project (Innovation Group Development Plan) of Inner Mongolia A.R. of China (Grant No. NMGIRT2318), the "Inner Mongolia Science and Technology Achievement Transfer and Transformation Demonstration Zone, University Collaborative Innovation Base, and University Entrepreneurship Training Base" Construction Project (Supercomputing Power Project) (Grant No. 21300-231510), and the Engineering Research Center of Ecological Big Data, Ministry of Education.
Abstract: Mobile crowdsensing (MCS) has become an effective paradigm for urban sensing. However, mobile users participating in sensing tasks face the risk of location privacy leakage when uploading their actual sensing location data. In mobile crowdsensing applications, most location privacy protection studies do not consider the temporal correlations between locations, so they are vulnerable to various inference attacks and suffer from low data availability. To solve these problems, this paper proposes a dynamic differential location privacy data publishing framework (DDLP) that protects privacy while publishing locations continuously. First, Markov transition matrices are established for different times from historical trajectories, and a protection location set is then generated from the current location at each timestamp. Next, the true location is perturbed via the exponential mechanism of differential privacy using a designed utility function. Finally, experiments on a real-world trajectory dataset show that our method not only provides strong privacy guarantees but also outperforms existing methods in terms of data availability and computational efficiency.
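The perturbation step can be sketched with a textbook exponential mechanism: each candidate in the protection set is sampled with probability proportional to exp(ε·u / 2Δu). The utility u below is simply the negative distance to the true location; the paper designs its own utility function, so this is only an illustrative stand-in.

```python
import numpy as np

def exponential_mechanism_location(true_loc, protection_set, epsilon):
    """Pick a reported location from the protection set via the exponential mechanism.

    Utility is the negative Euclidean distance to the true location,
    so nearer candidates are exponentially more likely to be chosen.
    """
    candidates = np.asarray(protection_set, dtype=float)
    utilities = -np.linalg.norm(candidates - np.asarray(true_loc), axis=1)
    # Normalize by the utility range as a stand-in for global sensitivity.
    sensitivity = (utilities.max() - utilities.min()) or 1.0
    scores = np.exp(epsilon * utilities / (2 * sensitivity))
    probs = scores / scores.sum()
    idx = np.random.choice(len(candidates), p=probs)
    return candidates[idx]

# Hypothetical protection set around a true location:
true_loc = (40.00, 116.30)
cands = [(40.00, 116.30), (40.01, 116.29), (39.99, 116.32), (40.02, 116.35)]
print(exponential_mechanism_location(true_loc, cands, epsilon=1.0))
```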
Funding: Supported by the CAMS Innovation Fund for Medical Sciences (CIFMS): "Construction of an Intelligent Management and Efficient Utilization Technology System for Big Data in Population Health Science" (2021-I2M-1-057), and the Key Projects of the Innovation Fund of the National Clinical Research Center for Orthopedics and Sports Rehabilitation: "National Orthopedics and Sports Rehabilitation Real-World Research Platform System Construction" (23-NCRC-CXJJ-ZD4).
Abstract: 【Objective】Medical imaging data has great value, but it contains a significant amount of sensitive patient information. At present, laws and regulations regarding the de-identification of medical imaging data are not clearly defined around the world. This study aims to develop a tool that meets compliance-driven de-identification requirements tailored to diverse research needs. 【Methods】To enhance the security of medical image data, we designed and implemented a DICOM-format medical image de-identification system on the Windows operating system. 【Results】Our custom de-identification system is adaptable to the legal standards of different countries and can accommodate specific research demands. The system offers both web-based online and desktop offline de-identification capabilities, enabling customization of de-identification rules and facilitating batch processing to improve efficiency. 【Conclusions】This medical image de-identification system robustly strengthens the stewardship of sensitive medical data, aligning with data security protection requirements while facilitating the sharing and utilization of medical image data. This approach unlocks the intrinsic value inherent in such datasets.
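For a sense of what rule-based DICOM de-identification looks like in practice, here is a minimal sketch using the open-source pydicom library. The tag list is a loose illustration of common direct identifiers, not the system's configurable, jurisdiction-specific rule set.

```python
import pydicom

# Tags to blank, loosely following common de-identification profiles;
# a compliant rule set would be configured per jurisdiction and study.
PHI_TAGS = ["PatientName", "PatientID", "PatientBirthDate",
            "PatientAddress", "InstitutionName", "ReferringPhysicianName"]

def deidentify_dicom(in_path: str, out_path: str) -> None:
    """Blank direct identifiers and strip private tags from a DICOM file."""
    ds = pydicom.dcmread(in_path)
    for tag in PHI_TAGS:
        if tag in ds:
            setattr(ds, tag, "")   # blank rather than delete, keeping structure
    ds.remove_private_tags()       # vendors often hide identifiers in private tags
    ds.save_as(out_path)

# deidentify_dicom("scan.dcm", "scan_deid.dcm")
```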
Funding: Supported by the National Natural Science Foundation of China (Nos. 62071481 and 61501471).
Abstract: This paper introduces a quantum-enhanced edge computing framework that synergizes quantum-inspired algorithms with advanced machine learning techniques to optimize real-time task offloading in edge computing environments. This approach not only significantly improves the system's real-time responsiveness and resource utilization efficiency but also addresses critical challenges in Internet of Things (IoT) ecosystems, such as high demand variability, resource allocation uncertainty, and data privacy concerns, through practical solutions. First, the framework employs an adaptive adjustment mechanism to dynamically manage task and resource states, complemented by online learning models for precise predictive analytics. Second, it accelerates the search for optimal solutions using Grover's algorithm while efficiently evaluating complex constraints through multi-controlled Toffoli gates, thereby markedly enhancing the practicality and robustness of the proposed solution. Furthermore, to bolster the system's adaptability and response speed in dynamic environments, an efficient monitoring mechanism and event-driven architecture are incorporated, ensuring timely responses to environmental changes and maintaining synchronization between internal and external systems. Experimental evaluations confirm that the proposed algorithm demonstrates superior performance in complex application scenarios, characterized by faster convergence, enhanced stability, and superior data privacy protection, alongside notable reductions in latency and optimized resource utilization. This research paves the way for transformative advancements in edge computing and IoT technologies, driving smart edge computing toward unprecedented levels of intelligence and automation.
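Grover's algorithm can be simulated classically on small instances, which makes its role in the search step easy to see. The sketch below amplifies the amplitude of assignments marked by an oracle predicate (the job done by multi-controlled Toffoli gates on real hardware); the offloading constraints are reduced to a toy predicate, so this illustrates the search primitive, not the paper's framework.

```python
import numpy as np

def grover_search(num_items, is_marked, iterations=None):
    """Classically simulate Grover's algorithm over `num_items` candidates.

    `is_marked(i)` plays the oracle's role; the simulation flips the sign
    of marked amplitudes, then reflects all amplitudes about their mean.
    """
    n_marked = sum(is_marked(i) for i in range(num_items))
    if iterations is None:
        # Optimal iteration count ~ (pi/4) * sqrt(N / M).
        iterations = int(np.floor(np.pi / 4 * np.sqrt(num_items / n_marked)))
    amps = np.full(num_items, 1 / np.sqrt(num_items))
    for _ in range(iterations):
        # Oracle: phase-flip marked states.
        for i in range(num_items):
            if is_marked(i):
                amps[i] = -amps[i]
        # Diffusion operator: inversion about the mean amplitude.
        amps = 2 * amps.mean() - amps
    return np.argmax(amps ** 2)

# Hypothetical offloading example: 16 candidate assignments, one of which
# satisfies the latency and energy constraints the oracle encodes.
best = grover_search(16, is_marked=lambda i: i == 11)
print("selected assignment:", best)  # with high probability: 11
```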