The problem of privacy in social networks is well documented within the literature; users have privacy concerns, yet they consistently disclose their sensitive information and leave it open to unintended third parties. While numerous causes of poor behaviour have been suggested by research, the role of the User Interface (UI) and the system itself is underexplored. The field of Persuasive Technology would suggest that Social Network Systems persuade users to deviate from their normal or habitual behaviour. This paper makes the case that the UI can be used as the basis for user empowerment by informing users of their privacy at the point of interaction and reminding them of their privacy needs. The Theory of Planned Behaviour is introduced as a potential theoretical foundation for exploring the psychology behind privacy behaviour, as it describes the salient factors that influence intention and action. Based on these factors of personal attitude, subjective norms, and perceived control, a series of UIs are presented and implemented in controlled experiments examining their effect on personal information disclosure. This is combined with observations and interviews with the participants. Results from this initial pilot experiment suggest that groups with privacy-salient information embedded exhibit less disclosure than the control group. This work reviews this approach as a method for exploring privacy behaviour and proposes the further work required.
In the realm of Intelligent Railway Transportation Systems, effective multi-party collaboration is crucial due to concerns over privacy and data silos. Vertical Federated Learning (VFL) has emerged as a promising approach to facilitate such collaboration, allowing diverse entities to collectively enhance machine learning models without the need to share sensitive training data. However, existing works have highlighted VFL's susceptibility to privacy inference attacks, where an honest-but-curious server could potentially reconstruct a client's raw data from embeddings uploaded by the client. This vulnerability poses a significant threat to VFL-based intelligent railway transportation systems. In this paper, we introduce SensFL, a novel privacy-enhancing method that defends against privacy inference attacks in VFL. Specifically, SensFL integrates regularization of the sensitivity of embeddings to the original data into the model training process, effectively limiting the information contained in shared embeddings. By reducing the sensitivity of embeddings to the original data, SensFL can effectively resist reverse privacy attacks and prevent the reconstruction of the original data from the embeddings. Extensive experiments were conducted on four distinct datasets and three different models to demonstrate the efficacy of SensFL. Experimental results show that SensFL can effectively mitigate privacy inference attacks while maintaining the accuracy of the primary learning task. These results underscore SensFL's potential to advance privacy protection technologies within VFL-based intelligent railway systems, addressing critical security concerns in collaborative learning environments.
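As an illustration of the sensitivity-regularization idea, here is a minimal PyTorch sketch; the penalty form (an input-gradient norm proxy), the `lam` weight, and the toy model shapes are assumptions for exposition, not the authors' exact SensFL formulation.

```python
# Minimal sketch of embedding-sensitivity regularization (not the exact SensFL
# formulation): penalize the norm of the input-gradient of the embedding so
# that shared embeddings carry less information about the raw features.
import torch

def sensitivity_regularized_loss(client_model, head, x, y, lam=0.1):
    """Task loss + lam * squared norm of the input-gradient of the embedding."""
    x = x.clone().requires_grad_(True)
    emb = client_model(x)                      # embedding the client would upload
    task_loss = torch.nn.functional.cross_entropy(head(emb), y)
    # Sensitivity proxy: gradient of a scalar summary of the embedding w.r.t. x.
    grad_x, = torch.autograd.grad(emb.pow(2).sum(), x, create_graph=True)
    return task_loss + lam * grad_x.pow(2).mean()

# Usage (toy shapes): client maps 20 raw features to an 8-dim embedding.
client = torch.nn.Sequential(torch.nn.Linear(20, 8), torch.nn.Tanh())
head = torch.nn.Linear(8, 3)
x, y = torch.randn(16, 20), torch.randint(0, 3, (16,))
loss = sensitivity_regularized_loss(client, head, x, y)
loss.backward()  # gradients flow through both the task term and the penalty
```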
This paper studies a strongly convergent inertial forward-backward-forward algorithm for the variational inequality problem in Hilbert spaces. In our convergence analysis, we do not assume the on-line rule of the inertial parameters and the iterates, which has been assumed by several authors whenever a strongly convergent algorithm with an inertial extrapolation step is proposed for a variational inequality problem. Consequently, our proof arguments are different from what is obtainable in the relevant literature. Finally, we give numerical tests to confirm the theoretical analysis and show that our proposed algorithm is superior to related ones in the literature.
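The iteration itself is easy to state; below is a schematic NumPy instance for VI(F, C) with a box constraint. The inertial factor, step size, and the Halpern-style anchoring used here to force strong convergence are illustrative choices, not the exact parameter rule analyzed in the paper.

```python
# Schematic inertial forward-backward-forward (Tseng-type) iteration for the
# variational inequality VI(F, C); parameters and the anchoring schedule are
# illustrative, not the rule analyzed in the paper.
import numpy as np

def project_box(z, lo=-1.0, hi=1.0):         # P_C for a simple box constraint
    return np.clip(z, lo, hi)

def inertial_fbf(F, x0, lam=0.2, theta=0.3, iters=200):
    x_prev, x, anchor = x0.copy(), x0.copy(), x0.copy()
    for n in range(1, iters + 1):
        w = x + theta * (x - x_prev)          # inertial extrapolation
        y = project_box(w - lam * F(w))       # forward-backward step
        z = y + lam * (F(w) - F(y))           # forward correction (Tseng)
        beta = 1.0 / (n + 1)                  # anchoring for strong convergence
        x_prev, x = x, beta * anchor + (1 - beta) * z
    return x

# Toy monotone operator: F(x) = A x + b with A positive definite.
A = np.array([[2.0, 1.0], [1.0, 2.0]])
b = np.array([-1.0, 0.5])
print(inertial_fbf(lambda v: A @ v + b, np.zeros(2)))
```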
On-line chemical characterization of atmospheric particulate matter (PM) with a soft ionization technique and ultrahigh-resolution mass spectrometry (UHRMS) provides molecular information on organic constituents in real time. Here we describe the development and application of an automatic measurement system that incorporates PM2.5 sampling, thermal desorption, atmospheric pressure photoionization, and UHRMS analysis. Molecular formulas of detected organic compounds were deduced from the accurate (±10 ppm) molecular weights obtained at a mass resolution of 100,000, allowing the identification of small organic compounds in PM2.5. Detection efficiencies of 28 standard compounds were determined, and we found a high sensitivity and selectivity towards organic amines, with limits of detection below 10 pg. As a proof of principle, PM2.5 samples collected off-line in winter in the urban area of Beijing were analyzed using the ionization module and HRMS of the system. The automatic system was then applied to conduct on-line measurements during the summer at a time resolution of 2 hr. The detected organic compounds comprised mainly CHON and CHN compounds below 350 m/z. Pronounced seasonal variations in elemental composition were observed, with shorter carbon backbones and higher O/C ratios in summer than in winter. This result is consistent with stronger photochemical reactions and thus a higher oxidation state of organics in summer. Diurnal variation in the signal intensity of each formula provides crucial information to reveal its source and formation pathway. In summary, the automatic measurement system serves as an important tool for the on-line characterization and identification of organic species in PM2.5.
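The formula-assignment step can be illustrated with a brute-force search over small CcHhNnOo compositions against the ±10 ppm tolerance; the element ranges below are assumptions for a toy example, not the instrument software's actual algorithm.

```python
# Toy molecular-formula assignment: enumerate small CcHhNnOo compositions and
# keep those whose monoisotopic mass matches the measured mass within 10 ppm.
# Element masses are standard monoisotopic values; ranges are illustrative.
MASS = {"C": 12.0, "H": 1.0078250319, "N": 14.0030740052, "O": 15.9949146221}

def assign_formulas(measured_mass, ppm_tol=10.0):
    hits = []
    for c in range(1, 25):
        for h in range(1, 50):
            for n in range(0, 5):
                for o in range(0, 10):
                    m = c*MASS["C"] + h*MASS["H"] + n*MASS["N"] + o*MASS["O"]
                    if abs(m - measured_mass) / measured_mass * 1e6 <= ppm_tol:
                        hits.append((f"C{c}H{h}N{n}O{o}", m))
    return hits

# Example: glycine (C2H5NO2) has a monoisotopic mass of about 75.0320 Da.
for formula, mass in assign_formulas(75.0320):
    print(formula, round(mass, 4))
```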
Recently, the application of Wireless Sensor Networks (WSNs) has been increasing rapidly. This requires privacy-preserving data aggregation protocols to secure the data from compromise. Preserving the privacy of sensor data is a challenging task. This paper presents a non-linear regression-based data aggregation protocol for preserving the privacy of sensor data. The proposed protocol uses non-linear regression functions to represent the sensor data collected from the sensor nodes. Instead of sending the complete data to the cluster head, the sensor nodes only send the coefficients of the non-linear function. This reduces the communication overhead of the network. The data aggregation is performed on the masked coefficients, and the sink node is able to retrieve the approximated results over the aggregated data. The analysis of experimental results shows that the proposed protocol is able to minimize communication overhead, enhance data aggregation accuracy, and preserve data privacy.
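A minimal sketch of the coefficient-reporting idea, assuming a polynomial as the non-linear regression model; the paper's actual function family and the coefficient-masking step are not reproduced here.

```python
# Sketch of coefficient-based reporting: a sensor fits a low-order polynomial
# (one simple non-linear regression model) to its readings and transmits only
# the coefficients; degree and window size are illustrative.
import numpy as np

readings = 20.0 + 2.5 * np.sin(np.linspace(0, 3, 60)) + np.random.normal(0, 0.1, 60)
t = np.arange(60, dtype=float)

coeffs = np.polyfit(t, readings, deg=3)      # 4 numbers instead of 60 readings
print("transmitted payload:", coeffs)

# The aggregator reconstructs an approximation of the series from coefficients.
approx = np.polyval(coeffs, t)
print("max reconstruction error:", np.max(np.abs(approx - readings)))
```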
The integration of artificial intelligence (AI) technology, particularly large language models (LLMs), has become essential across various sectors due to their advanced language comprehension and generation capabilities. Despite their transformative impact in fields such as machine translation and intelligent dialogue systems, LLMs face significant challenges. These challenges include safety, security, and privacy concerns that undermine their trustworthiness and effectiveness, such as hallucinations, backdoor attacks, and privacy leakage. Previous works often conflated safety issues with security concerns. In contrast, our study provides clearer and more reasonable definitions for safety, security, and privacy within the context of LLMs. Building on these definitions, we provide a comprehensive overview of the vulnerabilities and defense mechanisms related to safety, security, and privacy in LLMs. Additionally, we explore the unique research challenges posed by LLMs and suggest potential avenues for future research, aiming to enhance the robustness and reliability of LLMs in the face of emerging threats.
As smart grid technology rapidly advances, the vast amount of user data collected by smart meters presents significant challenges in data security and privacy protection. Current research emphasizes data security and user privacy concerns within smart grids. However, existing methods struggle with efficiency and security when processing large-scale data. Balancing efficient data processing with stringent privacy protection during data aggregation in smart grids remains an urgent challenge. This paper proposes an AI-based multi-type data aggregation method designed to enhance aggregation efficiency and security by standardizing and normalizing various data modalities. The approach optimizes data preprocessing, integrates Long Short-Term Memory (LSTM) networks for handling time-series data, and employs homomorphic encryption to safeguard user privacy. It also explores the application of Boneh-Lynn-Shacham (BLS) signatures for user authentication. The proposed scheme's efficiency, security, and privacy protection capabilities are validated through rigorous security proofs and experimental analysis.
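The homomorphic-aggregation step can be sketched with any additively homomorphic cryptosystem; the snippet below uses the third-party python-paillier (`phe`) package as a stand-in for the paper's scheme, with illustrative readings.

```python
# Sketch of privacy-preserving meter aggregation with additively homomorphic
# encryption, using python-paillier (`phe`) as a stand-in for the paper's
# scheme; readings are illustrative.
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair(n_length=2048)

meter_readings = [3.2, 1.7, 4.5, 2.1]                  # kWh from four households
ciphertexts = [public_key.encrypt(r) for r in meter_readings]

# The aggregator sums ciphertexts without ever seeing individual readings.
encrypted_total = ciphertexts[0]
for c in ciphertexts[1:]:
    encrypted_total = encrypted_total + c

# Only the utility (key holder) decrypts the neighborhood total.
print("aggregate consumption:", private_key.decrypt(encrypted_total))  # ~11.5
```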
The rapid adoption of machine learning in sensitive domains, such as healthcare, finance, and government services, has heightened the need for robust, privacy-preserving techniques. Traditional machine learning approaches lack built-in privacy mechanisms, exposing sensitive data to risks, which motivates the development of Privacy-Preserving Machine Learning (PPML) methods. Despite significant advances in PPML, a comprehensive and focused exploration of Secure Multi-Party Computation (SMPC) within this context remains underdeveloped. This review aims to bridge this knowledge gap by systematically analyzing the role of SMPC in PPML, offering a structured overview of current techniques, challenges, and future directions. Using a semi-systematic mapping study methodology, this paper surveys recent literature spanning SMPC protocols, PPML frameworks, implementation approaches, threat models, and performance metrics. Emphasis is placed on identifying trends, technical limitations, and comparative strengths of leading SMPC-based methods. Our findings reveal that while SMPC offers strong cryptographic guarantees for privacy, challenges such as computational overhead, communication costs, and scalability persist. The paper also discusses critical vulnerabilities, practical deployment issues, and variations in protocol efficiency across use cases.
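As a concrete anchor for the SMPC techniques surveyed, here is a minimal additive secret-sharing sketch, the basic building block behind many of the reviewed protocols; the field size is an illustrative choice.

```python
# Minimal additive secret sharing over a prime field. Each party holds one
# share; no subset smaller than all parties learns anything about the secret.
import secrets

P = 2**61 - 1  # a Mersenne prime; the field size is an illustrative choice

def share(secret, n_parties):
    shares = [secrets.randbelow(P) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % P)
    return shares

def reconstruct(shares):
    return sum(shares) % P

# Secure sum: parties add their shares position-wise, then reveal the result.
a_shares, b_shares = share(42, 3), share(58, 3)
sum_shares = [(x + y) % P for x, y in zip(a_shares, b_shares)]
print(reconstruct(sum_shares))  # 100, computed without revealing 42 or 58
```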
Federated learning effectively alleviates the privacy and security issues raised by the development of artificial intelligence through a distributed training architecture. Existing research has shown that attackers can compromise user privacy and security by stealing model parameters. Therefore, differential privacy is applied in federated learning to further address malicious behavior. However, the addition of noise and the update clipping mechanism in differential privacy jointly limit the further development of federated learning in privacy protection and performance optimization. We therefore propose an adaptively adjusted differential privacy federated learning method. First, a dynamic adaptive privacy budget allocation strategy is proposed, which flexibly adjusts the privacy budget within a given range based on the client's data volume and training requirements, thereby alleviating the loss of privacy budget and the magnitude of model noise. Second, a longitudinal clipping differential privacy strategy is proposed, which, based on the differences in factors that affect parameter updates, uses sparse methods to trim local updates, thereby reducing the impact of the privacy pruning step on model accuracy. The two strategies work together to ensure user privacy while reducing the effect of differential privacy on model accuracy. To evaluate the effectiveness of our method, we conducted extensive experiments on benchmark datasets, and the results showed that our proposed method performs well in terms of both performance and privacy protection.
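A minimal sketch of the two strategies, assuming a linear budget rule, a top-k sparsifier, and the standard Gaussian mechanism; all constants are illustrative rather than the paper's calibration.

```python
# Sketch of the two ideas above: (1) a per-client privacy budget chosen inside
# a fixed range according to data volume, and (2) sparsified clipping of local
# updates before noising. All constants are illustrative.
import numpy as np

def adaptive_budget(n_samples, n_min, n_max, eps_min=0.5, eps_max=4.0):
    frac = (n_samples - n_min) / max(n_max - n_min, 1)
    return eps_min + frac * (eps_max - eps_min)   # more data -> larger budget

def privatize_update(update, eps, delta=1e-5, clip=1.0, keep_frac=0.5):
    # Sparsify: keep only the largest-magnitude coordinates before clipping.
    k = max(1, int(keep_frac * update.size))
    mask = np.zeros_like(update)
    mask[np.argsort(np.abs(update))[-k:]] = 1.0
    sparse = update * mask
    sparse *= min(1.0, clip / (np.linalg.norm(sparse) + 1e-12))   # L2 clip
    sigma = clip * np.sqrt(2 * np.log(1.25 / delta)) / eps        # Gaussian mech.
    return sparse + np.random.normal(0, sigma, size=sparse.shape)

eps = adaptive_budget(n_samples=800, n_min=100, n_max=1000)
noisy = privatize_update(np.random.randn(10), eps)
print(round(eps, 2), noisy.round(2))
```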
Deep learning's widespread dependence on large datasets raises privacy concerns due to the potential presence of sensitive information. Differential privacy stands out as a crucial method for preserving privacy, garnering significant interest for its ability to offer robust and verifiable privacy safeguards during data training. However, classic differentially private learning introduces the same level of noise into the gradients across training iterations, which affects the trade-off between model utility and privacy guarantees. To address this issue, an adaptive differential privacy mechanism is proposed in this paper, which dynamically adjusts the privacy budget at the layer level as training progresses to resist membership inference attacks. Specifically, an equal privacy budget is initially allocated to each layer. Subsequently, as training advances, the privacy budget for layers closer to the output is reduced (adding more noise), while the budget for layers closer to the input is increased. The adjustment magnitude depends on the training iterations and is automatically determined based on the iteration count. This dynamic allocation provides a simple process for adjusting privacy budgets, alleviating the burden on users to tweak parameters and ensuring that privacy preservation strategies align with training progress. Extensive experiments on five well-known datasets indicate that the proposed method outperforms competing methods in terms of accuracy and resilience against membership inference attacks.
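A sketch of what such a layer-level schedule could look like; the linear tilt and renormalization below are assumptions for exposition, not the paper's exact adjustment rule.

```python
# Sketch of layer-level budget reallocation over training: start uniform, then
# progressively shift budget away from layers near the output (more noise)
# toward layers near the input (less noise). The schedule is illustrative.
import numpy as np

def layer_budgets(eps_total, n_layers, step, total_steps, max_shift=0.5):
    base = np.full(n_layers, eps_total / n_layers)
    progress = step / total_steps
    # Tilt: positive near the input (layer 0), negative near the output.
    tilt = np.linspace(1.0, -1.0, n_layers) * max_shift * progress
    budgets = base * (1.0 + tilt)
    return budgets * eps_total / budgets.sum()    # keep the total at eps_total

for step in (0, 500, 1000):
    print(step, layer_budgets(eps_total=4.0, n_layers=4,
                              step=step, total_steps=1000).round(3))
```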
To realize dynamic statistical publishing and protection of location-based data privacy, this paper proposes a differential privacy publishing algorithm based on adaptive sampling and grid clustering and adjustment. The PID control strategy is combined with the difference in data variation to realize dynamic adjustment of the data publishing intervals. The spatial-temporal correlations of adjacent snapshots are utilized to design the grid clustering and adjustment algorithm, which saves execution time in the publishing process. The budget distribution and budget absorption strategies are improved to form a sliding-window-based differential privacy statistical publishing algorithm, which realizes continuous statistical publishing and privacy protection and improves the accuracy of the published data. Experiments and analysis on large datasets of actual locations show that the privacy protection algorithm proposed in this paper is superior to other existing algorithms in terms of the accuracy of adaptive sampling time, the availability of published data, and the execution efficiency of the data publishing method.
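The PID-driven interval adjustment can be sketched as follows; the gains, the target change rate, and the interval bounds are illustrative assumptions.

```python
# Sketch of PID-controlled sampling: the publishing interval shrinks when the
# data change faster than a target rate and grows when they change slower.
class IntervalController:
    def __init__(self, kp=0.8, ki=0.1, kd=0.3, target_change=0.05):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.target = target_change
        self.integral = 0.0
        self.prev_error = 0.0

    def next_interval(self, observed_change, current_interval):
        error = observed_change - self.target   # >0: data changing too fast
        self.integral += error
        derivative = error - self.prev_error
        self.prev_error = error
        adjustment = self.kp*error + self.ki*self.integral + self.kd*derivative
        # Faster change -> shorter interval between published snapshots.
        return min(max(current_interval * (1 - adjustment), 1.0), 60.0)

ctrl = IntervalController()
interval = 10.0
for change in (0.02, 0.04, 0.20, 0.30, 0.05):
    interval = ctrl.next_interval(change, interval)
    print(round(interval, 2))
```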
The widespread application of artificial intelligence (AI) technology in exams has significantly improved the efficiency and fairness of exams; it has also brought challenges of ethics and privacy protection. This article analyzes the fairness, transparency, and privacy protection issues caused by AI in exams and proposes strategic solutions. It aims to provide guidance for the rational application of AI technology in exams, ensuring a balance between technological progress and ethical protection by strengthening laws and regulations, enhancing technological transparency, strengthening candidates' privacy rights, and improving the management measures of educational examination institutions.
Blockchain interoperability enables seamless communication and asset transfer across isolated permissioned blockchain systems, but it introduces significant security and privacy vulnerabilities. This review aims to systematically assess the security and privacy landscape of interoperability protocols for permissioned blockchains, identifying key properties, attack vectors, and countermeasures. Using PRISMA 2020 guidelines, we analysed 56 peer-reviewed studies published between 2020 and 2025, retrieved from Scopus, ScienceDirect, Web of Science, and IEEE Xplore. The review focused on interoperability protocols for permissioned blockchains with security and privacy analyses, including only English-language journal articles and conference proceedings. Risk of bias in the included studies was assessed using the MMAT. Methods for presenting and synthesizing results included descriptive analysis, bibliometric analysis, and content analysis, with findings organized into tables, charts, and comparative summaries. The review classifies interoperability protocols into relay, sidechain, notary scheme, HTLC, and hybrid types and identifies 18 security and privacy properties along with 31 known attack types. Relay-based protocols showed the broadest security coverage, while HTLC and notary schemes demonstrated significant security gaps. Notably, 93% of studies examined fewer than four properties or attack types, indicating a fragmented research landscape. The review identifies underexplored areas such as ACID properties, decentralization, and cross-chain attack resilience. It further highlights effective countermeasures, including cryptographic techniques, trusted execution environments, zero-knowledge proofs, and decentralized identity schemes. The findings suggest that despite growing adoption, current interoperability protocols lack comprehensive security evaluations. More holistic research is needed to ensure the resilience, trustworthiness, and scalability of cross-chain operations in permissioned blockchain ecosystems.
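For readers unfamiliar with the HTLC primitive classified above, here is a toy sketch of its claim/refund logic; the amounts, timing, and in-memory "contract" are illustrative, with Python standing in for an on-chain contract language.

```python
# Toy hashed time-lock contract (HTLC): funds unlock against the preimage of
# a hash before a deadline, otherwise they become refundable.
import hashlib, time

class HTLC:
    def __init__(self, hashlock, timeout_s, amount):
        self.hashlock = hashlock
        self.deadline = time.time() + timeout_s
        self.amount = amount
        self.settled = False

    def claim(self, preimage):
        if self.settled or time.time() > self.deadline:
            return False
        if hashlib.sha256(preimage).hexdigest() != self.hashlock:
            return False
        self.settled = True          # counterparty receives self.amount
        return True

    def refund(self):
        return not self.settled and time.time() > self.deadline

secret = b"cross-chain-secret"
contract = HTLC(hashlib.sha256(secret).hexdigest(), timeout_s=3600, amount=10)
print(contract.claim(b"wrong"))   # False: wrong preimage
print(contract.claim(secret))     # True: revealing the preimage settles it
```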
The rapid development and widespread adoption of massive open online courses (MOOCs) have had a significant impact on China's education curriculum. However, the problem of fake reviews and ratings on these platforms has seriously affected the authenticity of course evaluations and user trust, requiring effective anomaly detection techniques for screening. The textual characteristics of MOOC reviews, such as varying lengths and diverse emotional tendencies, bring complexity to text analysis, and traditional rule-based analysis methods are often inadequate for such unstructured data. We propose a Differential Privacy-Enabled Text Convolutional Neural Network (DP-TextCNN) framework, aiming to achieve high-precision identification of outliers in MOOC course reviews and ratings while protecting user privacy. This framework leverages the advantages of Convolutional Neural Networks (CNNs) in text feature extraction and combines them with differential privacy techniques. It balances data privacy protection with model performance by introducing controlled random noise during the data preprocessing stage. By embedding differential privacy into the model training process, we ensure the privacy security of the framework when handling sensitive data while maintaining high recognition accuracy. Experimental results indicate that the DP-TextCNN framework achieves an accuracy of over 95% in identifying fake reviews on the dataset. This outcome not only verifies the applicability of differential privacy techniques in TextCNN but also underscores their potential in handling sensitive educational data. Additionally, we analyze the specific impact of differential privacy parameters on framework performance, offering theoretical support and empirical analysis to strike an optimal balance between privacy protection and framework efficiency.
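The preprocessing-stage noise injection can be sketched with the Laplace mechanism applied to text feature vectors; the clipping bound and epsilon below are illustrative assumptions, not the paper's calibration.

```python
# Sketch of preprocessing-stage noise injection: clip each review's feature
# vector (bounding L1 sensitivity), then add Laplace noise before training.
import numpy as np

def privatize_features(features, epsilon=1.0, clip=1.0):
    rng = np.random.default_rng(0)
    norms = np.linalg.norm(features, ord=1, axis=1, keepdims=True)
    clipped = features * np.minimum(1.0, clip / (norms + 1e-12))
    scale = clip / epsilon                # Laplace scale for L1 sensitivity
    return clipped + rng.laplace(0.0, scale, size=features.shape)

reviews = np.random.randn(8, 16)          # 8 reviews, 16-dim text features
noisy = privatize_features(reviews, epsilon=2.0)
print(noisy.shape)
```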
With the deep integration of edge computing, 5G, and Artificial Intelligence of Things (AIoT) technologies, the large-scale deployment of intelligent terminal devices has given rise to data silos and privacy security challenges in sensing-computing fusion scenarios. Traditional federated learning (FL) algorithms face significant limitations in practical applications due to client drift, model bias, and resource constraints under non-independent and identically distributed (Non-IID) data, as well as the computational overhead and utility loss caused by privacy-preserving techniques. To address these issues, this paper proposes an Efficient and Privacy-enhancing Clustering Federated Learning method (FedEPC). This method introduces a dual-round client selection mechanism to optimize training. First, the Sparsity-based Privacy-preserving Representation Extraction Module (SPRE) and the Adaptive Isomorphic Devices Clustering Module (AIDC) cluster clients based on privacy-sensitive features. Second, the Context-aware In-cluster Client Selection Module (CICS) dynamically selects representative devices for training, ensuring heterogeneous data distributions are fully represented. By conducting federated training within clusters and aggregating personalized models, FedEPC effectively mitigates the weight divergence caused by data heterogeneity and reduces the impact of client drift and straggler issues. Experimental results demonstrate that FedEPC significantly improves test accuracy in highly Non-IID data scenarios compared to FedAvg and existing clustering FL methods. By ensuring privacy security, FedEPC provides an efficient and robust solution for FL on resource-constrained devices in sensing-computing fusion scenarios, offering both theoretical value and engineering practicality.
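A minimal sketch of the clustering-then-selection pattern, assuming clients publish small label-distribution summaries and the server groups them with k-means; this toy stands in for, and is much simpler than, the SPRE/AIDC/CICS modules.

```python
# Sketch of cluster-then-select: group clients by a small summary of their
# (Non-IID) data distributions, then pick one representative per cluster.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# 12 clients, 10 classes: two latent data distributions (Non-IID setting).
skew_a = rng.dirichlet(np.r_[np.full(5, 5.0), np.full(5, 0.2)], size=6)
skew_b = rng.dirichlet(np.r_[np.full(5, 0.2), np.full(5, 5.0)], size=6)
client_summaries = np.vstack([skew_a, skew_b])

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(client_summaries)
print(labels)   # clients with similar distributions land in the same cluster

# In-cluster selection: pick the client closest to its cluster centroid.
for c in range(2):
    members = np.where(labels == c)[0]
    centroid = client_summaries[members].mean(axis=0)
    dists = np.linalg.norm(client_summaries[members] - centroid, axis=1)
    print(f"cluster {c}: representative client {members[np.argmin(dists)]}")
```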
The proliferation of Internet of Things (IoT) devices introduces substantial security challenges. Currently, privacy constitutes a significant concern for individuals. While maintaining privacy within these systems is an essential characteristic, it often necessitates certain compromises, such as complexity and scalability, thereby complicating management efforts. The principal challenge lies in ensuring confidentiality while simultaneously preserving individuals' anonymity within the system. To address this, we present our proposed architecture for managing IoT devices using blockchain technology. The architecture works both on and off the blockchain and is integrated with dashcams and closed-circuit television (CCTV) security cameras. In this work, the videos recorded by the dashcams and CCTV security cameras are hashed through the InterPlanetary File System (IPFS), and this hash is stored in the blockchain. When accessors want to access a video, they must pass through multiple authentications, including web token authentication and verifiable credentials, to mitigate the risk of malicious users. Our contributions include the proposed framework, which uses a single key for every new video, and a novel chaincode algorithm that incorporates verifiable credentials. Analyses of the system's throughput and latency are made through stress testing. Significant advantages of the proposed architecture are shown by comparison with existing schemes. The proposed architecture features a robust design that significantly enhances the security of blockchain-enabled IoT devices while effectively mitigating the risk of a single point of failure, providing a reliable solution for security concerns in the IoT landscape. Our future endeavors will focus on scaling the system by integrating innovative methods to further enhance security measures.
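A minimal sketch of the off-chain/on-chain split, with SHA-256 standing in for the IPFS content identifier and a dictionary standing in for the ledger; the per-video key generation mirrors the single-key-per-video idea.

```python
# Sketch of the split described above: each video gets a fresh symmetric key,
# the content hash is anchored "on-chain", and access verifies the hash.
import hashlib, secrets

ledger = {}                                     # stand-in for on-chain storage

def store_video(video_bytes, video_id):
    key = secrets.token_bytes(32)               # single-use key per new video
    content_hash = hashlib.sha256(video_bytes).hexdigest()
    ledger[video_id] = content_hash             # immutable record (simplified)
    return key, content_hash

def verify_video(video_bytes, video_id):
    return hashlib.sha256(video_bytes).hexdigest() == ledger.get(video_id)

key, h = store_video(b"dashcam footage ...", "cam42/2024-06-01T08:00")
print(verify_video(b"dashcam footage ...", "cam42/2024-06-01T08:00"))  # True
print(verify_video(b"tampered footage", "cam42/2024-06-01T08:00"))     # False
```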
In the domain of Electronic Medical Records (EMRs), emerging technologies are crucial to addressing longstanding concerns surrounding transaction security and patient privacy. This paper explores the integration of smart contracts and blockchain technology as a robust framework for securing sensitive healthcare data. By leveraging the decentralized and immutable nature of blockchain, the proposed approach ensures transparency, integrity, and traceability of EMR transactions, effectively mitigating risks of unauthorized access and data tampering. Smart contracts further enhance this framework by enabling the automation and enforcement of secure transactions, eliminating reliance on intermediaries and reducing the potential for human error. This integration marks a paradigm shift in the management and exchange of healthcare information, fostering a secure and privacy-preserving ecosystem for all stakeholders. The research also evaluates the practical implementation of blockchain and smart contracts within healthcare systems, examining their real-world effectiveness in enhancing transactional security, safeguarding patient privacy, and maintaining data integrity. Findings from the study contribute valuable insights to the growing body of work on digital healthcare innovation, underscoring the potential of these technologies to transform EMR systems with high accuracy and precision. As global healthcare systems continue to face the challenge of protecting sensitive patient data, the proposed framework offers a forward-looking, scalable, and effective solution aligned with the evolving digital healthcare landscape.
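A toy sketch of the enforcement idea: a "contract" that releases an EMR pointer only to parties with recorded patient consent and logs every attempt. Python stands in for an on-chain contract language, and all identifiers are hypothetical.

```python
# Toy EMR access "contract": release a record pointer only when the requester
# has patient consent on file; append every attempt to an audit trail.
import hashlib, time

class EMRContract:
    def __init__(self):
        self.records = {}      # record_id -> (off-chain pointer, content hash)
        self.consent = set()   # (record_id, requester) pairs granted by patient
        self.log = []          # append-only audit trail

    def register(self, record_id, pointer, content):
        self.records[record_id] = (pointer, hashlib.sha256(content).hexdigest())

    def grant(self, record_id, requester):
        self.consent.add((record_id, requester))

    def access(self, record_id, requester):
        allowed = (record_id, requester) in self.consent
        self.log.append((time.time(), record_id, requester, allowed))
        return self.records[record_id][0] if allowed else None

emr = EMRContract()
emr.register("rec-1", "off-chain-pointer", b"encrypted EMR payload")
emr.grant("rec-1", "dr-lee")
print(emr.access("rec-1", "dr-lee"))    # pointer released
print(emr.access("rec-1", "mallory"))   # None; the attempt is still logged
```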
The Metaverse is a new, emerging concept that builds a virtual environment for users with Virtual Reality (VR) and blockchain technology, but it introduces privacy risks. A series of challenges now arise in Metaverse security, including massive data traffic breaches, large-scale user tracking and analysis activities, unreliable Artificial Intelligence (AI) analysis results, and social engineering attacks on people. In this work, we concentrate on Decentraland and Sandbox, two well-known Metaverse applications in Web 3.0. Our experiments analyze, for the first time, the personal privacy data exposed by Metaverse applications and services from a combined perspective of network traffic and privacy policy. We develop a lightweight traffic processing approach suitable for the Web 3.0 environment that does not rely on complex decryption or reverse engineering techniques. We propose a smart contract interaction traffic analysis method capable of retrieving user interactions with Metaverse applications and blockchain smart contracts. This method provides a new approach to de-anonymizing users' identities through Metaverse applications. Our system, METAseen, analyzes and compares network traffic with the privacy policies of Metaverse applications to identify controversial data collection practices. The consistency check experiment reveals that the data types exposed by Metaverse applications include Personally Identifiable Information (PII), device information, and Metaverse-related data. By comparing the data flows observed in network traffic with assertions made in the privacy policies of the Metaverse service providers, we discovered that more than 49% of the Metaverse data flows were not appropriately disclosed.
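The policy-consistency check reduces to comparing two sets of data types; below is a minimal sketch with illustrative categories, not METAseen's actual taxonomy.

```python
# Sketch of the consistency check: compare data types observed in traffic
# with the types declared in the privacy policy. Categories are illustrative.
observed_flows = {"wallet_address", "device_model", "os_version",
                  "avatar_position", "email", "ip_address"}
policy_declared = {"wallet_address", "device_model", "email"}

undisclosed = observed_flows - policy_declared
ratio = len(undisclosed) / len(observed_flows)
print(f"undisclosed flows: {sorted(undisclosed)} ({ratio:.0%} of observed)")
```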
With the ongoing digitalization and intelligence of power systems, there is an increasing reliance on large-scale data-driven intelligent technologies for tasks such as scheduling optimization and load forecasting. Nevertheless, power data often contains sensitive information, making it a critical industry challenge to efficiently utilize this data while ensuring privacy. Traditional Federated Learning (FL) methods can mitigate data leakage by training models locally instead of transmitting raw data. Despite this, FL still has privacy concerns, especially gradient leakage, which might expose users' sensitive information. Therefore, integrating Differential Privacy (DP) techniques is essential for stronger privacy protection. Even so, the noise from DP may reduce the performance of federated learning models. To address this challenge, this paper presents an explainability-driven power data privacy federated learning framework. It incorporates DP technology and, based on model explainability, adaptively adjusts privacy budget allocation and model aggregation, thus balancing privacy protection and model performance. The key innovations of this paper are as follows: (1) we propose an explainability-driven power data privacy federated learning framework; (2) we detail a privacy budget allocation strategy, assigning budgets per training round by gradient effectiveness and at model granularity by layer importance; (3) we design a weighted aggregation strategy that considers the SHAP value and model accuracy for quality knowledge sharing; (4) experiments show that the proposed framework outperforms traditional methods in balancing privacy protection and model performance in power load forecasting tasks.
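Innovation (3) can be sketched as a weighted FedAvg step; the blend of a SHAP-derived score with validation accuracy below is an illustrative mixing rule, not the paper's exact weighting.

```python
# Sketch of explanation-weighted aggregation: each client's contribution is
# weighted by a blend of a SHAP summary score and its validation accuracy.
import numpy as np

def aggregate(client_updates, shap_scores, accuracies, alpha=0.5):
    s = np.asarray(shap_scores) / np.sum(shap_scores)
    a = np.asarray(accuracies) / np.sum(accuracies)
    w = alpha * s + (1 - alpha) * a            # blend the two quality signals
    w = w / w.sum()
    return np.sum([wi * u for wi, u in zip(w, client_updates)], axis=0)

updates = [np.ones(4) * k for k in (1.0, 2.0, 3.0)]   # toy client updates
print(aggregate(updates, shap_scores=[0.2, 0.5, 0.3],
                accuracies=[0.90, 0.80, 0.95]))
```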
Distributed data fusion is essential for numerous applications, yet it faces significant privacy and security challenges. Federated learning (FL), as a distributed machine learning paradigm, offers enhanced data privacy protection and has attracted widespread attention. Consequently, research increasingly focuses on developing more secure FL techniques. However, in real-world scenarios involving malicious entities, the accuracy of FL results is often compromised, particularly due to the threat of collusion between two servers. To address this challenge, this paper proposes an efficient and verifiable data aggregation protocol with enhanced privacy protection. After analyzing attack methods against prior schemes, we implement key improvements. Specifically, by incorporating cascaded random numbers and perturbation terms into gradients, we strengthen the privacy protection afforded by polynomial masking, effectively preventing information leakage. Furthermore, our protocol features an enhanced verification mechanism capable of detecting collusive behavior between the two servers. Accuracy testing on the MNIST and CIFAR-10 datasets demonstrates that our protocol maintains accuracy comparable to the Federated Averaging algorithm. In efficiency comparisons, while incurring only a marginal increase in verification overhead relative to the baseline scheme, our protocol achieves an average improvement of 93.13% in privacy protection and verification overhead compared to the state-of-the-art scheme. This result highlights its optimal balance between overall overhead and functionality. A current limitation is that the verification mechanism cannot precisely pinpoint the source of anomalies within aggregated results when server-side malicious behavior occurs; addressing this limitation will be a focus of future research.
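The masking idea can be sketched with pairwise-cancelling random masks, a simplified stand-in for the paper's cascaded random numbers and perturbation terms.

```python
# Sketch of mask-based gradient protection: pairs of clients add equal and
# opposite random masks, so individual uploads look random but the sum over
# all clients is exact.
import numpy as np

rng = np.random.default_rng(7)
gradients = [rng.normal(size=5) for _ in range(4)]        # 4 clients

masks = [np.zeros(5) for _ in range(4)]
for i in range(4):
    for j in range(i + 1, 4):
        m = rng.normal(size=5)          # shared pairwise randomness
        masks[i] += m                   # client i adds the mask ...
        masks[j] -= m                   # ... client j subtracts it

uploads = [g + m for g, m in zip(gradients, masks)]       # what servers see
print(np.allclose(sum(uploads), sum(gradients)))          # True: masks cancel
```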
文摘The problem of privacy in social networks is well documented within literature;users have pri- vacy concerns however, they consistently disclose their sensitive information and leave it open to unintended third parties. While numerous causes of poor behaviour have been suggested by re- search the role of the User Interface (UI) and the system itself is underexplored. The field of Per- suasive Technology would suggest that Social Network Systems persuade users to deviate from their normal or habitual behaviour. This paper makes the case that the UI can be used as the basis for user empowerment by informing them of their privacy at the point of interaction and remind- ing them of their privacy needs. The Theory of Planned Behaviour is introduced as a potential theoretical foundation for exploring the psychology behind privacy behaviour as it describes the salient factors that influence intention and action. Based on these factors of personal attitude, subjective norms and perceived control, a series of UIs are presented and implemented in con- trolled experiments examining their effect on personal information disclosure. This is combined with observations and interviews with the participants. Results from this initial, pilot experiment suggest groups with privacy salient information embedded exhibit less disclosure than the control group. This work reviews this approach as a method for exploring privacy behaviour and propos- es further work required.
基金supported by Systematic Major Project of Shuohuang Railway Development Co.,Ltd.,National Energy Group(Grant Number:SHTL-23-31)Beijing Natural Science Foundation(U22B2027).
文摘In the realm of Intelligent Railway Transportation Systems,effective multi-party collaboration is crucial due to concerns over privacy and data silos.Vertical Federated Learning(VFL)has emerged as a promising approach to facilitate such collaboration,allowing diverse entities to collectively enhance machine learning models without the need to share sensitive training data.However,existing works have highlighted VFL’s susceptibility to privacy inference attacks,where an honest but curious server could potentially reconstruct a client’s raw data from embeddings uploaded by the client.This vulnerability poses a significant threat to VFL-based intelligent railway transportation systems.In this paper,we introduce SensFL,a novel privacy-enhancing method to against privacy inference attacks in VFL.Specifically,SensFL integrates regularization of the sensitivity of embeddings to the original data into the model training process,effectively limiting the information contained in shared embeddings.By reducing the sensitivity of embeddings to the original data,SensFL can effectively resist reverse privacy attacks and prevent the reconstruction of the original data from the embeddings.Extensive experiments were conducted on four distinct datasets and three different models to demonstrate the efficacy of SensFL.Experiment results show that SensFL can effectively mitigate privacy inference attacks while maintaining the accuracy of the primary learning task.These results underscore SensFL’s potential to advance privacy protection technologies within VFL-based intelligent railway systems,addressing critical security concerns in collaborative learning environments.
文摘This paper studies a strongly convergent inertial forward-backward-forward algorithm for the variational inequality problem in Hilbert spaces.In our convergence analysis,we do not assume the on-line rule of the inertial parameters and the iterates,which have been assumed by several authors whenever a strongly convergent algorithm with an inertial extrapolation step is proposed for a variational inequality problem.Consequently,our proof arguments are different from what is obtainable in the relevant literature.Finally,we give numerical tests to confirm the theoretical analysis and show that our proposed algorithm is superior to related ones in the literature.
基金supported by the National Natural Science Foundation of China(No.41805105)。
文摘On-line chemical characterization of atmospheric particulate matter(PM)with soft ionization technique and ultrahigh-resolution Mass Spectrometry(UHRMS)provides molecular information of organic constituents in real time.Here we describe the development and application of an automatic measurement system that incorporates PM_(2.5)sampling,thermal desorption,atmospheric pressure photoionization,and UHRMS analysis.Molecular formulas of detected organic compounds were deducted from the accurate(±10 ppm)molecular weights obtained at a mass resolution of 100,000,allowing the identification of small organic compounds in PM_(2.5).Detection efficiencies of 28 standard compounds were determined and we found a high sensitivity and selectivity towards organic amines with limits of detection below 10 pg.As a proof of principle,PM_(2.5)samples collected off-line in winter in the urban area of Beijing were analyzed using the Ionization Module and HRMS of the system.The automatic system was then applied to conduct on-line measurements during the summer time at a time resolution of 2 hr.The detected organic compounds comprised mainly CHON and CHN compounds below 350 m/z.Pronounced seasonal variations in elemental composition were observed with shorter carbon backbones and higher O/C ratios in summer than that in winter.This result is consistent with stronger photochemical reactions and thus a higher oxidation state of organics in summer.Diurnal variation in signal intensity of each formula provides crucial information to reveal its source and formation pathway.In summary,the automatic measurement system serves as an important tool for the on-line characterization and identification of organic species in PM_(2.5).
文摘Recently,the application of Wireless Sensor Networks(WSNs)has been increasing rapidly.It requires privacy preserving data aggregation protocols to secure the data from compromises.Preserving privacy of the sensor data is a challenging task.This paper presents a non-linear regression-based data aggregation protocol for preserving privacy of the sensor data.The proposed protocol uses non-linear regression functions to represent the sensor data collected from the sensor nodes.Instead of sending the complete data to the cluster head,the sensor nodes only send the coefficients of the non-linear function.This will reduce the communication overhead of the network.The data aggregation is performed on the masked coefficients and the sink node is able to retrieve the approximated results over the aggregated data.The analysis of experiment results shows that the proposed protocol is able to minimize communication overhead,enhance data aggregation accuracy,and preserve data privacy.
基金supported by the National Key R&D Program of China under Grant No.2022YFB3103500the National Natural Science Foundation of China under Grants No.62402087 and No.62020106013+3 种基金the Sichuan Science and Technology Program under Grant No.2023ZYD0142the Chengdu Science and Technology Program under Grant No.2023-XT00-00002-GXthe Fundamental Research Funds for Chinese Central Universities under Grants No.ZYGX2020ZB027 and No.Y030232063003002the Postdoctoral Innovation Talents Support Program under Grant No.BX20230060.
文摘The integration of artificial intelligence(AI)technology,particularly large language models(LLMs),has become essential across various sectors due to their advanced language comprehension and generation capabilities.Despite their transformative impact in fields such as machine translation and intelligent dialogue systems,LLMs face significant challenges.These challenges include safety,security,and privacy concerns that undermine their trustworthiness and effectiveness,such as hallucinations,backdoor attacks,and privacy leakage.Previous works often conflated safety issues with security concerns.In contrast,our study provides clearer and more reasonable definitions for safety,security,and privacy within the context of LLMs.Building on these definitions,we provide a comprehensive overview of the vulnerabilities and defense mechanisms related to safety,security,and privacy in LLMs.Additionally,we explore the unique research challenges posed by LLMs and suggest potential avenues for future research,aiming to enhance the robustness and reliability of LLMs in the face of emerging threats.
基金supported by the National Key R&D Program of China(No.2023YFB2703700)the National Natural Science Foundation of China(Nos.U21A20465,62302457,62402444,62172292)+4 种基金the Fundamental Research Funds of Zhejiang Sci-Tech University(Nos.23222092-Y,22222266-Y)the Program for Leading Innovative Research Team of Zhejiang Province(No.2023R01001)the Zhejiang Provincial Natural Science Foundation of China(Nos.LQ24F020008,LQ24F020012)the Foundation of State Key Laboratory of Public Big Data(No.[2022]417)the“Pioneer”and“Leading Goose”R&D Program of Zhejiang(No.2023C01119).
文摘As smart grid technology rapidly advances,the vast amount of user data collected by smart meter presents significant challenges in data security and privacy protection.Current research emphasizes data security and user privacy concerns within smart grids.However,existing methods struggle with efficiency and security when processing large-scale data.Balancing efficient data processing with stringent privacy protection during data aggregation in smart grids remains an urgent challenge.This paper proposes an AI-based multi-type data aggregation method designed to enhance aggregation efficiency and security by standardizing and normalizing various data modalities.The approach optimizes data preprocessing,integrates Long Short-Term Memory(LSTM)networks for handling time-series data,and employs homomorphic encryption to safeguard user privacy.It also explores the application of Boneh Lynn Shacham(BLS)signatures for user authentication.The proposed scheme’s efficiency,security,and privacy protection capabilities are validated through rigorous security proofs and experimental analysis.
文摘The rapid adoption of machine learning in sensitive domains,such as healthcare,finance,and government services,has heightened the need for robust,privacy-preserving techniques.Traditional machine learning approaches lack built-in privacy mechanisms,exposing sensitive data to risks,which motivates the development of Privacy-Preserving Machine Learning(PPML)methods.Despite significant advances in PPML,a comprehensive and focused exploration of Secure Multi-Party Computing(SMPC)within this context remains underdeveloped.This review aims to bridge this knowledge gap by systematically analyzing the role of SMPC in PPML,offering a structured overviewof current techniques,challenges,and future directions.Using a semi-systematicmapping studymethodology,this paper surveys recent literature spanning SMPC protocols,PPML frameworks,implementation approaches,threat models,and performance metrics.Emphasis is placed on identifying trends,technical limitations,and comparative strengths of leading SMPC-based methods.Our findings reveal thatwhile SMPCoffers strong cryptographic guarantees for privacy,challenges such as computational overhead,communication costs,and scalability persist.The paper also discusses critical vulnerabilities,practical deployment issues,and variations in protocol efficiency across use cases.
基金funded by the Science and Technology Project of State Grid Corporation of China(Research on the theory and method of multiparty encrypted computation in the edge fusion environment of power IoT,No.5700-202358592A-3-2-ZN)the National Natural Science Foundation of China(Grant Nos.62272056,62372048,62371069).
文摘Federated learning effectively alleviates privacy and security issues raised by the development of artificial intelligence through a distributed training architecture.Existing research has shown that attackers can compromise user privacy and security by stealing model parameters.Therefore,differential privacy is applied in federated learning to further address malicious issues.However,the addition of noise and the update clipping mechanism in differential privacy jointly limit the further development of federated learning in privacy protection and performance optimization.Therefore,we propose an adaptive adjusted differential privacy federated learning method.First,a dynamic adaptive privacy budget allocation strategy is proposed,which flexibly adjusts the privacy budget within a given range based on the client’s data volume and training requirements,thereby alleviating the loss of privacy budget and the magnitude of model noise.Second,a longitudinal clipping differential privacy strategy is proposed,which based on the differences in factors that affect parameter updates,uses sparse methods to trim local updates,thereby reducing the impact of privacy pruning steps on model accuracy.The two strategies work together to ensure user privacy while the effect of differential privacy on model accuracy is reduced.To evaluate the effectiveness of our method,we conducted extensive experiments on benchmark datasets,and the results showed that our proposed method performed well in terms of performance and privacy protection.
基金supported by the National Natural Science Foundation of China(Grant No.62462022)the Hainan Province Science and Technology Special Fund(Grants No.ZDYF2022GXJS229).
文摘Deep learning’s widespread dependence on large datasets raises privacy concerns due to the potential presence of sensitive information.Differential privacy stands out as a crucial method for preserving privacy,garnering significant interest for its ability to offer robust and verifiable privacy safeguards during data training.However,classic differentially private learning introduces the same level of noise into the gradients across training iterations,which affects the trade-off between model utility and privacy guarantees.To address this issue,an adaptive differential privacy mechanism was proposed in this paper,which dynamically adjusts the privacy budget at the layer-level as training progresses to resist member inference attacks.Specifically,an equal privacy budget is initially allocated to each layer.Subsequently,as training advances,the privacy budget for layers closer to the output is reduced(adding more noise),while the budget for layers closer to the input is increased.The adjustment magnitude depends on the training iterations and is automatically determined based on the iteration count.This dynamic allocation provides a simple process for adjusting privacy budgets,alleviating the burden on users to tweak parameters and ensuring that privacy preservation strategies align with training progress.Extensive experiments on five well-known datasets indicate that the proposed method outperforms competing methods in terms of accuracy and resilience against membership inference attacks.
基金supported by National Nature Science Foundation of China(No.62361036)Nature Science Foundation of Gansu Province(No.22JR5RA279).
文摘To realize dynamic statistical publishing and protection of location-based data privacy,this paper proposes a differential privacy publishing algorithm based on adaptive sampling and grid clustering and adjustment.The PID control strategy is combined with the difference in data variation to realize the dynamic adjustment of the data publishing intervals.The spatial-temporal correlations of the adjacent snapshots are utilized to design the grid clustering and adjustment algorithm,which facilitates saving the execution time of the publishing process.The budget distribution and budget absorption strategies are improved to form the sliding window-based differential privacy statistical publishing algorithm,which realizes continuous statistical publishing and privacy protection and improves the accuracy of published data.Experiments and analysis on large datasets of actual locations show that the privacy protection algorithm proposed in this paper is superior to other existing algorithms in terms of the accuracy of adaptive sampling time,the availability of published data,and the execution efficiency of data publishing methods.
文摘The widespread application of artificial intelligence(AI)technology in exams has significantly improved the efficiency and fairness of exams;it has also brought challenges of ethics and privacy protection.The article analyzes the fairness,transparency,and privacy protection issues caused by AI in exams and proposes strategic solutions.This article aims to provide guidance for the rational application of AI technology in exams,ensuring a balance between technological progress and ethical protection by strengthening laws and regulations,enhancing technological transparency,strengthening candidates’privacy rights,and improving the management measures of educational examination institutions.
基金supported by the International Scientific and Technological Cooperation Project of Huangpu and Development Districts in Guangzhou(2023GH17)the National Science and Technology Council in Taiwan under grant number NSTC-113-2224-E-027-001,Private Funding(PV009-2023)the KW IPPP(Research Maintenance Fee)Individual/Centre/Group(RMF1506-2021)at Universiti Malaya,Malaysia.
文摘Blockchain interoperability enables seamless communication and asset transfer across isolated permissioned blockchain systems,but it introduces significant security and privacy vulnerabilities.This review aims to systematically assess the security and privacy landscape of interoperability protocols for permissioned blockchains,identifying key properties,attack vectors,and countermeasures.Using PRISMA 2020 guidelines,we analysed 56 peerreviewed studies published between 2020 and 2025,retrieved from Scopus,ScienceDirect,Web of Science,and IEEE Xplore.The review focused on interoperability protocols for permissioned blockchains with security and privacy analyses,including only English-language journal articles and conference proceedings.Risk of bias in the included studies was assessed using the MMAT.Methods for presenting and synthesizing results included descriptive analysis,bibliometric analysis,and content analysis,with findings organized into tables,charts,and comparative summaries.The review classifies interoperability protocols into relay,sidechain,notary scheme,HTLC,and hybrid types and identifies 18 security and privacy properties along with 31 known attack types.Relay-based protocols showed the broadest security coverage,while HTLC and notary schemes demonstrated significant security gaps.Notably,93% of studies examined fewer than four properties or attack types,indicating a fragmented research landscape.The review identifies underexplored areas such as ACID properties,decentralization,and cross-chain attack resilience.It further highlights effective countermeasures,including cryptographic techniques,trusted execution environments,zero-knowledge proofs,and decentralized identity schemes.The findings suggest that despite growing adoption,current interoperability protocols lack comprehensive security evaluations.More holistic research is needed to ensure the resilience,trustworthiness,and scalability of cross-chain operations in permissioned blockchain ecosystems.
文摘The rapid development and widespread adoption of massive open online courses(MOOCs)have indeed had a significant impact on China’s education curriculum.However,the problem of fake reviews and ratings on the platform has seriously affected the authenticity of course evaluations and user trust,requiring effective anomaly detection techniques for screening.The textual characteristics of MOOCs reviews,such as varying lengths and diverse emotional tendencies,have brought complexity to text analysis.Traditional rule-based analysis methods are often inadequate in dealing with such unstructured data.We propose a Differential Privacy-Enabled Text Convolutional Neural Network(DP-TextCNN)framework,aiming to achieve high-precision identification of outliers in MOOCs course reviews and ratings while protecting user privacy.This framework leverages the advantages of Convolutional Neural Networks(CNN)in text feature extraction and combines differential privacy techniques.It balances data privacy protection with model performance by introducing controlled random noise during the data preprocessing stage.By embedding differential privacy into the model training process,we ensure the privacy security of the framework when handling sensitive data,while maintaining a high recognition accuracy.Experimental results indicate that the DP-TextCNN framework achieves an exceptional accuracy of over 95%in identifying fake reviews on the dataset,this outcome not only verifies the applicability of differential privacy techniques in TextCNN but also underscores its potential in handling sensitive educational data.Additionally,we analyze the specific impact of differential privacy parameters on framework performance,offering theoretical support and empirical analysis to strike an optimal balance between privacy protection and framework efficiency.
基金funded by the State Grid Corporation Science and Technology Project“Research and Application of Key Technologies for Integrated Sensing and Computing for Intelligent Operation of Power Grid”(Grant No.5700-202318596A-3-2-ZN).
文摘With the deep integration of edge computing,5G and Artificial Intelligence ofThings(AIoT)technologies,the large-scale deployment of intelligent terminal devices has given rise to data silos and privacy security challenges in sensing-computing fusion scenarios.Traditional federated learning(FL)algorithms face significant limitations in practical applications due to client drift,model bias,and resource constraints under non-independent and identically distributed(Non-IID)data,as well as the computational overhead and utility loss caused by privacy-preserving techniques.To address these issues,this paper proposes an Efficient and Privacy-enhancing Clustering Federated Learning method(FedEPC).This method introduces a dual-round client selection mechanism to optimize training.First,the Sparsity-based Privacy-preserving Representation Extraction Module(SPRE)and Adaptive Isomorphic Devices Clustering Module(AIDC)cluster clients based on privacy-sensitive features.Second,the Context-aware Incluster Client Selection Module(CICS)dynamically selects representative devices for training,ensuring heterogeneous data distributions are fully represented.By conducting federated training within clusters and aggregating personalized models,FedEPC effectively mitigates weight divergence caused by data heterogeneity,reduces the impact of client drift and straggler issues.Experimental results demonstrate that FedEPC significantly improves test accuracy in highly Non-IID data scenarios compared to FedAvg and existing clustering FL methods.By ensuring privacy security,FedEPC provides an efficient and robust solution for FL in resource-constrained devices within sensing-computing fusion scenarios,offering both theoretical value and engineering practicality.
基金supported by the Institute of Information&Communications Technology Planning&Evaluation(IITP)(Project Nos.RS-2024-00438551,30%,2022-11220701,30%,2021-0-01816,30%)the National Research Foundation of Korea(NRF)grant funded by the Korean Government(Project No.RS-2023-00208460,10%).
Abstract: The proliferation of Internet of Things (IoT) devices introduces substantial security challenges, and privacy is now a significant concern for individuals. While maintaining privacy within these systems is essential, it often necessitates compromises in complexity and scalability, complicating management efforts. The principal challenge lies in ensuring confidentiality while simultaneously preserving individuals' anonymity within the system. To address this, we present an architecture for managing IoT devices using blockchain technology. The architecture operates both on and off the blockchain and is integrated with dashcams and closed-circuit television (CCTV) security cameras. Videos recorded by these cameras are hashed through the InterPlanetary File System (IPFS), and the resulting hash is stored on the blockchain. When accessors want to retrieve a video, they must pass multiple authentications, including web token authentication and verifiable credentials, to mitigate the risk of malicious users. Our contributions include a framework in which every new video is protected by its own key, and a novel chaincode algorithm that incorporates verifiable credentials. Stress testing is used to analyze the system's throughput and latency, and comparisons with existing schemes show significant advantages. The proposed architecture features a robust design that substantially enhances the security of blockchain-enabled IoT devices while effectively mitigating the risk of a single point of failure, providing a reliable solution for security concerns in the IoT landscape. Future work will focus on scaling the system and integrating further security measures.
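The core recording-and-verification loop can be illustrated in a few lines. The sketch below uses SHA-256 as a stand-in for an IPFS content identifier and an in-memory list as a stand-in for the blockchain; a real deployment would call an IPFS node and a chaincode endpoint instead, and would add the token and verifiable-credential checks described above.

```python
# Content-address each video, append the hash to a toy hash-chained ledger,
# and let an accessor verify integrity. All structures are stand-ins.
import hashlib
import json
import time

ledger = []  # stand-in for the blockchain

def store_video(video_bytes: bytes, camera_id: str) -> str:
    digest = hashlib.sha256(video_bytes).hexdigest()  # IPFS-CID stand-in
    block = {
        "camera_id": camera_id,
        "video_hash": digest,
        "timestamp": time.time(),
        "prev_hash": ledger[-1]["block_hash"] if ledger else None,
    }
    block["block_hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()).hexdigest()
    ledger.append(block)
    return digest

def verify_video(video_bytes: bytes, claimed_hash: str) -> bool:
    """An accessor re-hashes the retrieved video and checks the ledger."""
    return (hashlib.sha256(video_bytes).hexdigest() == claimed_hash
            and any(b["video_hash"] == claimed_hash for b in ledger))

h = store_video(b"dashcam footage bytes...", camera_id="cam-01")
print(verify_video(b"dashcam footage bytes...", h))  # True
print(verify_video(b"tampered footage", h))          # False
```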
Abstract: In the domain of Electronic Medical Records (EMRs), emerging technologies are crucial to addressing longstanding concerns surrounding transaction security and patient privacy. This paper explores the integration of smart contracts and blockchain technology as a robust framework for securing sensitive healthcare data. By leveraging the decentralized and immutable nature of blockchain, the proposed approach ensures transparency, integrity, and traceability of EMR transactions, effectively mitigating the risks of unauthorized access and data tampering. Smart contracts further strengthen this framework by automating and enforcing secure transactions, eliminating reliance on intermediaries and reducing the potential for human error. This integration marks a paradigm shift in the management and exchange of healthcare information, fostering a secure and privacy-preserving ecosystem for all stakeholders. The research also evaluates the practical implementation of blockchain and smart contracts within healthcare systems, examining their real-world effectiveness in enhancing transactional security, safeguarding patient privacy, and maintaining data integrity. The findings contribute valuable insights to the growing body of work on digital healthcare innovation, underscoring the potential of these technologies to transform EMR systems with high accuracy and precision. As global healthcare systems continue to face the challenge of protecting sensitive patient data, the proposed framework offers a forward-looking, scalable, and effective solution aligned with the evolving digital healthcare landscape.
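The kind of enforcement a smart contract provides can be simulated off-chain. The sketch below is a Python stand-in for the contract logic only (grant-based access plus an append-only audit trail); it is not the paper's contract, and the class, method names, and policy are hypothetical.

```python
# Toy simulation of smart-contract-style EMR access control:
# the patient grants access, every action is logged, reads return a
# tamper-evidence hash instead of handling the raw record directly.
import hashlib
import time

class EMRAccessContract:
    def __init__(self, owner: str):
        self.owner = owner
        self.grants = set()   # accessors explicitly authorized by the patient
        self.audit_log = []   # append-only trace, mimicking on-chain immutability

    def grant(self, caller: str, accessor: str):
        assert caller == self.owner, "only the patient can grant access"
        self.grants.add(accessor)
        self._log("grant", caller, accessor)

    def read_record(self, caller: str, record: bytes) -> str:
        if caller != self.owner and caller not in self.grants:
            self._log("denied", caller, None)
            raise PermissionError("access denied by contract policy")
        self._log("read", caller, None)
        return hashlib.sha256(record).hexdigest()  # tamper-evidence hash

    def _log(self, action, caller, target):
        self.audit_log.append((time.time(), action, caller, target))

contract = EMRAccessContract(owner="patient-7")
contract.grant("patient-7", "dr-lee")
print(contract.read_record("dr-lee", b"EMR payload"))  # allowed, returns hash
```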
Funding: Supported by the National Key R&D Program of China (2021YFB2700200); the National Natural Science Foundation of China (U21B2021, 61932014, 61972018, 62202027); the Young Elite Scientists Sponsorship Program by CAST (2022QNRC001); the Beijing Natural Science Foundation (M23016); and the Yunnan Key Laboratory of Blockchain Application Technology Open Project (202105AG070005, YNB202206).
Abstract: The Metaverse is an emerging concept that builds a virtual environment for users through Virtual Reality (VR) and blockchain technology, but it introduces privacy risks. A series of challenges now arise in Metaverse security, including massive data traffic breaches, large-scale user tracking and analysis activities, unreliable Artificial Intelligence (AI) analysis results, and social engineering attacks on people. In this work, we concentrate on Decentraland and Sandbox, two well-known Metaverse applications in Web 3.0. Our experiments analyze, for the first time, the personal privacy data exposed by Metaverse applications and services from a combined perspective of network traffic and privacy policy. We develop a lightweight traffic processing approach suitable for the Web 3.0 environment that does not rely on complex decryption or reverse engineering techniques, and we propose a smart contract interaction traffic analysis method capable of retrieving user interactions with Metaverse applications and blockchain smart contracts. This method provides a new approach to de-anonymizing users' identities through Metaverse applications. Our system, METAseen, analyzes and compares network traffic with the privacy policies of Metaverse applications to identify controversial data collection practices. The consistency check reveals that the data types exposed by Metaverse applications include Personally Identifiable Information (PII), device information, and Metaverse-related data. By comparing the data flows observed in network traffic with the assertions made in the privacy policies of Metaverse service providers, we discovered that more than 49% of the Metaverse data flows were not appropriately disclosed.
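Stripped of the traffic-capture machinery, the consistency check reduces to set logic over data types. The sketch below illustrates that reduction; the observed and declared sets are invented examples, not measurements from Decentraland or Sandbox.

```python
# Minimal consistency check: data types observed in traffic vs. data types
# declared in the privacy policy. Example values are illustrative only.
OBSERVED_FLOWS = {
    "wallet_address", "device_model", "ip_address",
    "avatar_position", "email",
}
POLICY_DECLARED = {"email", "ip_address", "device_model"}

def consistency_check(observed, declared):
    undisclosed = observed - declared   # collected but never declared -> controversial
    unused = declared - observed        # declared but not seen in traffic
    ratio = len(undisclosed) / len(observed) if observed else 0.0
    return undisclosed, unused, ratio

undisclosed, unused, ratio = consistency_check(OBSERVED_FLOWS, POLICY_DECLARED)
print(f"undisclosed flows: {sorted(undisclosed)}")  # wallet_address, avatar_position
print(f"undisclosed share: {ratio:.0%}")            # 40% in this toy example
```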
Abstract: With the ongoing digitalization and intelligence of power systems, there is an increasing reliance on large-scale data-driven intelligent technologies for tasks such as scheduling optimization and load forecasting. Nevertheless, power data often contains sensitive information, making it a critical industry challenge to utilize this data efficiently while ensuring privacy. Traditional Federated Learning (FL) methods can mitigate data leakage by training models locally instead of transmitting raw data. Despite this, FL still has privacy weaknesses, especially gradient leakage, which can expose users' sensitive information; integrating Differential Privacy (DP) techniques is therefore essential for stronger protection. Even so, the noise introduced by DP may degrade model performance. To address this challenge, this paper presents an explainability-driven federated learning framework for power data privacy. It incorporates DP and, guided by model explainability, adaptively adjusts privacy budget allocation and model aggregation, thus balancing privacy protection and model performance. The key innovations are as follows: (1) we propose an explainability-driven power data privacy federated learning framework; (2) we detail a privacy budget allocation strategy that assigns budgets per training round by gradient effectiveness and, at model granularity, by layer importance; (3) we design a weighted aggregation strategy that considers SHAP values and model accuracy for quality knowledge sharing; (4) experiments show the proposed framework outperforms traditional methods in balancing privacy protection and model performance on power load forecasting tasks.
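The aggregation strategy in innovation (3) can be sketched directly: blend each client's SHAP-derived importance with its local accuracy into a single quality weight, then average the models. In the sketch below the SHAP scores are given as plain numbers (a real system would compute them, e.g. with the shap library), and the mixing coefficient alpha is an assumption.

```python
# Sketch of SHAP-and-accuracy weighted model aggregation.
# shap_scores and accuracies are assumed inputs; alpha is illustrative.
import numpy as np

def aggregate(client_weights, shap_scores, accuracies, alpha=0.5):
    """Average client models, weighting by SHAP importance and accuracy."""
    shap_scores = np.asarray(shap_scores, dtype=float)
    accuracies = np.asarray(accuracies, dtype=float)
    quality = (alpha * shap_scores / shap_scores.sum()
               + (1 - alpha) * accuracies / accuracies.sum())
    quality /= quality.sum()  # defensive renormalization to a convex combination
    return sum(q * w for q, w in zip(quality, client_weights))

# Three clients with toy 2-parameter "models".
models = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([0.5, 0.5])]
print(aggregate(models, shap_scores=[0.7, 0.2, 0.1], accuracies=[0.9, 0.8, 0.85]))
```

Weighting by explanation quality rather than only by dataset size (as FedAvg does) is what lets higher-quality local knowledge dominate the global model.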
Funding: Supported by the National Key R&D Program of China (2023YFB3106100), the National Natural Science Foundation of China (62102452, 62172436), and the Natural Science Foundation of Shaanxi Province (2023-JCYB-584).
Abstract: Distributed data fusion is essential for numerous applications, yet it faces significant privacy and security challenges. Federated learning (FL), as a distributed machine learning paradigm, offers enhanced data privacy protection and has attracted widespread attention; consequently, research increasingly focuses on developing more secure FL techniques. However, in real-world scenarios involving malicious entities, the accuracy of FL results is often compromised, particularly under the threat of collusion between two servers. To address this challenge, this paper proposes an efficient and verifiable data aggregation protocol with enhanced privacy protection. After analyzing attack methods against prior schemes, we implement key improvements. Specifically, by incorporating cascaded random numbers and perturbation terms into the gradients, we strengthen the privacy protection afforded by polynomial masking, effectively preventing information leakage. Furthermore, our protocol features an enhanced verification mechanism capable of detecting collusive behavior between the two servers. Accuracy testing on the MNIST and CIFAR-10 datasets demonstrates that our protocol maintains accuracy comparable to the Federated Averaging algorithm. In efficiency comparisons, while incurring only a marginal increase in verification overhead relative to the baseline scheme, our protocol achieves an average improvement of 93.13% in privacy-protection and verification overhead compared to the state-of-the-art scheme, highlighting its balance between overall overhead and functionality. A current limitation is that the verification mechanism cannot precisely pinpoint the source of anomalies within aggregated results when server-side malicious behavior occurs; addressing this limitation will be a focus of future research.
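The masking principle behind such protocols is easy to demonstrate. The sketch below shows standard pairwise masking, where each pair of clients shares a random mask that one adds and the other subtracts: individual uploads look random to the server, yet the masks cancel in the aggregate. This is the general idea in the spirit of the cascaded random numbers described above, not the paper's exact construction, which also includes perturbation terms and two-server verification.

```python
# Pairwise gradient masking: uploads are individually unintelligible,
# but the masks cancel so the server still recovers the true sum.
import numpy as np

def masked_uploads(gradients, seed=0):
    rng = np.random.default_rng(seed)
    n, d = len(gradients), len(gradients[0])
    masked = [g.astype(float).copy() for g in gradients]
    for i in range(n):
        for j in range(i + 1, n):
            m = rng.normal(size=d)   # mask m_ij shared by clients i and j
            masked[i] += m           # client i adds it
            masked[j] -= m           # client j subtracts it -> cancels in the sum
    return masked

grads = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
uploads = masked_uploads(grads)
print(uploads[0])      # looks random to the server
print(sum(uploads))    # ~[9. 12.], the true aggregate up to float rounding
```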