This study addresses the risk of privacy leakage during the transmission and sharing of multimodal data in smart grid substations by proposing a three-tier privacy-preserving architecture based on asynchronous federated learning. The framework integrates blockchain technology, the InterPlanetary File System (IPFS) for distributed storage, and a dynamic differential privacy mechanism to achieve collaborative security across the storage, service, and federated coordination layers. It accommodates both multimodal data classification and object detection tasks, enabling the identification and localization of key targets and abnormal behaviors in substation scenarios while ensuring privacy protection. This effectively mitigates the single-point failures and model leakage issues inherent in centralized architectures. A dynamically adjustable differential privacy mechanism is introduced to allocate privacy budgets according to client contribution levels and upload frequencies, achieving a personalized balance between model performance and privacy protection. Multi-dimensional experimental evaluations, including classification accuracy, F1-score, encryption latency, and aggregation latency, verify the security and efficiency of the proposed architecture. The improved CNN model achieves 72.34% accuracy and an F1-score of 0.72 in object detection and classification tasks on infrared surveillance imagery, effectively identifying typical risk events such as not wearing safety helmets and unauthorized intrusion, while maintaining an aggregation latency of only 1.58 s and a query latency of 80.79 ms. Compared with traditional static differential privacy and centralized approaches, the proposed method demonstrates significant advantages in accuracy, latency, and security, providing a new technical paradigm for efficient, secure data sharing, object detection, and privacy preservation in smart grid substations.
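The abstract does not give the budget-allocation rule itself; a minimal sketch of contribution- and frequency-weighted budget splitting under the Laplace mechanism (the multiplicative weighting, the client scores, and all parameter values below are assumptions for illustration, not the paper's formula):

```python
def allocate_budgets(clients, total_eps):
    """Split a global privacy budget across clients in proportion to a
    combined contribution/upload-frequency score (illustrative weighting)."""
    scores = {cid: c["contribution"] * c["upload_freq"] for cid, c in clients.items()}
    total = sum(scores.values())
    return {cid: total_eps * s / total for cid, s in scores.items()}

def laplace_scale(sensitivity, eps):
    # Laplace mechanism: noise scale b = sensitivity / epsilon, so a larger
    # personal budget means less noise on that client's contribution.
    return sensitivity / eps

clients = {
    "c1": {"contribution": 0.9, "upload_freq": 10},  # active, high-quality client
    "c2": {"contribution": 0.4, "upload_freq": 2},   # infrequent contributor
}
budgets = allocate_budgets(clients, total_eps=1.0)
```

The key property this illustrates is that the allocation is personalized but conserves the global budget.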
With the increasing complexity of malware attack techniques, traditional detection methods face significant challenges, such as privacy preservation, data heterogeneity, and a lack of category information. To address these issues, we propose Federated Dynamic Prototype Learning (FedDPL) for malware classification by integrating Federated Learning with a specifically designed K-means algorithm. Under the Federated Learning framework, model training occurs locally without data sharing, effectively protecting user data privacy and preventing the leakage of sensitive information. Furthermore, to tackle the challenges of data heterogeneity and the lack of category information, FedDPL introduces a dynamic prototype learning mechanism, which adaptively adjusts the clustering prototypes in terms of position and number. Thus, the dependency on predefined category numbers in typical K-means and its variants can be significantly reduced, resulting in improved clustering performance. Theoretically, this provides more accurate detection of malicious behavior. Experimental results confirm that FedDPL excels in handling malware classification tasks, demonstrating superior accuracy, robustness, and privacy protection.
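FedDPL's actual prototype-adjustment rule is not specified in the abstract; a toy 1-D sketch of the general idea (prototypes move to cluster means, and a cluster splits when it grows too wide, so the prototype count adapts rather than being fixed in advance) under an assumed radius threshold:

```python
def assign(points, protos):
    # nearest-prototype assignment (1-D for brevity)
    groups = {i: [] for i in range(len(protos))}
    for x in points:
        i = min(range(len(protos)), key=lambda j: abs(x - protos[j]))
        groups[i].append(x)
    return groups

def dynamic_prototypes(points, protos, radius=2.0, rounds=5):
    """Toy dynamic-prototype loop: move prototypes to cluster means and
    split any cluster whose spread exceeds `radius` (illustrative rule)."""
    for _ in range(rounds):
        groups = assign(points, protos)
        new = []
        for members in groups.values():
            if not members:
                continue
            mean = sum(members) / len(members)
            spread = max(abs(x - mean) for x in members)
            if spread > radius and len(members) > 1:
                new.extend([min(members), max(members)])  # split a wide cluster
            else:
                new.append(mean)
        protos = new
    return protos

# two well-separated groups, but we start with a single prototype
data = [0.1, 0.2, 0.3, 10.0, 10.1, 10.2]
final = dynamic_prototypes(data, protos=[5.0])
```

Starting from one prototype, the loop discovers that two are needed, which is the property that reduces dependence on a predefined category count.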
The generation of synthetic trajectories has become essential in various fields for analyzing complex movement patterns. However, the use of real-world trajectory data poses significant privacy risks, such as location re-identification and correlation attacks. To address these challenges, privacy-preserving trajectory generation methods are critical for applications relying on sensitive location data. This paper introduces DPIL-Traj, an advanced framework designed to generate synthetic trajectories while achieving a superior balance between data utility and privacy preservation. Firstly, the framework incorporates Differential Privacy Clustering, which anonymizes trajectory data by applying differential privacy techniques that add noise, ensuring the protection of sensitive user information. Secondly, Imitation Learning is used to replicate decision-making behaviors observed in real-world trajectories. By learning from expert trajectories, this component generates synthetic data that closely mimics real-world decision-making processes while optimizing the quality of the generated trajectories. Finally, Markov-based Trajectory Generation is employed to capture and maintain the inherent temporal dynamics of movement patterns. Extensive experiments conducted on the GeoLife trajectory dataset show that DPIL-Traj improves utility performance by an average of 19.85% and privacy performance by an average of 12.51% compared to state-of-the-art approaches. Ablation studies further reveal that DP clustering effectively safeguards privacy, imitation learning enhances utility under noise, and the Markov module strengthens temporal coherence.
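As a hedged illustration of the first component (the paper's sensitivity and epsilon settings are not given; the values below are placeholders), Laplace noise can be added independently to each trajectory coordinate before clustering, so raw locations never enter the clustering step:

```python
import math
import random

def laplace(scale, rng):
    # inverse-CDF sampling of Laplace(0, scale)
    u = rng.random() - 0.5
    sign = 1 if u >= 0 else -1
    return -scale * sign * math.log(1 - 2 * abs(u))

def dp_perturb_trajectory(traj, eps, sensitivity=0.01, seed=0):
    """Add Laplace noise (scale = sensitivity / eps) to every (lat, lon)
    point of a trajectory; illustrative parameter values only."""
    rng = random.Random(seed)
    b = sensitivity / eps
    return [(lat + laplace(b, rng), lon + laplace(b, rng)) for lat, lon in traj]

traj = [(39.90, 116.30), (39.91, 116.31)]
noisy = dp_perturb_trajectory(traj, eps=1.0)
```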
In this paper, we propose a new privacy-aware transmission scheduling algorithm for 6G ad hoc networks. This system enables end nodes to select the optimum time and scheme to transmit private data safely. In 6G dynamic heterogeneous infrastructures, unstable links and non-uniform hardware capabilities create critical issues regarding security and privacy. Traditional protocols are often too computationally heavy to allow 6G services to achieve their expected Quality-of-Service (QoS). As the transport network is built of ad hoc nodes, there is no guarantee about their trustworthiness or behavior, and transversal functionalities are delegated to the extreme nodes. However, while security can be guaranteed in extreme-to-extreme solutions, privacy cannot, as all intermediate nodes still have to handle the data packets they are transporting. Besides, traditional schemes for private anonymous ad hoc communications are vulnerable to modern intelligent attacks based on learning models. The proposed scheme fills this gap. Findings show that, when the proposed technology is used, the probability of a successful intelligent attack is reduced by up to 65% compared to ad hoc networks with no privacy protection strategy, while congestion probability remains below 0.001%, as required by 6G services.
In the competitive retail industry of the digital era, data-driven insights into gender-specific customer behavior are essential. They support the optimization of store performance, layout design, product placement, and targeted marketing. However, existing computer vision solutions often rely on facial recognition to gather such insights, raising significant privacy and ethical concerns. To address these issues, this paper presents a privacy-preserving customer analytics system built on two key strategies. First, we deploy a deep learning framework using YOLOv9s, trained on the RCA-TVGender dataset. Cameras are positioned perpendicular to observation areas to reduce facial visibility while maintaining accurate gender classification. Second, we apply AES-128 encryption to customer position data, ensuring secure access and regulatory compliance. Our system achieved an overall performance of 81.5% mAP@50, 77.7% precision, and 75.7% recall. Moreover, a 90-minute observational study confirmed the system's ability to generate privacy-protected heatmaps revealing distinct behavioral patterns between male and female customers. For instance, women spent more time in certain areas and showed interest in different products. These results confirm the system's effectiveness in enabling personalized layout and marketing strategies without compromising privacy.
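The AES-128 step requires a cryptography library, but the privacy-protected heatmap aggregation itself can be sketched in a few lines: detected positions are binned into coarse grid-cell counts per gender, so only aggregates are kept and no individual track or face is retained (grid size and label names below are illustrative assumptions):

```python
from collections import Counter

def position_heatmap(detections, cell=1.0):
    """Aggregate (x, y, gender) detections into per-gender grid-cell counts.
    Only cell totals are stored, never individual positions or identities."""
    heat = {"male": Counter(), "female": Counter()}
    for x, y, gender in detections:
        heat[gender][(int(x // cell), int(y // cell))] += 1
    return heat

# hypothetical detections from the overhead cameras
obs = [(0.2, 0.4, "female"), (0.3, 0.9, "female"), (3.5, 1.1, "male")]
heat = position_heatmap(obs)
```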
In the realm of Intelligent Railway Transportation Systems, effective multi-party collaboration is crucial due to concerns over privacy and data silos. Vertical Federated Learning (VFL) has emerged as a promising approach to facilitate such collaboration, allowing diverse entities to collectively enhance machine learning models without the need to share sensitive training data. However, existing works have highlighted VFL's susceptibility to privacy inference attacks, where an honest-but-curious server could potentially reconstruct a client's raw data from embeddings uploaded by the client. This vulnerability poses a significant threat to VFL-based intelligent railway transportation systems. In this paper, we introduce SensFL, a novel privacy-enhancing method that defends against privacy inference attacks in VFL. Specifically, SensFL integrates regularization of the sensitivity of embeddings to the original data into the model training process, effectively limiting the information contained in shared embeddings. By reducing the sensitivity of embeddings to the original data, SensFL can effectively resist reverse privacy attacks and prevent the reconstruction of the original data from the embeddings. Extensive experiments were conducted on four distinct datasets and three different models to demonstrate the efficacy of SensFL. Experimental results show that SensFL can effectively mitigate privacy inference attacks while maintaining the accuracy of the primary learning task. These results underscore SensFL's potential to advance privacy protection technologies within VFL-based intelligent railway systems, addressing critical security concerns in collaborative learning environments.
The integration of artificial intelligence (AI) technology, particularly large language models (LLMs), has become essential across various sectors due to their advanced language comprehension and generation capabilities. Despite their transformative impact in fields such as machine translation and intelligent dialogue systems, LLMs face significant challenges. These challenges include safety, security, and privacy concerns that undermine their trustworthiness and effectiveness, such as hallucinations, backdoor attacks, and privacy leakage. Previous works often conflated safety issues with security concerns. In contrast, our study provides clearer and more reasonable definitions for safety, security, and privacy within the context of LLMs. Building on these definitions, we provide a comprehensive overview of the vulnerabilities and defense mechanisms related to safety, security, and privacy in LLMs. Additionally, we explore the unique research challenges posed by LLMs and suggest potential avenues for future research, aiming to enhance the robustness and reliability of LLMs in the face of emerging threats.
【Objective】Medical imaging data has great value, but it contains a significant amount of sensitive information about patients. At present, laws and regulations regarding the de-identification of medical imaging data are not clearly defined around the world. This study aims to develop a tool that meets compliance-driven desensitization requirements tailored to diverse research needs.【Methods】To enhance the security of medical image data, we designed and implemented a DICOM-format medical image de-identification system on the Windows operating system.【Results】Our custom de-identification system is adaptable to the legal standards of different countries and can accommodate specific research demands. The system offers both web-based online and desktop offline de-identification capabilities, enabling customization of de-identification rules and facilitating batch processing to improve efficiency.【Conclusions】This medical image de-identification system robustly strengthens the stewardship of sensitive medical data, aligning with data security protection requirements while facilitating the sharing and utilization of medical image data. This approach unlocks the intrinsic value inherent in such datasets.
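A rule-driven de-identification pass of the kind described (customizable rules, batch mode) can be sketched over a plain tag dictionary. The tag names and actions below are illustrative assumptions, not the tool's actual rule set; the real system operates on DICOM files rather than dicts:

```python
# Rule actions: "remove" drops the tag, "replace" substitutes a dummy value,
# "keep" leaves it unchanged (this table is a hypothetical configuration).
RULES = {
    "PatientName":      ("replace", "ANONYMOUS"),
    "PatientID":        ("replace", "ID-REMOVED"),
    "PatientBirthDate": ("remove", None),
    "StudyDate":        ("keep", None),
}

def deidentify(tags, rules=RULES):
    out = {}
    for tag, value in tags.items():
        action, repl = rules.get(tag, ("keep", None))
        if action == "remove":
            continue
        out[tag] = repl if action == "replace" else value
    return out

def deidentify_batch(datasets, rules=RULES):
    # batch processing over many studies, as in the system's batch mode
    return [deidentify(d, rules) for d in datasets]
```

Swapping in a different rule table is what makes the pass adaptable to different jurisdictions' requirements.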
With the rapid development of web3.0 applications, the volume of data sharing is increasing, and the inefficiency of big data file sharing and the problem of data privacy leakage are becoming more and more prominent. Existing data sharing schemes can hardly meet the growing demand for data sharing, so this paper aims to explore a secure, efficient, and privacy-protecting data sharing scheme for web3.0 applications. Specifically, this paper adopts InterPlanetary File System (IPFS) technology to realize the storage of large data files, solving the problem of blockchain storage capacity limitation, and utilizes ciphertext-policy attribute-based encryption (CP-ABE) and proxy re-encryption (PRE) technology to realize secure multi-party sharing and fine-grained access control of data. This paper provides the detailed algorithm design and implementation of the data sharing phases and processes, and analyzes the algorithms from the perspectives of security, privacy protection, and performance.
As smart grid technology rapidly advances, the vast amount of user data collected by smart meters presents significant challenges in data security and privacy protection. Current research emphasizes data security and user privacy concerns within smart grids. However, existing methods struggle with efficiency and security when processing large-scale data. Balancing efficient data processing with stringent privacy protection during data aggregation in smart grids remains an urgent challenge. This paper proposes an AI-based multi-type data aggregation method designed to enhance aggregation efficiency and security by standardizing and normalizing various data modalities. The approach optimizes data preprocessing, integrates Long Short-Term Memory (LSTM) networks for handling time-series data, and employs homomorphic encryption to safeguard user privacy. It also explores the application of Boneh-Lynn-Shacham (BLS) signatures for user authentication. The proposed scheme's efficiency, security, and privacy protection capabilities are validated through rigorous security proofs and experimental analysis.
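The abstract does not name the homomorphic scheme; a common choice for privacy-preserving additive aggregation of meter readings is Paillier, sketched here with toy key sizes (real deployments use primes of 1024 bits or more). The aggregator multiplies ciphertexts and never sees any individual reading:

```python
import math
import random

# Toy Paillier keypair (illustrative only; far too small for real use).
P, Q = 104729, 104723            # two small known primes
N = P * Q
N2 = N * N
LAM = math.lcm(P - 1, Q - 1)     # Carmichael function lambda(N)
MU = pow(LAM, -1, N)             # modular inverse of lambda mod N

def encrypt(m, rng=random):
    while True:
        r = rng.randrange(2, N)
        if math.gcd(r, N) == 1:  # r must be a unit mod N
            break
    # with generator g = N + 1: c = (1 + m*N) * r^N mod N^2
    return (1 + m * N) % N2 * pow(r, N, N2) % N2

def decrypt(c):
    u = pow(c, LAM, N2)
    return (u - 1) // N * MU % N

def aggregate(ciphertexts):
    # Additive homomorphism: the product of ciphertexts decrypts to the
    # sum of the plaintexts.
    acc = 1
    for c in ciphertexts:
        acc = acc * c % N2
    return acc

readings = [321, 455, 208]       # e.g., three meters' consumption values
total = decrypt(aggregate([encrypt(m) for m in readings]))
```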
The rapid adoption of machine learning in sensitive domains, such as healthcare, finance, and government services, has heightened the need for robust, privacy-preserving techniques. Traditional machine learning approaches lack built-in privacy mechanisms, exposing sensitive data to risks, which motivates the development of Privacy-Preserving Machine Learning (PPML) methods. Despite significant advances in PPML, a comprehensive and focused exploration of Secure Multi-Party Computing (SMPC) within this context remains underdeveloped. This review aims to bridge this knowledge gap by systematically analyzing the role of SMPC in PPML, offering a structured overview of current techniques, challenges, and future directions. Using a semi-systematic mapping study methodology, this paper surveys recent literature spanning SMPC protocols, PPML frameworks, implementation approaches, threat models, and performance metrics. Emphasis is placed on identifying trends, technical limitations, and comparative strengths of leading SMPC-based methods. Our findings reveal that while SMPC offers strong cryptographic guarantees for privacy, challenges such as computational overhead, communication costs, and scalability persist. The paper also discusses critical vulnerabilities, practical deployment issues, and variations in protocol efficiency across use cases.
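The simplest SMPC building block many of the surveyed protocols rest on is additive secret sharing, sketched below (the party count, field modulus, and hospital scenario are illustrative): any subset of fewer than all shares is uniformly random, yet sums can be computed on shares alone.

```python
import random

PRIME = 2_147_483_647  # field modulus (a Mersenne prime, illustrative size)

def share(secret, n_parties, rng):
    """Split a secret into n additive shares that sum to it mod PRIME."""
    shares = [rng.randrange(PRIME) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % PRIME)
    return shares

def reconstruct(shares):
    return sum(shares) % PRIME

rng = random.Random(7)
# Two hospitals jointly compute a total case count without revealing their own:
a_shares = share(120, 3, rng)
b_shares = share(80, 3, rng)
# each of the 3 compute parties adds its two incoming shares locally
summed = [(x + y) % PRIME for x, y in zip(a_shares, b_shares)]
total = reconstruct(summed)
```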
Federated learning effectively alleviates privacy and security issues raised by the development of artificial intelligence through a distributed training architecture. Existing research has shown that attackers can compromise user privacy and security by stealing model parameters. Therefore, differential privacy is applied in federated learning to further address malicious issues. However, the noise addition and update clipping mechanisms in differential privacy jointly limit the further development of federated learning in privacy protection and performance optimization. Therefore, we propose an adaptively adjusted differential privacy federated learning method. First, a dynamic adaptive privacy budget allocation strategy is proposed, which flexibly adjusts the privacy budget within a given range based on the client's data volume and training requirements, thereby alleviating the loss of privacy budget and the magnitude of model noise. Second, a longitudinal clipping differential privacy strategy is proposed, which, based on the differences in factors that affect parameter updates, uses sparsification to trim local updates, thereby reducing the impact of the privacy clipping step on model accuracy. The two strategies work together to ensure user privacy while reducing the effect of differential privacy on model accuracy. To evaluate the effectiveness of our method, we conducted extensive experiments on benchmark datasets, and the results showed that our proposed method performs well in terms of both performance and privacy protection.
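A minimal sketch of the sparsified-clipping idea (top-k selection followed by L2 clipping): the paper's actual "longitudinal" selection criterion is not given in the abstract, so plain magnitude-based top-k is assumed here:

```python
import math

def sparsify_and_clip(update, k, clip_norm):
    """Keep only the k largest-magnitude coordinates of a local update,
    then scale the result so its L2 norm is at most clip_norm
    (illustrative stand-in for the paper's longitudinal clipping)."""
    idx = set(sorted(range(len(update)), key=lambda i: -abs(update[i]))[:k])
    sparse = [update[i] if i in idx else 0.0 for i in range(len(update))]
    norm = math.sqrt(sum(v * v for v in sparse))
    if norm > clip_norm:
        sparse = [v * clip_norm / norm for v in sparse]
    return sparse

u = [0.9, -0.05, 2.0, 0.01, -1.5]
s = sparsify_and_clip(u, k=2, clip_norm=1.0)
```

Because clipping happens after sparsification, the surviving coordinates keep more of their signal than if the full dense update had been clipped.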
Deep learning's widespread dependence on large datasets raises privacy concerns due to the potential presence of sensitive information. Differential privacy stands out as a crucial method for preserving privacy, garnering significant interest for its ability to offer robust and verifiable privacy safeguards during data training. However, classic differentially private learning introduces the same level of noise into the gradients across training iterations, which affects the trade-off between model utility and privacy guarantees. To address this issue, an adaptive differential privacy mechanism is proposed in this paper, which dynamically adjusts the privacy budget at the layer level as training progresses to resist membership inference attacks. Specifically, an equal privacy budget is initially allocated to each layer. Subsequently, as training advances, the privacy budget for layers closer to the output is reduced (adding more noise), while the budget for layers closer to the input is increased. The adjustment magnitude depends on the training iterations and is automatically determined based on the iteration count. This dynamic allocation provides a simple process for adjusting privacy budgets, alleviating the burden on users to tweak parameters and ensuring that privacy preservation strategies align with training progress. Extensive experiments on five well-known datasets indicate that the proposed method outperforms competing methods in terms of accuracy and resilience against membership inference attacks.
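The exact adjustment schedule is not stated beyond "depends on the iteration count"; a linear schedule that conserves the total per-iteration budget while tilting it from output-side layers (less budget, more noise) toward input-side layers might look like the following (the linear form and `alpha` are assumptions, not the paper's rule):

```python
def layer_budgets(total_eps, n_layers, t, T, alpha=0.5):
    """Per-layer privacy budgets at iteration t of T: all equal at t = 0,
    then budget shifts linearly from output-side to input-side layers."""
    base = total_eps / n_layers
    progress = t / T
    out = []
    for l in range(n_layers):
        # tilt in [-1, 1]: +1 for the input layer, -1 for the output layer
        tilt = 1 - 2 * l / (n_layers - 1)
        out.append(base * (1 + alpha * progress * tilt))
    return out

eps_early = layer_budgets(total_eps=4.0, n_layers=4, t=0, T=100)
eps_late = layer_budgets(total_eps=4.0, n_layers=4, t=100, T=100)
```

Because the tilts sum to zero, the global per-iteration budget stays fixed while its distribution over layers changes with training progress.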
To realize dynamic statistical publishing and protection of location-based data privacy, this paper proposes a differential privacy publishing algorithm based on adaptive sampling and grid clustering and adjustment. The PID control strategy is combined with the difference in data variation to realize dynamic adjustment of the data publishing intervals. The spatial-temporal correlations of adjacent snapshots are utilized to design the grid clustering and adjustment algorithm, which facilitates saving the execution time of the publishing process. The budget distribution and budget absorption strategies are improved to form the sliding-window-based differential privacy statistical publishing algorithm, which realizes continuous statistical publishing and privacy protection and improves the accuracy of published data. Experiments and analysis on large datasets of actual locations show that the privacy protection algorithm proposed in this paper is superior to other existing algorithms in terms of the accuracy of adaptive sampling time, the availability of published data, and the execution efficiency of data publishing methods.
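A hedged sketch of the PID-driven interval adjustment (the gains, target change rate, and clamping bounds below are illustrative placeholders, not the paper's values): when the data changes faster than the target rate, the publishing interval shrinks; when the data is stable, it stretches.

```python
class PIDSampler:
    """PID controller over the publishing interval, driven by the
    observed rate of change of the statistics (illustrative gains)."""
    def __init__(self, kp=0.8, ki=0.1, kd=0.2, target=0.05):
        self.kp, self.ki, self.kd, self.target = kp, ki, kd, target
        self.integral = 0.0
        self.prev_err = 0.0

    def next_interval(self, interval, change_rate):
        err = change_rate - self.target      # positive: data changing too fast
        self.integral += err
        deriv = err - self.prev_err
        self.prev_err = err
        adjust = self.kp * err + self.ki * self.integral + self.kd * deriv
        # faster change -> shorter interval, clamped to an assumed sane range
        return min(60.0, max(1.0, interval * (1 - adjust)))

fast = PIDSampler().next_interval(interval=10.0, change_rate=0.50)  # bursty data
calm = PIDSampler().next_interval(interval=10.0, change_rate=0.01)  # stable data
```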
The widespread application of artificial intelligence (AI) technology in exams has significantly improved the efficiency and fairness of exams, but it has also brought challenges of ethics and privacy protection. The article analyzes the fairness, transparency, and privacy protection issues caused by AI in exams and proposes strategic solutions. This article aims to provide guidance for the rational application of AI technology in exams, ensuring a balance between technological progress and ethical protection by strengthening laws and regulations, enhancing technological transparency, strengthening candidates' privacy rights, and improving the management measures of educational examination institutions.
Blockchain interoperability enables seamless communication and asset transfer across isolated permissioned blockchain systems, but it introduces significant security and privacy vulnerabilities. This review aims to systematically assess the security and privacy landscape of interoperability protocols for permissioned blockchains, identifying key properties, attack vectors, and countermeasures. Using PRISMA 2020 guidelines, we analysed 56 peer-reviewed studies published between 2020 and 2025, retrieved from Scopus, ScienceDirect, Web of Science, and IEEE Xplore. The review focused on interoperability protocols for permissioned blockchains with security and privacy analyses, including only English-language journal articles and conference proceedings. Risk of bias in the included studies was assessed using the MMAT. Methods for presenting and synthesizing results included descriptive analysis, bibliometric analysis, and content analysis, with findings organized into tables, charts, and comparative summaries. The review classifies interoperability protocols into relay, sidechain, notary scheme, HTLC, and hybrid types and identifies 18 security and privacy properties along with 31 known attack types. Relay-based protocols showed the broadest security coverage, while HTLC and notary schemes demonstrated significant security gaps. Notably, 93% of studies examined fewer than four properties or attack types, indicating a fragmented research landscape. The review identifies underexplored areas such as ACID properties, decentralization, and cross-chain attack resilience. It further highlights effective countermeasures, including cryptographic techniques, trusted execution environments, zero-knowledge proofs, and decentralized identity schemes. The findings suggest that despite growing adoption, current interoperability protocols lack comprehensive security evaluations. More holistic research is needed to ensure the resilience, trustworthiness, and scalability of cross-chain operations in permissioned blockchain ecosystems.
As the 5G architecture gains momentum, interest in 6G is growing. The proliferation of Internet of Things (IoT) devices, capable of capturing sensitive images, has increased the need for secure transmission and robust access control mechanisms. The vast amount of data generated by low-computing devices poses a challenge to traditional centralized access control, which relies on trusted third parties and complex computations, resulting in intricate interactions, higher hardware costs, and processing delays. To address these issues, this paper introduces a novel distributed access control approach that integrates a decentralized and lightweight encryption mechanism with image transmission. This method enhances data security and resource efficiency without imposing heavy computational and network burdens. In comparison to the best existing approach, it achieves a 7% improvement in accuracy, effectively addressing existing gaps in lightweight encryption and recognition performance.
With the deep integration of edge computing, 5G, and Artificial Intelligence of Things (AIoT) technologies, the large-scale deployment of intelligent terminal devices has given rise to data silos and privacy security challenges in sensing-computing fusion scenarios. Traditional federated learning (FL) algorithms face significant limitations in practical applications due to client drift, model bias, and resource constraints under non-independent and identically distributed (Non-IID) data, as well as the computational overhead and utility loss caused by privacy-preserving techniques. To address these issues, this paper proposes an Efficient and Privacy-enhancing Clustering Federated Learning method (FedEPC). This method introduces a dual-round client selection mechanism to optimize training. First, the Sparsity-based Privacy-preserving Representation Extraction Module (SPRE) and Adaptive Isomorphic Devices Clustering Module (AIDC) cluster clients based on privacy-sensitive features. Second, the Context-aware In-cluster Client Selection Module (CICS) dynamically selects representative devices for training, ensuring heterogeneous data distributions are fully represented. By conducting federated training within clusters and aggregating personalized models, FedEPC effectively mitigates weight divergence caused by data heterogeneity and reduces the impact of client drift and straggler issues. Experimental results demonstrate that FedEPC significantly improves test accuracy in highly Non-IID data scenarios compared to FedAvg and existing clustering FL methods. By ensuring privacy security, FedEPC provides an efficient and robust solution for FL on resource-constrained devices in sensing-computing fusion scenarios, offering both theoretical value and engineering practicality.
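The abstract does not define the SPRE/AIDC algorithms concretely; a stand-in sketch that greedily groups clients' privacy-safe representation vectors by cosine similarity (the vectors, threshold, and greedy rule are all assumptions for illustration):

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def cluster_clients(reps, threshold=0.9):
    """Greedy clustering: a client joins the first cluster whose head
    vector it resembles, otherwise it starts a new cluster."""
    clusters = []  # each cluster: list of client ids
    heads = []     # representative vector per cluster
    for cid, vec in reps.items():
        for k, head in enumerate(heads):
            if cosine(vec, head) >= threshold:
                clusters[k].append(cid)
                break
        else:
            clusters.append([cid])
            heads.append(vec)
    return clusters

reps = {
    "a": [1.0, 0.0, 0.1],   # hypothetical per-client label-distribution sketches
    "b": [0.9, 0.1, 0.1],
    "c": [0.0, 1.0, 0.0],
}
groups = cluster_clients(reps)
```

Training then proceeds within each group, with one representative selected per cluster per round.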
The rapid development and widespread adoption of massive open online courses (MOOCs) have had a significant impact on China's education curriculum. However, the problem of fake reviews and ratings on these platforms has seriously affected the authenticity of course evaluations and user trust, requiring effective anomaly detection techniques for screening. The textual characteristics of MOOC reviews, such as varying lengths and diverse emotional tendencies, bring complexity to text analysis, and traditional rule-based analysis methods are often inadequate for such unstructured data. We propose a Differential Privacy-Enabled Text Convolutional Neural Network (DP-TextCNN) framework, aiming to achieve high-precision identification of outliers in MOOC course reviews and ratings while protecting user privacy. This framework leverages the advantages of Convolutional Neural Networks (CNN) in text feature extraction and combines them with differential privacy techniques. It balances data privacy protection with model performance by introducing controlled random noise during the data preprocessing stage. By embedding differential privacy into the model training process, we ensure the privacy security of the framework when handling sensitive data, while maintaining high recognition accuracy. Experimental results indicate that the DP-TextCNN framework achieves an exceptional accuracy of over 95% in identifying fake reviews on the dataset. This outcome not only verifies the applicability of differential privacy techniques in TextCNN but also underscores its potential in handling sensitive educational data. Additionally, we analyze the specific impact of differential privacy parameters on framework performance, offering theoretical support and empirical analysis to strike an optimal balance between privacy protection and framework efficiency.
The proliferation of Internet of Things (IoT) devices introduces substantial security challenges. Currently, privacy constitutes a significant concern for individuals. While maintaining privacy within these systems is an essential characteristic, it often necessitates certain compromises, such as complexity and scalability, thereby complicating management efforts. The principal challenge lies in ensuring confidentiality while simultaneously preserving individuals' anonymity within the system. To address this, we present our proposed architecture for managing IoT devices using blockchain technology. Our proposed architecture works on and off the blockchain and is integrated with dashcams and closed-circuit television (CCTV) security cameras. In this work, the videos recorded by the dashcams and CCTV security cameras are hashed through the InterPlanetary File System (IPFS), and this hash is stored in the blockchain. When accessors want to access a video, they must pass through multiple authentications, which include web token authentication and verifiable credentials, to mitigate the risk of malicious users. Our contributions include the proposed framework, which uses a single key for every new video, and a novel chaincode algorithm that incorporates verifiable credentials. Analyses are made to show the system's throughput and latency through stress testing. Significant advantages of the proposed architecture are shown by comparing it to existing schemes. The proposed architecture features a robust design that significantly enhances the security of blockchain-enabled Internet of Things (IoT) devices while effectively mitigating the risk of a single point of failure, providing a reliable solution for security concerns in the IoT landscape. Our future endeavors will focus on scaling the system by integrating innovative methods to further enhance security measures.
Funding: Funded by the National Natural Science Foundation of China, grant number 61605004; the Fundamental Research Funds for the Central Universities, grant number FRF-TP-19-016A2; and Guizhou Power Grid Co., Ltd. 2024 first batch of services (2024-2026 technology R&D services for science and technology projects, in addition to national and SGCC key projects), grant number 060100KC23100012.
Abstract: This study addresses the risk of privacy leakage during the transmission and sharing of multimodal data in smart grid substations by proposing a three-tier privacy-preserving architecture based on asynchronous federated learning. The framework integrates blockchain technology, the InterPlanetary File System (IPFS) for distributed storage, and a dynamic differential privacy mechanism to achieve collaborative security across the storage, service, and federated coordination layers. It accommodates both multimodal data classification and object detection tasks, enabling the identification and localization of key targets and abnormal behaviors in substation scenarios while ensuring privacy protection. This effectively mitigates the single-point-of-failure and model leakage issues inherent in centralized architectures. A dynamically adjustable differential privacy mechanism is introduced to allocate privacy budgets according to client contribution levels and upload frequencies, achieving a personalized balance between model performance and privacy protection. Multi-dimensional experimental evaluations, including classification accuracy, F1-score, encryption latency, and aggregation latency, verify the security and efficiency of the proposed architecture. The improved CNN model achieves 72.34% accuracy and an F1-score of 0.72 in object detection and classification tasks on infrared surveillance imagery, effectively identifying typical risk events such as missing safety helmets and unauthorized intrusion, while maintaining an aggregation latency of only 1.58 s and a query latency of 80.79 ms. Compared with traditional static differential privacy and centralized approaches, the proposed method demonstrates significant advantages in accuracy, latency, and security, providing a new technical paradigm for efficient, secure data sharing, object detection, and privacy preservation in smart grid substations.
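The abstract above does not state the exact budget-allocation rule. As a minimal sketch of how a contribution- and frequency-aware split could work (the proportional weighting, and the field names `contribution` and `upload_freq`, are illustrative assumptions, not the paper's formula):

```python
def allocate_budgets(clients, total_budget):
    """Split a global privacy budget across clients in proportion to a
    score combining contribution level and upload frequency.
    Illustrative weighting; the paper's exact rule is not given."""
    scores = {cid: c["contribution"] * c["upload_freq"] for cid, c in clients.items()}
    total = sum(scores.values())
    # a higher-scoring client receives a larger epsilon (less noise)
    return {cid: total_budget * s / total for cid, s in scores.items()}
```

A client that contributes twice as much at the same upload frequency would, under this sketch, receive twice the per-round budget.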
Funding: Supported by the National Natural Science Foundation of China under Grant No. 62162009; the Key Technologies R&D Program of He'nan Province under Grant No. 242102211065; the Postgraduate Education Reform and Quality Improvement Project of Henan Province under Grant Nos. YJS2025GZZ36, YJS2024AL112, and YJS2024JD38; the Innovation Scientists and Technicians Troop Construction Projects of Henan Province under Grant No. CXTD2017099; and the Scientific Research Innovation Team of Xuchang University under Grant No. 2022CXTD003.
Abstract: With the increasing complexity of malware attack techniques, traditional detection methods face significant challenges, such as privacy preservation, data heterogeneity, and the lack of category information. To address these issues, we propose Federated Dynamic Prototype Learning (FedDPL) for malware classification by integrating Federated Learning with a specifically designed K-means. Under the Federated Learning framework, model training occurs locally without data sharing, effectively protecting user data privacy and preventing the leakage of sensitive information. Furthermore, to tackle the challenges of data heterogeneity and the lack of category information, FedDPL introduces a dynamic prototype learning mechanism, which adaptively adjusts the clustering prototypes in terms of position and number. Thus, the dependency on predefined category numbers in typical K-means and its variants can be significantly reduced, resulting in improved clustering performance. Theoretically, this provides a more accurate detection of malicious behavior. Experimental results confirm that FedDPL excels in malware classification tasks, demonstrating superior accuracy, robustness, and privacy protection.
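FedDPL's exact prototype update is not given in the abstract. As a toy one-dimensional sketch of the "adjust position and number" idea — move prototypes to cluster means, split clusters that are too spread out, merge prototypes that drift too close — with illustrative thresholds:

```python
def assign(points, protos):
    """Assign each point to its nearest prototype (plain K-means step)."""
    groups = {i: [] for i in range(len(protos))}
    for p in points:
        i = min(range(len(protos)), key=lambda j: abs(p - protos[j]))
        groups[i].append(p)
    return groups

def adjust_prototypes(points, protos, split_spread=2.0, merge_dist=0.5):
    """One dynamic adjustment round: relocate prototypes to their cluster
    means, split overly spread clusters, merge near-duplicate prototypes.
    Thresholds are illustrative, not taken from the paper."""
    groups = assign(points, protos)
    new = []
    for members in groups.values():
        if not members:
            continue
        if max(members) - min(members) > split_spread:
            new += [min(members), max(members)]   # split: one prototype per end
        else:
            new.append(sum(members) / len(members))
    new.sort()
    merged = []
    for c in new:                                 # merge near-duplicates
        if merged and c - merged[-1] < merge_dist:
            merged[-1] = (merged[-1] + c) / 2
        else:
            merged.append(c)
    return merged
```

Starting from a single prototype between two well-separated groups, one round yields two prototypes, which is the behavior that removes the dependency on a predefined cluster count.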
Funding: Supported by the Natural Science Foundation of Fujian Province of China (2025J01380); the National Natural Science Foundation of China (No. 62471139); the Major Health Research Project of Fujian Province (2021ZD01001); Fujian Provincial Units Special Funds for Education and Research (2022639); the Fujian University of Technology Research Start-up Fund (GY-S24002); and Fujian Research and Training Grants for Young and Middle-aged Leaders in Healthcare (GY-H-24179).
Abstract: The generation of synthetic trajectories has become essential in various fields for analyzing complex movement patterns. However, the use of real-world trajectory data poses significant privacy risks, such as location re-identification and correlation attacks. To address these challenges, privacy-preserving trajectory generation methods are critical for applications relying on sensitive location data. This paper introduces DPIL-Traj, an advanced framework designed to generate synthetic trajectories while achieving a superior balance between data utility and privacy preservation. First, the framework incorporates Differential Privacy Clustering, which anonymizes trajectory data by applying differential privacy techniques that add noise, ensuring the protection of sensitive user information. Second, Imitation Learning is used to replicate decision-making behaviors observed in real-world trajectories. By learning from expert trajectories, this component generates synthetic data that closely mimics real-world decision-making processes while optimizing the quality of the generated trajectories. Finally, Markov-based Trajectory Generation is employed to capture and maintain the inherent temporal dynamics of movement patterns. Extensive experiments conducted on the GeoLife trajectory dataset show that DPIL-Traj improves utility performance by an average of 19.85% and privacy performance by an average of 12.51% compared with state-of-the-art approaches. Ablation studies further reveal that DP clustering effectively safeguards privacy, imitation learning enhances utility under noise, and the Markov module strengthens temporal coherence.
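As a hedged illustration of the differential-privacy-clustering idea — noising an aggregate statistic, not DPIL-Traj's actual mechanism — a private one-dimensional centroid under the Laplace mechanism could be sketched as:

```python
import math
import random

def laplace(scale):
    """Draw Laplace(0, scale) noise by inverse-CDF sampling."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

def dp_centroid(points, epsilon, bound):
    """Differentially private mean of 1-D coordinates clipped to
    [-bound, bound]. Replacing one point changes the mean by at most
    2*bound/n, so Laplace noise with scale 2*bound/(n*epsilon) gives
    epsilon-DP for the mean (simplified sketch)."""
    n = len(points)
    clipped = [max(-bound, min(bound, p)) for p in points]
    return sum(clipped) / n + laplace(2 * bound / (n * epsilon))
```

With a very large epsilon the noise is negligible and the private centroid approaches the true mean; shrinking epsilon trades that accuracy for stronger protection.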
基金funding from the European Commission by the Ruralities project(grant agreement no.101060876).
Abstract: In this paper, we propose a new privacy-aware transmission scheduling algorithm for 6G ad hoc networks. This system enables end nodes to select the optimal time and scheme to transmit private data safely. In 6G dynamic heterogeneous infrastructures, unstable links and non-uniform hardware capabilities create critical issues regarding security and privacy. Traditional protocols are often too computationally heavy for 6G services to achieve their expected Quality of Service (QoS). As the transport network is built of ad hoc nodes, there is no guarantee of their trustworthiness or behavior, and transversal functionalities are delegated to the extreme nodes. However, while security can be guaranteed in extreme-to-extreme solutions, privacy cannot, as all intermediate nodes still have to handle the data packets they transport. Moreover, traditional schemes for private anonymous ad hoc communications are vulnerable to modern intelligent attacks based on learning models. The proposed scheme fills this gap. Findings show that the probability of a successful intelligent attack is reduced by up to 65% compared with ad hoc networks that employ no privacy protection strategy, while congestion probability remains below 0.001%, as required by 6G services.
Abstract: In the competitive retail industry of the digital era, data-driven insights into gender-specific customer behavior are essential. They support the optimization of store performance, layout design, product placement, and targeted marketing. However, existing computer vision solutions often rely on facial recognition to gather such insights, raising significant privacy and ethical concerns. To address these issues, this paper presents a privacy-preserving customer analytics system built on two key strategies. First, we deploy a deep learning framework using YOLOv9s, trained on the RCA-TVGender dataset. Cameras are positioned perpendicular to observation areas to reduce facial visibility while maintaining accurate gender classification. Second, we apply AES-128 encryption to customer position data, ensuring secure access and regulatory compliance. Our system achieved strong overall performance, with 81.5% mAP@50, 77.7% precision, and 75.7% recall. Moreover, a 90-minute observational study confirmed the system's ability to generate privacy-protected heatmaps revealing distinct behavioral patterns between male and female customers. For instance, women spent more time in certain areas and showed interest in different products. These results confirm the system's effectiveness in enabling personalized layout and marketing strategies without compromising privacy.
Funding: Supported by the Systematic Major Project of Shuohuang Railway Development Co., Ltd., National Energy Group (Grant Number: SHTL-23-31) and the Beijing Natural Science Foundation (U22B2027).
Abstract: In the realm of Intelligent Railway Transportation Systems, effective multi-party collaboration is crucial due to concerns over privacy and data silos. Vertical Federated Learning (VFL) has emerged as a promising approach to facilitate such collaboration, allowing diverse entities to collectively enhance machine learning models without the need to share sensitive training data. However, existing works have highlighted VFL's susceptibility to privacy inference attacks, where an honest-but-curious server could potentially reconstruct a client's raw data from embeddings uploaded by the client. This vulnerability poses a significant threat to VFL-based intelligent railway transportation systems. In this paper, we introduce SensFL, a novel privacy-enhancing method to defend against privacy inference attacks in VFL. Specifically, SensFL integrates regularization of the sensitivity of embeddings to the original data into the model training process, effectively limiting the information contained in shared embeddings. By reducing the sensitivity of embeddings to the original data, SensFL can effectively resist reverse privacy attacks and prevent the reconstruction of the original data from the embeddings. Extensive experiments were conducted on four distinct datasets and three different models to demonstrate the efficacy of SensFL. Experimental results show that SensFL can effectively mitigate privacy inference attacks while maintaining the accuracy of the primary learning task. These results underscore SensFL's potential to advance privacy protection technologies within VFL-based intelligent railway systems, addressing critical security concerns in collaborative learning environments.
Funding: Supported by the National Key R&D Program of China under Grant No. 2022YFB3103500; the National Natural Science Foundation of China under Grants No. 62402087 and No. 62020106013; the Sichuan Science and Technology Program under Grant No. 2023ZYD0142; the Chengdu Science and Technology Program under Grant No. 2023-XT00-00002-GX; the Fundamental Research Funds for Chinese Central Universities under Grants No. ZYGX2020ZB027 and No. Y030232063003002; and the Postdoctoral Innovation Talents Support Program under Grant No. BX20230060.
Abstract: The integration of artificial intelligence (AI) technology, particularly large language models (LLMs), has become essential across various sectors due to their advanced language comprehension and generation capabilities. Despite their transformative impact in fields such as machine translation and intelligent dialogue systems, LLMs face significant challenges. These include safety, security, and privacy concerns that undermine their trustworthiness and effectiveness, such as hallucinations, backdoor attacks, and privacy leakage. Previous works often conflated safety issues with security concerns. In contrast, our study provides clearer and more reasonable definitions for safety, security, and privacy within the context of LLMs. Building on these definitions, we provide a comprehensive overview of the vulnerabilities and defense mechanisms related to safety, security, and privacy in LLMs. Additionally, we explore the unique research challenges posed by LLMs and suggest potential avenues for future research, aiming to enhance the robustness and reliability of LLMs in the face of emerging threats.
Funding: CAMS Innovation Fund for Medical Sciences (CIFMS): "Construction of an Intelligent Management and Efficient Utilization Technology System for Big Data in Population Health Science" (2021-I2M-1-057); Key Projects of the Innovation Fund of the National Clinical Research Center for Orthopedics and Sports Rehabilitation: "National Orthopedics and Sports Rehabilitation Real-World Research Platform System Construction" (23-NCRC-CXJJ-ZD4).
Abstract: 【Objective】 Medical imaging data has great value, but it contains a significant amount of sensitive information about patients. At present, laws and regulations regarding the de-identification of medical imaging data are not clearly defined around the world. This study aims to develop a tool that meets compliance-driven desensitization requirements tailored to diverse research needs. 【Methods】 To enhance the security of medical image data, we designed and implemented a DICOM-format medical image de-identification system on the Windows operating system. 【Results】 Our custom de-identification system is adaptable to the legal standards of different countries and can accommodate specific research demands. The system offers both web-based online and desktop offline de-identification capabilities, enabling customization of de-identification rules and facilitating batch processing to improve efficiency. 【Conclusions】 This medical image de-identification system robustly strengthens the stewardship of sensitive medical data, aligning with data security protection requirements while facilitating the sharing and utilization of medical image data. This approach unlocks the intrinsic value inherent in such datasets.
Funding: Supported by the National Natural Science Foundation of China (Grant No. U24B20146); the National Key Research and Development Plan of China (Grant No. 2020YFB1005500); and the Beijing Natural Science Foundation Project (No. M21034).
Abstract: With the rapid development of Web 3.0 applications, the volume of shared data is increasing, and the inefficiency of big-data file sharing and the problem of data privacy leakage are becoming more prominent; existing data sharing schemes struggle to meet the growing demand. This paper explores a secure, efficient, and privacy-protecting data sharing scheme for Web 3.0 applications. Specifically, it adopts InterPlanetary File System (IPFS) technology to store large data files, solving the problem of blockchain storage capacity limitations, and utilizes ciphertext-policy attribute-based encryption (CP-ABE) and proxy re-encryption (PRE) to realize secure multi-party sharing and fine-grained access control of data. The paper provides the detailed algorithm design and implementation of the data sharing phases and processes, and analyzes the algorithms from the perspectives of security, privacy protection, and performance.
Funding: Supported by the National Key R&D Program of China (No. 2023YFB2703700); the National Natural Science Foundation of China (Nos. U21A20465, 62302457, 62402444, and 62172292); the Fundamental Research Funds of Zhejiang Sci-Tech University (Nos. 23222092-Y and 22222266-Y); the Program for Leading Innovative Research Team of Zhejiang Province (No. 2023R01001); the Zhejiang Provincial Natural Science Foundation of China (Nos. LQ24F020008 and LQ24F020012); the Foundation of State Key Laboratory of Public Big Data (No. [2022]417); and the “Pioneer” and “Leading Goose” R&D Program of Zhejiang (No. 2023C01119).
Abstract: As smart grid technology rapidly advances, the vast amount of user data collected by smart meters presents significant challenges in data security and privacy protection. Current research emphasizes data security and user privacy concerns within smart grids. However, existing methods struggle with efficiency and security when processing large-scale data. Balancing efficient data processing with stringent privacy protection during data aggregation in smart grids remains an urgent challenge. This paper proposes an AI-based multi-type data aggregation method designed to enhance aggregation efficiency and security by standardizing and normalizing various data modalities. The approach optimizes data preprocessing, integrates Long Short-Term Memory (LSTM) networks for handling time-series data, and employs homomorphic encryption to safeguard user privacy. It also explores the application of Boneh-Lynn-Shacham (BLS) signatures for user authentication. The proposed scheme's efficiency, security, and privacy protection capabilities are validated through rigorous security proofs and experimental analysis.
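The abstract names homomorphic encryption for aggregation without fixing a scheme. A toy Paillier-style additively homomorphic sum (tiny fixed primes, illustration only, nowhere near real key sizes) shows why an aggregator can total meter readings without decrypting any individual value:

```python
import math
import random

def paillier_keygen(p=293, q=433):
    """Toy Paillier keypair with tiny fixed primes for illustration."""
    n = p * q
    lam = math.lcm(p - 1, q - 1)
    n2 = n * n
    g = n + 1
    # mu = (L(g^lam mod n^2))^{-1} mod n, with L(x) = (x - 1) // n
    mu = pow((pow(g, lam, n2) - 1) // n, -1, n)
    return (n, g), (lam, mu)

def encrypt(pub, m):
    n, g = pub
    n2 = n * n
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(pub, priv, c):
    n, _ = pub
    lam, mu = priv
    n2 = n * n
    return ((pow(c, lam, n2) - 1) // n) * mu % n

def aggregate(pub, ciphertexts):
    """Homomorphic sum: multiplying ciphertexts adds the plaintexts,
    so the aggregator never sees an individual meter reading."""
    n2 = pub[0] * pub[0]
    acc = 1
    for c in ciphertexts:
        acc = acc * c % n2
    return acc
```

An aggregator holding only `pub` can run `aggregate` over encrypted readings; only the holder of the private key recovers the total.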
Abstract: The rapid adoption of machine learning in sensitive domains, such as healthcare, finance, and government services, has heightened the need for robust, privacy-preserving techniques. Traditional machine learning approaches lack built-in privacy mechanisms, exposing sensitive data to risks, which motivates the development of Privacy-Preserving Machine Learning (PPML) methods. Despite significant advances in PPML, a comprehensive and focused exploration of Secure Multi-Party Computation (SMPC) within this context remains underdeveloped. This review aims to bridge this knowledge gap by systematically analyzing the role of SMPC in PPML, offering a structured overview of current techniques, challenges, and future directions. Using a semi-systematic mapping study methodology, this paper surveys recent literature spanning SMPC protocols, PPML frameworks, implementation approaches, threat models, and performance metrics. Emphasis is placed on identifying trends, technical limitations, and comparative strengths of leading SMPC-based methods. Our findings reveal that while SMPC offers strong cryptographic guarantees for privacy, challenges such as computational overhead, communication costs, and scalability persist. The paper also discusses critical vulnerabilities, practical deployment issues, and variations in protocol efficiency across use cases.
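The canonical SMPC building block surveyed in such reviews, additive secret sharing, can be sketched in a few lines (a minimal honest-but-curious illustration, not a production protocol):

```python
import random

MOD = 2**31 - 1  # a prime modulus for the share arithmetic

def share(secret, n_parties, modulus=MOD):
    """Split an integer secret into n additive shares mod a prime;
    any n-1 shares are uniformly random and reveal nothing."""
    shares = [random.randrange(modulus) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % modulus)
    return shares

def reconstruct(shares, modulus=MOD):
    return sum(shares) % modulus

def secure_sum(secrets, modulus=MOD):
    """Each party shares its secret; party j locally adds the j-th share
    of every secret, and only the total is reconstructed."""
    n = len(secrets)
    all_shares = [share(s, n, modulus) for s in secrets]
    partial = [sum(all_shares[i][j] for i in range(n)) % modulus
               for j in range(n)]
    return reconstruct(partial, modulus)
```

Each party sees only random-looking shares, yet the reconstructed value equals the true sum, which is the core privacy/utility trade the review examines at protocol scale.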
Funding: Funded by the Science and Technology Project of State Grid Corporation of China (Research on the theory and method of multiparty encrypted computation in the edge fusion environment of power IoT, No. 5700-202358592A-3-2-ZN) and the National Natural Science Foundation of China (Grant Nos. 62272056, 62372048, and 62371069).
Abstract: Federated learning effectively alleviates the privacy and security issues raised by the development of artificial intelligence through a distributed training architecture. Existing research has shown that attackers can compromise user privacy and security by stealing model parameters, so differential privacy is applied in federated learning to further mitigate such malicious behavior. However, the noise addition and update clipping mechanisms in differential privacy jointly limit the further development of federated learning in privacy protection and performance optimization. We therefore propose an adaptively adjusted differential privacy federated learning method. First, a dynamic adaptive privacy budget allocation strategy is proposed, which flexibly adjusts the privacy budget within a given range based on the client's data volume and training requirements, thereby alleviating the loss of privacy budget and the magnitude of model noise. Second, a longitudinal clipping differential privacy strategy is proposed, which, based on differences in the factors that affect parameter updates, uses sparse methods to trim local updates, thereby reducing the impact of privacy pruning steps on model accuracy. The two strategies work together to ensure user privacy while reducing the effect of differential privacy on model accuracy. To evaluate the effectiveness of our method, we conducted extensive experiments on benchmark datasets, and the results show that our proposed method performs well in terms of both performance and privacy protection.
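The paper's allocation formula is not reproduced in the abstract. One plausible sketch, assuming the per-round budget scales linearly with a client's data share inside a clamped range (the bounds `eps_min`/`eps_max` and the linear form are illustrative assumptions):

```python
def client_budget(data_volume, max_volume, eps_min=0.5, eps_max=4.0):
    """Scale a client's per-round privacy budget with its share of the
    data, clamped to [eps_min, eps_max]."""
    frac = min(1.0, data_volume / max_volume)
    return eps_min + frac * (eps_max - eps_min)

def noise_scale(clip_norm, epsilon):
    """Gaussian-mechanism-style intuition: a larger budget means a
    smaller noise multiplier applied to the clipped update."""
    return clip_norm / epsilon
```

A data-rich client thus receives a larger epsilon and correspondingly lighter noise, which is the "budget within a given range" behavior the abstract describes.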
Funding: Supported by the National Natural Science Foundation of China (Grant No. 62462022) and the Hainan Province Science and Technology Special Fund (Grant No. ZDYF2022GXJS229).
Abstract: Deep learning's widespread dependence on large datasets raises privacy concerns due to the potential presence of sensitive information. Differential privacy stands out as a crucial method for preserving privacy, garnering significant interest for its ability to offer robust and verifiable privacy safeguards during data training. However, classic differentially private learning introduces the same level of noise into the gradients across training iterations, which affects the trade-off between model utility and privacy guarantees. To address this issue, an adaptive differential privacy mechanism is proposed in this paper, which dynamically adjusts the privacy budget at the layer level as training progresses to resist membership inference attacks. Specifically, an equal privacy budget is initially allocated to each layer. Subsequently, as training advances, the privacy budget for layers closer to the output is reduced (adding more noise), while the budget for layers closer to the input is increased. The adjustment magnitude depends on the training iterations and is automatically determined from the iteration count. This dynamic allocation provides a simple process for adjusting privacy budgets, alleviating the burden on users to tweak parameters and ensuring that privacy preservation strategies align with training progress. Extensive experiments on five well-known datasets indicate that the proposed method outperforms competing methods in terms of accuracy and resilience against membership inference attacks.
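A minimal sketch of such a layer-level schedule, assuming an equal initial split and a linear tilt that grows with training progress (the tilt form and `shift` magnitude are illustrative assumptions, not the paper's formula):

```python
def layer_budgets(eps_total, n_layers, t, t_max, shift=0.5):
    """Start from an equal split of eps_total, then move budget from
    output-side layers to input-side layers as progress t/t_max grows.
    A smaller per-layer budget means more noise on that layer."""
    base = eps_total / n_layers
    progress = t / t_max
    budgets = []
    for i in range(n_layers):  # i = 0 is the input layer
        # tilt: +1 at the input layer, -1 at the output layer, sums to 0
        tilt = (n_layers - 1 - 2 * i) / max(1, n_layers - 1)
        budgets.append(base * (1 + shift * progress * tilt))
    return budgets
```

Because the tilts sum to zero, the total budget is preserved at every iteration; only its distribution across layers shifts toward the input side.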
Funding: Supported by the National Natural Science Foundation of China (No. 62361036) and the Natural Science Foundation of Gansu Province (No. 22JR5RA279).
Abstract: To realize dynamic statistical publishing and protection of location-based data privacy, this paper proposes a differential privacy publishing algorithm based on adaptive sampling and grid clustering and adjustment. The PID control strategy is combined with differences in data variation to realize dynamic adjustment of the data publishing intervals. The spatial-temporal correlations of adjacent snapshots are utilized to design the grid clustering and adjustment algorithm, which reduces the execution time of the publishing process. The budget distribution and budget absorption strategies are improved to form a sliding-window-based differential privacy statistical publishing algorithm, which realizes continuous statistical publishing and privacy protection and improves the accuracy of published data. Experiments and analysis on large datasets of actual locations show that the privacy protection algorithm proposed in this paper is superior to other existing algorithms in terms of the accuracy of adaptive sampling time, the availability of published data, and the execution efficiency of data publishing methods.
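The PID-driven interval adjustment could be sketched as follows, with illustrative gains (the paper's tuning and exact error definition may differ):

```python
class PIDSampler:
    """PID controller that turns the error between observed and expected
    data change into an adjustment of the next publishing interval."""
    def __init__(self, kp=0.8, ki=0.1, kd=0.2, interval=10.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.interval = interval
        self.integral = 0.0
        self.prev_err = 0.0

    def update(self, change, target_change):
        err = target_change - change  # positive: data changing slower than expected
        self.integral += err
        deriv = err - self.prev_err
        self.prev_err = err
        # slow change -> lengthen the interval (publish less often, save budget);
        # fast change -> shorten it (publish more often)
        self.interval = max(1.0, self.interval
                            + self.kp * err + self.ki * self.integral + self.kd * deriv)
        return self.interval
```

Lengthening the interval when the statistics are stable is what lets the sliding-window mechanism absorb unused budget for later, more volatile snapshots.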
Abstract: The widespread application of artificial intelligence (AI) technology in exams has significantly improved the efficiency and fairness of exams, but it has also brought challenges in ethics and privacy protection. This article analyzes the fairness, transparency, and privacy protection issues caused by AI in exams and proposes strategic solutions. It aims to provide guidance for the rational application of AI technology in exams, ensuring a balance between technological progress and ethical protection by strengthening laws and regulations, enhancing technological transparency, strengthening candidates' privacy rights, and improving the management measures of educational examination institutions.
Funding: Supported by the International Scientific and Technological Cooperation Project of Huangpu and Development Districts in Guangzhou (2023GH17); the National Science and Technology Council in Taiwan under grant number NSTC-113-2224-E-027-001; Private Funding (PV009-2023); and the KW IPPP (Research Maintenance Fee) Individual/Centre/Group (RMF1506-2021) at Universiti Malaya, Malaysia.
Abstract: Blockchain interoperability enables seamless communication and asset transfer across isolated permissioned blockchain systems, but it introduces significant security and privacy vulnerabilities. This review systematically assesses the security and privacy landscape of interoperability protocols for permissioned blockchains, identifying key properties, attack vectors, and countermeasures. Following PRISMA 2020 guidelines, we analysed 56 peer-reviewed studies published between 2020 and 2025, retrieved from Scopus, ScienceDirect, Web of Science, and IEEE Xplore. The review focused on interoperability protocols for permissioned blockchains with security and privacy analyses, including only English-language journal articles and conference proceedings. Risk of bias in the included studies was assessed using the MMAT. Results were presented and synthesized through descriptive, bibliometric, and content analysis, with findings organized into tables, charts, and comparative summaries. The review classifies interoperability protocols into relay, sidechain, notary scheme, HTLC, and hybrid types and identifies 18 security and privacy properties along with 31 known attack types. Relay-based protocols showed the broadest security coverage, while HTLC and notary schemes demonstrated significant security gaps. Notably, 93% of the studies examined fewer than four properties or attack types, indicating a fragmented research landscape. The review identifies underexplored areas such as ACID properties, decentralization, and cross-chain attack resilience. It further highlights effective countermeasures, including cryptographic techniques, trusted execution environments, zero-knowledge proofs, and decentralized identity schemes. The findings suggest that, despite growing adoption, current interoperability protocols lack comprehensive security evaluations. More holistic research is needed to ensure the resilience, trustworthiness, and scalability of cross-chain operations in permissioned blockchain ecosystems.
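Of the protocol families compared, HTLC is the simplest to sketch. A minimal hash time-locked contract (single-chain toy; real cross-chain HTLCs replicate this logic on both chains):

```python
import hashlib
import time

class HTLC:
    """Funds unlock for the receiver on presentation of the hash preimage
    before the deadline; otherwise the sender can reclaim them."""
    def __init__(self, sender, receiver, amount, hashlock, timelock):
        self.sender, self.receiver, self.amount = sender, receiver, amount
        self.hashlock = hashlock   # sha256 hex digest of the secret
        self.timelock = timelock   # deadline as a unix timestamp
        self.settled = None        # who ultimately received the funds

    def claim(self, preimage, now=None):
        now = time.time() if now is None else now
        if self.settled is None and now < self.timelock and \
           hashlib.sha256(preimage).hexdigest() == self.hashlock:
            self.settled = self.receiver
        return self.settled == self.receiver

    def refund(self, now=None):
        now = time.time() if now is None else now
        if self.settled is None and now >= self.timelock:
            self.settled = self.sender
        return self.settled == self.sender
```

The security gaps the review notes for HTLCs stem from exactly these two branches: a mistimed timelock or a leaked preimage lets one side settle on one chain while the other side refunds on the counterpart chain.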
Funding: Supported in part by the National Natural Science Foundation of China under Grants 62250410365 and 62071084; the Youth Program of Humanities and Social Sciences of the MoE (23YJCZH291); the Key Laboratory of Computing Power Network and Information Security, Ministry of Education (2023ZD02); and the Deanship of Research and Graduate Studies at King Khalid University through Large Research Project grant number RGP2/15/46.
Abstract: As the 5G architecture gains momentum, interest in 6G is growing. The proliferation of Internet of Things (IoT) devices capable of capturing sensitive images has increased the need for secure transmission and robust access control mechanisms. The vast amount of data generated by low-computing devices poses a challenge to traditional centralized access control, which relies on trusted third parties and complex computations, resulting in intricate interactions, higher hardware costs, and processing delays. To address these issues, this paper introduces a novel distributed access control approach that integrates a decentralized, lightweight encryption mechanism with image transmission. This method enhances data security and resource efficiency without imposing heavy computational and network burdens. Compared with the best existing approach, it achieves a 7% improvement in accuracy, effectively addressing existing gaps in lightweight encryption and recognition performance.
Funding: Funded by the State Grid Corporation Science and Technology Project “Research and Application of Key Technologies for Integrated Sensing and Computing for Intelligent Operation of Power Grid” (Grant No. 5700-202318596A-3-2-ZN).
Abstract: With the deep integration of edge computing, 5G, and Artificial Intelligence of Things (AIoT) technologies, the large-scale deployment of intelligent terminal devices has given rise to data silos and privacy security challenges in sensing-computing fusion scenarios. Traditional federated learning (FL) algorithms face significant limitations in practical applications due to client drift, model bias, and resource constraints under non-independent and identically distributed (Non-IID) data, as well as the computational overhead and utility loss caused by privacy-preserving techniques. To address these issues, this paper proposes an Efficient and Privacy-enhancing Clustering Federated Learning method (FedEPC). This method introduces a dual-round client selection mechanism to optimize training. First, the Sparsity-based Privacy-preserving Representation Extraction Module (SPRE) and the Adaptive Isomorphic Devices Clustering Module (AIDC) cluster clients based on privacy-sensitive features. Second, the Context-aware In-cluster Client Selection Module (CICS) dynamically selects representative devices for training, ensuring heterogeneous data distributions are fully represented. By conducting federated training within clusters and aggregating personalized models, FedEPC effectively mitigates weight divergence caused by data heterogeneity and reduces the impact of client drift and straggler issues. Experimental results demonstrate that FedEPC significantly improves test accuracy in highly Non-IID data scenarios compared with FedAvg and existing clustering FL methods. By ensuring privacy security, FedEPC provides an efficient and robust solution for FL on resource-constrained devices in sensing-computing fusion scenarios, offering both theoretical value and engineering practicality.
Abstract: The rapid development and widespread adoption of massive open online courses (MOOCs) have had a significant impact on China's education curriculum. However, the problem of fake reviews and ratings on these platforms has seriously affected the authenticity of course evaluations and user trust, requiring effective anomaly detection techniques for screening. The textual characteristics of MOOC reviews, such as varying lengths and diverse emotional tendencies, complicate text analysis, and traditional rule-based methods are often inadequate for such unstructured data. We propose a Differential Privacy-Enabled Text Convolutional Neural Network (DP-TextCNN) framework, aiming to achieve high-precision identification of outliers in MOOC course reviews and ratings while protecting user privacy. This framework leverages the advantages of Convolutional Neural Networks (CNNs) in text feature extraction and combines them with differential privacy techniques. It balances data privacy protection with model performance by introducing controlled random noise during the data preprocessing stage. By embedding differential privacy into the model training process, we ensure the privacy security of the framework when handling sensitive data, while maintaining high recognition accuracy. Experimental results indicate that the DP-TextCNN framework achieves an exceptional accuracy of over 95% in identifying fake reviews on the dataset, an outcome that not only verifies the applicability of differential privacy techniques in TextCNN but also underscores their potential in handling sensitive educational data. Additionally, we analyze the specific impact of differential privacy parameters on framework performance, offering theoretical support and empirical analysis to strike an optimal balance between privacy protection and framework efficiency.
Funding: Supported by the Institute of Information & Communications Technology Planning & Evaluation (IITP) (Project Nos. RS-2024-00438551, 30%; 2022-11220701, 30%; 2021-0-01816, 30%) and the National Research Foundation of Korea (NRF) grant funded by the Korean Government (Project No. RS-2023-00208460, 10%).
Abstract: The proliferation of Internet of Things (IoT) devices introduces substantial security challenges, and privacy is now a significant concern for individuals. While maintaining privacy within these systems is essential, it often necessitates compromises, such as added complexity and reduced scalability, thereby complicating management efforts. The principal challenge lies in ensuring confidentiality while simultaneously preserving individuals' anonymity within the system. To address this, we present a proposed architecture for managing IoT devices using blockchain technology. The architecture works both on and off the blockchain and is integrated with dashcams and closed-circuit television (CCTV) security cameras. In this work, the videos recorded by the dashcams and CCTV security cameras are hashed through the InterPlanetary File System (IPFS), and this hash is stored in the blockchain. When accessors want to access a video, they must pass through multiple authentications, including web token authentication and verifiable credentials, to mitigate the risk of malicious users. Our contributions include the proposed framework, which uses a single key for every new video, and a novel chaincode algorithm that incorporates verifiable credentials. Throughput and latency are analyzed through stress testing, and comparison with existing schemes shows significant advantages of the proposed architecture. The architecture features a robust design that significantly enhances the security of blockchain-enabled IoT devices while effectively mitigating the risk of a single point of failure, providing a reliable solution for security concerns in the IoT landscape. Our future endeavors will focus on scaling the system by integrating innovative methods to further enhance security measures.
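A compressed sketch of the store-and-access flow described above, with a list-based ledger and a token set standing in for the blockchain, IPFS, and the web-token/verifiable-credential checks (all stand-ins, not the paper's implementation):

```python
import hashlib

class VideoRegistry:
    """Content-address each video (as IPFS would) and put only the hash
    on a toy append-only ledger; accessors must present a valid token
    before the hash is released."""
    def __init__(self, valid_tokens):
        self.chain = []                      # toy append-only ledger
        self.valid_tokens = set(valid_tokens)

    def store(self, video_bytes):
        # sha256 digest stands in for the IPFS content identifier (CID)
        digest = hashlib.sha256(video_bytes).hexdigest()
        self.chain.append({"index": len(self.chain), "video_hash": digest})
        return digest

    def access(self, index, token):
        if token not in self.valid_tokens:
            return None                      # failed authentication
        return self.chain[index]["video_hash"]
```

The ledger never holds the video itself, only its digest, which is why anyone can verify integrity of a retrieved file while the raw footage stays off-chain.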