Funding: Supported by the National Natural Science Foundation of China under Grant No. 62162009, the Key Technologies R&D Program of Henan Province under Grant No. 242102211065, the Postgraduate Education Reform and Quality Improvement Project of Henan Province under Grant Nos. YJS2025GZZ36, YJS2024AL112, and YJS2024JD38, the Innovation Scientists and Technicians Troop Construction Projects of Henan Province under Grant No. CXTD2017099, and the Scientific Research Innovation Team of Xuchang University under Grant No. 2022CXTD003.
Abstract: With the increasing complexity of malware attack techniques, traditional detection methods face significant challenges such as privacy preservation, data heterogeneity, and the lack of category information. To address these issues, we propose Federated Dynamic Prototype Learning (FedDPL) for malware classification, integrating Federated Learning with a specifically designed K-means algorithm. Under the Federated Learning framework, model training occurs locally without data sharing, effectively protecting user data privacy and preventing the leakage of sensitive information. Furthermore, to tackle data heterogeneity and the lack of category information, FedDPL introduces a dynamic prototype learning mechanism that adaptively adjusts both the positions and the number of clustering prototypes. This substantially reduces the dependency on a predefined number of categories found in standard K-means and its variants, resulting in improved clustering performance and, in turn, more accurate detection of malicious behavior. Experimental results confirm that FedDPL excels at malware classification, demonstrating superior accuracy, robustness, and privacy protection.
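The abstract does not include an implementation, but the split-and-merge idea behind dynamic prototypes can be sketched in a few lines. The following illustrative Python snippet (the thresholds, function names, and the split/merge rules are assumptions, not FedDPL's actual algorithm) starts from a deliberately wrong number of prototypes and lets the clustering adjust both their positions and their count:

```python
import numpy as np

def assign(X, protos):
    """Assign each sample to the nearest prototype (Euclidean distance)."""
    d = np.linalg.norm(X[:, None, :] - protos[None, :, :], axis=2)
    return d.argmin(axis=1)

def dynamic_prototype_step(X, protos, split_thresh=1.0, merge_thresh=0.5):
    """One update round: move prototypes, then split loose clusters and merge close ones."""
    labels = assign(X, protos)
    new = []
    for k in range(len(protos)):
        members = X[labels == k]
        if len(members) == 0:
            continue                                  # drop empty prototypes
        center = members.mean(axis=0)
        spread = np.linalg.norm(members - center, axis=1).mean()
        if spread > split_thresh and len(members) > 1:
            # split: seed two prototypes on opposite sides of the loose cluster
            direction = members.std(axis=0) + 1e-8
            new.extend([center + 0.5 * direction, center - 0.5 * direction])
        else:
            new.append(center)
    protos = np.array(new)
    # merge prototypes that drifted closer than merge_thresh
    keep = []
    for p in protos:
        if all(np.linalg.norm(p - q) > merge_thresh for q in keep):
            keep.append(p)
    return np.array(keep)

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(c, 0.3, size=(50, 2)) for c in ([0, 0], [3, 3], [0, 4])])
protos = X[rng.choice(len(X), 2, replace=False)]      # deliberately wrong initial K
for _ in range(10):
    protos = dynamic_prototype_step(X, protos)
print("discovered prototypes:\n", protos)
```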
Funding: Supported by the Natural Science Foundation of Fujian Province of China (2025J01380), the National Natural Science Foundation of China (No. 62471139), the Major Health Research Project of Fujian Province (2021ZD01001), the Fujian Provincial Units Special Funds for Education and Research (2022639), the Fujian University of Technology Research Start-up Fund (GY-S24002), and the Fujian Research and Training Grants for Young and Middle-aged Leaders in Healthcare (GY-H-24179).
Abstract: The generation of synthetic trajectories has become essential in various fields for analyzing complex movement patterns. However, the use of real-world trajectory data poses significant privacy risks, such as location re-identification and correlation attacks, so privacy-preserving trajectory generation methods are critical for applications that rely on sensitive location data. This paper introduces DPIL-Traj, a framework designed to generate synthetic trajectories while achieving a superior balance between data utility and privacy preservation. First, the framework incorporates Differential Privacy Clustering, which anonymizes trajectory data by adding calibrated noise, protecting sensitive user information. Second, Imitation Learning replicates the decision-making behaviors observed in real-world trajectories: by learning from expert trajectories, this component generates synthetic data that closely mimics real-world decision-making while optimizing the quality of the generated trajectories. Finally, Markov-based Trajectory Generation captures and maintains the inherent temporal dynamics of movement patterns. Extensive experiments on the GeoLife trajectory dataset show that DPIL-Traj improves utility by an average of 19.85% and privacy by an average of 12.51% compared with state-of-the-art approaches. Ablation studies further reveal that DP clustering effectively safeguards privacy, imitation learning enhances utility under noise, and the Markov module strengthens temporal coherence.
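As a rough illustration of the third component only, the sketch below builds a first-order Markov model over discretised grid cells and samples a synthetic cell sequence from it; the grid size, the toy trajectories, and all function names are assumptions rather than DPIL-Traj's implementation:

```python
import numpy as np
from collections import defaultdict

def to_cells(traj, cell=0.01):
    """Discretise (lat, lon) points into grid-cell ids (the cell size is an arbitrary choice)."""
    return [tuple((np.array(p) // cell).astype(int)) for p in traj]

def fit_markov(trajs):
    """First-order Markov model: empirical transition counts between visited cells."""
    counts = defaultdict(lambda: defaultdict(int))
    for t in trajs:
        for a, b in zip(t[:-1], t[1:]):
            counts[a][b] += 1
    return counts

def sample(counts, start, length, rng):
    """Generate a synthetic cell sequence by walking the transition table."""
    seq, cur = [start], start
    for _ in range(length - 1):
        nxt = counts.get(cur)
        if not nxt:
            break                                 # dead end: no observed outgoing transition
        cells, w = zip(*nxt.items())
        p = np.array(w, dtype=float)
        p /= p.sum()
        cur = cells[rng.choice(len(cells), p=p)]
        seq.append(cur)
    return seq

rng = np.random.default_rng(1)
real = [[(39.90 + 0.001 * i, 116.40 + 0.001 * i) for i in range(20)] for _ in range(5)]
cell_trajs = [to_cells(t) for t in real]
model = fit_markov(cell_trajs)
print(sample(model, cell_trajs[0][0], 10, rng))
```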
Funding: Supported in part by the Research and Development Project of China Railway Information Technology Group under Grant WJZG-CKY-2024040 (2024P01), the National Natural Science Foundation of China under Grant 62272100, and the Consulting Project of the Chinese Academy of Engineering under Grant 2023-XY-09.
Abstract: The advent of 6G networks is poised to drive a new era of intelligent, privacy-preserving distributed learning by leveraging advanced communication and AI-driven edge intelligence. Federated Learning (FL) has emerged as a promising paradigm for collaborative model training without exposing raw data. However, its deployment in 6G networks faces significant obstacles, including vulnerability to inference attacks, the complexity of heterogeneous and dynamic network environments, and the inherent trade-off between privacy protection and model performance. In response to these challenges, we introduce DP-Fed6G, a novel FL framework that integrates differential privacy (DP) to strengthen data security while ensuring high-quality learning outcomes. Specifically, DP-Fed6G employs an adaptive noise injection strategy that dynamically adjusts privacy protection levels based on real-time 6G network conditions and device heterogeneity, ensuring robust data security while maximizing model performance and optimizing the privacy-utility trade-off. Extensive experiments on three real-world healthcare datasets demonstrate that DP-Fed6G consistently outperforms existing baselines (DP-FedSGD and DP-FedAvg), achieving up to 10.3% higher test accuracy under the same privacy budget. The proposed framework thus provides a practical solution for secure and privacy-preserving AI in 6G, supporting intelligent decision-making in privacy-sensitive applications.
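To make the adaptive noise injection idea concrete, the following hedged sketch clips each client update and adds Gaussian noise whose scale grows for clients on weaker links; the adaptation rule `adaptive_sigma` and all constants are illustrative assumptions, not DP-Fed6G's calibration:

```python
import numpy as np

def dp_client_update(grad, clip_norm, sigma, rng):
    """Clip the client update to a fixed L2 norm, then add Gaussian noise (Gaussian mechanism)."""
    norm = np.linalg.norm(grad)
    clipped = grad * min(1.0, clip_norm / (norm + 1e-12))
    return clipped + rng.normal(0.0, sigma * clip_norm, size=grad.shape)

def adaptive_sigma(base_sigma, link_quality):
    """Illustrative adaptation rule (assumption): weaker links get a larger noise multiplier,
    on the premise that their contribution to the global model is less reliable anyway."""
    return base_sigma * (1.0 + (1.0 - link_quality))

rng = np.random.default_rng(0)
true_grad = np.ones(10)
clients = [{"grad": true_grad + rng.normal(0, 0.1, 10), "link": q} for q in (0.9, 0.6, 0.3)]

updates = [dp_client_update(c["grad"], clip_norm=1.0,
                            sigma=adaptive_sigma(0.8, c["link"]), rng=rng)
           for c in clients]
global_update = np.mean(updates, axis=0)   # server-side FedAvg-style aggregation
print(np.round(global_update, 3))
```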
Funding: Supported by the National Key Research and Development Program of China (2023YFF0612900, 2023YFF0612902), the Natural Science Foundation of Beijing, China (4254086), the National Natural Science Foundation of China (62472032), the Open Project Funding of the Key Laboratory of Mobile Application Innovation and Governance Technology, Ministry of Industry and Information Technology (2023IFS080601-K), the Beijing Institute of Technology Research Fund Program for Young Scholars, and the Young Elite Scientists Sponsorship Program by CAST (2023QNRC001).
Abstract: Dear Editor, This letter addresses the critical challenge of preserving privacy in graph learning without compromising data utility. Differential privacy (DP) is emerging as an effective method for privacy-preserving graph learning. However, its application often diminishes data utility, especially for nodes with fewer neighbors in graph neural networks (GNNs).
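The degree dependence the letter points to can be illustrated with the standard Laplace mechanism applied to a node's mean-aggregated neighbour feature: changing one neighbour shifts the mean by at most the feature range divided by the degree, so noise calibrated to that sensitivity is much larger for sparsely connected nodes. The sketch below is a generic illustration of this effect, not the letter's proposed method:

```python
import numpy as np

def private_mean_aggregate(neighbor_feats, eps, feat_range=1.0, rng=None):
    """Laplace mechanism on a node's mean-aggregated scalar neighbour feature.
    Sensitivity of the mean w.r.t. one neighbour is feat_range / deg, so the
    calibrated noise scale is feat_range / (deg * eps): sparser nodes get more noise."""
    rng = rng or np.random.default_rng()
    deg = len(neighbor_feats)
    clean = np.mean(neighbor_feats)
    scale = feat_range / (deg * eps)
    return clean + rng.laplace(0.0, scale)

rng = np.random.default_rng(0)
eps = 1.0
for deg in (2, 8, 32):
    feats = rng.uniform(0, 1, size=deg)               # bounded scalar features in [0, 1]
    errs = [abs(private_mean_aggregate(feats, eps, rng=rng) - feats.mean())
            for _ in range(200)]
    print(f"degree {deg:>2}: mean absolute DP error {np.mean(errs):.3f}")
```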
Funding: Funded by the National Natural Science Foundation of China, grant number 61605004, the Fundamental Research Funds for the Central Universities, grant number FRF-TP-19-016A2, and Guizhou Power Grid Co., Ltd. 2024 first batch of services (2024-2026 technology R&D services for science and technology projects, in addition to national and SGCC key projects), grant number 060100KC23100012.
Abstract: This study addresses the risk of privacy leakage during the transmission and sharing of multimodal data in smart grid substations by proposing a three-tier privacy-preserving architecture based on asynchronous federated learning. The framework integrates blockchain technology, the InterPlanetary File System (IPFS) for distributed storage, and a dynamic differential privacy mechanism to achieve collaborative security across the storage, service, and federated coordination layers. It accommodates both multimodal data classification and object detection tasks, enabling the identification and localization of key targets and abnormal behaviors in substation scenarios while ensuring privacy protection, and it effectively mitigates the single points of failure and model leakage issues inherent in centralized architectures. A dynamically adjustable differential privacy mechanism allocates privacy budgets according to client contribution levels and upload frequencies, achieving a personalized balance between model performance and privacy protection. Multi-dimensional experimental evaluations, covering classification accuracy, F1-score, encryption latency, and aggregation latency, verify the security and efficiency of the proposed architecture. The improved CNN model achieves 72.34% accuracy and an F1-score of 0.72 in object detection and classification tasks on infrared surveillance imagery, effectively identifying typical risk events such as missing safety helmets and unauthorized intrusion, while maintaining an aggregation latency of only 1.58 s and a query latency of 80.79 ms. Compared with traditional static differential privacy and centralized approaches, the proposed method demonstrates significant advantages in accuracy, latency, and security, providing a new technical paradigm for efficient and secure data sharing, object detection, and privacy preservation in smart grid substations.
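A minimal sketch of the budget-allocation idea follows: each client's share of the round's privacy budget grows with its contribution score and upload frequency, and its update is then clipped and perturbed with the Laplace mechanism. The weighting rule, constants, and function names are assumptions for illustration, not the architecture's actual policy:

```python
import numpy as np

def allocate_budgets(contrib, upload_freq, total_eps=1.0, alpha=0.5):
    """Illustrative allocation (assumption): clients that contribute more and upload more often
    receive a larger share of the round's privacy budget, hence less noise on their updates."""
    contrib = np.asarray(contrib, float) / np.sum(contrib)
    freq = np.asarray(upload_freq, float) / np.sum(upload_freq)
    score = alpha * contrib + (1 - alpha) * freq
    return total_eps * score / score.sum()

def noisy_update(update, eps, clip=1.0, rng=None):
    """Clip to L1 norm `clip`, then apply the Laplace mechanism with sensitivity = clip."""
    rng = rng or np.random.default_rng()
    update = update * min(1.0, clip / (np.abs(update).sum() + 1e-12))
    return update + rng.laplace(0.0, clip / eps, size=update.shape)

rng = np.random.default_rng(0)
budgets = allocate_budgets(contrib=[0.6, 0.3, 0.1], upload_freq=[10, 5, 1])
updates = [noisy_update(np.full(8, 0.1), eps, rng=rng) for eps in budgets]
print("per-client eps:", np.round(budgets, 3))
print("aggregated update:", np.round(np.mean(updates, axis=0), 3))
```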
Funding: Supported in part by the National Natural Science Foundation of China (Grant No. 61971291), the Basic Scientific Research Project of the Liaoning Provincial Department of Education (LJ212410144013), the Leading Talent of the "Xing Liao Ying Cai Plan" (XLYC2202013), the Shenyang Natural Science Foundation (22-315-6-10), and the Guangxuan Scholar of Shenyang Ligong University (SYLUGXXZ202205).
Abstract: With the popularization of smart devices, Location-Based Services (LBS) greatly facilitate users' lives but also bring the risk of location privacy leakage. Existing location privacy protection methods are deficient: they fail to allocate the privacy budget reasonably for non-outlier location points and ignore the critical location information that outlier points may contain, leading to decreased data availability and privacy exposure. To address these problems, this paper proposes a Mix Location Privacy Preservation Method Based on Differential Privacy with Clustering (MLDP). The method first uses the DBSCAN clustering algorithm to classify location points into non-outliers and outliers. For non-outliers, a scoring function is designed by combining geographic and semantic information, and the privacy budget is allocated according to the heat intensity of the hotspot area; for outliers, a scoring function is constructed to allocate the privacy budget based on their correlation with the hotspot area. By jointly considering the geographic information, semantic information, and hotspot correlation of the location points, a reasonable privacy budget is assigned to each location point, and noise is finally added through the Laplace mechanism to realize privacy protection. Experimental results on two real trajectory datasets, Geolife and T-Drive, show that MLDP significantly improves data availability while effectively protecting location privacy. Compared with the baseline methods, the maximum available data ratio of MLDP is 1. Moreover, compared with the RandomNoise method, its execution time is 0.056–0.061 s longer and its logRE is 0.12951–0.62194 lower; compared with the KemeansDP, QTK-DP, DPK-F, IDP-SC, and DPK-Means-up methods, it saves 0.114–0.296 s in execution time and its logRE is 0.01112–0.38283 lower.
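The pipeline can be approximated in a short sketch: DBSCAN labels outliers, a stand-in score (cluster size here, in place of the paper's geographic/semantic scoring functions) maps each point to a personal budget, and Laplace noise is added to the coordinates. Everything below is illustrative, including the budget range and noise calibration:

```python
import numpy as np
from sklearn.cluster import DBSCAN

def mldp_like_perturb(points, eps_lo=0.5, eps_hi=2.0, db_eps=0.005, min_samples=5, rng=None):
    """Illustrative sketch of an MLDP-style pipeline (not the paper's exact scoring):
    DBSCAN separates hotspot points (labels >= 0) from outliers (label -1), a stand-in
    score assigns each point a budget in [eps_lo, eps_hi], and Laplace noise with scale
    1/eps is added to the coordinates."""
    rng = rng or np.random.default_rng()
    labels = DBSCAN(eps=db_eps, min_samples=min_samples).fit_predict(points)
    sizes = {c: np.sum(labels == c) for c in set(labels) if c != -1}
    score = np.array([sizes[l] if l != -1 else 1.0 for l in labels], dtype=float)
    eps_i = eps_lo + (eps_hi - eps_lo) * score / score.max()       # per-point budget
    noisy = points + rng.laplace(0.0, (1.0 / eps_i)[:, None], size=points.shape)
    return noisy, labels, eps_i

rng = np.random.default_rng(0)
hotspot = rng.normal([39.98, 116.31], 0.001, size=(80, 2))         # dense hotspot
outliers = rng.uniform([39.90, 116.20], [40.05, 116.45], size=(5, 2))
pts = np.vstack([hotspot, outliers])
noisy, labels, eps_i = mldp_like_perturb(pts)
print("outliers found:", int(np.sum(labels == -1)),
      "| per-point eps range:", round(float(eps_i.min()), 3), "-", round(float(eps_i.max()), 3))
```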
Funding: Supported in part by the Tianjin Natural Science Foundation Project (24JCZDJC01000) and the Fundamental Research Funds for the Central Universities of China (No. 3122025091).
Abstract: As deep learning (DL) models are increasingly deployed in sensitive domains (e.g., healthcare), concerns over privacy and security have intensified. Conventional penetration testing frameworks, such as OWASP and NIST, are effective for traditional networks and applications but lack the capabilities to address DL-specific threats such as model inversion, membership inference, and adversarial attacks. This review provides a comprehensive analysis of penetration testing for the privacy of DL models, examining the shortfalls of existing frameworks, tools, and testing methodologies. Through systematic evaluation of the existing literature and empirical analysis, we make three major contributions: (i) a critical assessment of traditional penetration testing frameworks' inadequacies when applied to DL-specific privacy vulnerabilities, (ii) a comprehensive evaluation of state-of-the-art privacy-preserving methods and their integration with penetration testing workflows, and (iii) a structured framework that combines reconnaissance, threat modeling, exploitation, and post-exploitation phases specifically tailored for DL privacy assessment. Moreover, this review evaluates popular solutions such as the IBM Adversarial Robustness Toolbox and TensorFlow Privacy, alongside privacy-preserving techniques (e.g., Differential Privacy, Homomorphic Encryption, and Federated Learning), which we analyze through comparative studies of their effectiveness, computational overhead, and practical deployment constraints. While these techniques offer promising safeguards, their adoption is hindered by accuracy loss, performance overhead, and the rapid evolution of attack strategies. Our findings reveal that no single existing solution provides comprehensive protection, which leads us to propose a hybrid approach that strategically combines multiple privacy-preserving mechanisms. The findings of this survey underscore an urgent need for automated, regulation-compliant penetration testing frameworks specifically tailored to DL systems. We argue for hybrid privacy solutions that combine multiple protective mechanisms to ensure both model accuracy and privacy. Building on our analysis, we present actionable recommendations for developing adaptive penetration testing strategies that incorporate automated vulnerability assessment, continuous monitoring, and regulatory compliance verification.
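As an example of the kind of DL-specific test such a framework would automate, the snippet below runs a simple loss-threshold membership-inference check against a deliberately overfit toy classifier; the model, data, and threshold rule are assumptions and are not drawn from the reviewed tools:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy target model trained on "member" data; the attacker only needs per-sample loss.
rng = np.random.default_rng(0)
X_members = rng.normal(0, 1, size=(60, 100))
y_members = (X_members[:, 0] > 0).astype(int)
X_non = rng.normal(0, 1, size=(60, 100))
y_non = (X_non[:, 0] > 0).astype(int)
model = LogisticRegression(max_iter=1000).fit(X_members, y_members)   # overfits on purpose

def per_sample_loss(model, X, y):
    """Cross-entropy loss of the target model on each sample (the attacker's signal)."""
    p = model.predict_proba(X)[np.arange(len(y)), y]
    return -np.log(np.clip(p, 1e-12, 1.0))

loss_in = per_sample_loss(model, X_members, y_members)
loss_out = per_sample_loss(model, X_non, y_non)
threshold = np.median(np.concatenate([loss_in, loss_out]))            # simple global threshold
guess_member = np.concatenate([loss_in, loss_out]) < threshold        # low loss -> "member"
truth = np.concatenate([np.ones_like(loss_in), np.zeros_like(loss_out)]).astype(bool)
print("membership-inference accuracy:", (guess_member == truth).mean())
```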
Funding: Supported by the National Key R&D Program of China under Grant No. 2023YFA1008702 and the National Natural Science Foundation of China under Grant No. 12571300.
Abstract: The support vector machine, a widely used binary classification method, may expose sensitive information during training. To address this, the authors propose a personalized differential privacy method that extends differential privacy. Specifically, the authors introduce personalized differentially private support vector machines that meet different individuals' privacy requirements, using a reweighting strategy and the Laplace mechanism. Theoretical analysis demonstrates that the proposed methods satisfy the requirements of personalized differential privacy while ensuring model prediction accuracy at these privacy levels. Extensive experiments demonstrate that the proposed methods outperform existing methods.
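A hedged sketch of the two ingredients named in the abstract, reweighting and the Laplace mechanism, is shown below using output perturbation on a linear SVM; the weight rule and the noise calibration are illustrative assumptions, not the authors' derivation:

```python
import numpy as np
from sklearn.svm import LinearSVC

def personalized_dp_svm(X, y, personal_eps, out_scale=0.1, rng=None):
    """Illustrative combination of the two ideas (not the paper's exact calibration):
    (1) reweighting -- samples with stricter privacy demands (smaller eps) get smaller
        weight, so they influence the decision boundary less;
    (2) output perturbation -- Laplace noise is added to the learned coefficients, with
        a scale tied to the strictest remaining requirement."""
    rng = rng or np.random.default_rng()
    w = personal_eps / personal_eps.max()                  # stand-in reweighting rule
    clf = LinearSVC(C=1.0, max_iter=5000).fit(X, y, sample_weight=w)
    scale = out_scale / personal_eps.min()                 # stricter eps -> larger noise
    clf.coef_ = clf.coef_ + rng.laplace(0.0, scale, size=clf.coef_.shape)
    clf.intercept_ = clf.intercept_ + rng.laplace(0.0, scale, size=clf.intercept_.shape)
    return clf

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1, 1, (100, 5)), rng.normal(1, 1, (100, 5))])
y = np.array([0] * 100 + [1] * 100)
personal_eps = rng.uniform(0.2, 2.0, size=200)             # each individual's own budget
clf = personalized_dp_svm(X, y, personal_eps, rng=rng)
print("train accuracy under personalized DP:", clf.score(X, y))
```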
基金funding from the European Commission by the Ruralities project(grant agreement no.101060876).
Abstract: In this paper, we propose a new privacy-aware transmission scheduling algorithm for 6G ad hoc networks. The system enables end nodes to select the optimal time and scheme to transmit private data safely. In dynamic, heterogeneous 6G infrastructures, unstable links and non-uniform hardware capabilities create critical security and privacy issues, and traditional protocols are often too computationally heavy for 6G services to achieve their expected Quality of Service (QoS). Because the transport network is built of ad hoc nodes, there is no guarantee about their trustworthiness or behavior, and transversal functionalities are delegated to the extreme nodes. However, while security can be guaranteed by extreme-to-extreme solutions, privacy cannot, as all intermediate nodes still have to handle the data packets they transport. In addition, traditional schemes for private anonymous ad hoc communication are vulnerable to modern intelligent attacks based on learning models. The proposed scheme fills this gap. Results show that the probability of a successful intelligent attack is reduced by up to 65% compared with ad hoc networks that use no privacy protection strategy, while the congestion probability remains below 0.001%, as required by 6G services.
Abstract: The convergence of Artificial Intelligence (AI) and the Internet of Things (IoT) has enabled Artificial Intelligence of Things (AIoT) systems that support intelligent and responsive smart societies, but it also introduces major security and privacy concerns across domains such as healthcare, transportation, and smart cities. This Systematic Literature Review (SLR) addresses three research questions: identifying the major threats and challenges in AIoT ecosystems, reviewing state-of-the-art security and privacy techniques, and evaluating their effectiveness. The review covers the period from 2020 to 2025 and was conducted using major academic digital libraries, including IEEE Xplore, the ACM Digital Library, ScienceDirect, SpringerLink, and the Wiley Online Library, with a focus on security- and privacy-enhancing techniques such as blockchain, federated learning, and edge AI. The SLR identifies key challenges including data privacy leakage, authentication, cloud dependency, and attack-surface expansion, and finds that emerging techniques, while promising, often involve trade-offs related to latency, scalability, and compliance. The study highlights future directions including lightweight cryptography, standardization, and explainable AI to support secure and trustworthy AIoT-enabled smart societies.