The combination of blockchain and Internet of Things technology has made significant progress in smart agriculture, providing substantial support for data sharing and data privacy protection. Nevertheless, achieving efficient interactivity and privacy protection of agricultural data remains a crucial issue. To address these problems, we propose a blockchain-assisted federated learning-driven support vector machine (BAFL-SVM) framework to realize efficient data sharing and privacy protection. The BAFL-SVM framework is composed of the FedSVM-RiceCare module and the FedPrivChain module. Specifically, in FedSVM-RiceCare, we utilize federated learning and SVM to train the model, improving recognition accuracy. Then, in FedPrivChain, we adopt homomorphic encryption and a secret-sharing scheme to encrypt the local model parameters before uploading them. Finally, we conduct extensive experiments on a real-world dataset of rice pests and diseases; the results show that our framework not only guarantees the secure sharing of data but also achieves higher recognition accuracy than other schemes.
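The parameter-protection step in FedPrivChain can be illustrated with plain additive secret sharing. The sketch below is a minimal stand-in, not the paper's scheme: the field modulus, the integer quantization of a weight, and the share count are all assumptions, and the homomorphic-encryption half of the pipeline is omitted.

```python
import random

PRIME = 2**61 - 1  # illustrative field modulus, not the paper's parameter


def share(value, n):
    """Split an integer (e.g., a quantized model weight) into n additive shares."""
    shares = [random.randrange(PRIME) for _ in range(n - 1)]
    # Last share is chosen so all shares sum to the secret modulo PRIME
    shares.append((value - sum(shares)) % PRIME)
    return shares


def reconstruct(shares):
    """Recombine shares; no proper subset reveals anything about the secret."""
    return sum(shares) % PRIME


# A model parameter quantized to an integer before sharing (assumption)
param = 123456
shares = share(param, 3)
```

No single share leaks the parameter; only the aggregator holding all shares (or their sum) recovers it, which is what lets clients upload protected updates.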
Large Language Models (LLMs) are complex artificial intelligence systems that can understand, generate, and translate human languages. By analyzing large amounts of textual data, these models learn language patterns to perform tasks such as writing, conversation, and summarization. Agents built on LLMs (LLM agents) further extend these capabilities, allowing them to process user interactions and perform complex operations in diverse task environments. However, during the processing and generation of massive data, LLMs and LLM agents pose a risk of sensitive information leakage, potentially threatening data privacy. This paper demonstrates the data privacy issues associated with LLMs and LLM agents to facilitate a comprehensive understanding. Specifically, we conduct an in-depth survey of privacy threats, encompassing passive privacy leakage and active privacy attacks. Subsequently, we introduce the privacy protection mechanisms employed by LLMs and LLM agents and provide a detailed analysis of their effectiveness. Finally, we explore the privacy protection challenges for LLMs and LLM agents and outline potential directions for future developments in this domain.
Edge computing is becoming ever more relevant for offloading compute-heavy tasks in vehicular networks. In this context, the concept of vehicular micro clouds (VMCs) has been proposed to use compute and storage resources on nearby vehicles to complete computational tasks. As many tasks in this application domain are time critical, offloading to the cloud is prohibitive; additionally, task deadlines have to be met. This paper addresses two main challenges. First, we present a task migration algorithm supporting deadlines in vehicular edge computing. The algorithm follows the earliest-deadline-first model but in the presence of dynamic processing resources, i.e., vehicles joining and leaving a VMC. This task offloading is very sensitive to the mobility of vehicles in a VMC, i.e., the so-called dwell time a vehicle spends in the VMC. Thus, second, we propose a machine learning-based solution for dwell time prediction. Our dwell time prediction model uses a random forest approach to estimate how long a vehicle will stay in a VMC. Our approach is evaluated using mobility traces of an artificial simple intersection scenario as well as of real urban traffic in the cities of Luxembourg and Nagoya. The proposed approach realizes low-delay and low-failure task migration in dynamic vehicular conditions, advancing the state of the art in vehicular edge computing.
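A dwell-time predictor in the spirit described, a random forest regressor over per-vehicle mobility features, might be sketched as follows with scikit-learn. The feature set (speed, remaining distance, heading offset) and the synthetic ground-truth rule are invented for illustration; the paper trains on real mobility traces.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
# Hypothetical features: speed (m/s), distance to VMC boundary (m), heading offset (rad)
X = rng.uniform([0.0, 0.0, -1.0], [20.0, 300.0, 1.0], size=(500, 3))
# Toy ground truth: dwell time grows with remaining distance, shrinks with speed
y = X[:, 1] / (X[:, 0] + 1.0) + rng.normal(0.0, 0.5, 500)

model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)
# Predict dwell time for a vehicle at 10 m/s with 150 m left inside the VMC
pred = model.predict([[10.0, 150.0, 0.1]])
```

The predicted dwell time can then feed the migration algorithm's decision of whether a task's deadline fits within the vehicle's expected residence in the VMC.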
Federated Learning (FL) is currently a widely used collaborative learning framework, whose distinguishing feature is that the clients involved in training do not need to share raw data but only transfer model parameters to share knowledge, finally obtaining a global model with improved performance. However, recent studies have found that sharing model parameters may still lead to privacy leakage: local training data can be reconstructed from the shared parameters, threatening individual privacy and security. We observe that most current attacks aim at client-specific data reconstruction, while limited attention is paid to the information leakage of the global model. In our work, we propose a novel FL attack based on shared model parameters that can deduce the data distribution of the global model. Different from other FL attacks that aim to infer individual clients' raw data, the data distribution inference attack proposed in this work shows that attackers can deduce the data distribution information behind the global model. We argue that such information is valuable, since the training data behind a well-trained global model represents the common knowledge of a specific task, such as social networks and e-commerce applications. To implement such an attack, our key idea is to adopt a deep reinforcement learning approach to guide the attack process, where the RL agent adjusts the pseudo-data distribution automatically until it is similar to the ground-truth data distribution. Through a carefully designed Markov decision process (MDP), our implementation ensures stable attack performance, and experimental results verify the effectiveness of the proposed inference attack.
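The core loop, an agent nudging a pseudo-data distribution until it matches the hidden ground truth, can be caricatured by a greedy hill-climb. This is a simplified stand-in for the deep RL policy: the three-class true distribution, the step size, and the reward function (plain negative L1 distance, standing in for feedback the agent would derive from probing the global model) are all invented for illustration.

```python
import random

true_dist = [0.5, 0.3, 0.2]  # hidden class distribution behind the global model


def reward(pseudo):
    # Stand-in feedback signal; the paper derives this from the global model
    return -sum(abs(p - t) for p, t in zip(pseudo, true_dist))


pseudo = [1 / 3, 1 / 3, 1 / 3]  # attacker starts from a uniform guess
step = 0.01
random.seed(0)
for _ in range(2000):
    # Propose moving a small amount of probability mass between two classes
    i, j = random.sample(range(3), 2)
    candidate = pseudo[:]
    candidate[i] += step
    candidate[j] -= step
    # Keep the move only if it stays a valid distribution and improves reward
    if candidate[j] >= 0 and reward(candidate) > reward(pseudo):
        pseudo = candidate
```

After the loop, `pseudo` has drifted close to `true_dist`, illustrating why black-box feedback on distribution similarity is enough to leak the global model's data distribution.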
With the rapid advancement of cloud technologies, cloud services have contributed enormously to the application development life-cycle in the cloud community. In this context, Kubernetes has played a pivotal role as a cloud computing tool, enabling developers to adopt efficient and automated deployment strategies. Using Kubernetes as an orchestration tool and a cloud computing system as a manager of the infrastructure, developers can accelerate the development and deployment process. With cloud providers such as GCP, AWS, Azure, and Oracle offering Kubernetes services, the availability of both x86 and ARM platforms has become evident. However, while x86 currently dominates the market, ARM-based solutions have seen limited adoption, with only a few practitioners actively working on ARM deployments. This study explores the efficiency and cost-effectiveness of implementing Kubernetes on different CPU platforms. By comparing the performance of x86 and ARM platforms, this research seeks to ascertain whether transitioning to ARM presents a more advantageous option for Kubernetes deployments. Through a comprehensive evaluation of scalability, cost, and overall performance, this study aims to shed light on the viability of leveraging ARM across different CPUs by providing valuable insights.
The ever-escalating prevalence of malware is a serious cybersecurity threat, often requiring advanced post-incident forensic investigation techniques. This paper proposes a framework to enhance malware forensics by leveraging reinforcement learning (RL). The approach combines heuristic and signature-based methods, supported by RL through a unified MDP model, which breaks down malware analysis into distinct states and actions. This optimisation enhances the identification and classification of malware variants. The framework employs Q-learning and other techniques to boost the speed and accuracy of detecting new and unknown malware, outperforming traditional methods. We tested the experimental framework across multiple virtual environments infected with various malware types. The RL agent collected forensic evidence and improved its performance through Q-tables and temporal difference learning. The epsilon-greedy exploration strategy, in conjunction with Q-learning updates, effectively facilitated state transitions. The learning rate depended on the complexity of the MDP environment: higher in simpler ones for quicker convergence and lower in more complex ones for stability. This RL-enhanced model significantly reduced the time required for post-incident malware investigations, achieving a high accuracy rate of 94% in identifying malware. These results indicate RL's potential to revolutionise post-incident forensic investigations in cybersecurity. Future work will incorporate more advanced RL algorithms and large language models (LLMs) to further enhance the effectiveness of malware forensic analysis.
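The Q-table, temporal-difference update, and epsilon-greedy exploration described above can be sketched on a toy MDP. The state space here (a short chain of analysis stages with invented sizes and rewards) is an assumption, not the paper's forensic environment; only the update rule mirrors the text.

```python
import random

# Toy MDP: states are analysis stages; action 1 advances a stage, action 0 stays.
N_STATES, N_ACTIONS = 5, 2      # hypothetical sizes, not from the paper
GOAL = N_STATES - 1
alpha, gamma, epsilon = 0.5, 0.9, 0.1

Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]
random.seed(1)


def step(s, a):
    """Advance the stage on action 1; reaching the goal pays reward 1."""
    s2 = min(s + a, GOAL)
    return s2, (1.0 if s2 == GOAL else 0.0)


for _ in range(500):                      # training episodes
    s = 0
    while s != GOAL:
        if random.random() < epsilon:     # epsilon-greedy exploration
            a = random.randrange(N_ACTIONS)
        else:                             # greedy exploitation
            a = max(range(N_ACTIONS), key=lambda x: Q[s][x])
        s2, r = step(s, a)
        # Temporal-difference (Q-learning) update
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2
```

After training, the greedy policy prefers the advancing action in every stage, which is the behaviour the update rule is supposed to learn.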
In this paper, we present a consensus mechanism designed specifically for supply chain blockchains, with a core focus on establishing trust among participating stakeholders through a novel reputation-based approach. The prevailing consensus mechanisms, initially crafted for cryptocurrency applications, prove unsuitable for the unique dynamics of supply chain systems. Unlike the broad inclusivity of cryptocurrency networks, our proposed mechanism insists on stakeholder participation rooted in process-specific quality criteria. The delineation of roles for supply chain participants within the consensus process becomes paramount. While reputation serves as a well-established quality parameter in various domains, its nuanced impact on non-cryptocurrency consensus mechanisms remains uncharted territory. Moreover, recognizing the primary role of efficient block verification in blockchain-enabled supply chains, our work introduces a comprehensive reputation model. This model strategically selects a leader node to orchestrate the entire block mining process within the consensus. Additionally, we introduce a Schnorr multisignature-based block verification mechanism seamlessly integrated into our proposed consensus model. Rigorous experiments are conducted to evaluate the performance and feasibility of the proposed consensus mechanism, contributing valuable insights to the evolving landscape of blockchain technology in supply chain applications.
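A reputation model that selects a leader node might, in its simplest form, draw among stakeholders weighted by reputation once a quality threshold is met. The node names, scores, and threshold below are invented for illustration; the paper's model is more comprehensive.

```python
import random

random.seed(3)
# Hypothetical stakeholder reputations accumulated from past block validations
reputation = {"supplier_A": 0.9, "carrier_B": 0.6, "retailer_C": 0.75}


def select_leader(rep, threshold=0.5):
    """Reputation-weighted leader draw among nodes above a quality threshold."""
    eligible = {n: r for n, r in rep.items() if r >= threshold}
    nodes, weights = zip(*eligible.items())
    # Higher reputation => proportionally higher chance to lead block mining
    return random.choices(nodes, weights=weights, k=1)[0]


leader = select_leader(reputation)
```

The threshold realizes the "process-specific quality criteria" idea: low-reputation participants are simply never eligible to orchestrate block mining.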
Docker is a vital tool in modern development, enabling the creation, deployment, and execution of applications using containers, thereby ensuring consistency across various environments. However, developers often face challenges, particularly with filesystem complexities and performance bottlenecks, when working directly within Docker containers. This is where Mutagen comes into play, significantly enhancing the Docker experience by offering efficient network file synchronization, reducing latency in file operations, and improving overall data transfer rates in containerized environments. By exploring Docker's architecture, examining Mutagen's role, and evaluating their combined performance impacts, particularly in terms of file operation speeds and development workflow efficiency, this research provides a deep understanding of these technologies and their potential to streamline development processes in networked and distributed environments.
The increasing prevalence of cancer necessitates advanced methodologies for early detection and diagnosis. Early intervention is crucial for improving patient outcomes and reducing the overall burden on healthcare systems. Traditional centralized methods of medical image analysis pose significant risks to patient privacy and data security, as they require the aggregation of sensitive information in a single location. Furthermore, these methods often suffer from limitations related to data diversity and scalability, hindering the development of universally robust diagnostic models. Recent advancements in machine learning, particularly deep learning, have shown promise in enhancing medical image analysis. However, the need to access large and diverse datasets for training these models introduces challenges in maintaining patient confidentiality and adhering to strict data protection regulations. This paper introduces FedViTBloc, a secure and privacy-enhanced framework for medical image analysis utilizing Federated Learning (FL) combined with Vision Transformers (ViT) and blockchain technology. The proposed system ensures patient data privacy and security through fully homomorphic encryption and differential privacy techniques. By employing a decentralized FL approach, multiple medical institutions can collaboratively train a robust deep-learning model without sharing raw data. Blockchain integration further enhances the security and trustworthiness of the FL process by managing client registration and ensuring secure onboarding of participants. Experimental results demonstrate the effectiveness of FedViTBloc in medical image analysis while maintaining stringent privacy standards, achieving 67% accuracy and reducing loss below 2 across 10 clients, ensuring scalability and robustness.
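One common way to combine the described ingredients of update privacy, clipping plus Gaussian noise in the style of the Gaussian mechanism for differential privacy, can be sketched as follows before averaging client updates. The clip norm, noise scale, client count, and update dimension are illustrative choices, and the fully homomorphic encryption layer is not shown.

```python
import numpy as np

rng = np.random.default_rng(42)


def dp_noised_update(update, clip_norm=1.0, noise_std=0.1):
    """Clip a client update to a norm bound, then add Gaussian noise."""
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / max(norm, 1e-12))
    return clipped + rng.normal(0.0, noise_std, size=update.shape)


# Three hypothetical clients' raw model-weight updates (dimension is illustrative)
updates = [rng.normal(0, 1, 4) for _ in range(3)]
# Server-side federated averaging over the privatized updates
global_update = np.mean([dp_noised_update(u) for u in updates], axis=0)
```

Clipping bounds each client's influence (the sensitivity), which is what makes the added Gaussian noise yield a differential-privacy guarantee for the aggregate.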
Modern industries demand the classification of satellite images and use the information obtained from them for their advantage and growth. The extracted information also plays a crucial role in national security and the mapping of geographical locations. Conventional methods often fail to handle the complexities of this process, so an effective method with high accuracy and stability is required. In this paper, a new methodology named RankEnsembleFS is proposed that addresses the crucial issues of stability and feature aggregation in the context of the SAT-6 dataset. RankEnsembleFS uses a two-step process that first ranks the features and then selects the optimal feature subset from the top-ranked features. RankEnsembleFS achieved accuracy comparable to state-of-the-art models on the SAT-6 dataset while significantly reducing the feature space. This reduction is important because it lowers computational complexity and enhances the interpretability of the model. Moreover, the proposed method demonstrated good stability in handling changes in data characteristics, which is critical for reliable performance over time, and surpasses existing ML ensemble methods in terms of stability, threshold setting, and feature aggregation. In summary, this paper provides compelling evidence that the RankEnsembleFS methodology delivers excellent performance and overcomes key issues in feature selection and image classification for the SAT-6 dataset.
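The rank-then-select idea can be sketched with two simple filter scores whose ranks are aggregated, the ensemble flavour of RankEnsembleFS in miniature. The scores (absolute correlation and class mean difference), the synthetic data, and the subset size are assumptions, not the method's actual rankers.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic data: 200 samples, 20 features, only the first three carry signal
X = rng.normal(size=(200, 20))
y = (X[:, 0] + 0.8 * X[:, 1] - 0.5 * X[:, 2] > 0).astype(int)


def rank_features(X, y):
    """Rank features by aggregating the ranks of two simple filter scores."""
    corr = np.array([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])])
    # Second ranker: absolute mean difference between the two classes
    diff = np.abs(X[y == 1].mean(axis=0) - X[y == 0].mean(axis=0))
    # Rank each score (0 = best) and sum the ranks; lower aggregate = better
    agg = np.argsort(np.argsort(-corr)) + np.argsort(np.argsort(-diff))
    return np.argsort(agg)


# Step two: keep only the top-ranked subset of features
top5 = rank_features(X, y)[:5]
```

Aggregating ranks rather than raw scores is one way such ensembles gain stability: a feature must score well under several criteria to survive the cut.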
Knowledge graph (KG) representation learning aims to map entities and relations into a low-dimensional representation space, showing significant potential in many tasks. Existing approaches fall into two categories: (1) graph-based approaches encode KG elements into vectors using structural score functions; (2) text-based approaches embed text descriptions of entities and relations via pre-trained language models (PLMs), further fine-tuned with triples. We argue that graph-based approaches struggle with sparse data, while text-based approaches face challenges with complex relations. To address these limitations, we propose a unified Text-Augmented Attention-based Recurrent Network, bridging the gap between graphs and natural language. Specifically, we employ a graph attention network based on local influence weights to model local structural information and utilize PLM-based prompt learning to learn textual information, enhanced by a mask-reconstruction strategy based on global influence weights and textual contrastive learning for improved robustness and generalizability. Besides, to effectively model multi-hop relations, we propose a novel semantic-depth guided path extraction algorithm and integrate cross-attention layers into recurrent neural networks to facilitate learning long-term relation dependencies and offer an adaptive attention mechanism for varied-length information. Extensive experiments demonstrate that our model exhibits superiority over existing models across KG completion and question-answering tasks.
In the emerging field of Meta Computing, where data collection and integration are essential components, the threat of adversary hidden link attacks poses a significant challenge to web crawlers. In this paper, we investigate the influence of these attacks on data collection by web crawlers; such attacks notoriously elude conventional detection techniques based on large language models (LLMs). Empirically, we find vulnerabilities in current crawler mechanisms and LLM-based detection, especially in code inspection, and propose enhancements that help mitigate these weaknesses. Our assessment of real-world web pages reveals the prevalence and impact of adversary hidden link attacks, emphasizing the necessity for robust countermeasures. Furthermore, we introduce a mitigation framework that integrates element visual inspection techniques. Our evaluation demonstrates the framework's efficacy in detecting and addressing these advanced cyber threats within the evolving landscape of Meta Computing.
With the growing demand for data sharing, how to realize fine-grained trusted access control of shared data and protect data security has become a difficult problem. The ciphertext-policy attribute-based encryption (CP-ABE) model is widely used in cloud data sharing scenarios, but it suffers from problems such as privacy leakage of the access policy, irrevocability of users or attributes, key escrow, and trust bottlenecks. Therefore, we propose a blockchain-assisted CP-ABE (B-CP-ABE) mechanism for trusted data access control. First, we construct a trusted data access control architecture based on the B-CP-ABE, which realizes the automated execution of access policies through smart contracts and guarantees a trusted access process through blockchain. Then, we define the B-CP-ABE scheme, which supports partial policy hiding, attribute revocation, and anti-key escrow. The B-CP-ABE scheme utilizes a Bloom filter to hide the mapping relationship of sensitive attributes in the access structure, realizes flexible revocation and recovery of users and attributes through a re-encryption algorithm, and solves the key escrow problem through joint authorization by data owners and the attribute authority. Finally, we demonstrate the usability of the B-CP-ABE scheme through security analysis and performance analysis.
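The Bloom-filter trick for hiding attribute mappings rests on inserting attribute values into a bit array via several hashes, so membership can be tested without storing the values themselves. A minimal filter is sketched below; the size, hash count, hash construction, and the sample attribute string are illustrative choices, not the scheme's parameters.

```python
import hashlib


class BloomFilter:
    """Minimal Bloom filter, as one might use to hide attribute mappings."""

    def __init__(self, size=256, hashes=3):
        self.size, self.hashes, self.bits = size, hashes, 0

    def _positions(self, item):
        # Derive k bit positions from salted SHA-256 digests of the item
        for i in range(self.hashes):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:4], "big") % self.size

    def add(self, item):
        for p in self._positions(item):
            self.bits |= 1 << p

    def query(self, item):
        # True if every position is set (false positives possible, no false negatives)
        return all(self.bits >> p & 1 for p in self._positions(item))


bf = BloomFilter()
bf.add("dept:finance")  # hypothetical sensitive attribute value
```

Because the filter stores only bit positions, an observer of the access structure sees which tests pass but not the attribute strings themselves, which is the hiding property the scheme exploits.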
A Link Flooding Attack (LFA) is a special type of Denial-of-Service (DoS) attack in which the attacker sends out a huge number of requests to exhaust the capacity of a link on the path over which traffic reaches a server. As a result, user traffic cannot reach the server, causing DoS and degradation of Quality of Service (QoS). Because the attack traffic does not go to the victim, protecting the legitimate traffic alone is hard for the victim. The victim can protect its legitimate traffic by using a special type of router called a filter router (FR). An FR can receive filters from the server and apply them to block a link incident to it. An FR probabilistically appends its own IP address to packets it forwards, and the victim uses that information to discover the traffic topology. By analyzing traffic rates and paths, the victim identifies links that may be congested. The victim then needs to select some of these possible congested links (PCLs) and send a filter to the corresponding FRs so that legitimate traffic avoids congested paths. In this paper, we formulate two optimization problems for blocking the smallest number of PCLs so that legitimate traffic goes through a non-congested path. The first problem considers the scenario where every user has at least one non-congested shortest path; we extend it to a second scenario in which some users' shortest paths are all congested. We transform the original problem into the vertex separation problem to find the links to block. We use a custom-built Java multi-threaded simulator and conduct extensive simulations to support our solutions.
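A building block of the problem, checking whether a user still has a path that avoids congested and blocked links, can be sketched as a BFS over the surviving topology. The toy topology and the congested-link set below are invented; the paper's actual formulation reduces link selection to the vertex separation problem, which this sketch does not implement.

```python
from collections import deque

# Toy directed topology of filter routers; one edge is a possible congested link
edges = {("u", "a"), ("a", "v"), ("u", "b"), ("b", "v")}
congested = {("a", "v")}


def has_clear_path(src, dst, blocked):
    """BFS reachability using only links that are neither congested nor blocked."""
    adj = {}
    for x, y in edges:
        if (x, y) in congested or (x, y) in blocked:
            continue
        adj.setdefault(x, []).append(y)
    seen, q = {src}, deque([src])
    while q:
        n = q.popleft()
        if n == dst:
            return True
        for m in adj.get(n, []):
            if m not in seen:
                seen.add(m)
                q.append(m)
    return False
```

Here user traffic from `u` to `v` survives via the u-b-v detour, but blocking `("u", "b")` as well would strand it: exactly the trade-off the optimization problems navigate when choosing which PCLs to block.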
Hierarchical Federated Learning (HFL) extends traditional Federated Learning (FL) by introducing multi-level aggregation in which model updates pass through clients, edge servers, and a global server. While this hierarchical structure enhances scalability, it also increases vulnerability to adversarial attacks, such as data poisoning and model poisoning, that disrupt learning by introducing discrepancies at the edge server level. These discrepancies propagate through aggregation, affecting model consistency and overall integrity. Existing studies on adversarial behaviour in FL primarily rely on single-metric approaches, such as cosine similarity or Euclidean distance, to assess model discrepancies and filter out anomalous updates. However, these methods fail to capture the diverse ways adversarial attacks influence model updates, particularly in highly heterogeneous data environments and hierarchical structures. Attackers can exploit the limitations of single-metric defences by crafting updates that seem benign under one metric while remaining anomalous under another. Moreover, prior studies have not systematically analysed how model discrepancies evolve over time, vary across regions, or affect clustering structures in HFL architectures. To address these limitations, we propose the Model Discrepancy Score (MDS), a multi-metric framework that integrates Dissimilarity, Distance, Uncorrelation, and Divergence to provide a comprehensive analysis of how adversarial activity affects model discrepancies. Through temporal, spatial, and clustering analyses, we examine how attacks affect model discrepancies at the edge server level in 3LHFL and 4LHFL architectures and evaluate MDS's ability to distinguish between benign and malicious servers. Our results show that while 4LHFL effectively mitigates discrepancies in regional attack scenarios, it struggles with distributed attacks due to additional aggregation layers that obscure distinguishable discrepancy patterns over time, across regions, and within clustering structures. Factors influencing detection include data heterogeneity, attack sophistication, and hierarchical aggregation depth. These findings highlight the limitations of single-metric approaches and emphasize the need for multi-metric strategies such as MDS to enhance HFL security.
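The multi-metric idea behind MDS can be illustrated by computing several discrepancy views of a pair of flattened model updates. The four formulas and the absence of any weighting below are illustrative stand-ins, not the paper's definitions; the example also shows why a single metric can be fooled: a simply scaled update looks identical under cosine similarity but not under Euclidean distance.

```python
import numpy as np


def discrepancy_score(u, v):
    """Four discrepancy views of two flattened model updates (illustrative forms)."""
    # Dissimilarity: 1 - cosine similarity (direction only)
    dissim = 1 - np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    # Distance: Euclidean gap (magnitude-sensitive)
    dist = np.linalg.norm(u - v)
    # Uncorrelation: 1 - |Pearson correlation|
    uncorr = 1 - abs(np.corrcoef(u, v)[0, 1])
    # Divergence: KL between softmax-normalized updates
    p = np.exp(u) / np.exp(u).sum()
    q = np.exp(v) / np.exp(v).sum()
    kl = np.sum(p * np.log(p / q))
    return {"dissimilarity": dissim, "distance": dist,
            "uncorrelation": uncorr, "divergence": kl}


rng = np.random.default_rng(0)
benign = rng.normal(0, 0.1, 100)
crafted = benign * 3.0  # scaled attack: same direction, inflated magnitude
scores = discrepancy_score(benign, crafted)
```

For the scaled update, `dissimilarity` and `uncorrelation` are near zero (the attack seems benign under those metrics) while `distance` and `divergence` expose it, which is the motivation for combining metrics.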
In recent research on the Digital Twin-based Vehicular Ad hoc Network (DT-VANET), Federated Learning (FL) has shown its ability to provide data privacy. However, federated learning struggles to adequately train a global model when confronted with data heterogeneity and data sparsity among vehicles, which results in suboptimal accuracy when making precise predictions for different vehicle types. To address these challenges, this paper applies Federated Transfer Learning (FTL) to cluster vehicles by type and proposes a novel Hierarchical Federated Transfer Learning (HFTL) scheme. We construct a framework for DT-VANET, along with two algorithms designed for cloud server model updates and intra-cluster federated transfer learning, to improve the accuracy of the global model. In addition, we develop a data quality score-based mechanism to prevent the global model from being affected by malicious vehicles. Lastly, detailed experiments on real-world datasets are conducted, considering different performance metrics that verify the effectiveness and efficiency of our algorithm.
The Industrial Internet of Things (IIoT) achieves the automation, monitoring, and optimization of industrial processes by interconnecting various sensors, smart devices, and the Internet, which dramatically increases productivity and product quality. Nevertheless, the IIoT comprises a substantial amount of sensitive data, which requires encryption to ensure data privacy and security. Recently, Sun et al. proposed a certificateless searchable encryption scheme for IIoT to enable the retrieval of ciphertext data while protecting data privacy. However, we found that their scheme not only fails to satisfy trapdoor indistinguishability but also lacks defense against keyword guessing attacks. In addition, some schemes use deterministic algorithms in the encryption process, so the same keyword always encrypts to the same ciphertext, leaking the potential frequency distribution of the keyword in the ciphertext space and allowing attackers to infer the plaintext corresponding to a ciphertext through statistical analysis. To better protect data privacy, we propose an improved certificateless searchable encryption scheme with a designated server. Through security analysis, we prove that our scheme provides multi-ciphertext indistinguishability and multi-trapdoor indistinguishability under the random oracle model. Experimental results show that the proposed scheme has good overall performance in terms of computational overhead, communication overhead, and security features.
This study addresses the challenge of dynamic assessment in power systems by proposing a design scheme for an intelligent adaptive power distribution system based on runtime verification. The system architecture is built upon cloud-edge-end collaboration, enabling comprehensive monitoring and precise management of the power grid through coordinated efforts across different levels. Specifically, the study employs an adaptive observer approach, allowing dynamic adjustments to observers to reflect updates in requirements and ensure system reliability. This method covers both structural and parametric adjustments to specifications, including updating time protection conditions, updating events, and adding or removing responses. The results demonstrate that, with the implementation of adaptive observers, the system becomes more flexible in responding to changes, significantly enhancing its efficiency. By employing dynamically changing verification specifications, the system achieves real-time and flexible verification. This research provides technical support for the safe, efficient, and reliable operation of electrical power distribution systems.
The potential of cloud computing, an emerging paradigm for minimizing the costs associated with computing, has recently drawn the interest of many researchers. Rapid advancements in cloud computing techniques have led to a remarkable proliferation of cloud services, but data security remains a challenging issue for modern civilization. The main issues with cloud computing are cloud security and effective cloud distribution over the network. Increasing the privacy of data with encryption methods is the best-established approach and has progressed greatly in recent times; in this respect, sanitization is also a process for preserving data confidentiality. The goal of this work is to present a deep learning-assisted data sanitization procedure for data security. The proposed process involves the following steps: data preprocessing, optimal key generation, deep learning-assisted key fine-tuning, and the Kronecker product. Here, data preprocessing considers the original data as well as extracted statistical features. Key generation is the subsequent process, for which a self-adaptive Namib beetle optimization (SANBO) algorithm is developed in this research. Among the generated keys, appropriate keys are fine-tuned by an improved Deep Maxout classifier. Then, the Kronecker product is applied in the sanitization process. Reversing the sanitization procedure yields the original data during the data restoration phase. The analysis shows that the suggested data sanitization technique guarantees cloud data security against malicious attacks. We also analyze the proposed work in terms of restoration effectiveness and key sensitivity.
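The Kronecker-product step and its reversal during restoration can be sketched directly with NumPy. The toy data block and the random key matrix below stand in for the SANBO-generated, Maxout-tuned keys of the actual pipeline; only the product-and-invert mechanics are shown.

```python
import numpy as np

rng = np.random.default_rng(7)
data = np.array([[4.0, 2.0], [1.0, 3.0]])   # toy sensitive data block
key = rng.uniform(1.0, 2.0, size=(2, 2))    # stand-in for the optimized key matrix

# Sanitization: each data entry is replaced by a full key-scaled block
sanitized = np.kron(data, key)              # shape (4, 4)

# Restoration: block (i, j) of `sanitized` equals data[i, j] * key, so taking
# the top-left element of each block and dividing by key[0, 0] recovers data
restored = sanitized[::2, ::2] / key[0, 0]
```

Without the key, the sanitized matrix mixes data values with unknown scale factors; with it, restoration is an exact elementwise division, which is why the procedure is reversible.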
In recent years, the rapid development of Internet of Things (IoT) technology has led to a significant increase in the amount of data stored in the cloud. However, traditional IoT systems rely primarily on cloud data centers for information storage and user access control services. This practice creates the risk of privacy breaches on IoT data sharing platforms, including issues such as data tampering and data leaks. To address these concerns, blockchain technology, with its inherent properties such as tamper resistance and decentralization, has emerged as a promising solution that enables trusted sharing of IoT data. Still, there are challenges to implementing encrypted data search in this context. This paper proposes a novel searchable attribute-based cryptographic access control mechanism that facilitates trusted cloud data sharing. Users can use keywords to efficiently search for specific data and decrypt content keys when their attributes are consistent with the access policies. In this way, cloud service providers cannot access any privacy-related information, ensuring the security and trustworthiness of data sharing as well as the protection of user data privacy. Our simulation results show that our approach outperforms existing studies in terms of time overhead: compared with traditional access control schemes, it reduces data encryption time by 33%, decryption time by 5%, and search time by 75%.
Funding: supported by the National Natural Science Foundation of China (62272256, 62202250), the Major Program of Shandong Provincial Natural Science Foundation for the Fundamental Research (ZR2022ZD03), the National Science Foundation of Shandong Province (ZR2021QF079), the Talent Cultivation Promotion Program of Computer Science and Technology in Qilu University of Technology (Shandong Academy of Sciences) (2023PY059), the Pilot Project for Integrated Innovation of Science, Education and Industry of Qilu University of Technology (Shandong Academy of Sciences) (2022XD001), and the Colleges and Universities 20 Terms Foundation of Jinan City (202228093).
Abstract: The combination of blockchain and Internet of Things technology has made significant progress in smart agriculture, providing substantial support for data sharing and data privacy protection. Nevertheless, achieving efficient interactivity and privacy protection for agricultural data remains a crucial issue. To address these problems, we propose a blockchain-assisted federated learning-driven support vector machine (BAFL-SVM) framework to realize efficient data sharing and privacy protection. The BAFL-SVM is composed of the FedSVM-RiceCare module and the FedPrivChain module. Specifically, in FedSVM-RiceCare, we utilize federated learning and SVM to train the model, improving recognition accuracy. Then, in FedPrivChain, we adopt homomorphic encryption and a secret-sharing scheme to encrypt the local model parameters before uploading them. Finally, we conduct extensive experiments on a real-world dataset of rice pests and diseases, and the experimental results show that our framework not only guarantees the secure sharing of data but also achieves higher recognition accuracy than other schemes.
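The federated aggregation step behind frameworks like the one above can be sketched as a FedAvg-style weighted average of client model parameters. This is an illustrative sketch under stated assumptions, not the authors' BAFL-SVM implementation; the function name and sample-count weighting are assumptions.

```python
def fed_avg(client_params, client_sizes):
    """Weighted average of client model parameters (FedAvg-style).

    client_params: list of parameter vectors (lists of floats), one per client.
    client_sizes: number of local training samples per client, used as weights.
    """
    total = sum(client_sizes)
    dim = len(client_params[0])
    global_params = [0.0] * dim
    for params, size in zip(client_params, client_sizes):
        weight = size / total
        for i in range(dim):
            global_params[i] += weight * params[i]
    return global_params

# Two clients holding hypothetical linear-SVM weight vectors:
clients = [[1.0, 2.0], [3.0, 4.0]]
sizes = [100, 100]
print(fed_avg(clients, sizes))  # -> [2.0, 3.0]
```

In a real deployment the averaged vector would be the encrypted parameters described in FedPrivChain, not plaintext floats.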
Funding: supported in part by the National Natural Science Foundation of China (62402288 and 62302063) and the China Postdoctoral Science Foundation, China (2024M751811).
Abstract: Large Language Models (LLMs) are complex artificial intelligence systems that can understand, generate, and translate human languages. By analyzing large amounts of textual data, these models learn language patterns to perform tasks such as writing, conversation, and summarization. Agents built on LLMs (LLM agents) further extend these capabilities, allowing them to process user interactions and perform complex operations in diverse task environments. However, during the processing and generation of massive data, LLMs and LLM agents pose a risk of sensitive information leakage, potentially threatening data privacy. This paper aims to demonstrate the data privacy issues associated with LLMs and LLM agents to facilitate a comprehensive understanding. Specifically, we conduct an in-depth survey of privacy threats, encompassing passive privacy leakage and active privacy attacks. Subsequently, we introduce the privacy protection mechanisms employed by LLMs and LLM agents and provide a detailed analysis of their effectiveness. Finally, we explore the privacy protection challenges for LLMs and LLM agents and outline potential directions for future development in this domain.
Abstract: Edge computing is becoming ever more relevant for offloading compute-heavy tasks in vehicular networks. In this context, the concept of vehicular micro clouds (VMCs) has been proposed to use compute and storage resources on nearby vehicles to complete computational tasks. Because many tasks in this application domain are time critical, offloading to the cloud is prohibitive, and task deadlines must be respected. This paper addresses two main challenges. First, we present a task migration algorithm supporting deadlines in vehicular edge computing. The algorithm follows the earliest-deadline-first model, but in the presence of dynamic processing resources, i.e., vehicles joining and leaving a VMC. This task offloading is very sensitive to the mobility of vehicles in a VMC, i.e., the so-called dwell time a vehicle spends in the VMC. Thus, second, we propose a machine learning-based solution for dwell time prediction. Our dwell time prediction model uses a random forest approach to estimate how long a vehicle will stay in a VMC. Our approach is evaluated using mobility traces of an artificial simple-intersection scenario as well as of real urban traffic in the cities of Luxembourg and Nagoya. The proposed approach is able to realize low-delay and low-failure task migration under dynamic vehicular conditions, advancing the state of the art in vehicular edge computing.
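The earliest-deadline-first dispatch underlying the migration algorithm can be sketched with a simple priority queue. This is a minimal illustration of EDF ordering only, not the paper's full migration algorithm; the task identifiers are hypothetical.

```python
import heapq

def schedule_edf(tasks):
    """Order tasks by earliest deadline first (EDF).

    tasks: list of (deadline, task_id) tuples. Returns task_ids in the
    order they should be dispatched to available VMC vehicles.
    """
    heap = list(tasks)
    heapq.heapify(heap)  # min-heap keyed on deadline
    order = []
    while heap:
        _, task_id = heapq.heappop(heap)
        order.append(task_id)
    return order

print(schedule_edf([(30, "t1"), (10, "t2"), (20, "t3")]))  # -> ['t2', 't3', 't1']
```

The full algorithm would additionally re-queue tasks when a vehicle leaves the VMC before its assigned task completes.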
Abstract: Federated Learning (FL) is currently a widely used collaborative learning framework, whose distinguishing feature is that the clients involved in training do not need to share raw data; they only transfer model parameters to share knowledge and finally obtain a global model with improved performance. However, recent studies have found that sharing model parameters may still lead to privacy leakage: local training data can be reconstructed from the shared parameters, threatening individual privacy and security. We observed that most current attacks are aimed at client-specific data reconstruction, while limited attention is paid to information leakage from the global model. In our work, we propose a novel FL attack based on shared model parameters that can deduce the data distribution of the global model. Different from other FL attacks that aim to infer individual clients' raw data, the data distribution inference attack proposed in this work shows that attackers are capable of deducing the data distribution information behind the global model. We argue that such information is valuable, since the training data behind a well-trained global model represents the common knowledge of a specific task, such as in social networks and e-commerce applications. To implement such an attack, our key idea is to adopt a deep reinforcement learning approach to guide the attack process, where the RL agent adjusts the pseudo-data distribution automatically until it is similar to the ground-truth data distribution. Through a carefully designed Markov decision process (MDP), our implementation ensures the attack has stable performance, and experimental results verify the effectiveness of the proposed inference attack.
Abstract: With the rapid advancement of cloud technologies, cloud services have contributed enormously to the application development life cycle in the cloud community. In this context, Kubernetes has played a pivotal role as a cloud computing tool, enabling developers to adopt efficient and automated deployment strategies. Using Kubernetes as an orchestration tool and a cloud computing system as a manager of the infrastructure, developers can accelerate the development and deployment process. With cloud providers such as GCP, AWS, Azure, and Oracle offering Kubernetes services, the availability of both x86 and ARM platforms has become evident. However, while x86 currently dominates the market, ARM-based solutions have seen limited adoption, with only a few practitioners actively working on ARM deployments. This study explores the efficiency and cost-effectiveness of implementing Kubernetes on different CPU platforms. By comparing the performance of x86 and ARM platforms, this research seeks to ascertain whether transitioning to ARM presents a more advantageous option for Kubernetes deployments. Through a comprehensive evaluation of scalability, cost, and overall performance, this study aims to shed light on the viability of leveraging ARM across different CPUs by providing valuable insights.
Abstract: The ever-escalating prevalence of malware is a serious cybersecurity threat, often requiring advanced post-incident forensic investigation techniques. This paper proposes a framework to enhance malware forensics by leveraging reinforcement learning (RL). The approach combines heuristic and signature-based methods, supported by RL through a unified MDP model, which breaks malware analysis down into distinct states and actions. This optimization enhances the identification and classification of malware variants. The framework employs Q-learning and other techniques to boost the speed and accuracy of detecting new and unknown malware, outperforming traditional methods. We tested the experimental framework across multiple virtual environments infected with various malware types. The RL agent collected forensic evidence and improved its performance through Q-tables and temporal difference learning. The epsilon-greedy exploration strategy, in conjunction with Q-learning updates, effectively facilitated state transitions. The learning rate depended on the complexity of the MDP environment: higher in simpler environments for quicker convergence and lower in more complex ones for stability. This RL-enhanced model significantly reduced the time required for post-incident malware investigations, achieving a high accuracy rate of 94% in identifying malware. These results indicate RL's potential to revolutionize post-incident forensic investigations in cybersecurity. Future work will incorporate more advanced RL algorithms and large language models (LLMs) to further enhance the effectiveness of malware forensic analysis.
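The core Q-learning update with epsilon-greedy exploration mentioned above can be sketched as follows. The states and actions here ("scan", "hash", "dump") are placeholders, not the paper's actual forensic MDP.

```python
import random

def q_update(Q, state, action, reward, next_state, alpha=0.1, gamma=0.9):
    """One temporal-difference Q-learning update on a dict-of-dicts Q-table."""
    best_next = max(Q[next_state].values()) if Q.get(next_state) else 0.0
    Q[state][action] += alpha * (reward + gamma * best_next - Q[state][action])

def epsilon_greedy(Q, state, epsilon=0.1):
    """Pick a random action with probability epsilon, else the greedy one."""
    if random.random() < epsilon:
        return random.choice(list(Q[state]))
    return max(Q[state], key=Q[state].get)

Q = {"scan": {"hash": 0.0, "dump": 0.0}, "done": {}}
q_update(Q, "scan", "hash", reward=1.0, next_state="done")
print(Q["scan"]["hash"])  # -> 0.1
```

With many such episodes, the Q-table converges toward the most rewarding evidence-collection actions per state, which is the behavior the framework exploits.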
Funding: made possible by NPRP grant NPRP11S-1227-1701359 from the Qatar National Research Fund (a member of Qatar Foundation).
Abstract: We present a consensus mechanism designed specifically for supply chain blockchains, with a core focus on establishing trust among participating stakeholders through a novel reputation-based approach. The prevailing consensus mechanisms, initially crafted for cryptocurrency applications, prove unsuitable for the unique dynamics of supply chain systems. Unlike the broad inclusivity of cryptocurrency networks, our proposed mechanism insists on stakeholder participation rooted in process-specific quality criteria. The delineation of roles for supply chain participants within the consensus process becomes paramount. While reputation serves as a well-established quality parameter in various domains, its nuanced impact on non-cryptocurrency consensus mechanisms remains uncharted territory. Moreover, recognizing the primary role of efficient block verification in blockchain-enabled supply chains, our work introduces a comprehensive reputation model. This model strategically selects a leader node to orchestrate the entire block mining process within the consensus. Additionally, we introduce a Schnorr multisignature-based block verification mechanism seamlessly integrated into our proposed consensus model. Rigorous experiments are conducted to evaluate the performance and feasibility of our consensus mechanism, contributing valuable insights to the evolving landscape of blockchain technology in supply chain applications.
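Reputation-based leader selection of the kind described can be sketched as picking the highest-reputation stakeholder above a quality threshold. The threshold value and node names below are illustrative assumptions, not the paper's model.

```python
def select_leader(reputations, threshold=0.5):
    """Pick the highest-reputation stakeholder above a quality threshold.

    reputations: dict mapping node id -> reputation score in [0, 1].
    Returns the leader's id, or None if nobody qualifies.
    """
    eligible = {node: score for node, score in reputations.items()
                if score >= threshold}
    if not eligible:
        return None
    return max(eligible, key=eligible.get)

nodes = {"supplier_a": 0.9, "carrier_b": 0.4, "retailer_c": 0.7}
print(select_leader(nodes))  # -> supplier_a
```

The selected leader would then orchestrate block mining, with the Schnorr multisignature step aggregating verifiers' signatures on the produced block.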
Abstract: Docker is a vital tool in modern development, enabling the creation, deployment, and execution of applications using containers, thereby ensuring consistency across various environments. However, developers often face challenges, particularly filesystem complexities and performance bottlenecks, when working directly within Docker containers. This is where Mutagen comes into play, significantly enhancing the Docker experience by offering efficient network file synchronization, reducing latency in file operations, and improving overall data transfer rates in containerized environments. By exploring Docker's architecture, examining Mutagen's role, and evaluating their combined performance impact, particularly in terms of file operation speeds and development workflow efficiency, this research provides a deep understanding of these technologies and their potential to streamline development processes in networked and distributed environments.
Funding: supported by the Ministry of Science and ICT, Korea, under the Grand IT Research Center support program (IITP-2022-2020-0-01612) supervised by the IITP (Institute for Information & Communications Technology Planning & Evaluation); by the Priority Research Centers Program through the National Research Fund (NRF) Korea funded by the Ministry of Education, Science and Technology, South Korea (2018R1A6A1A03024003); in part by the National Science Foundation (NSF) of USA (2200673); and by the Office of Sponsored Programs & Research Seed Funding Program at Towson University, United States.
Abstract: The increasing prevalence of cancer necessitates advanced methodologies for early detection and diagnosis. Early intervention is crucial for improving patient outcomes and reducing the overall burden on healthcare systems. Traditional centralized methods of medical image analysis pose significant risks to patient privacy and data security, as they require the aggregation of sensitive information in a single location. Furthermore, these methods often suffer from limitations related to data diversity and scalability, hindering the development of universally robust diagnostic models. Recent advancements in machine learning, particularly deep learning, have shown promise in enhancing medical image analysis. However, the need to access large and diverse datasets for training these models introduces challenges in maintaining patient confidentiality and adhering to strict data protection regulations. This paper introduces FedViTBloc, a secure and privacy-enhanced framework for medical image analysis that combines Federated Learning (FL) with Vision Transformers (ViT) and blockchain technology. The proposed system ensures patient data privacy and security through fully homomorphic encryption and differential privacy techniques. By employing a decentralized FL approach, multiple medical institutions can collaboratively train a robust deep-learning model without sharing raw data. Blockchain integration further enhances the security and trustworthiness of the FL process by managing client registration and ensuring secure onboarding of participants. Experimental results demonstrate the effectiveness of FedViTBloc in medical image analysis while maintaining stringent privacy standards, achieving 67% accuracy and reducing loss below 2 across 10 clients, ensuring scalability and robustness.
Abstract: Modern industries demand the classification of satellite images so that the information obtained can be used for their advantage and growth. The extracted information also plays a crucial role in national security and the mapping of geographical locations. Conventional methods often fail to handle the complexities of this process, so an effective method with high accuracy and stability is required. In this paper, a new methodology named RankEnsembleFS is proposed that addresses the crucial issues of stability and feature aggregation in the context of the SAT-6 dataset. RankEnsembleFS uses a two-step process that consists of ranking the features and then selecting the optimal feature subset from the top-ranked features. RankEnsembleFS achieved accuracy comparable to state-of-the-art models for the SAT-6 dataset while significantly reducing the feature space. This reduction is important because it lowers computational complexity and enhances the interpretability of the model. Moreover, the proposed method demonstrated good stability in handling changes in data characteristics, which is critical for reliable performance over time, and surpasses existing ML ensemble methods in terms of stability, threshold setting, and feature aggregation. In summary, this paper provides compelling evidence that the RankEnsembleFS methodology delivers excellent performance and overcomes key issues in feature selection and image classification for the SAT-6 dataset.
Funding: supported in part by the National Key R&D Program of China (2020AAA0108501).
Abstract: Knowledge graph (KG) representation learning aims to map entities and relations into a low-dimensional representation space, showing significant potential in many tasks. Existing approaches fall into two categories: (1) graph-based approaches encode KG elements into vectors using structural score functions; (2) text-based approaches embed text descriptions of entities and relations via pre-trained language models (PLMs), further fine-tuned with triples. We argue that graph-based approaches struggle with sparse data, while text-based approaches face challenges with complex relations. To address these limitations, we propose a unified Text-Augmented Attention-based Recurrent Network, bridging the gap between graphs and natural language. Specifically, we employ a graph attention network based on local influence weights to model local structural information and utilize PLM-based prompt learning to learn textual information, enhanced by a mask-reconstruction strategy based on global influence weights and textual contrastive learning for improved robustness and generalizability. Besides, to effectively model multi-hop relations, we propose a novel semantic-depth-guided path extraction algorithm and integrate cross-attention layers into recurrent neural networks to facilitate learning long-term relation dependencies and to offer an adaptive attention mechanism for varied-length information. Extensive experiments demonstrate that our model exhibits superiority over existing models across KG completion and question-answering tasks.
Abstract: In the emerging field of Meta Computing, where data collection and integration are essential components, the threat of adversary hidden link attacks poses a significant challenge to web crawlers. In this paper, we investigate the influence of these attacks on data collection by web crawlers; such attacks notoriously elude conventional detection techniques based on large language models (LLMs). Empirically, we find vulnerabilities in current crawler mechanisms and in LLM-based detection, especially in code inspection, and propose enhancements that help mitigate these weaknesses. Our assessment of real-world web pages reveals the prevalence and impact of adversary hidden link attacks, emphasizing the necessity for robust countermeasures. Furthermore, we introduce a mitigation framework that integrates element visual inspection techniques. Our evaluation demonstrates the framework's efficacy in detecting and addressing these advanced cyber threats within the evolving landscape of Meta Computing.
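A minimal heuristic for the element visual inspection idea, flagging links styled to be invisible to humans but still crawlable by bots, might look like the following sketch. The style-marker list and zero-size check are assumptions; a real inspector would render the page and examine computed styles.

```python
HIDDEN_MARKERS = ("display:none", "visibility:hidden")

def is_hidden_link(style, width=None, height=None):
    """Heuristically flag a link as hidden from human viewers.

    style: the element's inline CSS string.
    width/height: rendered dimensions in pixels, if known.
    """
    compact = style.replace(" ", "").lower()
    if any(marker in compact for marker in HIDDEN_MARKERS):
        return True
    # A link rendered with zero area is also invisible to humans.
    return width == 0 or height == 0

print(is_hidden_link("display: none"))         # -> True
print(is_hidden_link("color: blue", 120, 20))  # -> False
```

Flagged elements would then be excluded from the crawl frontier rather than followed.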
Funding: supported by the National Key R&D Program of China (2022YFB2703400) and the BUPT Excellent Ph.D. Students Foundation (CX2022218).
Abstract: With the growing demand for data sharing, realizing fine-grained trusted access control of shared data while protecting data security has become a difficult problem. The ciphertext-policy attribute-based encryption (CP-ABE) model is widely used in cloud data sharing scenarios, but it suffers from problems such as privacy leakage of the access policy, irrevocability of users or attributes, key escrow, and trust bottlenecks. Therefore, we propose a blockchain-assisted CP-ABE (B-CP-ABE) mechanism for trusted data access control. First, we construct a trusted data access control architecture based on the B-CP-ABE, which realizes the automated execution of access policies through smart contracts and guarantees a trusted access process through the blockchain. Then, we define the B-CP-ABE scheme, which supports partial policy hiding, attribute revocation, and anti-key escrow. The B-CP-ABE scheme utilizes a Bloom filter to hide the mapping relationship of sensitive attributes in the access structure, realizes flexible revocation and recovery of users and attributes through a re-encryption algorithm, and solves the key escrow problem through joint authorization by data owners and the attribute authority. Finally, we demonstrate the usability of the B-CP-ABE scheme through security analysis and performance analysis.
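The Bloom-filter trick for hiding the attribute mapping can be illustrated with a minimal filter: membership queries never yield false negatives but may yield false positives, so stored attributes are not directly enumerable from the bit array. The size, hash count, and attribute string below are arbitrary illustrations, not the scheme's parameters.

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: probabilistic membership with no false
    negatives, so attributes are stored without being directly readable."""

    def __init__(self, size=128, hashes=3):
        self.size = size
        self.hashes = hashes
        self.bits = [0] * size

    def _positions(self, item):
        # Derive k bit positions from salted SHA-256 digests.
        for i in range(self.hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.size

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos] = 1

    def might_contain(self, item):
        return all(self.bits[pos] for pos in self._positions(item))

bf = BloomFilter()
bf.add("department:finance")
print(bf.might_contain("department:finance"))  # -> True
```

In the B-CP-ABE setting, a decryptor can test whether its attributes match the hidden policy without the ciphertext revealing which attributes the policy names.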
Funding: supported in part by the NSF grants (CNS 1757533, CNS 1629746, CNS 1564128, CNS 1449860, CNS 1461932, CNS 1460971, and IIP 1439672).
Abstract: A Link Flooding Attack (LFA) is a special type of Denial-of-Service (DoS) attack in which the attacker sends out a huge number of requests to exhaust the capacity of a link on the path traffic takes to a server. As a result, user traffic cannot reach the server, causing DoS and degradation of Quality-of-Service (QoS). Because the attack traffic does not go to the victim, it is hard for the victim to protect the legitimate traffic alone. The victim can protect its legitimate traffic by using a special type of router called a filter router (FR). An FR can receive server filters and apply them to block a link incident to it. An FR probabilistically appends its own IP address to packets it forwards, and the victim uses that information to discover the traffic topology. By analyzing traffic rates and paths, the victim identifies links that may be congested. The victim then needs to select some of these possible congested links (PCLs) and send a filter to the corresponding FR so that legitimate traffic avoids congested paths. In this paper, we formulate two optimization problems for blocking the smallest number of PCLs so that legitimate traffic travels through a non-congested path. The first problem considers the scenario where every user has at least one non-congested shortest path; we extend it to a second scenario in which some users' shortest paths are all congested. We transform the original problem into the vertex separation problem to find the links to block. We use a custom-built, multi-threaded Java simulator and conduct extensive simulations to support our solutions.
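The probabilistic marking and topology reconstruction described above can be sketched as follows; the packet field names and the marking probability are illustrative assumptions, not the paper's protocol.

```python
import random

def forward(packet, router_ip, mark_prob=0.2):
    """A filter router appends its IP to the packet's path record
    with probability mark_prob before forwarding."""
    if random.random() < mark_prob:
        packet["path"].append(router_ip)
    return packet

def reconstruct_links(packets):
    """The victim aggregates marked packets to discover traffic topology,
    extracting directed links between consecutively recorded routers."""
    links = set()
    for pkt in packets:
        hops = pkt["path"]
        for a, b in zip(hops, hops[1:]):
            links.add((a, b))
    return links

# mark_prob=1.0 makes the toy run deterministic for illustration.
pkt = {"src": "10.0.0.1", "path": []}
forward(pkt, "192.0.2.1", mark_prob=1.0)
forward(pkt, "192.0.2.2", mark_prob=1.0)
print(reconstruct_links([pkt]))  # -> {('192.0.2.1', '192.0.2.2')}
```

With a realistic small mark_prob, the victim needs many packets before the link set converges, which is why the paper analyzes traffic rates alongside paths.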
Funding: supported by the Technical and Vocational Training Corporation (TVTC) through the Saudi Arabian Culture Bureau (SACB) in the United Kingdom and by the EPSRC-funded project National Edge AI Hub for Real Data: Edge Intelligence for Cyber-disturbances and Data Quality (EP/Y028813/1).
Abstract: Hierarchical Federated Learning (HFL) extends traditional Federated Learning (FL) by introducing multi-level aggregation in which model updates pass through clients, edge servers, and a global server. While this hierarchical structure enhances scalability, it also increases vulnerability to adversarial attacks, such as data poisoning and model poisoning, that disrupt learning by introducing discrepancies at the edge server level. These discrepancies propagate through aggregation, affecting model consistency and overall integrity. Existing studies on adversarial behaviour in FL primarily rely on single-metric approaches, such as cosine similarity or Euclidean distance, to assess model discrepancies and filter out anomalous updates. However, these methods fail to capture the diverse ways adversarial attacks influence model updates, particularly in highly heterogeneous data environments and hierarchical structures. Attackers can exploit the limitations of single-metric defences by crafting updates that seem benign under one metric while remaining anomalous under another. Moreover, prior studies have not systematically analysed how model discrepancies evolve over time, vary across regions, or affect clustering structures in HFL architectures. To address these limitations, we propose the Model Discrepancy Score (MDS), a multi-metric framework that integrates Dissimilarity, Distance, Uncorrelation, and Divergence to provide a comprehensive analysis of how adversarial activity affects model discrepancies. Through temporal, spatial, and clustering analyses, we examine how attacks affect model discrepancies at the edge server level in 3LHFL and 4LHFL architectures and evaluate MDS's ability to distinguish between benign and malicious servers. Our results show that while 4LHFL effectively mitigates discrepancies in regional attack scenarios, it struggles with distributed attacks because the additional aggregation layers obscure distinguishable discrepancy patterns over time, across regions, and within clustering structures. Factors influencing detection include data heterogeneity, attack sophistication, and hierarchical aggregation depth. These findings highlight the limitations of single-metric approaches and emphasize the need for multi-metric strategies such as MDS to enhance HFL security.
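Two of the four MDS components, cosine dissimilarity and Euclidean distance, can be sketched on flattened model updates. The weighted combination shown is a hypothetical stand-in for the paper's full four-metric score, which also includes uncorrelation and divergence terms.

```python
import math

def dissimilarity(u, v):
    """Cosine dissimilarity: 1 - cos(u, v)."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return 1.0 - dot / norm

def distance(u, v):
    """Euclidean distance between two flattened model updates."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def discrepancy_score(u, v, weights=(0.5, 0.5)):
    """Hypothetical two-metric combination; the full MDS integrates
    Dissimilarity, Distance, Uncorrelation, and Divergence."""
    return weights[0] * dissimilarity(u, v) + weights[1] * distance(u, v)

benign = [1.0, 0.0]
suspect = [0.0, 1.0]
print(round(discrepancy_score(benign, suspect), 3))  # -> 1.207
```

An update that looks benign under one metric (small distance) but anomalous under the other (high dissimilarity) still raises the combined score, which is exactly the multi-metric advantage the abstract argues for.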
Funding: supported by the National Science Foundation (2343619, 2416872, 2244219, and 2146497).
Abstract: In recent research on the Digital Twin-based Vehicular Ad hoc Network (DT-VANET), Federated Learning (FL) has shown its ability to provide data privacy. However, FL struggles to adequately train a global model when confronted with data heterogeneity and data sparsity among vehicles, which results in suboptimal accuracy when making precise predictions for different vehicle types. To address these challenges, this paper applies Federated Transfer Learning (FTL) to cluster vehicles by vehicle type and proposes a novel Hierarchical Federated Transfer Learning (HFTL) scheme. We construct a framework for the DT-VANET, along with two algorithms designed for cloud server model updates and intra-cluster federated transfer learning, to improve the accuracy of the global model. In addition, we develop a data quality score-based mechanism to prevent the global model from being affected by malicious vehicles. Lastly, detailed experiments on real-world datasets are conducted, considering different performance metrics, which verify the effectiveness and efficiency of our algorithm.
Funding: supported by the National Key Research and Development Program of China (2021YFB3101100), the National Natural Science Foundation of China (62272123), the Project of High-level Innovative Talents of Guizhou Province, China ([2020]6008), the Science and Technology Program of Guiyang, China ([2022]2-4), and the Science and Technology Program of Guizhou Province, China ([2022]065 and [2022]ZD001).
Abstract: The Industrial Internet of Things (IIoT) achieves the automation, monitoring, and optimization of industrial processes by interconnecting various sensors, smart devices, and the Internet, which dramatically increases productivity and product quality. Nevertheless, the IIoT comprises a substantial amount of sensitive data, which requires encryption to ensure data privacy and security. Recently, Sun et al. proposed a certificateless searchable encryption scheme for the IIoT to enable retrieval of ciphertext data while protecting data privacy. However, we found that their scheme not only fails to satisfy trapdoor indistinguishability but also lacks defense against keyword guessing attacks. In addition, some schemes use deterministic algorithms in the encryption process, so the same keyword always encrypts to the same ciphertext, leaking the keyword's frequency distribution in the ciphertext space and allowing attackers to infer the plaintext information corresponding to the ciphertext through statistical analysis. To better protect data privacy, we propose an improved certificateless searchable encryption scheme with a designated server. Through security analysis, we prove that our scheme provides multi-ciphertext indistinguishability and multi-trapdoor indistinguishability security under the random oracle model. Experimental results show that the proposed scheme performs well overall in terms of computational overhead, communication overhead, and security features.
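The frequency-leakage issue with deterministic encryption can be illustrated by contrasting a deterministic keyword tag with a nonce-randomized one. This hashing sketch is only a stand-in for the scheme's actual pairing-based construction; the key and keyword are placeholders.

```python
import hashlib
import os

def deterministic_tag(keyword, key):
    """Deterministic encoding: equal keywords yield equal outputs,
    leaking the keyword frequency distribution to an observer."""
    return hashlib.sha256(key + keyword.encode()).hexdigest()

def randomized_tag(keyword, key):
    """Randomized variant: a fresh nonce makes repeated encodings of the
    same keyword look unrelated (multi-ciphertext indistinguishability)."""
    nonce = os.urandom(16)
    digest = hashlib.sha256(key + nonce + keyword.encode()).hexdigest()
    return nonce.hex() + ":" + digest

key = b"demo-key"
print(deterministic_tag("temperature", key) == deterministic_tag("temperature", key))  # -> True
print(randomized_tag("temperature", key) == randomized_tag("temperature", key))        # -> False
```

In a searchable encryption scheme, the server must still be able to match a trapdoor against randomized ciphertexts; that matching step is what the pairing-based construction provides and what this sketch deliberately omits.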
Funding: supported by the China Electric Power Research Institute and the Electric Power Research Institute of State Grid Anhui Electric Power Co., Ltd., China (5400-202355201A-1-1-ZN).
Abstract: This study addresses the challenge of dynamic assessment in power systems by proposing a design scheme for an intelligent adaptive power distribution system based on runtime verification. The system architecture is built upon cloud-edge-end collaboration, enabling comprehensive monitoring and precise management of the power grid through coordinated efforts across different levels. Specifically, the study employs an adaptive observer approach, allowing dynamic adjustments to observers to reflect updates in requirements and ensure system reliability. This method covers both structural and parametric adjustments to specifications, including updating time protection conditions, updating events, and adding or removing responses. The results demonstrate that with adaptive observers the system becomes more flexible in responding to changes, significantly enhancing its efficiency. By employing dynamically changing verification specifications, the system achieves real-time, flexible verification. This research provides technical support for the safe, efficient, and reliable operation of electrical power distribution systems.
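A runtime observer with a time protection condition can be sketched as a monitor that flags events arriving later than a permitted gap. The event format and bound below are illustrative assumptions, not the paper's specification language.

```python
def monitor(events, max_gap):
    """Runtime observer: flag a violation whenever the gap between
    consecutive timestamped events exceeds max_gap (a time
    protection condition).

    events: list of (timestamp, event_name) tuples in time order.
    Returns the names of the late events.
    """
    violations = []
    for (t_prev, _), (t_curr, name) in zip(events, events[1:]):
        if t_curr - t_prev > max_gap:
            violations.append(name)
    return violations

# Heartbeats expected at most 2 time units apart; the third is late.
readings = [(0, "hb"), (1, "hb"), (5, "hb")]
print(monitor(readings, max_gap=2))  # -> ['hb']
```

An adaptive observer in the paper's sense would additionally allow max_gap and the monitored event set to be updated at runtime as the specification changes.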
Abstract: The potential of cloud computing, an emerging paradigm for minimizing the costs associated with computing, has recently drawn the interest of a number of researchers. Fast advancements in cloud computing techniques have led to a remarkable proliferation of cloud services, but data security remains a challenging issue. The main concerns with cloud computing are cloud security and effective cloud distribution over the network. Increasing the privacy of data with encryption methods is the most effective approach and has progressed rapidly in recent times; in this context, sanitization is also a process for preserving data confidentiality. The goal of this work is to present a deep learning-assisted data sanitization procedure for data security. The proposed data sanitization process involves the following steps: data preprocessing, optimal key generation, deep learning-assisted key fine-tuning, and the Kronecker product. Here, the data preprocessing considers the original data as well as extracted statistical features. Key generation is the subsequent process, for which a self-adaptive Namib beetle optimization (SANBO) algorithm is developed in this research. Among the generated keys, appropriate keys are fine-tuned by an improved Deep Maxout classifier. Then, the Kronecker product is applied in the sanitization process. Reversing the sanitization procedure yields the original data during the data restoration phase. The study notes that the suggested data sanitization technique guarantees cloud data security against malicious attacks. An analysis of the proposed work in terms of restoration effectiveness and key sensitivity is also conducted.
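The Kronecker-product step of the sanitization pipeline can be sketched in pure Python. The toy data and key matrices below are purely illustrative; in the actual scheme the key comes from the SANBO generation and Deep Maxout fine-tuning steps.

```python
def kronecker(A, B):
    """Kronecker product of two matrices given as lists of lists."""
    rows_a, cols_a = len(A), len(A[0])
    rows_b, cols_b = len(B), len(B[0])
    out = [[0] * (cols_a * cols_b) for _ in range(rows_a * rows_b)]
    for i in range(rows_a):
        for j in range(cols_a):
            for p in range(rows_b):
                for q in range(cols_b):
                    # Each entry of A scales a full copy of B.
                    out[i * rows_b + p][j * cols_b + q] = A[i][j] * B[p][q]
    return out

data = [[1, 2]]           # hypothetical data block
key = [[0, 1], [1, 0]]    # hypothetical key matrix
print(kronecker(data, key))  # -> [[0, 1, 0, 2], [1, 0, 2, 0]]
```

Because the Kronecker product with an invertible key matrix is reversible, the restoration phase can recover the original data block from the sanitized output, matching the abstract's claim.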
Funding: supported by the Science and Technology Project of State Grid Corporation of China (5700-202328293A-1-1-ZN).
Abstract: In recent years, the rapid development of Internet of Things (IoT) technology has led to a significant increase in the amount of data stored in the cloud. However, traditional IoT systems rely primarily on cloud data centers for information storage and user access control services. This practice creates the risk of privacy breaches on IoT data sharing platforms, including issues such as data tampering and data breaches. To address these concerns, blockchain technology, with inherent properties such as tamper resistance and decentralization, has emerged as a promising solution that enables trusted sharing of IoT data. Still, there are challenges to implementing encrypted data search in this context. This paper proposes a novel searchable attribute-based cryptographic access control mechanism that facilitates trusted cloud data sharing. Users can use keywords to efficiently search for specific data and decrypt content keys when their attributes are consistent with the access policies. In this way, cloud service providers cannot access any privacy-related information, ensuring the security and trustworthiness of data sharing as well as the protection of user data privacy. Our simulation results show that our approach outperforms existing studies in terms of time overhead. Compared to traditional access control schemes, our approach reduces data encryption time by 33%, decryption time by 5%, and search time by 75%.