Efficient resource provisioning, allocation, and computation offloading are critical to realizing low-latency, scalable, and energy-efficient applications in cloud, fog, and edge computing. Despite its importance, integrating Software Defined Networks (SDN) to enhance resource orchestration, task scheduling, and traffic management remains a relatively underexplored area with significant innovation potential. This paper provides a comprehensive review of existing mechanisms, categorizing resource provisioning approaches into static, dynamic, and user-centric models, while examining applications across domains such as IoT, healthcare, and autonomous systems. The survey highlights challenges such as scalability, interoperability, and security in managing dynamic and heterogeneous infrastructures. It evaluates how SDN enables adaptive, policy-based handling of distributed resources through advanced orchestration processes. Furthermore, it proposes future directions, including AI-driven optimization techniques and hybrid orchestration models. By addressing these emerging opportunities, this work serves as a foundational reference for advancing resource management strategies in next-generation cloud, fog, and edge computing ecosystems, and it offers essential guidance for SDN-enabled computing environments in addressing upcoming management challenges.
The emergence of different computing methods, such as cloud-, fog-, and edge-based Internet of Things (IoT) systems, has provided the opportunity to develop intelligent systems for disease detection. Compared to other machine learning models, deep learning models have gained more attention from the research community, as they have shown better results than shallow learning when large volumes of data are available. However, no comprehensive survey has been conducted on integrated IoT- and computing-based systems that deploy deep learning for disease detection. This study evaluated different machine learning and deep learning algorithms, along with their hybrid and optimized variants, for IoT-based disease detection, drawing on the most recent papers on IoT-based disease detection systems that include computing approaches such as cloud, edge, and fog. The analysis focused on an IoT deep learning architecture suitable for disease detection. The study also identifies the different factors that require the attention of researchers to develop better IoT disease detection systems. It can be helpful to researchers interested in developing better IoT-based disease detection and prediction systems based on deep learning using hybrid algorithms.
1. Introduction: The rapid expansion of satellite constellations in recent years has resulted in the generation of massive amounts of data. This surge in data, coupled with diverse application scenarios, underscores the escalating demand for high-performance computing over space. Computing over space entails the deployment of computational resources on platforms such as satellites to process large-scale data under constraints such as high radiation exposure, restricted power consumption, and minimized weight.
Multispectral low Earth orbit (LEO) satellites are characterized by a large volume of captured data and high spatial resolution, which can provide rich image information and data support for a variety of fields, but their limited computing resources make it difficult for them to satisfy low-delay and low-energy task processing requirements. To address these problems, this paper presents the LEO satellites cooperative task offloading and computing resource allocation (LEOC-TC) algorithm. Firstly, a LEO satellites cooperative task offloading system is designed so that the multispectral LEO satellites in the system can process their tasks locally or offload them to other LEO satellites equipped with servers, thus providing high-quality information-processing services for multispectral LEO satellites. Secondly, an optimization problem is established with the objective of minimizing the weighted sum of the total task processing delay and the total energy consumed by the multispectral LEO satellites, and this problem is split into an offloading ratio subproblem and a computing resource subproblem. Finally, a Bernoulli mapping tuna swarm optimization algorithm is used to solve the two subproblems separately in order to satisfy the system's demand for low delay and low energy consumption. Simulation results show that the total task processing cost of the LEOC-TC algorithm is reduced by 63.32%, 66.67%, and 80.72% compared to the random offloading ratio algorithm, the average resource offloading algorithm, and the local computing algorithm, respectively.
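The weighted-sum objective is not spelled out in this summary; a generic formulation of such a delay-energy trade-off, with purely illustrative symbols (offloading ratios x_i, allocated computing resources f_i, weight ω), might read:

```latex
\min_{\{x_i\},\{f_i\}} \; \sum_{i=1}^{N} \Big[ \omega \, T_i(x_i, f_i) + (1-\omega) \, E_i(x_i, f_i) \Big]
\quad \text{s.t.} \quad 0 \le x_i \le 1, \qquad \sum_{i=1}^{N} f_i \le F_{\max},
```

where T_i and E_i denote the processing delay and energy consumption of task i and ω ∈ [0, 1] balances the two objectives; alternately solving the x-subproblem and the f-subproblem mirrors the decomposition described above.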
Networking, storage, and hardware are just a few of the virtual computing resources that the infrastructure service model offers, depending on what the client needs. One essential aspect of cloud computing that improves resource allocation techniques is host load prediction. Without it, hardware resource allocation in cloud computing still suffers from host initialization issues, which add several minutes to response times. To solve this issue and accurately predict cloud capacity, cloud data centers use prediction algorithms, which permit dynamic cloud scalability while maintaining superior service quality. For host load prediction, we therefore present a hybrid convolutional neural network-long short-term memory (CNN-LSTM) model in this work. First, the input to the suggested hybrid model is subjected to the vector autoregression (VAR) technique, which filters the multivariate data to eliminate linear interdependencies prior to analysis. The remaining data are then processed and fed into the convolutional neural network layer, which gathers intricate details about the utilization of each virtual machine and central processing unit. The next step uses long short-term memory, which is suitable for representing the temporal information of irregular trends in time series components. A key element of the entire process is the use of the most appropriate activation function for this type of model, a scaled polynomial constant unit. Cloud systems require accurate prediction due to the increasing degree of unpredictability in data centers. For this reason, two real load traces, one of them from a typical distributed system, were used to assess performance in this study. In comparison to CNN, VAR-GRU, VAR-MLP, ARIMA-LSTM, and other models, the experimental results demonstrate that our suggested approach offers state-of-the-art performance with higher accuracy on both datasets.
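As an illustration of the hybrid pipeline described above (VAR pre-filtering followed by convolutional feature extraction and an LSTM over the temporal dimension), the following PyTorch sketch shows one plausible arrangement. The layer sizes and input shapes are assumptions, and the paper's scaled polynomial constant unit is approximated here by a standard activation.

```python
import torch
import torch.nn as nn

class CNNLSTMLoadPredictor(nn.Module):
    """Hypothetical sketch: Conv1d extracts per-window utilization features,
    an LSTM models temporal dependencies, a linear head predicts the next load."""
    def __init__(self, n_features: int, hidden: int = 64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_features, 32, kernel_size=3, padding=1),
            nn.ReLU(),  # stand-in for the paper's scaled polynomial constant unit
        )
        self.lstm = nn.LSTM(32, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                    # x: (batch, time, features), e.g. VAR residuals
        z = self.conv(x.transpose(1, 2))     # Conv1d expects (batch, channels, time)
        out, _ = self.lstm(z.transpose(1, 2))
        return self.head(out[:, -1, :])      # forecast from the last time step

model = CNNLSTMLoadPredictor(n_features=4)   # e.g. CPU, memory, disk, network utilization
prediction = model(torch.randn(8, 30, 4))    # 8 windows of 30 time steps each
```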
Blockchain-based audiovisual transmission systems were built to create a distributed and flexible smart transport system (STS). Such a system lets customers, video creators, and service providers connect directly with each other. Blockchain-based STS devices need substantial computing power to transcode video feeds of different quality and formats into the versions and structures that meet the needs of different users. On the other hand, existing blockchains cannot support live streaming because they take too long to process transactions and lack sufficient computing power, and the large amounts of video data being sent and analyzed put too much stress on vehicular networks. A video surveillance method is suggested in this paper to improve the data performance of the blockchain system and lower the latency across the multi-access edge computing (MEC) system. An integration of MEC and blockchain for video surveillance in autonomous vehicles (IMEC-BVS) framework is proposed. To deal with this problem, the joint optimization problem is formulated as a Markov Decision Process and solved using an actor-critic asynchronous advantage (ACAA) method with deep reinforcement learning. Simulation results show that the suggested method converges quickly and improves the performance of MEC and blockchain when used together for video surveillance in self-driving cars compared to other methods.
Smart edge computing (SEC) is a novel computing paradigm that can transfer cloud-based applications to the edge network, supporting computation-intensive services like face detection and natural language processing. As a core feature of mobile edge computing, SEC improves user experience and device performance by offloading local activities to edge processors. In this framework, blockchain technology is utilized to ensure secure and trustworthy communication between edge devices and servers, protecting against potential security threats. Additionally, deep learning algorithms are employed to analyze resource availability and dynamically optimize computation offloading decisions. IoT applications that require significant resources can benefit from SEC, which has better coverage. However, because access patterns change constantly and network devices have heterogeneous resources, it is not easy to create consistent, dependable, and instantaneous communication between edge devices and their processors, specifically in 5G Heterogeneous Network (HN) scenarios. Thus, an Intelligent Management of Resources for Smart Edge Computing (IMRSEC) framework, which combines blockchain, edge computing, and Artificial Intelligence (AI) in 5G HNs, is proposed in this paper. Within it, a dual schedule deep reinforcement learning (DS-DRL) technique has been developed, consisting of a rapid schedule learning process and a slow schedule learning process. The primary objective is to minimize overall offloading latency and system resource usage by optimizing computation offloading, resource allocation, and application caching. Simulation results demonstrate that the DS-DRL approach reduces task execution time by 32%, validating the method's effectiveness within the IMRSEC framework.
The integration of blockchain and edge-to-end collaborative computing offers a solution to the trust issues arising from untrusted IIoT devices. However, ensuring efficiency and energy savings when applying blockchain to edge-to-end collaborative computing remains a significant challenge. To tackle this, this paper proposes an innovative task-oriented blockchain architecture. The architecture comprises trusted Edge Computing (EC) servers and untrusted Industrial Internet of Things (IIoT) devices. We organize untrusted IIoT devices into several clusters, each executing a task in the form of smart contracts, and package the work logs of a task into a block. Executing a task with smart contracts within a cluster ensures the reliability of the task result, and reducing the scope of nodes involved in block consensus increases the overall throughput of the blockchain. Packaging task logs into blocks and storing and propagating blocks through the corresponding EC servers reduces network load and avoids competition for computing power. The paper also presents theoretical TPS (Transactions Per Second) and failure probability calculations for the proposed architecture. Experimental results demonstrate that this architecture ensures computational security, improves TPS, and reduces resource consumption.
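To make the block-packaging idea concrete, the sketch below shows a minimal, hypothetical structure for a block that carries the work logs of one cluster task and is chained through its hash by the corresponding EC server. The field names and hashing scheme are illustrative assumptions, not the paper's actual block format.

```python
import hashlib
import json
import time
from dataclasses import dataclass, field, asdict
from typing import List

@dataclass
class TaskLogBlock:
    """Illustrative block packaging the work logs of one cluster task."""
    task_id: str
    cluster_id: str
    work_logs: List[dict]          # per-device execution records for this task
    prev_hash: str                 # hash of the previous block held by the EC server
    timestamp: float = field(default_factory=time.time)

    def block_hash(self) -> str:
        # Deterministic serialization so every EC server derives the same hash.
        payload = json.dumps(asdict(self), sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()
```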
Edge computing (EC) combined with the Internet of Things (IoT) provides a scalable and efficient solution for smart homes. The rapid proliferation of IoT devices poses real-time data processing and security challenges, and EC has become a transformative paradigm for addressing them, particularly in intrusion detection and anomaly mitigation. The widespread connectivity of IoT edge networks has exposed them to various security threats, necessitating robust strategies to detect malicious activities. This research presents a privacy-preserving federated anomaly detection framework that combines Bayesian game theory (BGT) and double deep Q-learning (DDQL). The proposed framework integrates BGT to model attacker-defender interactions for dynamic adaptation to threat levels and resource availability; it also models a strategic layout between attackers and defenders that accounts for uncertainty. DDQL is incorporated to optimize decision-making and to learn optimal defense policies at the edge, thereby ensuring policy and decision optimization. Federated learning (FL) enables decentralized anomaly detection without sharing sensitive data between devices. Data were collected from various sensors in a real-time EC-IoT network to identify irregularities caused by different attacks. The results reveal that the proposed model achieves a detection accuracy of up to 98% while maintaining low resource consumption. This study demonstrates the synergy between game theory and FL in strengthening anomaly detection in EC-IoT networks.
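The double deep Q-learning component mentioned above relies on the standard decoupling of action selection (online network) from action evaluation (target network). A minimal sketch of that target computation, with hypothetical network and replay-batch variables, is shown below; the surrounding game-theoretic state construction is omitted.

```python
import torch

def ddql_target(reward, next_state, done, online_net, target_net, gamma=0.99):
    """Double DQN target: the online net selects the next action, the target net
    evaluates it, which curbs Q-value overestimation. All tensors are batched."""
    with torch.no_grad():
        next_action = online_net(next_state).argmax(dim=1, keepdim=True)
        next_q = target_net(next_state).gather(1, next_action).squeeze(1)
        return reward + gamma * next_q * (1.0 - done)
```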
Today we witness the exponential growth of scientific research. This fast growth has been possible thanks to the rapid development of computing systems, from their first days in 1947 and the invention of the transistor to the present day's high-performance and scalable distributed computing systems. This fast growth of computing systems was first observed by Gordon E. Moore in 1965 and postulated as Moore's Law. For the development of scalable distributed computing systems, the year 2000 was very special: the first GHz-speed processor, GB-size memory, and GB/s data transmission through networks were achieved. Interestingly, in the same year usable Grid computing systems emerged, which gave a strong impulse to the rapid development of distributed computing systems. This paper recognizes these facts that occurred in the year 2000 as the G-phenomena, a millennium cornerstone for the rapid development of scalable distributed systems that evolved around the Grid and Cloud computing paradigms.
With the global trend of pursuing clean energy and decarbonization, power systems have been evolving at a pace that we have never seen in the history of electrification. This evolution makes the power system more dynamic and more distributed, with higher uncertainty. These new power system behaviors bring significant challenges in power system modeling and simulation, as more data need to be analyzed for larger systems and more complex models must be solved in shorter time periods. Conventional computing approaches will not be sufficient for future power systems. This paper provides a historical review of computing for power system operation and planning, discusses technology advancements in high performance computing (HPC), and describes the drivers for employing HPC techniques. Application examples with different HPC techniques, including the latest quantum computing, are also presented to show how HPC can help us be well prepared to meet the requirements of power system computing in a clean energy future.
Increasing reliance on large-scale AI models has led to rising demand for intelligent services. The centralized cloud computing approach has limitations in terms of data transfer efficiency and response time, and as a result many service providers have begun to deploy edge servers that cache intelligent services in order to reduce transmission delay and communication energy consumption. However, finding the optimal service caching strategy remains a significant challenge due to the stochastic nature of service requests and the bulky nature of intelligent services. To deal with this, we propose a distributed service caching scheme integrating deep reinforcement learning (DRL) with mobility prediction, which we refer to as DSDM. Specifically, we employ the D3QN (Deep Double Dueling Q-Network) framework to integrate Long Short-Term Memory (LSTM)-predicted mobile device locations into the service caching replacement algorithm, and we adopt a distributed multi-agent approach for learning and training. Experimental results demonstrate that DSDM achieves significant performance improvements in reducing communication energy consumption compared to traditional methods across various scenarios.
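D3QN combines double Q-learning with a dueling value/advantage decomposition. The sketch below shows the dueling head assumed here; the LSTM mobility predictor and the caching state encoding (which would feed predicted device locations into the state vector) are omitted, and the layer sizes are illustrative.

```python
import torch
import torch.nn as nn

class DuelingQNet(nn.Module):
    """Dueling head used by D3QN: Q(s, a) = V(s) + A(s, a) - mean_a A(s, a)."""
    def __init__(self, state_dim: int, n_actions: int, hidden: int = 128):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        self.value = nn.Linear(hidden, 1)              # state value V(s)
        self.advantage = nn.Linear(hidden, n_actions)  # per-action advantage A(s, a)

    def forward(self, state):
        h = self.body(state)
        v, a = self.value(h), self.advantage(h)
        return v + a - a.mean(dim=1, keepdim=True)     # identifiable Q-values
```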
The rapid adoption of machine learning in sensitive domains, such as healthcare, finance, and government services, has heightened the need for robust, privacy-preserving techniques. Traditional machine learning approaches lack built-in privacy mechanisms, exposing sensitive data to risks, which motivates the development of Privacy-Preserving Machine Learning (PPML) methods. Despite significant advances in PPML, a comprehensive and focused exploration of Secure Multi-Party Computation (SMPC) within this context remains underdeveloped. This review aims to bridge this knowledge gap by systematically analyzing the role of SMPC in PPML, offering a structured overview of current techniques, challenges, and future directions. Using a semi-systematic mapping study methodology, this paper surveys recent literature spanning SMPC protocols, PPML frameworks, implementation approaches, threat models, and performance metrics. Emphasis is placed on identifying trends, technical limitations, and the comparative strengths of leading SMPC-based methods. Our findings reveal that while SMPC offers strong cryptographic guarantees for privacy, challenges such as computational overhead, communication costs, and scalability persist. The paper also discusses critical vulnerabilities, practical deployment issues, and variations in protocol efficiency across use cases.
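As a concrete illustration of the kind of SMPC primitive such a review surveys, the sketch below shows additive secret sharing over a prime field, one of the simplest building blocks behind many multi-party protocols. It is purely illustrative and not tied to any specific framework; the choice of modulus is an assumption.

```python
import secrets

P = 2**61 - 1  # a Mersenne prime used as the field modulus (illustrative choice)

def share(secret: int, n_parties: int) -> list[int]:
    """Split `secret` into n additive shares that sum to it modulo P."""
    shares = [secrets.randbelow(P) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % P)
    return shares

def reconstruct(shares: list[int]) -> int:
    return sum(shares) % P

# Any strict subset of shares reveals nothing about the secret, and addition
# of two shared secrets can be done share-wise without reconstruction.
x_shares, y_shares = share(42, 3), share(100, 3)
z_shares = [(a + b) % P for a, b in zip(x_shares, y_shares)]
assert reconstruct(z_shares) == 142
```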
This paper introduces a novel numerical method based on an energy-minimizing normalized residual network (EM-NormResNet) to compute the ground-state solution of Bose-Einstein condensates at zero or low temperatures. Starting from the three-dimensional Gross-Pitaevskii equation (GPE), we reduce it to the 1D and 2D GPEs by exploiting radial and cylindrical symmetry. The ground-state solution is formulated by minimizing the energy functional under constraints, which is solved directly using the EM-NormResNet approach. The paper provides detailed solutions for the ground states in 1D, 2D (with radial symmetry), and 3D (with cylindrical symmetry). We use the Thomas-Fermi approximation as the target function to pre-train the neural network; the formal network is then trained using the energy minimization method. In contrast to traditional numerical methods, our neural network approach introduces two key innovations: (i) a novel normalization technique designed for high-dimensional systems within an energy-based loss function; (ii) improved training efficiency and model robustness by incorporating gradient stabilization techniques into residual networks. Extensive numerical experiments validate the method's accuracy across different spatial dimensions.
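For reference, the constrained minimization described above is the standard ground-state problem for the Gross-Pitaevskii energy functional under a unit-norm constraint, written here in a common dimensionless form; the trapping potential V and interaction strength β depend on the specific setup and are not given in this summary.

```latex
\phi_g = \arg\min_{\|\phi\|_{2} = 1} E(\phi), \qquad
E(\phi) = \int_{\mathbb{R}^{d}} \left[ \tfrac{1}{2} |\nabla \phi|^{2}
  + V(\mathbf{x})\, |\phi|^{2} + \tfrac{\beta}{2} |\phi|^{4} \right] \mathrm{d}\mathbf{x},
```

with the Thomas-Fermi approximation, obtained by neglecting the kinetic term, serving as the pre-training target mentioned above.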
Healthcare networks are transitioning from manual records to electronic health records, but this shift introduces vulnerabilities such as insecure communication, privacy concerns, and the presence of malicious nodes. Existing machine and deep learning-based anomaly detection methods often rely on centralized training, leading to reduced accuracy and potential privacy breaches. Therefore, this study proposes a Blockchain-based Federated Learning architecture for Malicious Node Detection (BFL-MND) model. It trains models locally within healthcare clusters, sharing only model updates instead of patient data, thereby preserving privacy and improving accuracy. Cloud and edge computing enhance the model's scalability, while blockchain ensures secure, tamper-proof access to health data. Using the PhysioNet dataset, the proposed model achieves an accuracy of 0.95, an F1 score of 0.93, a precision of 0.94, and a recall of 0.96, outperforming baseline models such as random forest (0.88), adaptive boosting (0.90), logistic regression (0.86), perceptron (0.83), and deep neural networks (0.92).
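The federated training step described above shares only model updates. As a minimal sketch, the FedAvg-style weighted aggregation below shows the kind of server-side step such an architecture typically relies on; the blockchain logging and cluster selection are omitted, and the sample-count weighting is an assumption rather than the paper's exact rule.

```python
from typing import Dict, List
import torch

def aggregate_updates(updates: List[Dict[str, torch.Tensor]],
                      n_samples: List[int]) -> Dict[str, torch.Tensor]:
    """Weighted average of local model state_dicts, weighted by local data size.
    Only parameters leave each healthcare cluster, never patient records."""
    total = sum(n_samples)
    global_state = {}
    for name in updates[0]:
        global_state[name] = sum(
            (n / total) * u[name].float() for u, n in zip(updates, n_samples)
        )
    return global_state
```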
Private clouds and public clouds are converging into an open, integrated cloud computing environment, which can fully aggregate and utilize the computing, storage, information, and other hardware and software resources of WAN and LAN networks, but which also brings a series of security, reliability, and credibility problems. To solve these problems, a novel secure-agent-based trustworthy virtual private cloud model named SATVPC is proposed for the integrated and open cloud computing environment. Through the introduction of secure-agent technology, SATVPC provides an independent, safe, and trustworthy virtual private computing platform for multi-tenant systems. In order to meet SATVPC's credibility requirements and to establish trust relationships between each task execution agent and task executor node that are consistent with their security policies, a new dynamic composite credibility evaluation mechanism is presented, including a credit index computing algorithm and a credibility differentiation strategy. The experimental system shows that SATVPC and the credibility evaluation mechanism can feasibly ensure the security of open computing environments. Experimental results and performance analysis also show that the credit index computing algorithm can evaluate the credibility of task execution agents and task executor nodes quantitatively, correctly, and operationally.
Mobile Cloud Computing (MCC) is emerging as one of the most important branches of cloud computing. In this paper, MCC is defined as cloud computing extended by mobility and a new ad-hoc infrastructure based on mobile devices. It provides mobile users with data storage and processing services on a cloud computing platform. Because mobile cloud computing is still in its infancy, we aim to clarify the confusion that has arisen from differing views. Existing works are reviewed, and an overview of recent advances in mobile cloud computing is provided. We investigate representative infrastructures of mobile cloud computing and analyze key components. Moreover, emerging MCC models and services are discussed, and challenging issues are identified that will need to be addressed in future work.
Based on an analysis of the limitations the Trusted Computing Group (TCG) has encountered, we argue that the virtual machine monitor (VMM) is the appropriate architecture for implementing the TCG specification. Putting together the VMM architecture, TCG hardware, and application-oriented "thin" virtual machines (VMs), a trusted VMM-based security architecture is presented in this paper, characterized by a reduced and distributed trusted computing base (TCB). It provides isolation and integrity guarantees on the basis of which general security requirements can be satisfied.
Peta-scale high-performance computing systems are increasingly built with heterogeneous CPU and GPU nodes to achieve higher power efficiency and computation throughput. While providing unprecedented capabilities to conduct computational experiments of historic significance, these systems are presently difficult to program. The users, who are domain experts rather than computer experts, prefer programming models closer to their domains (e.g., physics and biology) rather than MPI and OpenMP. This has led to the development of domain-specific programming models that provide domain-specific programming interfaces but abstract away some performance-critical architecture details. Based on experience in designing large-scale computing systems, a hybrid programming framework for scientific computing on heterogeneous architectures is proposed in this work. Its design philosophy is to provide a collaborative mechanism for domain experts and computer experts so that both domain-specific knowledge and performance-critical architecture details can be adequately exploited. Two real-world scientific applications have been evaluated on TH-1A, a peta-scale CPU-GPU heterogeneous system that is currently the 5th fastest supercomputer in the world. The experimental results show that the proposed framework is well suited for developing large-scale scientific computing applications on peta-scale heterogeneous CPU/GPU systems.
As the 5G architecture gains momentum, interest in 6G is growing. The proliferation of Internet of Things (IoT) devices capable of capturing sensitive images has increased the need for secure transmission and robust access control mechanisms. The vast amount of data generated by low-computing devices poses a challenge to traditional centralized access control, which relies on trusted third parties and complex computations, resulting in intricate interactions, higher hardware costs, and processing delays. To address these issues, this paper introduces a novel distributed access control approach that integrates a decentralized and lightweight encryption mechanism with image transmission. This method enhances data security and resource efficiency without imposing heavy computational and network burdens. In comparison to the best existing approach, it achieves a 7% improvement in accuracy, effectively addressing existing gaps in lightweight encryption and recognition performance.