The rapid advent in artificial intelligence and big data has revolutionized the dynamic requirement in the demands of the computing resource for executing specific tasks in the cloud environment.The process of achievi...The rapid advent in artificial intelligence and big data has revolutionized the dynamic requirement in the demands of the computing resource for executing specific tasks in the cloud environment.The process of achieving autonomic resource management is identified to be a herculean task due to its huge distributed and heterogeneous environment.Moreover,the cloud network needs to provide autonomic resource management and deliver potential services to the clients by complying with the requirements of Quality-of-Service(QoS)without impacting the Service Level Agreements(SLAs).However,the existing autonomic cloud resource managing frameworks are not capable in handling the resources of the cloud with its dynamic requirements.In this paper,Coot Bird Behavior Model-based Workload Aware Autonomic Resource Management Scheme(CBBM-WARMS)is proposed for handling the dynamic requirements of cloud resources through the estimation of workload that need to be policed by the cloud environment.This CBBM-WARMS initially adopted the algorithm of adaptive density peak clustering for workloads clustering of the cloud.Then,it utilized the fuzzy logic during the process of workload scheduling for achieving the determining the availability of cloud resources.It further used CBBM for potential Virtual Machine(VM)deployment that attributes towards the provision of optimal resources.It is proposed with the capability of achieving optimal QoS with minimized time,energy consumption,SLA cost and SLA violation.The experimental validation of the proposed CBBMWARMS confirms minimized SLA cost of 19.21%and reduced SLA violation rate of 18.74%,better than the compared autonomic cloud resource managing frameworks.展开更多
Effective resource management in the Internet of Things and fog computing is essential for efficient and scalable networks.However,existing methods often fail in dynamic and high-demand environments,leading to resourc...Effective resource management in the Internet of Things and fog computing is essential for efficient and scalable networks.However,existing methods often fail in dynamic and high-demand environments,leading to resource bottlenecks and increased energy consumption.This study aims to address these limitations by proposing the Quantum Inspired Adaptive Resource Management(QIARM)model,which introduces novel algorithms inspired by quantum principles for enhanced resource allocation.QIARM employs a quantum superposition-inspired technique for multi-state resource representation and an adaptive learning component to adjust resources in real time dynamically.In addition,an energy-aware scheduling module minimizes power consumption by selecting optimal configurations based on energy metrics.The simulation was carried out in a 360-minute environment with eight distinct scenarios.This study introduces a novel quantum-inspired resource management framework that achieves up to 98%task offload success and reduces energy consumption by 20%,addressing critical challenges of scalability and efficiency in dynamic fog computing environments.展开更多
Fog computing has emerged as an important technology which can improve the performance of computation-intensive and latency-critical communication networks.Nevertheless,the fog computing Internet-of-Things(IoT)systems...Fog computing has emerged as an important technology which can improve the performance of computation-intensive and latency-critical communication networks.Nevertheless,the fog computing Internet-of-Things(IoT)systems are susceptible to malicious eavesdropping attacks during the information transmission,and this issue has not been adequately addressed.In this paper,we propose a physical-layer secure fog computing IoT system model,which is able to improve the physical layer security of fog computing IoT networks against the malicious eavesdropping of multiple eavesdroppers.The secrecy rate of the proposed model is analyzed,and the quantum galaxy–based search algorithm(QGSA)is proposed to solve the hybrid task scheduling and resource management problem of the network.The computational complexity and convergence of the proposed algorithm are analyzed.Simulation results validate the efficiency of the proposed model and reveal the influence of various environmental parameters on fog computing IoT networks.Moreover,the simulation results demonstrate that the proposed hybrid task scheduling and resource management scheme can effectively enhance secrecy performance across different communication scenarios.展开更多
In 6th Generation Mobile Networks(6G),the Space-Integrated-Ground(SIG)Radio Access Network(RAN)promises seamless coverage and exceptionally high Quality of Service(QoS)for diverse services.However,achieving this neces...In 6th Generation Mobile Networks(6G),the Space-Integrated-Ground(SIG)Radio Access Network(RAN)promises seamless coverage and exceptionally high Quality of Service(QoS)for diverse services.However,achieving this necessitates effective management of computation and wireless resources tailored to the requirements of various services.The heterogeneity of computation resources and interference among shared wireless resources pose significant coordination and management challenges.To solve these problems,this work provides an overview of multi-dimensional resource management in 6G SIG RAN,including computation and wireless resource.Firstly it provides with a review of current investigations on computation and wireless resource management and an analysis of existing deficiencies and challenges.Then focusing on the provided challenges,the work proposes an MEC-based computation resource management scheme and a mixed numerology-based wireless resource management scheme.Furthermore,it outlines promising future technologies,including joint model-driven and data-driven resource management technology,and blockchain-based resource management technology within the 6G SIG network.The work also highlights remaining challenges,such as reducing communication costs associated with unstable ground-to-satellite links and overcoming barriers posed by spectrum isolation.Overall,this comprehensive approach aims to pave the way for efficient and effective resource management in future 6G networks.展开更多
Thedeployment of the Internet of Things(IoT)with smart sensors has facilitated the emergence of fog computing as an important technology for delivering services to smart environments such as campuses,smart cities,and ...Thedeployment of the Internet of Things(IoT)with smart sensors has facilitated the emergence of fog computing as an important technology for delivering services to smart environments such as campuses,smart cities,and smart transportation systems.Fog computing tackles a range of challenges,including processing,storage,bandwidth,latency,and reliability,by locally distributing secure information through end nodes.Consisting of endpoints,fog nodes,and back-end cloud infrastructure,it provides advanced capabilities beyond traditional cloud computing.In smart environments,particularly within smart city transportation systems,the abundance of devices and nodes poses significant challenges related to power consumption and system reliability.To address the challenges of latency,energy consumption,and fault tolerance in these environments,this paper proposes a latency-aware,faulttolerant framework for resource scheduling and data management,referred to as the FORD framework,for smart cities in fog environments.This framework is designed to meet the demands of time-sensitive applications,such as those in smart transportation systems.The FORD framework incorporates latency-aware resource scheduling to optimize task execution in smart city environments,leveraging resources from both fog and cloud environments.Through simulation-based executions,tasks are allocated to the nearest available nodes with minimum latency.In the event of execution failure,a fault-tolerantmechanism is employed to ensure the successful completion of tasks.Upon successful execution,data is efficiently stored in the cloud data center,ensuring data integrity and reliability within the smart city ecosystem.展开更多
In vehicular fog computing(VFC),the resource transactions in the Internet of Vehicles(IoV)have become a novel resource management scheme that can improve system resource utilization and the quality of vehicle services...In vehicular fog computing(VFC),the resource transactions in the Internet of Vehicles(IoV)have become a novel resource management scheme that can improve system resource utilization and the quality of vehicle services.In this paper,in order to improve the security and fairness of resource transactions,we design a blockchain-based resource management scheme for VFC.First,we propose the concept of resource coin(RC)and develop a blockchain-based secure computing reource trading mechanism in terms of RC.As a node of the blockchain network,the roadside unit(RSU)participates in verifying the legitimacy of transactions and the creation of new blocks.Next,we propose a resource management scheme based on contract theory,encouraging parked vehicles to contribute computing resource so that RSU could complete proof of work(PoW)quickly,improve the success probability of block creation and get RC rewards.We use the gradient descent method to solve the computing resource utilization that can maximize the RC revenue of RSUs and vehicles during the block creation.Finally,the performance of this model is validated in simulation result and analysis.展开更多
Dispersed computing can link all devices with computing capabilities on a global scale to form a fully decentralized network,which can make full use of idle computing resources.Realizing the overall resource allocatio...Dispersed computing can link all devices with computing capabilities on a global scale to form a fully decentralized network,which can make full use of idle computing resources.Realizing the overall resource allocation of the dispersed computing system is a significant challenge.In detail,by jointly managing the task requests of external users and the resource allocation of the internal system to achieve dynamic balance,the efficient and stable operation of the system can be guaranteed.In this paper,we first propose a task-resource joint management model,which quantifies the dynamic transformation relationship between the resources consumed by task requests and the resources occupied by the system in dispersed computing.Secondly,to avoid downtime caused by an overload of resources,we introduce intelligent control into the task-resource joint management model.The existence and stability of the positive periodic solution of the model can be obtained by theoretical analysis,which means that the stable operation of dispersed computing can be guaranteed through the intelligent feedback control strategy.Additionally,to improve the system utilization,the task-resource joint management model with bi-directional intelligent control is further explored.Setting control thresholds for the two resources not only reverse restrains the system resource overload,but also carries out positive incentive control when a large number of idle resources appear.The existence and stability of the positive periodic solution of the model are proved theoretically,that is,the model effectively avoids the two extreme cases and ensure the efficient and stable operation of the system.Finally,numerical simulation verifies the correctness and validity of the theoretical results.展开更多
In a cloud environment, Virtual Machines (VMs) consolidation andresource provisioning are used to address the issues of workload fluctuations.VM consolidation aims to move the VMs from one host to another in order tor...In a cloud environment, Virtual Machines (VMs) consolidation andresource provisioning are used to address the issues of workload fluctuations.VM consolidation aims to move the VMs from one host to another in order toreduce the number of active hosts and save power. Whereas resource provisioningattempts to provide additional resource capacity to the VMs as needed in order tomeet Quality of Service (QoS) requirements. However, these techniques have aset of limitations in terms of the additional costs related to migration and scalingtime, and energy overhead that need further consideration. Therefore, this paperpresents a comprehensive literature review on the subject of dynamic resourcemanagement (i.e., VMs consolidation and resource provisioning) in cloud computing environments, along with an overall discussion of the closely relatedworks. The outcomes of this research can be used to enhance the developmentof predictive resource management techniques, by considering the awareness ofperformance variation, energy consumption and cost to efficiently manage thecloud resources.展开更多
In recent years,vehicular cloud computing(VCC)has gained vast attention for providing a variety of services by creating virtual machines(VMs).These VMs use the resources that are present in modern smart vehicles.Many ...In recent years,vehicular cloud computing(VCC)has gained vast attention for providing a variety of services by creating virtual machines(VMs).These VMs use the resources that are present in modern smart vehicles.Many studies reported that some of these VMs hosted on the vehicles are overloaded,whereas others are underloaded.As a circumstance,the energy consumption of overloaded vehicles is drastically increased.On the other hand,underloaded vehicles are also drawing considerable energy in the underutilized situation.Therefore,minimizing the energy consumption of the VMs that are hosted by both overloaded and underloaded is a challenging issue in the VCC environment.The proper and efcient utilization of the vehicle’s resources can reduce energy consumption signicantly.One of the solutions is to improve the resource utilization of underloaded vehicles by migrating the over-utilized VMs of overloaded vehicles.On the other hand,a large number of VM migrations can lead to wastage of energy and time,which ultimately degrades the performance of the VMs.This paper addresses the issues mentioned above by introducing a resource management algorithm,called resource utilization-aware VM migration(RU-VMM)algorithm,to distribute the loads among the overloaded and underloaded vehicles,such that energy consumption is minimized.RU-VMM monitors the trend of resource utilization to select the source and destination vehicles within a predetermined threshold for the process of VM migration.It ensures that any vehicles’resource utilization should not exceed the threshold before or after the migration.RU-VMM also tries to avoid unnecessary VM migrations between the vehicles.RU-VMM is extensively simulated and tested using nine datasets.The results are carried out using three performance metrics,namely number of nal source vehicles(nfsv),percentage of successful VM migrations(psvmm)and percentage of dropped VM migrations(pdvmm),and compared with threshold-based algorithm(i.e.,threshold)and cumulative sum(CUSUM)algorithm.The comparisons show that the RU-VMM algorithm performs better than the existing algorithms.RU-VMM algorithm improves 16.91%than the CUSUM algorithm and 71.59%than the threshold algorithm in terms of nfsv,and 20.62%and 275.34%than the CUSUM and threshold algorithms in terms of psvmm.展开更多
In mega-constellation Communication Systems, efficient routing algorithms and data transmission technologies are employed to ensure fast and reliable data transfer. However, the limited computational resources of sate...In mega-constellation Communication Systems, efficient routing algorithms and data transmission technologies are employed to ensure fast and reliable data transfer. However, the limited computational resources of satellites necessitate the use of edge computing to enhance secure communication.While edge computing reduces the burden on cloud computing, it introduces security and reliability challenges in open satellite communication channels. To address these challenges, we propose a blockchain architecture specifically designed for edge computing in mega-constellation communication systems. This architecture narrows down the consensus scope of the blockchain to meet the requirements of edge computing while ensuring comprehensive log storage across the network. Additionally, we introduce a reputation management mechanism for nodes within the blockchain, evaluating their trustworthiness, workload, and efficiency. Nodes with higher reputation scores are selected to participate in tasks and are appropriately incentivized. Simulation results demonstrate that our approach achieves a task result reliability of 95% while improving computational speed.展开更多
The satellite-terrestrial networks possess the ability to transcend geographical constraints inherent in traditional communication networks,enabling global coverage and offering users ubiquitous computing power suppor...The satellite-terrestrial networks possess the ability to transcend geographical constraints inherent in traditional communication networks,enabling global coverage and offering users ubiquitous computing power support,which is an important development direction of future communications.In this paper,we take into account a multi-scenario network model under the coverage of low earth orbit(LEO)satellite,which can provide computing resources to users in faraway areas to improve task processing efficiency.However,LEO satellites experience limitations in computing and communication resources and the channels are time-varying and complex,which makes the extraction of state information a daunting task.Therefore,we explore the dynamic resource management issue pertaining to joint computing,communication resource allocation and power control for multi-access edge computing(MEC).In order to tackle this formidable issue,we undertake the task of transforming the issue into a Markov decision process(MDP)problem and propose the self-attention based dynamic resource management(SABDRM)algorithm,which effectively extracts state information features to enhance the training process.Simulation results show that the proposed algorithm is capable of effectively reducing the long-term average delay and energy consumption of the tasks.展开更多
Federated learning has been explored as a promising solution for training machine learning models at the network edge,without sharing private user data.With limited resources at the edge,new solutions must be develope...Federated learning has been explored as a promising solution for training machine learning models at the network edge,without sharing private user data.With limited resources at the edge,new solutions must be developed to leverage the software and hardware resources as the existing solutions did not focus on resource management for network edge,specially for federated learning.In this paper,we describe the recent work on resource manage-ment at the edge and explore the challenges and future directions to allow the execution of federated learning at the edge.Problems such as the discovery of resources,deployment,load balancing,migration,and energy effi-ciency are discussed in the paper.展开更多
Edge computing is a cloud computing extension where physical compu-ters are installed closer to the device to minimize latency.The task of edge data cen-ters is to include a growing abundance of applications with a sm...Edge computing is a cloud computing extension where physical compu-ters are installed closer to the device to minimize latency.The task of edge data cen-ters is to include a growing abundance of applications with a small capability in comparison to conventional data centers.Under this framework,Federated Learning was suggested to offer distributed data training strategies by the coordination of many mobile devices for the training of a popular Artificial Intelligence(AI)model without actually revealing the underlying data,which is significantly enhanced in terms of privacy.Federated learning(FL)is a recently developed decentralized profound learning methodology,where customers train their localized neural network models independently using private data,and then combine a global model on the core server together.The models on the edge server use very little time since the edge server is highly calculated.But the amount of time it takes to download data from smartphone users on the edge server has a significant impact on the time it takes to complete a single cycle of FL operations.A machine learning strategic planning system that uses FL in conjunction to minimise model training time and total time utilisation,while recognising mobile appliance energy restrictions,is the focus of this study.To further speed up integration and reduce the amount of data,it implements an optimization agent for the establishment of optimal aggregation policy and asylum architecture with several employees’shared learners.The main solutions and lessons learnt along with the prospects are discussed.Experiments show that our method is superior in terms of the effective and elastic use of resources.展开更多
Risk management often plays an important role in decision making un-der uncertainty.In quantitative risk management,assessing and optimizing risk metrics requires eficient computing techniques and reliable theoretical...Risk management often plays an important role in decision making un-der uncertainty.In quantitative risk management,assessing and optimizing risk metrics requires eficient computing techniques and reliable theoretical guarantees.In this pa-per,we introduce several topics on quantitative risk management and review some of the recent studies and advancements on the topics.We consider several risk metrics and study decision models that involve the metrics,with a main focus on the related com-puting techniques and theoretical properties.We show that stochastic optimization,as a powerful tool,can be leveraged to effectively address these problems.展开更多
In a cloud environment,graphics processing units(GPUs)are the primary devices used for high-performance computation.They exploit flexible resource utilization,a key advantage of cloud environments.Multiple users share...In a cloud environment,graphics processing units(GPUs)are the primary devices used for high-performance computation.They exploit flexible resource utilization,a key advantage of cloud environments.Multiple users share GPUs,which serve as coprocessors of central processing units(CPUs)and are activated only if tasks demand GPU computation.In a container environment,where resources can be shared among multiple users,GPU utilization can be increased by minimizing idle time because the tasks of many users run on a single GPU.However,unlike CPUs and memory,GPUs cannot logically multiplex their resources.Additionally,GPU memory does not support over-utilization:when it runs out,tasks will fail.Therefore,it is necessary to regulate the order of execution of concurrently running GPU tasks to avoid such task failures and to ensure equitable GPU sharing among users.In this paper,we propose a GPU task execution order management technique that controls GPU usage via time-based containers.The technique seeks to ensure equal GPU time among users in a container environment to prevent task failures.In the meantime,we use a deferred processing method to prevent GPU memory shortages when GPU tasks are executed simultaneously and to determine the execution order based on the GPU usage time.As the order of GPU tasks cannot be externally adjusted arbitrarily once the task commences,the GPU task is indirectly paused by pausing the container.In addition,as container pause/unpause status is based on the information about the available GPU memory capacity,overuse of GPU memory can be prevented at the source.As a result,the strategy can prevent task failure and the GPU tasks can be experimentally processed in appropriate order.展开更多
Fine particulatematter(PM_(2.5))samples were collected in two neighboring cities,Beijing and Baoding,China.High-concentration events of PM_(2.5) in which the average mass concentration exceeded 75μg/m^(3) were freque...Fine particulatematter(PM_(2.5))samples were collected in two neighboring cities,Beijing and Baoding,China.High-concentration events of PM_(2.5) in which the average mass concentration exceeded 75μg/m^(3) were frequently observed during the heating season.Dispersion Normalized Positive Matrix Factorization was applied for the source apportionment of PM_(2.5) as minimize the dilution effects of meteorology and better reflect the source strengths in these two cities.Secondary nitrate had the highest contribution for Beijing(37.3%),and residential heating/biomass burning was the largest for Baoding(27.1%).Secondary nitrate,mobile,biomass burning,district heating,oil combustion,aged sea salt sources showed significant differences between the heating and non-heating seasons in Beijing for same period(2019.01.10–2019.08.22)(Mann-Whitney Rank Sum Test P<0.05).In case of Baoding,soil,residential heating/biomass burning,incinerator,coal combustion,oil combustion sources showed significant differences.The results of Pearson correlation analysis for the common sources between the two cities showed that long-range transported sources and some sources with seasonal patterns such as oil combustion and soil had high correlation coefficients.Conditional Bivariate Probability Function(CBPF)was used to identify the inflow directions for the sources,and joint-PSCF(Potential Source Contribution Function)was performed to determine the common potential source areas for sources affecting both cities.These models facilitated a more precise verification of city-specific influences on PM_(2.5) sources.The results of this study will aid in prioritizing air pollution mitigation strategies during the heating season and strengthening air quality management to reduce the impact of downwind neighboring cities.展开更多
As cloud computing continues to evolve,managing CPU resources effectively has become a critical task for ensuring system performance and efficiency.Traditional CPU resource management methods,such as static allocation...As cloud computing continues to evolve,managing CPU resources effectively has become a critical task for ensuring system performance and efficiency.Traditional CPU resource management methods,such as static allocation and manual optimization,are increasingly inadequate in handling dynamic,fluctuating workloads characteristic of modern cloud environments.This paper explores the use of Reinforcement Learning(RL)for adaptive CPU resource management,offering a dynamic,data-driven approach to optimizing resource allocation in real-time.Reinforcement learning,particularly Q-learning and Deep Q Networks(DQNs),enables cloud systems to autonomously adjust CPU resources based on workload demands,improving system efficiency and minimizing resource wastage.This paper discusses the key principles of reinforcement learning,its applications in CPU resource management,the benefits of its implementation,and the challenges that need to be addressed for broader adoption.Finally,the paper highlights future directions for integrating RL with other machine learning techniques and its potential impact on cloud infrastructure optimization.展开更多
One of the crucial elements that is directly tied to the quality of living organisms is the quality of the water.How-ever,water quality has been adversely affected by plastic pollution,a global environmental disaster ...One of the crucial elements that is directly tied to the quality of living organisms is the quality of the water.How-ever,water quality has been adversely affected by plastic pollution,a global environmental disaster that has an effect on aquatic life,wildlife,and human health.To prevent these effects,better monitoring,detection,characterisation,quanti-fication,and tracking of aquatic plastic pollution at regional and global scales is urgently needed.Remote sensing tech-nology is regarded as a useful technique,as it offers a promising new and less labour-intensive tool for the detection,quantification,and characterisation of aquatic plastic pollution.The study seeks to supplement to the body of scientific literature by compiling original data on the monitoring of plastic pollution in aquatic environments using remote sensing technology,which can function as a cost saving method for water pollution and risk management in developing nations.This article provides a profound analysis of plastic pollution,including its categories,sources,distribution,chemical properties,and potential risks.It also provides an in-depth review of remote sensing technologies,satellite-derived in-dices,and research trends related to their applicability.Additionally,the study clarifies the difficulties in using remote sensing technologies for aquatic plastic monitoring and practical ways to reduce aquatic plastic pollution.The study will improve the understanding of aquatic plastic pollution,health hazards,and the suitability of remote sensing technology for aquatic plastic contamination monitoring studies among researchers and interested parties.展开更多
With the rapid development of information technology,the scale of the network is expanding,and the complexity is increasing day by day.The traditional network management is facing great challenges.The emergence of sof...With the rapid development of information technology,the scale of the network is expanding,and the complexity is increasing day by day.The traditional network management is facing great challenges.The emergence of software-defined network(SDN)technology has brought revolutionary changes to modern network management.This paper aims to discuss the application and prospects of SDN technology in modern network management.Firstly,the basic principle and architecture of SDN are introduced,including the separation of control plane and data plane,centralized control and open programmable interface.Then,it analyzes the advantages of SDN technology in network management,such as simplifying network configuration,improving network flexibility,optimizing network resource utilization,and realizing fast fault recovery.The application examples of SDN in data center networks and WAN optimization management are analyzed.This paper also discusses the development status and trend of SDN in enterprise networks,including the integration of technologies such as cloud computing,big data,and artificial intelligence,the construction of an intelligent and automated network management platform,the improvement of network management efficiency and quality,and the openness and interoperability of network equipment.Finally,the advantages and challenges of SDN technology are summarized,and its future development direction is provided.展开更多
The Internet of Things(IoT)has emerged as an important future technology.IoT-Fog is a new computing paradigm that processes IoT data on servers close to the source of the data.In IoT-Fog computing,resource allocation ...The Internet of Things(IoT)has emerged as an important future technology.IoT-Fog is a new computing paradigm that processes IoT data on servers close to the source of the data.In IoT-Fog computing,resource allocation and independent task scheduling aim to deliver short response time services demanded by the IoT devices and performed by fog servers.The heterogeneity of the IoT-Fog resources and the huge amount of data that needs to be processed by the IoT-Fog tasks make scheduling fog computing tasks a challenging problem.This study proposes an Adaptive Firefly Algorithm(AFA)for dependent task scheduling in IoT-Fog computing.The proposed AFA is a modified version of the standard Firefly Algorithm(FA),considering the execution times of the submitted tasks,the impact of synchronization requirements,and the communication time between dependent tasks.As IoT-Fog computing depends mainly on distributed fog node servers that receive tasks in a dynamic manner,tackling the communications and synchronization issues between dependent tasks is becoming a challenging problem.The proposed AFA aims to address the dynamic nature of IoT-Fog computing environments.The proposed AFA mechanism considers a dynamic light absorption coefficient to control the decrease in attractiveness over iterations.The proposed AFA mechanism performance was benchmarked against the standard Firefly Algorithm(FA),Puma Optimizer(PO),Genetic Algorithm(GA),and Ant Colony Optimization(ACO)through simulations under light,typical,and heavy workload scenarios.In heavy workloads,the proposed AFA mechanism obtained the shortest average execution time,968.98 ms compared to 970.96,1352.87,1247.28,and 1773.62 of FA,PO,GA,and ACO,respectively.The simulation results demonstrate the proposed AFA’s ability to rapidly converge to optimal solutions,emphasizing its adaptability and efficiency in typical and heavy workloads.展开更多
文摘The rapid advent in artificial intelligence and big data has revolutionized the dynamic requirement in the demands of the computing resource for executing specific tasks in the cloud environment.The process of achieving autonomic resource management is identified to be a herculean task due to its huge distributed and heterogeneous environment.Moreover,the cloud network needs to provide autonomic resource management and deliver potential services to the clients by complying with the requirements of Quality-of-Service(QoS)without impacting the Service Level Agreements(SLAs).However,the existing autonomic cloud resource managing frameworks are not capable in handling the resources of the cloud with its dynamic requirements.In this paper,Coot Bird Behavior Model-based Workload Aware Autonomic Resource Management Scheme(CBBM-WARMS)is proposed for handling the dynamic requirements of cloud resources through the estimation of workload that need to be policed by the cloud environment.This CBBM-WARMS initially adopted the algorithm of adaptive density peak clustering for workloads clustering of the cloud.Then,it utilized the fuzzy logic during the process of workload scheduling for achieving the determining the availability of cloud resources.It further used CBBM for potential Virtual Machine(VM)deployment that attributes towards the provision of optimal resources.It is proposed with the capability of achieving optimal QoS with minimized time,energy consumption,SLA cost and SLA violation.The experimental validation of the proposed CBBMWARMS confirms minimized SLA cost of 19.21%and reduced SLA violation rate of 18.74%,better than the compared autonomic cloud resource managing frameworks.
基金funded by Researchers Supporting Project Number(RSPD2025R947)King Saud University,Riyadh,Saudi Arabia.
文摘Effective resource management in the Internet of Things and fog computing is essential for efficient and scalable networks.However,existing methods often fail in dynamic and high-demand environments,leading to resource bottlenecks and increased energy consumption.This study aims to address these limitations by proposing the Quantum Inspired Adaptive Resource Management(QIARM)model,which introduces novel algorithms inspired by quantum principles for enhanced resource allocation.QIARM employs a quantum superposition-inspired technique for multi-state resource representation and an adaptive learning component to adjust resources in real time dynamically.In addition,an energy-aware scheduling module minimizes power consumption by selecting optimal configurations based on energy metrics.The simulation was carried out in a 360-minute environment with eight distinct scenarios.This study introduces a novel quantum-inspired resource management framework that achieves up to 98%task offload success and reduces energy consumption by 20%,addressing critical challenges of scalability and efficiency in dynamic fog computing environments.
基金supported by the National Natural Science Foundation of China(61571149,62001139)the Initiation Fund for Postdoctoral Research in Heilongjiang Province(LBH-Q19098)the Natural Science Foundation of Heilongjiang Province(LH2020F0178).
文摘Fog computing has emerged as an important technology which can improve the performance of computation-intensive and latency-critical communication networks.Nevertheless,the fog computing Internet-of-Things(IoT)systems are susceptible to malicious eavesdropping attacks during the information transmission,and this issue has not been adequately addressed.In this paper,we propose a physical-layer secure fog computing IoT system model,which is able to improve the physical layer security of fog computing IoT networks against the malicious eavesdropping of multiple eavesdroppers.The secrecy rate of the proposed model is analyzed,and the quantum galaxy–based search algorithm(QGSA)is proposed to solve the hybrid task scheduling and resource management problem of the network.The computational complexity and convergence of the proposed algorithm are analyzed.Simulation results validate the efficiency of the proposed model and reveal the influence of various environmental parameters on fog computing IoT networks.Moreover,the simulation results demonstrate that the proposed hybrid task scheduling and resource management scheme can effectively enhance secrecy performance across different communication scenarios.
基金supported by the National Key Research and Development Program of China(No.2021YFB2900504).
文摘In 6th Generation Mobile Networks(6G),the Space-Integrated-Ground(SIG)Radio Access Network(RAN)promises seamless coverage and exceptionally high Quality of Service(QoS)for diverse services.However,achieving this necessitates effective management of computation and wireless resources tailored to the requirements of various services.The heterogeneity of computation resources and interference among shared wireless resources pose significant coordination and management challenges.To solve these problems,this work provides an overview of multi-dimensional resource management in 6G SIG RAN,including computation and wireless resource.Firstly it provides with a review of current investigations on computation and wireless resource management and an analysis of existing deficiencies and challenges.Then focusing on the provided challenges,the work proposes an MEC-based computation resource management scheme and a mixed numerology-based wireless resource management scheme.Furthermore,it outlines promising future technologies,including joint model-driven and data-driven resource management technology,and blockchain-based resource management technology within the 6G SIG network.The work also highlights remaining challenges,such as reducing communication costs associated with unstable ground-to-satellite links and overcoming barriers posed by spectrum isolation.Overall,this comprehensive approach aims to pave the way for efficient and effective resource management in future 6G networks.
基金supported by the Deanship of Scientific Research and Graduate Studies at King Khalid University under research grant number(R.G.P.2/93/45).
文摘Thedeployment of the Internet of Things(IoT)with smart sensors has facilitated the emergence of fog computing as an important technology for delivering services to smart environments such as campuses,smart cities,and smart transportation systems.Fog computing tackles a range of challenges,including processing,storage,bandwidth,latency,and reliability,by locally distributing secure information through end nodes.Consisting of endpoints,fog nodes,and back-end cloud infrastructure,it provides advanced capabilities beyond traditional cloud computing.In smart environments,particularly within smart city transportation systems,the abundance of devices and nodes poses significant challenges related to power consumption and system reliability.To address the challenges of latency,energy consumption,and fault tolerance in these environments,this paper proposes a latency-aware,faulttolerant framework for resource scheduling and data management,referred to as the FORD framework,for smart cities in fog environments.This framework is designed to meet the demands of time-sensitive applications,such as those in smart transportation systems.The FORD framework incorporates latency-aware resource scheduling to optimize task execution in smart city environments,leveraging resources from both fog and cloud environments.Through simulation-based executions,tasks are allocated to the nearest available nodes with minimum latency.In the event of execution failure,a fault-tolerantmechanism is employed to ensure the successful completion of tasks.Upon successful execution,data is efficiently stored in the cloud data center,ensuring data integrity and reliability within the smart city ecosystem.
基金This work was supported in part by the National Natural Science Foundation of China(U2001213,61971191 and 61661021)in part by the Beijing Natural Science Foundation under Grant L182018 and L201011,in part by National Key Research and Development Project(2020YFB1807204)+1 种基金in part by the open project of Shanghai Institute of Microsystem and Information Technology(20190910)in part by the Key project of Natural Science Foundation of Jiangxi Province(20202ACBL202006).
文摘In vehicular fog computing(VFC),the resource transactions in the Internet of Vehicles(IoV)have become a novel resource management scheme that can improve system resource utilization and the quality of vehicle services.In this paper,in order to improve the security and fairness of resource transactions,we design a blockchain-based resource management scheme for VFC.First,we propose the concept of resource coin(RC)and develop a blockchain-based secure computing reource trading mechanism in terms of RC.As a node of the blockchain network,the roadside unit(RSU)participates in verifying the legitimacy of transactions and the creation of new blocks.Next,we propose a resource management scheme based on contract theory,encouraging parked vehicles to contribute computing resource so that RSU could complete proof of work(PoW)quickly,improve the success probability of block creation and get RC rewards.We use the gradient descent method to solve the computing resource utilization that can maximize the RC revenue of RSUs and vehicles during the block creation.Finally,the performance of this model is validated in simulation result and analysis.
基金supported in part by the National Science Foundation Project of P.R.China(No.61931001)the Scientific and Technological Innovation Foundation of Foshan,USTB(No.BK20AF003)。
文摘Dispersed computing can link all devices with computing capabilities on a global scale to form a fully decentralized network,which can make full use of idle computing resources.Realizing the overall resource allocation of the dispersed computing system is a significant challenge.In detail,by jointly managing the task requests of external users and the resource allocation of the internal system to achieve dynamic balance,the efficient and stable operation of the system can be guaranteed.In this paper,we first propose a task-resource joint management model,which quantifies the dynamic transformation relationship between the resources consumed by task requests and the resources occupied by the system in dispersed computing.Secondly,to avoid downtime caused by an overload of resources,we introduce intelligent control into the task-resource joint management model.The existence and stability of the positive periodic solution of the model can be obtained by theoretical analysis,which means that the stable operation of dispersed computing can be guaranteed through the intelligent feedback control strategy.Additionally,to improve the system utilization,the task-resource joint management model with bi-directional intelligent control is further explored.Setting control thresholds for the two resources not only reverse restrains the system resource overload,but also carries out positive incentive control when a large number of idle resources appear.The existence and stability of the positive periodic solution of the model are proved theoretically,that is,the model effectively avoids the two extreme cases and ensure the efficient and stable operation of the system.Finally,numerical simulation verifies the correctness and validity of the theoretical results.
文摘In a cloud environment, Virtual Machines (VMs) consolidation andresource provisioning are used to address the issues of workload fluctuations.VM consolidation aims to move the VMs from one host to another in order toreduce the number of active hosts and save power. Whereas resource provisioningattempts to provide additional resource capacity to the VMs as needed in order tomeet Quality of Service (QoS) requirements. However, these techniques have aset of limitations in terms of the additional costs related to migration and scalingtime, and energy overhead that need further consideration. Therefore, this paperpresents a comprehensive literature review on the subject of dynamic resourcemanagement (i.e., VMs consolidation and resource provisioning) in cloud computing environments, along with an overall discussion of the closely relatedworks. The outcomes of this research can be used to enhance the developmentof predictive resource management techniques, by considering the awareness ofperformance variation, energy consumption and cost to efficiently manage thecloud resources.
文摘In recent years,vehicular cloud computing(VCC)has gained vast attention for providing a variety of services by creating virtual machines(VMs).These VMs use the resources that are present in modern smart vehicles.Many studies reported that some of these VMs hosted on the vehicles are overloaded,whereas others are underloaded.As a circumstance,the energy consumption of overloaded vehicles is drastically increased.On the other hand,underloaded vehicles are also drawing considerable energy in the underutilized situation.Therefore,minimizing the energy consumption of the VMs that are hosted by both overloaded and underloaded is a challenging issue in the VCC environment.The proper and efcient utilization of the vehicle’s resources can reduce energy consumption signicantly.One of the solutions is to improve the resource utilization of underloaded vehicles by migrating the over-utilized VMs of overloaded vehicles.On the other hand,a large number of VM migrations can lead to wastage of energy and time,which ultimately degrades the performance of the VMs.This paper addresses the issues mentioned above by introducing a resource management algorithm,called resource utilization-aware VM migration(RU-VMM)algorithm,to distribute the loads among the overloaded and underloaded vehicles,such that energy consumption is minimized.RU-VMM monitors the trend of resource utilization to select the source and destination vehicles within a predetermined threshold for the process of VM migration.It ensures that any vehicles’resource utilization should not exceed the threshold before or after the migration.RU-VMM also tries to avoid unnecessary VM migrations between the vehicles.RU-VMM is extensively simulated and tested using nine datasets.The results are carried out using three performance metrics,namely number of nal source vehicles(nfsv),percentage of successful VM migrations(psvmm)and percentage of dropped VM migrations(pdvmm),and compared with threshold-based algorithm(i.e.,threshold)and cumulative sum(CUSUM)algorithm.The comparisons show that the RU-VMM algorithm performs better than the existing algorithms.RU-VMM algorithm improves 16.91%than the CUSUM algorithm and 71.59%than the threshold algorithm in terms of nfsv,and 20.62%and 275.34%than the CUSUM and threshold algorithms in terms of psvmm.
基金supported in part by the National Natural Science Foundation of China under Grant No.U2268204,62172061 and 61871422National Key R&D Program of China under Grant No.2020YFB1711800 and 2020YFB1707900+2 种基金the Science and Technology Project of Sichuan Province under Grant No.2023ZHCG0014,2023ZHCG0011,2022YFG0155,2022YFG0157,2021GFW019,2021YFG0152,2021YFG0025,2020YFG0322Central Universities of Southwest Minzu University under Grant No.ZYN2022032,2023NYXXS034the State Scholarship Fund of the China Scholarship Council under Grant No.202008510081。
文摘In mega-constellation Communication Systems, efficient routing algorithms and data transmission technologies are employed to ensure fast and reliable data transfer. However, the limited computational resources of satellites necessitate the use of edge computing to enhance secure communication.While edge computing reduces the burden on cloud computing, it introduces security and reliability challenges in open satellite communication channels. To address these challenges, we propose a blockchain architecture specifically designed for edge computing in mega-constellation communication systems. This architecture narrows down the consensus scope of the blockchain to meet the requirements of edge computing while ensuring comprehensive log storage across the network. Additionally, we introduce a reputation management mechanism for nodes within the blockchain, evaluating their trustworthiness, workload, and efficiency. Nodes with higher reputation scores are selected to participate in tasks and are appropriately incentivized. Simulation results demonstrate that our approach achieves a task result reliability of 95% while improving computational speed.
基金supported by the National Key Research and Development Plan(No.2022YFB2902701)the key Natural Science Foundation of Shenzhen(No.JCYJ20220818102209020).
文摘The satellite-terrestrial networks possess the ability to transcend geographical constraints inherent in traditional communication networks,enabling global coverage and offering users ubiquitous computing power support,which is an important development direction of future communications.In this paper,we take into account a multi-scenario network model under the coverage of low earth orbit(LEO)satellite,which can provide computing resources to users in faraway areas to improve task processing efficiency.However,LEO satellites experience limitations in computing and communication resources and the channels are time-varying and complex,which makes the extraction of state information a daunting task.Therefore,we explore the dynamic resource management issue pertaining to joint computing,communication resource allocation and power control for multi-access edge computing(MEC).In order to tackle this formidable issue,we undertake the task of transforming the issue into a Markov decision process(MDP)problem and propose the self-attention based dynamic resource management(SABDRM)algorithm,which effectively extracts state information features to enhance the training process.Simulation results show that the proposed algorithm is capable of effectively reducing the long-term average delay and energy consumption of the tasks.
基金supported by CAPES,CNPq,and grant 15/24494-8,Sao Paulo Research Foundation(FAPESP).
文摘Federated learning has been explored as a promising solution for training machine learning models at the network edge,without sharing private user data.With limited resources at the edge,new solutions must be developed to leverage the software and hardware resources as the existing solutions did not focus on resource management for network edge,specially for federated learning.In this paper,we describe the recent work on resource manage-ment at the edge and explore the challenges and future directions to allow the execution of federated learning at the edge.Problems such as the discovery of resources,deployment,load balancing,migration,and energy effi-ciency are discussed in the paper.
文摘Edge computing is a cloud computing extension where physical compu-ters are installed closer to the device to minimize latency.The task of edge data cen-ters is to include a growing abundance of applications with a small capability in comparison to conventional data centers.Under this framework,Federated Learning was suggested to offer distributed data training strategies by the coordination of many mobile devices for the training of a popular Artificial Intelligence(AI)model without actually revealing the underlying data,which is significantly enhanced in terms of privacy.Federated learning(FL)is a recently developed decentralized profound learning methodology,where customers train their localized neural network models independently using private data,and then combine a global model on the core server together.The models on the edge server use very little time since the edge server is highly calculated.But the amount of time it takes to download data from smartphone users on the edge server has a significant impact on the time it takes to complete a single cycle of FL operations.A machine learning strategic planning system that uses FL in conjunction to minimise model training time and total time utilisation,while recognising mobile appliance energy restrictions,is the focus of this study.To further speed up integration and reduce the amount of data,it implements an optimization agent for the establishment of optimal aggregation policy and asylum architecture with several employees’shared learners.The main solutions and lessons learnt along with the prospects are discussed.Experiments show that our method is superior in terms of the effective and elastic use of resources.
文摘Risk management often plays an important role in decision making un-der uncertainty.In quantitative risk management,assessing and optimizing risk metrics requires eficient computing techniques and reliable theoretical guarantees.In this pa-per,we introduce several topics on quantitative risk management and review some of the recent studies and advancements on the topics.We consider several risk metrics and study decision models that involve the metrics,with a main focus on the related com-puting techniques and theoretical properties.We show that stochastic optimization,as a powerful tool,can be leveraged to effectively address these problems.
基金supported by“Regional Innovation Strategy(RIS)”through the National Research Foundation of Korea(NRF)funded by the Ministry of Education(MOE)(2023RIS-009).
文摘In a cloud environment,graphics processing units(GPUs)are the primary devices used for high-performance computation.They exploit flexible resource utilization,a key advantage of cloud environments.Multiple users share GPUs,which serve as coprocessors of central processing units(CPUs)and are activated only if tasks demand GPU computation.In a container environment,where resources can be shared among multiple users,GPU utilization can be increased by minimizing idle time because the tasks of many users run on a single GPU.However,unlike CPUs and memory,GPUs cannot logically multiplex their resources.Additionally,GPU memory does not support over-utilization:when it runs out,tasks will fail.Therefore,it is necessary to regulate the order of execution of concurrently running GPU tasks to avoid such task failures and to ensure equitable GPU sharing among users.In this paper,we propose a GPU task execution order management technique that controls GPU usage via time-based containers.The technique seeks to ensure equal GPU time among users in a container environment to prevent task failures.In the meantime,we use a deferred processing method to prevent GPU memory shortages when GPU tasks are executed simultaneously and to determine the execution order based on the GPU usage time.As the order of GPU tasks cannot be externally adjusted arbitrarily once the task commences,the GPU task is indirectly paused by pausing the container.In addition,as container pause/unpause status is based on the information about the available GPU memory capacity,overuse of GPU memory can be prevented at the source.As a result,the strategy can prevent task failure and the GPU tasks can be experimentally processed in appropriate order.
基金supported by the National Institute of Environmental Research(NIER)funded by the Ministry of Environment(No.NIER-2019-04-02-039)supported by Particulate Matter Management Specialized Graduate Program through the Korea Environmental Industry&Technology Institute(KEITI)funded by the Ministry of Environment(MOE).
文摘Fine particulatematter(PM_(2.5))samples were collected in two neighboring cities,Beijing and Baoding,China.High-concentration events of PM_(2.5) in which the average mass concentration exceeded 75μg/m^(3) were frequently observed during the heating season.Dispersion Normalized Positive Matrix Factorization was applied for the source apportionment of PM_(2.5) as minimize the dilution effects of meteorology and better reflect the source strengths in these two cities.Secondary nitrate had the highest contribution for Beijing(37.3%),and residential heating/biomass burning was the largest for Baoding(27.1%).Secondary nitrate,mobile,biomass burning,district heating,oil combustion,aged sea salt sources showed significant differences between the heating and non-heating seasons in Beijing for same period(2019.01.10–2019.08.22)(Mann-Whitney Rank Sum Test P<0.05).In case of Baoding,soil,residential heating/biomass burning,incinerator,coal combustion,oil combustion sources showed significant differences.The results of Pearson correlation analysis for the common sources between the two cities showed that long-range transported sources and some sources with seasonal patterns such as oil combustion and soil had high correlation coefficients.Conditional Bivariate Probability Function(CBPF)was used to identify the inflow directions for the sources,and joint-PSCF(Potential Source Contribution Function)was performed to determine the common potential source areas for sources affecting both cities.These models facilitated a more precise verification of city-specific influences on PM_(2.5) sources.The results of this study will aid in prioritizing air pollution mitigation strategies during the heating season and strengthening air quality management to reduce the impact of downwind neighboring cities.
文摘As cloud computing continues to evolve,managing CPU resources effectively has become a critical task for ensuring system performance and efficiency.Traditional CPU resource management methods,such as static allocation and manual optimization,are increasingly inadequate in handling dynamic,fluctuating workloads characteristic of modern cloud environments.This paper explores the use of Reinforcement Learning(RL)for adaptive CPU resource management,offering a dynamic,data-driven approach to optimizing resource allocation in real-time.Reinforcement learning,particularly Q-learning and Deep Q Networks(DQNs),enables cloud systems to autonomously adjust CPU resources based on workload demands,improving system efficiency and minimizing resource wastage.This paper discusses the key principles of reinforcement learning,its applications in CPU resource management,the benefits of its implementation,and the challenges that need to be addressed for broader adoption.Finally,the paper highlights future directions for integrating RL with other machine learning techniques and its potential impact on cloud infrastructure optimization.
文摘One of the crucial elements that is directly tied to the quality of living organisms is the quality of the water.How-ever,water quality has been adversely affected by plastic pollution,a global environmental disaster that has an effect on aquatic life,wildlife,and human health.To prevent these effects,better monitoring,detection,characterisation,quanti-fication,and tracking of aquatic plastic pollution at regional and global scales is urgently needed.Remote sensing tech-nology is regarded as a useful technique,as it offers a promising new and less labour-intensive tool for the detection,quantification,and characterisation of aquatic plastic pollution.The study seeks to supplement to the body of scientific literature by compiling original data on the monitoring of plastic pollution in aquatic environments using remote sensing technology,which can function as a cost saving method for water pollution and risk management in developing nations.This article provides a profound analysis of plastic pollution,including its categories,sources,distribution,chemical properties,and potential risks.It also provides an in-depth review of remote sensing technologies,satellite-derived in-dices,and research trends related to their applicability.Additionally,the study clarifies the difficulties in using remote sensing technologies for aquatic plastic monitoring and practical ways to reduce aquatic plastic pollution.The study will improve the understanding of aquatic plastic pollution,health hazards,and the suitability of remote sensing technology for aquatic plastic contamination monitoring studies among researchers and interested parties.
文摘With the rapid development of information technology,the scale of the network is expanding,and the complexity is increasing day by day.The traditional network management is facing great challenges.The emergence of software-defined network(SDN)technology has brought revolutionary changes to modern network management.This paper aims to discuss the application and prospects of SDN technology in modern network management.Firstly,the basic principle and architecture of SDN are introduced,including the separation of control plane and data plane,centralized control and open programmable interface.Then,it analyzes the advantages of SDN technology in network management,such as simplifying network configuration,improving network flexibility,optimizing network resource utilization,and realizing fast fault recovery.The application examples of SDN in data center networks and WAN optimization management are analyzed.This paper also discusses the development status and trend of SDN in enterprise networks,including the integration of technologies such as cloud computing,big data,and artificial intelligence,the construction of an intelligent and automated network management platform,the improvement of network management efficiency and quality,and the openness and interoperability of network equipment.Finally,the advantages and challenges of SDN technology are summarized,and its future development direction is provided.
基金the Deanship of Graduate Studies and Scientific Research at Najran University for funding this work under the Easy Funding Program grant code(NU/EFP/SERC/13/166).
文摘The Internet of Things(IoT)has emerged as an important future technology.IoT-Fog is a new computing paradigm that processes IoT data on servers close to the source of the data.In IoT-Fog computing,resource allocation and independent task scheduling aim to deliver short response time services demanded by the IoT devices and performed by fog servers.The heterogeneity of the IoT-Fog resources and the huge amount of data that needs to be processed by the IoT-Fog tasks make scheduling fog computing tasks a challenging problem.This study proposes an Adaptive Firefly Algorithm(AFA)for dependent task scheduling in IoT-Fog computing.The proposed AFA is a modified version of the standard Firefly Algorithm(FA),considering the execution times of the submitted tasks,the impact of synchronization requirements,and the communication time between dependent tasks.As IoT-Fog computing depends mainly on distributed fog node servers that receive tasks in a dynamic manner,tackling the communications and synchronization issues between dependent tasks is becoming a challenging problem.The proposed AFA aims to address the dynamic nature of IoT-Fog computing environments.The proposed AFA mechanism considers a dynamic light absorption coefficient to control the decrease in attractiveness over iterations.The proposed AFA mechanism performance was benchmarked against the standard Firefly Algorithm(FA),Puma Optimizer(PO),Genetic Algorithm(GA),and Ant Colony Optimization(ACO)through simulations under light,typical,and heavy workload scenarios.In heavy workloads,the proposed AFA mechanism obtained the shortest average execution time,968.98 ms compared to 970.96,1352.87,1247.28,and 1773.62 of FA,PO,GA,and ACO,respectively.The simulation results demonstrate the proposed AFA’s ability to rapidly converge to optimal solutions,emphasizing its adaptability and efficiency in typical and heavy workloads.