Marine container terminals (MCTs) play a key role in the marine intelligent transportation system and the international logistics system, and the efficiency of resource scheduling significantly influences their operation performance. To solve the practical resource scheduling problem (RSP) in MCT efficiently, this paper contributes to both the problem model and the algorithm design. Firstly, in the problem model, different from most existing studies that only consider scheduling part of the resources in MCT, we propose a unified mathematical model for formulating an integrated RSP. The new integrated RSP model allocates and schedules multiple MCT resources simultaneously, taking total cost minimization as the objective. Secondly, in the algorithm design, a pre-selection-based ant colony system (PACS) approach is proposed based on a graphic-structure solution representation and a pre-selection strategy. On the one hand, as the RSP can be formulated as a shortest-path problem on a directed complete graph, the graphic structure is used to encode solutions under the multiple constraints and factors of the RSP, which effectively avoids generating infeasible solutions. On the other hand, the pre-selection strategy reduces the computational burden of PACS and quickly obtains higher-quality solutions. To evaluate the performance of the proposed PACS in solving the new integrated RSP model, a set of test cases of different sizes is conducted. Experimental results and comparisons show the effectiveness and efficiency of the PACS algorithm, which significantly outperforms other state-of-the-art algorithms.
Funding: Supported in part by the National Key Research and Development Program of China under Grant 2022YFB3305303; in part by the National Natural Science Foundation of China (NSFC) under Grant 62106055; in part by the Guangdong Natural Science Foundation under Grant 2022A1515011825; and in part by the Guangzhou Science and Technology Planning Project under Grants 2023A04J0388 and 2023A03J0662.
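As a rough illustration of the construction step described above (the RSP as a shortest-path walk on a directed complete graph), the following sketch shows one ant colony system move with a simple pre-selection that scores only the k heuristically best candidates; tau, eta, q0, beta, and k are illustrative placeholders, not the paper's actual design.

```python
import numpy as np

rng = np.random.default_rng(0)

def acs_step(current, unvisited, tau, eta, q0=0.9, beta=2.0, k=5):
    """One ACS construction move with pre-selection: only the k candidates with
    the best heuristic value are scored by pheromone * heuristic^beta."""
    cand = sorted(unvisited, key=lambda j: eta[current, j], reverse=True)[:k]
    scores = np.array([tau[current, j] * eta[current, j] ** beta for j in cand])
    if rng.random() < q0:                        # exploitation: greedy pick
        return cand[int(np.argmax(scores))]
    p = scores / scores.sum()                    # biased exploration
    return cand[int(rng.choice(len(cand), p=p))]
```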
The communication resource scheduling problem of a satellite network contains the user-satellite association problem, the user-beam matching problem, and the beam power allocation problem. These optimization problems contain different types of variables, and the multi-type variables and three coupled subproblems make the cooperative scheduling problem complex. In this paper, we propose a mixed vector encoding heuristic algorithm (MVEHA) to optimize the joint resource allocation problem of a multi-beam satellite network. Specifically, we use a 0-1 encoding vector to represent the user-satellite association scheme, a continuous vector with priorities to denote the user-beam matching scheme, and a normalization vector to encode the beam power allocation scheme. Based on this mixed vector encoding, we design two optimization operators to guide the search direction of the population. Simulation experiments show that, compared to conventional optimization algorithms, MVEHA achieves better solution performance and robustness in solving the communication resource allocation problem of the satellite network.
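To make the mixed encoding concrete, here is a minimal decoding sketch under assumed conventions (bits for association, a ranking of continuous priorities for matching, absolute-value normalization for power); the paper's exact decoding rules are not specified in the abstract.

```python
import numpy as np

def decode(assoc_bits, match_priority, power_raw, n_beams):
    """Decode one mixed chromosome: a 0-1 user-satellite association vector,
    a continuous priority vector ranked to pick which users occupy the beams,
    and a raw power vector normalized onto the total power budget."""
    association = np.asarray(assoc_bits, dtype=int)
    matched_users = np.argsort(-np.asarray(match_priority))[:n_beams]
    power = np.abs(np.asarray(power_raw, dtype=float))
    power = power / power.sum()                  # beam powers sum to the budget share
    return association, matched_users, power
```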
The deployment of the Internet of Things (IoT) with smart sensors has facilitated the emergence of fog computing as an important technology for delivering services to smart environments such as campuses, smart cities, and smart transportation systems. Fog computing tackles a range of challenges, including processing, storage, bandwidth, latency, and reliability, by locally distributing secure information through end nodes. Consisting of endpoints, fog nodes, and back-end cloud infrastructure, it provides advanced capabilities beyond traditional cloud computing. In smart environments, particularly within smart city transportation systems, the abundance of devices and nodes poses significant challenges related to power consumption and system reliability. To address the challenges of latency, energy consumption, and fault tolerance in these environments, this paper proposes a latency-aware, fault-tolerant framework for resource scheduling and data management, referred to as the FORD framework, for smart cities in fog environments. This framework is designed to meet the demands of time-sensitive applications, such as those in smart transportation systems. The FORD framework incorporates latency-aware resource scheduling to optimize task execution in smart city environments, leveraging resources from both fog and cloud environments. Through simulation-based executions, tasks are allocated to the nearest available nodes with minimum latency. In the event of execution failure, a fault-tolerant mechanism is employed to ensure the successful completion of tasks. Upon successful execution, data is efficiently stored in the cloud data center, ensuring data integrity and reliability within the smart city ecosystem.
Funding: Supported by the Deanship of Scientific Research and Graduate Studies at King Khalid University under research grant number R.G.P.2/93/45.
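A minimal sketch of the allocate-then-retry behavior the abstract describes, assuming hypothetical Node objects with a latency estimate, an availability flag, and a simulated 10% failure rate; the FORD framework's real policies are richer than this.

```python
from dataclasses import dataclass
import random

@dataclass
class Node:
    name: str
    base_latency: float
    available: bool = True

    def latency(self, task):
        return self.base_latency + task["size"] / 100.0

    def execute(self, task):
        return random.random() > 0.1          # simulated 10% execution failure

def schedule(task, nodes, max_retries=2):
    """Send the task to the minimum-latency available node; on failure fall
    back to the next-best node (simple fault tolerance)."""
    ranked = sorted((n for n in nodes if n.available), key=lambda n: n.latency(task))
    for node in ranked[: max_retries + 1]:
        if node.execute(task):
            return node.name
    raise RuntimeError("task failed on all candidate nodes")

print(schedule({"size": 40}, [Node("fog-1", 2.0), Node("cloud-1", 20.0)]))
```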
This paper introduces a quantum-enhanced edge computing framework that synergizes quantum-inspired algorithms with advanced machine learning techniques to optimize real-time task offloading in edge computing environments. This approach not only significantly improves the system's real-time responsiveness and resource utilization efficiency but also addresses critical challenges in Internet of Things (IoT) ecosystems, such as high demand variability, resource allocation uncertainties, and data privacy concerns, through practical solutions. Initially, the framework employs an adaptive adjustment mechanism to dynamically manage task and resource states, complemented by online learning models for precise predictive analytics. Secondly, it accelerates the search for optimal solutions using Grover's algorithm while efficiently evaluating complex constraints through multi-controlled Toffoli gates, thereby markedly enhancing the practicality and robustness of the proposed solution. Furthermore, to bolster the system's adaptability and response speed in dynamic environments, an efficient monitoring mechanism and an event-driven architecture are incorporated, ensuring timely responses to environmental changes and maintaining synchronization between internal and external systems. Experimental evaluations confirm that the proposed algorithm demonstrates superior performance in complex application scenarios, characterized by faster convergence, enhanced stability, and superior data privacy protection, alongside notable reductions in latency and optimized resource utilization. This research paves the way for transformative advancements in edge computing and IoT technologies, driving smart edge computing towards unprecedented levels of intelligence and automation.
Funding: Supported in part by the National Natural Science Foundation of China (Nos. 62071481 and 61501471).
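Grover's algorithm itself runs on quantum hardware, but its amplitude dynamics can be simulated classically; the sketch below reproduces the oracle-plus-diffusion iteration on a state vector and is only a didactic stand-in for the paper's quantum-enhanced search.

```python
import numpy as np

def grover_probs(n_qubits, marked):
    """Classical simulation of Grover's search: measurement probabilities after
    the optimal number of oracle + diffusion iterations."""
    N = 2 ** n_qubits
    amp = np.full(N, 1.0 / np.sqrt(N))          # uniform superposition
    sign = np.ones(N)
    sign[list(marked)] = -1.0                   # oracle: phase-flip solutions
    for _ in range(int(np.pi / 4 * np.sqrt(N / len(marked)))):
        amp *= sign                             # oracle step
        amp = 2.0 * amp.mean() - amp            # diffusion: inversion about the mean
    return amp ** 2                             # measurement probabilities

print(grover_probs(4, marked=[3]).argmax())     # -> 3 with high probability
```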
The rapid growth of low-Earth-orbit satellites has injected new vitality into future service provisioning. However, given the inherent volatility of network traffic, ensuring differentiated quality of service in highly dynamic networks remains a significant challenge. In this paper, we propose an online learning-based resource scheduling scheme for satellite-terrestrial integrated networks (STINs) aimed at providing on-demand services with minimal resource utilization. Specifically, we focus on: (i) accurately characterizing the STIN channel, (ii) predicting resource demand with uncertainty guarantees, and (iii) implementing mixed-timescale resource scheduling. For the STIN channel, we adopt the 3rd Generation Partnership Project channel and antenna models for non-terrestrial networks. We employ a one-dimensional convolution and attention-assisted long short-term memory architecture for average demand prediction, while introducing conformal prediction to mitigate uncertainties arising from burst traffic. Additionally, we develop a dual-timescale optimization framework that includes resource reservation on a larger timescale and resource adjustment on a smaller timescale. We also design an online resource scheduling algorithm based on online convex optimization to guarantee long-term performance with limited knowledge of time-varying network information. Based on the Network Simulator 3 implementation of the STIN channel under our high-fidelity satellite Internet simulation platform, numerical results using a real-world dataset demonstrate the accuracy and efficiency of the prediction algorithms and the online resource scheduling scheme.
Funding: Supported in part by the Major Program of the National Natural Science Foundation of China (62495021 and 62495020).
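The conformal prediction step can be illustrated with a split-conformal sketch: compute nonconformity scores on a calibration window and widen the point forecast by their conservative quantile. The alpha level and windowing below are assumptions, not the paper's configuration.

```python
import numpy as np

def conformal_interval(cal_true, cal_pred, new_pred, alpha=0.1):
    """Split conformal prediction: wrap a point demand forecast in an interval
    that covers the true demand with probability at least 1 - alpha."""
    scores = np.sort(np.abs(np.asarray(cal_true) - np.asarray(cal_pred)))
    n = len(scores)
    rank = min(n - 1, int(np.ceil((n + 1) * (1 - alpha))) - 1)  # conservative quantile
    q = scores[rank]
    return new_pred - q, new_pred + q

print(conformal_interval([10, 12, 11, 13, 9, 14, 10, 12, 11, 13],
                         [11, 11, 11, 12, 10, 13, 11, 11, 12, 12], 12.0))
```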
Dear Editor,
This letter presents a joint probabilistic scheduling and resource allocation method (PSRA) for 5G-based wireless networked control systems (WNCSs). As a control-aware optimization method, PSRA minimizes the linear quadratic Gaussian (LQG) control cost of WNCSs by optimizing the activation probability of subsystems, the number of uplink repetitions, and the durations of the uplink and downlink phases. Simulation results show that PSRA achieves smaller LQG control costs than existing works.
Funding: Supported by the Liaoning Revitalization Talents Program (XLYC2203148).
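For reference, the LQG control cost being minimized has a closed form in the textbook full-information case: for x+ = Ax + Bu + w with w ~ N(0, W), the steady-state cost rate is tr(PW), where P solves the discrete algebraic Riccati equation. The sketch below evaluates that quantity and is not the letter's 5G scheduling model.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

def lqg_cost_rate(A, B, Q, R, W):
    """Steady-state average LQG cost tr(P @ W) under the optimal state-feedback
    law (textbook full-information case, process noise covariance W)."""
    P = solve_discrete_are(A, B, Q, R)
    return float(np.trace(P @ W))

# toy scalar plant: the cost rate grows with process noise, as a control-aware
# scheduler's objective would
print(lqg_cost_rate(np.array([[1.1]]), np.array([[1.0]]),
                    np.eye(1), np.eye(1), 0.2 * np.eye(1)))
```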
Fog computing has emerged as an important technology that can improve the performance of computation-intensive and latency-critical communication networks. Nevertheless, fog computing Internet-of-Things (IoT) systems are susceptible to malicious eavesdropping attacks during information transmission, and this issue has not been adequately addressed. In this paper, we propose a physical-layer secure fog computing IoT system model, which is able to improve the physical-layer security of fog computing IoT networks against the malicious eavesdropping of multiple eavesdroppers. The secrecy rate of the proposed model is analyzed, and a quantum galaxy-based search algorithm (QGSA) is proposed to solve the hybrid task scheduling and resource management problem of the network. The computational complexity and convergence of the proposed algorithm are analyzed. Simulation results validate the efficiency of the proposed model and reveal the influence of various environmental parameters on fog computing IoT networks. Moreover, the simulation results demonstrate that the proposed hybrid task scheduling and resource management scheme can effectively enhance secrecy performance across different communication scenarios.
Funding: Supported by the National Natural Science Foundation of China (61571149, 62001139), the Initiation Fund for Postdoctoral Research in Heilongjiang Province (LBH-Q19098), and the Natural Science Foundation of Heilongjiang Province (LH2020F0178).
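The secrecy-rate analysis presumably builds on the standard wiretap expression; here is a minimal sketch against the strongest of multiple eavesdroppers (the generic formula, not necessarily the paper's exact channel model).

```python
import numpy as np

def secrecy_rate(snr_legit, snr_eves):
    """Secrecy rate against the strongest eavesdropper:
    C_s = [log2(1 + SNR_main) - max_e log2(1 + SNR_e)]^+ (bits/s/Hz)."""
    c_main = np.log2(1 + snr_legit)
    c_eve = max(np.log2(1 + snr) for snr in snr_eves)
    return max(0.0, c_main - c_eve)

print(secrecy_rate(20.0, [2.0, 5.0]))   # legitimate SNR 20, two eavesdroppers
```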
When an emergency happens, the scheduling of relief resources to multiple emergency locations is a realistic and intricate problem, especially when the available resources are limited. A non-cooperative game model and an algorithm for the scheduling of relief resources are presented. In the model, the players correspond to the multiple emergency locations, the strategies correspond to all resource schedules, and the payoff of each emergency location corresponds to the reciprocal of its scheduling cost. Thus, the optimal results are determined by the Nash equilibrium point of this game. An iterative algorithm is then introduced to seek the Nash equilibrium point. Simulation and analysis demonstrate the feasibility and effectiveness of the model.
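The iterative search for the Nash equilibrium point can be sketched as sequential best-response dynamics: each location in turn switches to its cost-minimizing schedule, and a fixed point, if reached, is a pure Nash equilibrium. The cost callback and strategy sets below are placeholders, not the paper's algorithm.

```python
def best_response_dynamics(players, strategies, cost, max_iter=100):
    """Sequential best response: each emergency location switches to the
    strategy minimizing its own scheduling cost (payoff = 1/cost).
    Stops at a fixed point, i.e., a pure Nash equilibrium if one is reached."""
    profile = {p: strategies[p][0] for p in players}
    for _ in range(max_iter):
        changed = False
        for p in players:
            best = min(strategies[p], key=lambda s: cost(p, s, profile))
            if best != profile[p]:
                profile[p] = best
                changed = True
        if not changed:
            return profile      # no player can improve unilaterally
    return profile
```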
As Internet of Things (IoT) applications expand, Mobile Edge Computing (MEC) has emerged as a promising architecture to overcome the real-time processing limitations of mobile devices. Edge-side computation offloading plays a pivotal role in MEC performance but remains challenging due to complex task topologies, conflicting objectives, and limited resources. This paper addresses high-dimensional multi-objective offloading for serial heterogeneous tasks in MEC. We jointly consider task heterogeneity, high-dimensional objectives, and flexible resource scheduling, modeling the problem as a many-objective optimization problem. To solve it, we propose a flexible framework integrating an improved cooperative co-evolutionary algorithm based on decomposition (MOCC/D) and a flexible scheduling strategy. Experimental results on benchmark functions and simulation scenarios show that the proposed method outperforms existing approaches in both convergence and solution quality.
Funding: Supported by the Youth Talent Project of the Scientific Research Program of the Hubei Provincial Department of Education under Grant Q20241809 and the Doctoral Scientific Research Foundation of Hubei University of Automotive Technology under Grant 202404.
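Decomposition-based many-objective algorithms such as MOCC/D scalarize each subproblem; a common choice (assumed here, not confirmed by the abstract) is the Tchebycheff function.

```python
import numpy as np

def tchebycheff(f, weight, ideal):
    """Tchebycheff scalarization of one decomposition subproblem:
    g(x | w, z*) = max_i w_i * |f_i(x) - z*_i|; smaller is better."""
    return float(np.max(np.asarray(weight) * np.abs(np.asarray(f) - np.asarray(ideal))))

# a 4-objective point scored under one weight vector of the decomposition
print(tchebycheff([0.3, 0.5, 0.2, 0.9], [0.25, 0.25, 0.25, 0.25], [0, 0, 0, 0]))
```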
One of the challenging scheduling problems in Cloud data centers is to take into consideration the allocation and migration of reconfigurable virtual machines as well as the integrated features of the hosting physical machines. We introduce a Dynamic and Integrated Resource Scheduling algorithm (DAIRS) for Cloud data centers. Unlike traditional load-balance scheduling algorithms, which often consider only one factor such as the CPU load of physical servers, DAIRS treats CPU, memory, and network bandwidth in an integrated way for both physical and virtual machines. We develop an integrated measurement for the total imbalance level of a Cloud data center as well as the average imbalance level of each server. Simulation results show that DAIRS has good performance with regard to the total imbalance level, the average imbalance level of each server, and the overall running time.
Funding: Supported by the Scientific Research Foundation for Returned Overseas Chinese Scholars, State Education Ministry, under Grant No. 2010-2011, and the Chinese Post-doctoral Research Foundation.
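The abstract does not give the imbalance formulas, so the sketch below uses one plausible reading: mean absolute deviation of utilization within each server (across CPU/memory/bandwidth) and across servers for the datacenter total.

```python
import numpy as np

def server_imbalance(util):
    """Imbalance across one server's CPU/memory/bandwidth utilizations."""
    u = np.asarray(util, dtype=float)
    return float(np.mean(np.abs(u - u.mean())))

def datacenter_imbalance(servers):
    """Returns (total imbalance across servers, average per-server imbalance)."""
    loads = np.asarray(servers, dtype=float).mean(axis=1)   # average load per server
    total = float(np.mean(np.abs(loads - loads.mean())))
    return total, float(np.mean([server_imbalance(s) for s in servers]))

print(datacenter_imbalance([[0.9, 0.2, 0.4], [0.3, 0.3, 0.35]]))
```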
In view of the fact that traditional job shop scheduling considers only a single factor, which affects the effect of resource allocation, the dual-resource integrated scheduling problem between AGVs and machines in an intelligent manufacturing job shop environment was studied. The dual-resource integrated scheduling model of AGVs and machines was established by comprehensively considering the constraints of machines, workpieces, and AGVs. A bidirectional single-path fixed guidance system based on a topological map was determined, and the AGV transportation task model was defined. An improved A* path optimization algorithm was used to determine the optimal path, and the path conflict elimination mechanism was described. An improved NSGA-II algorithm was used to determine the machining workpiece sequence, and a competition mechanism was introduced to allocate AGV transportation tasks. The proposed model and method were verified by a workshop production example; the results showed that the dual-resource integrated scheduling strategy for AGVs and machines is effective.
Funding: Project (BK20201162) supported by the General Program of the Natural Science Foundation of Jiangsu Province, China; Project (JC2019126) supported by the Science and Technology Plan Fundamental Scientific Research Funding Project of Nantong, China; Project (CE20205045) supported by the Changzhou Science and Technology Support Plan (Social Development), China; Project (51875171) supported by the National Natural Science Foundation of China.
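The paper's improved A* variant is not specified, but the baseline A* over a topological map looks as follows; graph is an adjacency dict of (neighbor, edge_cost) pairs and h must be an admissible estimate of the remaining travel cost.

```python
import heapq

def a_star(graph, start, goal, h):
    """Baseline A*: returns (path, cost) on the topological map, or
    (None, inf) when the goal is unreachable."""
    open_set = [(h(start), 0.0, start, [start])]
    best_g = {start: 0.0}
    while open_set:
        f, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return path, g
        for nxt, w in graph[node]:
            ng = g + w
            if ng < best_g.get(nxt, float("inf")):
                best_g[nxt] = ng
                heapq.heappush(open_set, (ng + h(nxt), ng, nxt, path + [nxt]))
    return None, float("inf")
```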
For a mobile edge computing network consisting of multiple base stations and resource-constrained user devices, network cost in terms of energy and delay is incurred during task offloading from users to edge servers. With the limitations imposed on transmission capacity, computing resources, and connection capacity, a per-slot online learning algorithm is first proposed to minimize the time-averaged network cost. In particular, by leveraging the theories of stochastic gradient descent and minimum-cost maximum-flow, user association is jointly optimized with resource scheduling in each time slot. Theoretical analysis proves that the proposed approach can achieve asymptotic optimality without any prior knowledge of the network environment. Moreover, to alleviate the high network overhead incurred during user handover and task migration, a two-timescale optimization approach is proposed to avoid frequent changes in user association. With user association executed on a large timescale and resource scheduling decided in each time slot, asymptotic optimality is preserved. Simulation results verify the effectiveness of the proposed online learning algorithms.
Funding: Supported by the National Natural Science Foundation of China (61971066, 61941114), the Beijing Natural Science Foundation (No. L182038), and the National Youth Top-notch Talent Support Program.
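The per-slot user association via minimum-cost maximum-flow can be sketched with networkx (integer edge costs assumed, which keeps the network-simplex solver exact); the cost and capacity inputs below are placeholders for the paper's energy/delay model.

```python
import networkx as nx

def associate(users, stations, cost, cap):
    """One slot of user association as min-cost max-flow: each user pushes one
    unit of flow through some base station; edge weights encode cost."""
    G = nx.DiGraph()
    for u in users:
        G.add_edge("src", u, capacity=1, weight=0)
        for s in stations:
            G.add_edge(u, s, capacity=1, weight=cost[u, s])
    for s in stations:
        G.add_edge(s, "dst", capacity=cap[s], weight=0)   # connection capacity
    flow = nx.max_flow_min_cost(G, "src", "dst")
    return {u: s for u in users for s in stations if flow[u].get(s, 0) > 0}

print(associate(["u1", "u2"], ["bs1", "bs2"],
                {("u1", "bs1"): 1, ("u1", "bs2"): 4,
                 ("u2", "bs1"): 3, ("u2", "bs2"): 2},
                {"bs1": 1, "bs2": 1}))
```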
Cloud computing is a new paradigm in which dynamic and virtualized computing resources are provided as services over the Internet. However, because cloud resources are open and dynamically configured, resource allocation and scheduling are extremely important challenges in cloud infrastructure. Based on distributed agents, this paper presents a trusted data acquisition mechanism for efficiently scheduling cloud resources to satisfy various user requests. Our mechanism defines, collects, and analyzes multiple key trust targets of cloud service resources based on the historical information of servers in a cloud data center. As a result, using our trust computing mechanism, cloud providers can utilize their resources efficiently and also provide highly trusted resources and services to users.
Funding: Supported by the National Basic Research Program of China (973 Program) (No. 2012CB821200 (2012CB821206)), the National Natural Science Foundation of China (No. 61003281, No. 91024001, and No. 61070142), the Beijing Natural Science Foundation (Study on Internet Multi-mode Area Information Accurate Searching and Mining Based on Agent, No. 4111002), and the Chinese Universities Scientific Fund under Grant No. BUPT 2009RC0201.
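A minimal sketch of history-based trust aggregation, assuming illustrative trust targets (success rate, uptime, QoS, each in [0, 1]) and an exponential recency decay; the paper's actual trust targets and weights are not given in the abstract.

```python
def trust_score(history, weights=None, decay=0.9):
    """Blend several trust targets from a server's history (ordered oldest to
    newest); exponential decay makes recent records dominate the score."""
    weights = weights or {"success": 0.5, "uptime": 0.3, "qos": 0.2}
    score = norm = 0.0
    for age, record in enumerate(reversed(history)):    # newest record first
        w = decay ** age
        score += w * sum(weights[k] * record[k] for k in weights)
        norm += w
    return score / norm if norm else 0.0

print(trust_score([{"success": 0.5, "uptime": 0.90, "qos": 0.6},
                   {"success": 0.9, "uptime": 0.99, "qos": 0.7}]))
```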
Unmanned aerial vehicle (UAV) resource scheduling means allocating and aggregating the available UAV resources according to the mission requirements and the battlefield situation assessment. In previous studies, the models cannot reflect mission synchronization, and the targets are treated separately, which results in a large problem scale and high computational complexity. To overcome these disadvantages, a model for UAV resource scheduling under mission synchronization is proposed, based on single-objective non-linear integer programming, and several cooperative teams are aggregated for the target clusters from the available resources. The evaluation indices of weapon allocation are referenced in establishing the objective function and the constraints. The scales of the target clusters are taken as constraints on the scales of the cooperative teams so that they match in scale. Functions of the intersection between the mission time-window and the UAV arrival time-window are introduced into the objective function and the constraints in order to describe mission synchronization effectively. The results demonstrate that the proposed expanded model can meet the requirement of mission synchronization, guide the aggregation of cooperative teams for the target clusters, and control the scale of the problem effectively.
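The time-window intersection that drives the synchronization terms reduces to a one-liner; windows are (start, end) pairs, and a zero overlap means the UAV cannot synchronize with the mission.

```python
def window_overlap(mission, arrival):
    """Overlap length between a mission time-window and a UAV arrival
    time-window, both given as (start, end) pairs."""
    lo, hi = max(mission[0], arrival[0]), min(mission[1], arrival[1])
    return max(0.0, hi - lo)

print(window_overlap((10, 30), (25, 50)))   # -> 5
```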
Real-time resource allocation is crucial for a phased array radar to undertake multiple tasks with limited resources, such as in multi-target tracking, where targets need to be prioritized so that resources can be allocated accordingly and effectively. A three-way decision-based model is proposed for the adaptive scheduling of phased array radar dwell time. In the model, the threat posed by a target is measured by an evaluation function, and each target is therefore assigned to one of three possible decision regions, i.e., the positive region, the negative region, and the boundary region. Different regions have different priorities in terms of resource demand, and a different radar resource allocation decision is applied to each region to satisfy the different tracking accuracies of multiple targets. In addition, the dwell time scheduling model can be further optimized by implementing a strategy for determining proper three-way decision thresholds, optimizing them adaptively in real time. The advantages and performance of the proposed model have been verified by experimental simulations, with comparison to the traditional two-way decision model and the three-way decision model without threshold optimization. The experimental results demonstrate that the proposed model has a certain advantage in detecting high-threat targets.
Funding: Supported by the Aeronautical Science Foundation of China (2017ZC53021) and the Open Project Fund of the CETC Key Laboratory of Data Link Technology (CLDL-20182101).
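A minimal sketch of the three-way assignment, with illustrative thresholds alpha and beta (the paper optimizes these thresholds adaptively rather than fixing them).

```python
def decide(threat, alpha=0.7, beta=0.3):
    """Three-way decision on a target's threat value (alpha > beta): each region
    carries a different dwell-time priority."""
    if threat >= alpha:
        return "positive"    # high threat: schedule dwell time first
    if threat <= beta:
        return "negative"    # low threat: minimal resources
    return "boundary"        # uncertain: intermediate priority

print(decide(0.85), decide(0.5), decide(0.1))
```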
With the rapid development of data applications in Industrial Internet of Things (IIoT) scenarios, how to schedule resources in IIoT environments has become an urgent problem to be solved. Benefiting from its strong scalability and compatibility, Kubernetes has been applied to resource scheduling in IIoT scenarios. However, the limited types of resources it manages, its default scheduling scoring strategy, and its lack of a delay control module limit its resource scheduling performance. To address these problems, this paper proposes a multi-resource scheduling (MRS) scheme of Kubernetes for IIoT. The MRS scheme dynamically balances resource utilization by taking both the requirements of tasks and the current system state into consideration. Furthermore, experiments demonstrate the effectiveness of the MRS scheme in terms of delay control and resource utilization.
Funding: Supported by the National Natural Science Foundation of China (61872423), the Industry Prospective Primary Research & Development Plan of Jiangsu Province (BE2017111), and the Scientific Research Foundation of the Higher Education Institutions of Jiangsu Province (19KJA180006).
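The abstract does not publish the MRS scoring function; the sketch below shows one balanced-utilization scoring rule in its spirit: infeasible nodes are filtered out, and the remaining ones are ranked to keep post-placement CPU/memory/bandwidth utilization both low and even.

```python
import statistics

def mrs_score(capacity, used, request):
    """None if the task does not fit; otherwise a score preferring nodes whose
    post-placement utilization is low (mean term) and balanced (spread term)."""
    kinds = ("cpu", "mem", "bw")
    if any(used[k] + request[k] > capacity[k] for k in kinds):
        return None
    util = [(used[k] + request[k]) / capacity[k] for k in kinds]
    return 1.0 - statistics.mean(util) - statistics.pstdev(util)

def pick_node(nodes, request):
    """nodes maps name -> (capacity dict, used dict); returns the best node."""
    scores = {n: mrs_score(cap, used, request) for n, (cap, used) in nodes.items()}
    feasible = {n: s for n, s in scores.items() if s is not None}
    return max(feasible, key=feasible.get) if feasible else None
```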
Selecting appropriate resources for running a job efficiently is one of the common objectives in a computational grid. Resource scheduling should consider the specific characteristics of the application and decide the metrics to be used accordingly. This paper presents a distributed resource scheduling framework mainly consisting of a job scheduler and a local scheduler. In order to meet the requirements of different applications, we adopt HGSA, a Heuristic-based Greedy Scheduling Algorithm, to schedule jobs in the grid, where the heuristic knowledge comprises the metric weights of the computing resources and the metric workload impact factors. The metric weight is used to control the effect of the metric on the application. For different applications, only the metric weights and the metric workload impact factors need to be changed, while the scheduling algorithm remains the same. Experimental results are presented to demonstrate the adaptability of HGSA.
Funding: Project supported by the National Natural Science Foundation of China (No. 60225009) and the National Science Fund for Distinguished Young Scholars, China.
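The greedy core of such a scheme reduces to ranking resources by a weighted, impact-scaled metric sum; per the abstract, adapting to a new application only means changing the weights and impact factors. The sketch below is an illustrative reading, not HGSA's exact formula.

```python
def hgsa_pick(resources, metrics, weights, impact):
    """Greedy choice: each resource's metrics are scaled by workload impact
    factors, weighted, and summed; the lowest-cost resource wins."""
    def cost(r):
        return sum(weights[m] * impact[m] * metrics[r][m] for m in weights)
    return min(resources, key=cost)

# swapping in new weights/impact factors re-targets the scheduler to a new app
metrics = {"n1": {"cpu": 0.7, "net": 0.2}, "n2": {"cpu": 0.4, "net": 0.6}}
print(hgsa_pick(["n1", "n2"], metrics,
                {"cpu": 0.8, "net": 0.2}, {"cpu": 1.0, "net": 1.5}))
```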
Resource scheduling is crucial to data centers. However, most previous works focus only on one-dimensional resource models, ignoring the fact that multiple resources are utilized simultaneously, including CPU, memory, and network bandwidth. As cloud computing allows uncoordinated and heterogeneous users to share a data center, competition for multiple resources has become increasingly severe. Motivated by the differences in integrated utilization obtained from different packing schemes, in this paper we treat the scheduling problem as a multi-dimensional combinatorial optimization problem with constraint satisfaction. Given its NP-hardness, we present Multiple-attribute-decision-based Integrated Resource Scheduling (MIRS), a novel heuristic algorithm to obtain an approximate optimal solution. According to simulation results, in the face of various workload sets, our algorithm has significant advantages in efficiency and performance compared with previous methods.
Funding: Supported in part by the National Key Basic Research Program of China (973 Program) under Grant No. 2011CB302506; the Important National Science & Technology Specific Projects: Next-Generation Broadband Wireless Mobile Communications Network under Grant No. 2011ZX03002-001-01; and the Innovative Research Groups of the National Natural Science Foundation of China under Grant No. 60821001.
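A hedged sketch of multi-dimensional packing in MIRS's spirit (the actual multiple-attribute decision rule is not given in the abstract): VMs are placed largest-first on the feasible host whose remaining-capacity vector best aligns with the demand vector.

```python
def pack(vms, hosts_cap, hosts_used):
    """Place each VM (sorted by total demand, descending) on the feasible host
    maximizing the dot product of demand and remaining capacity."""
    placement = {}
    for vm, demand in sorted(vms.items(), key=lambda kv: -sum(kv[1])):
        def fit(h):
            free = [c - u for c, u in zip(hosts_cap[h], hosts_used[h])]
            if any(d > f for d, f in zip(demand, free)):
                return None                              # does not fit
            return sum(d * f for d, f in zip(demand, free))  # alignment score
        feasible = {h: s for h in hosts_cap if (s := fit(h)) is not None}
        if feasible:
            h = max(feasible, key=feasible.get)
            hosts_used[h] = [u + d for u, d in zip(hosts_used[h], demand)]
            placement[vm] = h
    return placement

print(pack({"vm1": (4, 8, 1), "vm2": (2, 2, 5)},
           {"h1": (8, 16, 4), "h2": (4, 4, 8)},
           {"h1": [0, 0, 0], "h2": [0, 0, 0]}))
```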
Edge computing is a new technology in the Internet of Things (IoT) paradigm that allows sensitive data to be sent to dispersed devices quickly and without delay. Edge computing is similar to fog computing, except that it is positioned in end devices much nearer to end users, allowing it to process and respond to clients in less time. Further, it aids sensor networks, real-time streaming apps, and the IoT, all of which require high-speed and dependable internet access. For such IoT systems, the Resource Scheduling Process (RSP) is one of the most important tasks. This paper presents an RSP for Edge Computing (EC). The resource characteristics are first standardized and normalized. Next, for task scheduling, a Fuzzy Control based Edge Resource Scheduling (FCERS) method is suggested. The results demonstrate that this technique enhances resource scheduling efficiency in EC and Quality of Service (QoS). The experimental study revealed that the suggested FCERS method converges faster than the other methods, and it reduces the total computing cost, execution time, and energy consumption on average compared to the baseline. The edge server allocates more processing resources to each user when the availability of mobile devices is limited, which improves task execution time and reduces the total task computation cost. Additionally, the proposed FCERS can more efficiently map user requests to suitable resource categories, better meeting user requirements.
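A toy fuzzy-control fragment in the spirit of FCERS, with assumed triangular memberships and a two-rule base; the paper's actual rule base and membership functions are not given in the abstract.

```python
def triangular(x, a, b, c):
    """Triangular membership on [a, c] peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_priority(load, latency):
    """Two-rule base: IF load is low AND latency is low THEN priority is high;
    otherwise priority is low. Weighted-average defuzzification."""
    rule_high = min(triangular(load, -0.01, 0.0, 0.6),
                    triangular(latency, -0.01, 0.0, 0.5))
    rule_low = 1.0 - rule_high
    return (rule_high * 1.0 + rule_low * 0.2) / (rule_high + rule_low)

print(fuzzy_priority(0.2, 0.1))   # lightly loaded, low latency -> high priority
```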
With the rapid development and popularization of 5G and the Internet of Things, a number of new applications have emerged, such as driverless cars. Most of these applications are time-delay sensitive, and some deficiencies were found when data is processed through a cloud-centric architecture; handling the data generated by terminals at the edge of the network is an urgent problem to be solved at present. In 5G environments, edge computing can better meet the needs of low-delay and wide-connection applications and support the fast requests of terminal users. However, edge computing only has a computing advantage at the edge layer, and it is difficult to achieve global resource scheduling and configuration, which may lead to low resource utilization, long task processing delay, and unbalanced system load, thereby affecting the service quality of users. To solve this problem, this paper studies task scheduling and resource collaboration based on a Cloud-Edge-Terminal collaborative architecture, proposes a genetic simulated annealing fusion algorithm, called GSA-EDGE, to achieve task scheduling and resource allocation, and designs a series of experiments to verify the effectiveness of the GSA-EDGE algorithm. The experimental results show that the proposed method can reduce the time delay of task processing compared with the local task processing method and the task average allocation method.
Funding: Supported by the Social Science Foundation of Hebei Province (No. HB19JL007), the Education Technology Foundation of the Ministry of Education (No. 2017A01020), and the Natural Science Foundation of Hebei Province (F2021207005).
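A genetic simulated annealing fusion can be sketched as a GA whose replacement step uses the Metropolis acceptance test, so worsening offspring occasionally survive; the operators, cooling rate, and population handling below are illustrative, not GSA-EDGE's actual design.

```python
import math
import random

def ga_sa(init_pop, cost, mutate, crossover, gens=200, t0=1.0, cool=0.98):
    """GA with a simulated-annealing acceptance test: an offspring that worsens
    the cost may still replace its parent with probability exp(-delta / T)."""
    pop, temp = list(init_pop), t0
    for _ in range(gens):
        nxt = []
        for parent in pop:
            mate = random.choice(pop)
            child = mutate(crossover(parent, mate))
            delta = cost(child) - cost(parent)
            accept = delta < 0 or random.random() < math.exp(-delta / temp)
            nxt.append(child if accept else parent)
        pop, temp = nxt, temp * cool            # cool down each generation
    return min(pop, key=cost)
```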