The deployment of the Internet of Things (IoT) with smart sensors has facilitated the emergence of fog computing as an important technology for delivering services to smart environments such as campuses, smart cities, and smart transportation systems. Fog computing tackles a range of challenges, including processing, storage, bandwidth, latency, and reliability, by locally distributing secure information through end nodes. Consisting of endpoints, fog nodes, and back-end cloud infrastructure, it provides advanced capabilities beyond traditional cloud computing. In smart environments, particularly within smart city transportation systems, the abundance of devices and nodes poses significant challenges related to power consumption and system reliability. To address the challenges of latency, energy consumption, and fault tolerance in these environments, this paper proposes a latency-aware, fault-tolerant framework for resource scheduling and data management, referred to as the FORD framework, for smart cities in fog environments. This framework is designed to meet the demands of time-sensitive applications, such as those in smart transportation systems. The FORD framework incorporates latency-aware resource scheduling to optimize task execution in smart city environments, leveraging resources from both fog and cloud environments. Through simulation-based executions, tasks are allocated to the nearest available nodes with minimum latency. In the event of execution failure, a fault-tolerant mechanism is employed to ensure the successful completion of tasks. Upon successful execution, data is efficiently stored in the cloud data center, ensuring data integrity and reliability within the smart city ecosystem.
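The abstract does not give the scheduling logic itself, but the core idea (assign each task to the lowest-latency available node and retry elsewhere on failure) can be illustrated with a minimal Python sketch. The node names, latency values, failure probabilities, and the fall-back-to-cloud ordering below are illustrative assumptions, not details taken from the FORD paper.

```python
import random

class Node:
    def __init__(self, name, latency_ms, is_cloud=False, fail_prob=0.0):
        self.name = name
        self.latency_ms = latency_ms      # estimated network latency to this node
        self.is_cloud = is_cloud
        self.fail_prob = fail_prob        # chance that execution fails on this node

    def execute(self, task):
        # Simulate task execution; a failure triggers the fault-tolerant retry path.
        return random.random() > self.fail_prob

def schedule_with_fault_tolerance(task, nodes):
    """Try nodes in order of increasing latency; fall back until one succeeds."""
    for node in sorted(nodes, key=lambda n: n.latency_ms):
        if node.execute(task):
            return node.name              # task completed; result would be stored in the cloud
    raise RuntimeError(f"Task {task} could not be completed on any node")

if __name__ == "__main__":
    random.seed(1)
    nodes = [
        Node("fog-1", latency_ms=5, fail_prob=0.3),
        Node("fog-2", latency_ms=8, fail_prob=0.3),
        Node("cloud-dc", latency_ms=60, is_cloud=True, fail_prob=0.01),
    ]
    print(schedule_with_fault_tolerance("task-42", nodes))
```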
This paper introduces a quantum-enhanced edge computing framework that synergizes quantum-inspired algorithms with advanced machine learning techniques to optimize real-time task offloading in edge computing environments. This innovative approach not only significantly improves the system's real-time responsiveness and resource utilization efficiency but also addresses critical challenges in Internet of Things (IoT) ecosystems, such as high demand variability, resource allocation uncertainties, and data privacy concerns, through practical solutions. Initially, the framework employs an adaptive adjustment mechanism to dynamically manage task and resource states, complemented by online learning models for precise predictive analytics. Secondly, it accelerates the search for optimal solutions using Grover's algorithm while efficiently evaluating complex constraints through multi-controlled Toffoli gates, thereby markedly enhancing the practicality and robustness of the proposed solution. Furthermore, to bolster the system's adaptability and response speed in dynamic environments, an efficient monitoring mechanism and event-driven architecture are incorporated, ensuring timely responses to environmental changes and maintaining synchronization between internal and external systems. Experimental evaluations confirm that the proposed algorithm demonstrates superior performance in complex application scenarios, characterized by faster convergence, enhanced stability, and superior data privacy protection, alongside notable reductions in latency and optimized resource utilization. This research paves the way for transformative advancements in edge computing and IoT technologies, driving smart edge computing towards unprecedented levels of intelligence and automation.
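As a rough illustration of how Grover's algorithm speeds up the search for good offloading decisions, the sketch below classically simulates Grover iterations with NumPy over an index space of candidate task-to-server assignments. The oracle marking "feasible" assignments, the problem size, and the encoding are assumptions made for the example; the paper's actual circuit, including its multi-controlled Toffoli constraint evaluation, is not reproduced here.

```python
import numpy as np

def grover_search(n_qubits, is_good):
    """Classically simulate Grover's algorithm over 2**n_qubits candidate indices.

    is_good(i) -> bool plays the role of the oracle marking feasible
    offloading assignments.
    """
    N = 2 ** n_qubits
    marked = np.array([is_good(i) for i in range(N)], dtype=bool)
    state = np.full(N, 1.0 / np.sqrt(N))          # uniform superposition

    n_iter = int(np.floor(np.pi / 4 * np.sqrt(N / max(1, marked.sum()))))
    for _ in range(n_iter):
        state[marked] *= -1.0                     # oracle: phase-flip marked states
        mean = state.mean()
        state = 2 * mean - state                  # diffusion: inversion about the mean

    probs = state ** 2
    return int(np.argmax(probs)), probs

if __name__ == "__main__":
    # Toy oracle: assignment index 13 is the only "feasible" offloading decision.
    best, probs = grover_search(n_qubits=4, is_good=lambda i: i == 13)
    print(best, round(float(probs[best]), 3))     # index 13 with probability close to 1
```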
Aiming at the problems of increasing uncertainty of low-carbon generation energy in the active distribution network (ADN) and the difficulty of security assessment of the distribution network, this paper proposes a two-phase scheduling model for flexible resources in the ADN based on probabilistic risk perception. First, a full-cycle probabilistic trend sequence is constructed based on the source-load historical data, and in the day-ahead scheduling phase, the response interval of the flexibility resources on the load and storage side is optimized based on the probabilistic trend, with the probability of the security boundary as the security constraint and with the economy as the objective. Then, in the intraday phase, the core security and economic operation boundary of the ADN is screened in real time. From there, it quantitatively senses the degree of threat to the core security and economic operation boundary under the current source-load prediction information, and identifies the strictly secure and low/high-risk time periods. Flexibility resources within the response interval are dynamically adjusted in real time by focusing on high-risk periods to cope with future core risks of the distribution grid. Finally, the improved IEEE 33-node distribution system is simulated to obtain the flexibility resource scheduling scheme on the load and storage side. The scheduling results are evaluated from the perspectives of risk probability and flexible resource utilization efficiency, and the analysis shows that the scheduling model in this paper can promote the consumption of low-carbon energy from wind and photovoltaic sources while reducing the operational risk of the distribution network.
Fog computing has emerged as an important technology which can improve the performance of computation-intensive and latency-critical communication networks. Nevertheless, fog computing Internet-of-Things (IoT) systems are susceptible to malicious eavesdropping attacks during information transmission, and this issue has not been adequately addressed. In this paper, we propose a physical-layer secure fog computing IoT system model, which is able to improve the physical-layer security of fog computing IoT networks against the malicious eavesdropping of multiple eavesdroppers. The secrecy rate of the proposed model is analyzed, and the quantum galaxy-based search algorithm (QGSA) is proposed to solve the hybrid task scheduling and resource management problem of the network. The computational complexity and convergence of the proposed algorithm are analyzed. Simulation results validate the efficiency of the proposed model and reveal the influence of various environmental parameters on fog computing IoT networks. Moreover, the simulation results demonstrate that the proposed hybrid task scheduling and resource management scheme can effectively enhance secrecy performance across different communication scenarios.
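The secrecy rate mentioned here is the standard physical-layer security metric: the gap between the legitimate link's capacity and the best eavesdropper's capacity, floored at zero. The short sketch below computes it for given SNRs; the multi-eavesdropper worst-case rule is the usual convention, and the numbers are illustrative rather than values from the paper.

```python
import math

def secrecy_rate(snr_legit, snr_eavesdroppers):
    """Secrecy rate in bit/s/Hz: legitimate capacity minus the strongest eavesdropper's capacity."""
    c_legit = math.log2(1 + snr_legit)
    c_eve = max(math.log2(1 + s) for s in snr_eavesdroppers)
    return max(0.0, c_legit - c_eve)

if __name__ == "__main__":
    # Linear SNRs (not dB): one legitimate fog link versus three eavesdroppers.
    print(round(secrecy_rate(15.0, [2.0, 3.5, 1.2]), 3))
```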
Dear Editor, This letter presents a joint probabilistic scheduling and resource allocation method (PSRA) for 5G-based wireless networked control systems (WNCSs). As a control-aware optimization method, PSRA minimizes the linear quadratic Gaussian (LQG) control cost of WNCSs by optimizing the activation probability of subsystems, the number of uplink repetitions, and the durations of uplink and downlink phases. Simulation results show that PSRA achieves smaller LQG control costs than existing works.
The distributed flexible job shop scheduling problem (DFJSP) has attracted great attention with the growth of the global manufacturing industry. General DFJSP research only considers machine constraints and ignores worker constraints. As one critical factor of production, effective utilization of worker resources can increase productivity. Meanwhile, energy consumption is a growing concern due to the increasingly serious environmental issues. Therefore, the distributed flexible job shop scheduling problem with dual resource constraints (DFJSP-DRC) for minimizing makespan and total energy consumption is studied in this paper. To solve the problem, we present a multi-objective mathematical model for DFJSP-DRC and propose a Q-learning-based multi-objective grey wolf optimizer (Q-MOGWO). In Q-MOGWO, high-quality initial solutions are generated by a hybrid initialization strategy, and an improved active decoding strategy is designed to obtain the scheduling schemes. To further enhance the local search capability and expand the solution space, two wolf predation strategies and three critical factory neighborhood structures based on Q-learning are proposed. These strategies and structures enable Q-MOGWO to explore the solution space more efficiently and thus find better Pareto solutions. The effectiveness of Q-MOGWO in addressing DFJSP-DRC is verified through comparison with four algorithms using 45 instances. The results reveal that Q-MOGWO outperforms the comparison algorithms in terms of solution quality.
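Q-MOGWO's use of Q-learning to pick among neighborhood structures can be illustrated with a generic epsilon-greedy Q-table update. The three neighborhood names, the reward definition (improvement of a scalarized objective), and all parameter values below are assumptions for the sake of the sketch, not the paper's actual design.

```python
import random

ACTIONS = ["swap_within_factory", "move_across_factories", "reassign_worker"]

class NeighborhoodSelector:
    def __init__(self, alpha=0.1, gamma=0.9, epsilon=0.2):
        self.q = {a: 0.0 for a in ACTIONS}   # single-state Q-table for simplicity
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def choose(self):
        if random.random() < self.epsilon:           # explore
            return random.choice(ACTIONS)
        return max(self.q, key=self.q.get)           # exploit the best-known structure

    def update(self, action, reward):
        best_next = max(self.q.values())
        self.q[action] += self.alpha * (reward + self.gamma * best_next - self.q[action])

if __name__ == "__main__":
    random.seed(0)
    selector = NeighborhoodSelector()
    for step in range(50):
        a = selector.choose()
        # Stand-in reward: pretend "reassign_worker" tends to improve solutions most.
        reward = {"swap_within_factory": 0.2,
                  "move_across_factories": 0.4,
                  "reassign_worker": 0.8}[a] + random.uniform(-0.1, 0.1)
        selector.update(a, reward)
    print(max(selector.q, key=selector.q.get))       # usually "reassign_worker"
```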
Currently, accessing remote computing resources through cloud data centers is the main mode of operation for applications, but this mode of operation greatly increases communication latency and reduces overall quality of service (QoS) and quality of experience (QoE). Edge computing technology extends cloud service functionality to the edge of the mobile network, closer to the task execution end, and can effectively mitigate the communication latency problem. However, the massive and heterogeneous nature of servers in edge computing systems brings new challenges to task scheduling and resource management, and the booming development of artificial neural networks provides us with more powerful methods to alleviate this limitation. Therefore, in this paper, we propose a time series forecasting model incorporating Conv1D, LSTM, and GRU for edge computing device resource scheduling, train and test the forecasting model using a small self-built dataset, and achieve competitive experimental results.
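A forecasting model "incorporating Conv1D, LSTM and GRU" could be stacked in many ways; the Keras sketch below shows one plausible layout (Conv1D feature extraction followed by LSTM and GRU layers and a dense head) for predicting the next value of a resource-usage series. Layer sizes, window length, and the stacking order are assumptions, since the paper's exact architecture is not given here.

```python
import numpy as np
import tensorflow as tf

WINDOW, FEATURES = 24, 1   # e.g. 24 past CPU-utilization samples per prediction

def build_model():
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(WINDOW, FEATURES)),
        tf.keras.layers.Conv1D(32, kernel_size=3, padding="causal", activation="relu"),
        tf.keras.layers.LSTM(32, return_sequences=True),   # pass the full sequence on to the GRU
        tf.keras.layers.GRU(16),
        tf.keras.layers.Dense(1),                           # next-step resource usage
    ])

if __name__ == "__main__":
    # Synthetic stand-in for the paper's small self-built dataset.
    series = np.sin(np.linspace(0, 60, 2000)).astype("float32")
    X = np.stack([series[i:i + WINDOW] for i in range(len(series) - WINDOW)])[..., None]
    y = series[WINDOW:]

    model = build_model()
    model.compile(optimizer="adam", loss="mse")
    model.fit(X, y, epochs=2, batch_size=64, verbose=0)
    print("MSE on the toy data:", float(model.evaluate(X, y, verbose=0)))
```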
The unrelated parallel machine scheduling problem (UPMSP) is a typical scheduling problem, and UPMSP with various real-life constraints such as additional resources has been widely studied; however, UPMSP with additional resources, maintenance, and energy-related objectives is seldom investigated. The Artificial Bee Colony (ABC) algorithm has been successfully applied to various production scheduling problems and demonstrates potential search advantages in solving UPMSP with additional resources, among other factors. In this study, an energy-efficient UPMSP with additional resources and maintenance is considered. A dynamical artificial bee colony (DABC) algorithm is presented to minimize makespan and total energy consumption simultaneously. Three heuristics are applied to produce the initial population. An employed bee swarm and an onlooker bee swarm are constructed. Computing resources are shifted from the dominated solutions to non-dominated solutions in each swarm when the given condition is met. The dynamical employed bee phase is implemented by computing resource shifting and solution migration. Computing resource shifting and feedback are used to construct the dynamical onlooker bee phase. Computational experiments are conducted on 300 instances from the literature, and three comparative algorithms and ABC are compared after the parameter settings of all algorithms are given. The computational results demonstrate that the new strategies of DABC are effective and that DABC has promising advantages in solving the considered UPMSP.
This paper investigates the problem of Joint Radar Node Selection and Power Allocation (JRNSPA) in a Multiple Radar System (MRS) in a blanket jamming environment. Each radar node independently tracks a moving target and subsequently transmits the raw observation data to the fusion center, which forms a centralized tracking network structure. In order to establish a practical blanket jamming environment, we suppose that each target carries a self-defense jammer which automatically applies blanket jamming to the radar nodes that exceed the preset interception probability. Subsequently, the Predicted Conditional Cramer-Rao Lower Bound (PC-CRLB) is derived and utilized as the tracking accuracy criterion. Aimed at ensuring both the tracking performance and the Low Probability of Intercept (LPI) performance, a resource-saving scheduling model is formulated to minimize the transmit power consumption while meeting the requirements of tracking accuracy. Finally, the Modified Zoutendijk Method Of Feasible Directions (MZMFD)-based two-stage solution technique is adopted to solve the formulated non-convex optimization model. Simulation results show the effectiveness of the proposed JRNSPA scheme.
The marine container terminal (MCT) plays a key role in the marine intelligent transportation system and the international logistics system. However, the efficiency of resource scheduling significantly influences the operational performance of the MCT. To solve the practical resource scheduling problem (RSP) in MCT efficiently, this paper contributes to both the problem model and the algorithm design. Firstly, in the problem model, different from most existing studies that only consider scheduling part of the resources in MCT, we propose a unified mathematical model for formulating an integrated RSP. The new integrated RSP model allocates and schedules multiple MCT resources simultaneously by taking total cost minimization as the objective. Secondly, in the algorithm design, a pre-selection-based ant colony system (PACS) approach is proposed based on a graphic structure solution representation and a pre-selection strategy. On the one hand, as the RSP can be formulated as the shortest path problem on a directed complete graph, the graphic structure is proposed to represent the solution encoding, considering multiple constraints and multiple factors of the RSP, which effectively avoids the generation of infeasible solutions. On the other hand, the pre-selection strategy aims to reduce the computational burden of PACS and to quickly obtain a higher-quality solution. To evaluate the performance of the proposed PACS in solving the new integrated RSP model, a set of test cases with different sizes is conducted. Experimental results and comparisons show the effectiveness and efficiency of the PACS algorithm, which can significantly outperform other state-of-the-art algorithms.
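The ant colony system underlying PACS builds solutions edge by edge with the standard pseudo-random proportional rule: with probability q0 an ant greedily takes the edge with the best pheromone-times-heuristic score, and otherwise it samples proportionally to those scores. The sketch below shows only this generic ACS step; the RSP-specific graph, the pre-selection strategy, and the numerical values are placeholders, not details from the paper.

```python
import random

def acs_next_node(current, candidates, pheromone, heuristic,
                  q0=0.9, alpha=1.0, beta=2.0):
    """Pick the next node for one ant using the ACS pseudo-random proportional rule."""
    scores = {j: (pheromone[(current, j)] ** alpha) * (heuristic[(current, j)] ** beta)
              for j in candidates}
    if random.random() < q0:                       # exploitation: greedy choice
        return max(scores, key=scores.get)
    total = sum(scores.values())                   # exploration: roulette-wheel sampling
    r, acc = random.uniform(0, total), 0.0
    for j, s in scores.items():
        acc += s
        if acc >= r:
            return j
    return j

if __name__ == "__main__":
    random.seed(3)
    candidates = ["berth_A", "berth_B", "crane_2"]
    pheromone = {("start", c): 1.0 for c in candidates}
    heuristic = {("start", "berth_A"): 0.5, ("start", "berth_B"): 0.9,
                 ("start", "crane_2"): 0.3}        # e.g. inverse of the incremental cost
    print(acs_next_node("start", candidates, pheromone, heuristic))
```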
Safe and efficient sortie scheduling on the confined flight deck is crucial for maintaining the high combat effectiveness of an aircraft carrier. The primary difficulty lies in the spatiotemporal coordination, i.e., the allocation of limited supporting resources and collision avoidance between heterogeneous dispatch entities. In this paper, the problem is investigated from the perspective of the hybrid flow-shop scheduling problem (HFSP) by synthesizing the precedence, space, and resource constraints. Specifically, eight processing procedures are abstracted, where tractors, preparing spots, catapults, and launching are virtualized as machines. By analyzing the constraints in sortie scheduling, a mixed-integer planning model is constructed. In particular, the constraint on preparing spot occupancy is improved to further enhance the sortie efficiency. A basic trajectory library for each dispatch entity is generated, and a delayed strategy is integrated to address the collision-avoidance issue. To efficiently solve the formulated HFSP, which is essentially a combinatorial problem with tightly coupled constraints, a chaos-initialized genetic algorithm is developed. The solution framework is validated in a simulation environment referring to the Fort-class carrier, exhibiting higher sortie efficiency when compared to existing strategies. An animation of the simulation results is available at www.bilibili.com/video/BV14t421A7Tt/. The study presents a promising supporting technique for autonomous flight deck operation in the foreseeable future, and can be easily extended to other supporting scenarios, e.g., ammunition delivery and aircraft maintenance.
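Chaos-initialized genetic algorithms typically seed the population with a chaotic map (commonly the logistic map) instead of a uniform random generator, so that the initial individuals spread more evenly over the search space. The sketch below shows that generic idea for permutation-encoded schedules; the logistic-map parameters, population size, and the rank-based decoding into a permutation are assumptions, not details from the paper.

```python
import numpy as np

def logistic_map_sequence(length, x0=0.4567, r=4.0):
    """Generate a chaotic sequence in (0, 1) with the logistic map x <- r*x*(1-x)."""
    xs, x = [], x0
    for _ in range(length):
        x = r * x * (1.0 - x)
        xs.append(x)
    return np.array(xs)

def chaos_init_population(pop_size, n_jobs, x0=0.4567):
    """Each individual is a job permutation obtained by ranking a chaotic vector."""
    chaos = logistic_map_sequence(pop_size * n_jobs, x0).reshape(pop_size, n_jobs)
    return np.argsort(chaos, axis=1)      # random-key style decoding into permutations

if __name__ == "__main__":
    pop = chaos_init_population(pop_size=5, n_jobs=8)
    print(pop)                            # five permutations of jobs 0..7
```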
To solve the task deadlock problem that arises when the interdependence between tasks is not considered during resource assignment and task scheduling based on heuristic algorithms, an improved ant colony system (ACS) based algorithm is proposed. First, the paper explains how to map the resource assignment and task scheduling (RATS) problem into the optimization selection problem of the task resource assignment graph (TRAG) and how to add a semaphore mechanism to the optimal TRAG to resolve deadlocks. Secondly, it explains how to utilize the grid pheromone system model to realize the ACS-based algorithm. This refers to the construction of the TRAG by the user agent randomly selecting appropriate resources for each task, and the optimization of the TRAG through the positive feedback and distributed parallel computing mechanism of the ACS. Simulation results show that the proposed algorithm is effective and efficient in solving the deadlock problem.
The ease of accessing a virtually unlimited pool of resources makes Infrastructure as a Service (IaaS) clouds an ideal platform for running data-intensive workflow applications comprising hundreds of computational tasks. However, executing scientific workflows in IaaS cloud environments poses significant challenges due to conflicting objectives, such as minimizing execution time (makespan) and reducing resource utilization costs. This study responds to the increasing need for efficient and adaptable optimization solutions in dynamic and complex environments, which are critical for meeting the evolving demands of modern users and applications. This study presents an innovative multi-objective approach for scheduling scientific workflows in IaaS cloud environments. The proposed algorithm, MOS-MWMC, aims to minimize total execution time (makespan) and resource utilization costs by leveraging key features of virtual machine instances, such as a high number of cores and fast local SSD storage. By integrating realistic simulations based on the WRENCH framework, the method effectively dimensions the cloud infrastructure and optimizes resource usage. Experimental results highlight the superiority of MOS-MWMC compared to benchmark algorithms HEFT and Max-Min. The Pareto fronts obtained for the CyberShake, Epigenomics, and Montage workflows demonstrate closer proximity to the optimal front, confirming the algorithm’s ability to balance conflicting objectives. This study contributes to optimizing scientific workflows in complex environments by providing solutions tailored to specific user needs while minimizing costs and execution times.
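Multi-objective workflow schedulers such as MOS-MWMC report a Pareto front over the conflicting objectives (makespan and monetary cost). The helper below shows the generic non-dominated filtering step used to build such a front from a set of candidate schedules; the candidate points are made up for illustration and nothing here reflects the algorithm's internal search.

```python
def pareto_front(points):
    """Return the non-dominated (makespan, cost) pairs, both objectives to be minimized."""
    front = []
    for p in points:
        dominated = any(q[0] <= p[0] and q[1] <= p[1] and q != p for q in points)
        if not dominated:
            front.append(p)
    return sorted(set(front))

if __name__ == "__main__":
    # (makespan in s, cost in $) for hypothetical candidate schedules.
    candidates = [(120, 9.0), (150, 6.5), (110, 12.0), (150, 7.0), (200, 6.0)]
    print(pareto_front(candidates))   # [(110, 12.0), (120, 9.0), (150, 6.5), (200, 6.0)]
```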
Spark performs excellently in large-scale data-parallel computing and iterative processing. However, with the increase in data size and program complexity, the default scheduling strategy has difficulty meeting the demands of resource utilization and performance optimization. Scheduling strategy optimization, as a key direction for improving Spark’s execution efficiency, has attracted widespread attention. This paper first introduces the basic theories of Spark, compares several default scheduling strategies, and discusses common scheduling performance evaluation indicators and factors affecting scheduling efficiency. Subsequently, existing scheduling optimization schemes are summarized based on three scheduling modes: load characteristics, cluster characteristics, and matching of both, and representative algorithms are analyzed in terms of performance indicators and applicable scenarios, comparing the advantages and disadvantages of different scheduling modes. The article also explores in detail the integration of Spark scheduling strategies with specific application scenarios and the challenges in production environments. Finally, the limitations of the existing schemes are analyzed, and prospects are envisioned.
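For readers unfamiliar with the "default scheduling strategies" being compared, Spark's built-in choice is between FIFO and FAIR job scheduling, selected through configuration. The PySpark snippet below switches a session to the FAIR scheduler; the application name and pool name are illustrative, and the cluster-level tuning discussed in the surveyed schemes goes well beyond this toggle.

```python
from pyspark.sql import SparkSession

# Switch from the default FIFO job scheduler to FAIR scheduling so that
# concurrently submitted jobs share executor resources instead of queueing.
spark = (
    SparkSession.builder
    .appName("scheduling-demo")                     # illustrative application name
    .config("spark.scheduler.mode", "FAIR")
    .getOrCreate()
)

# Optionally assign subsequent jobs of this thread to a named fair-scheduler pool.
spark.sparkContext.setLocalProperty("spark.scheduler.pool", "interactive")

print(spark.sparkContext.getConf().get("spark.scheduler.mode"))
spark.stop()
```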
Frequent extreme disasters have led to frequent large-scale power outages in recent years. To quickly restore power, it is necessary to understand the damage information of the distribution network accurately. However, the public network communication system is easily damaged after disasters, causing the operation center to lose control of the distribution network. In this paper, we consider using satellites to transmit the distribution network data and focus on the resource scheduling problem of the satellite emergency communication system for the distribution network. Specifically, this paper first formulates the satellite beam-pointing problem and the access-channel joint resource allocation problem. Then, this paper proposes the Priority-based Beam-pointing and Access-Channel joint optimization algorithm (PBAC), which uses convex optimization theory to solve the satellite beam-pointing problem, and adopts the block coordinate descent method, the Lagrangian dual method, and a greedy algorithm to solve the access-channel joint resource allocation problem, thereby obtaining the optimal resource scheduling scheme for the satellite network. Finally, this paper conducts comparative experiments with existing methods to verify the effectiveness of the proposed methods. The results show that the total weighted transmitted data of the proposed algorithm is increased by about 19.29% to 26.29% compared with other algorithms.
The Internet of Things (IoT) has emerged as an important future technology. IoT-Fog is a new computing paradigm that processes IoT data on servers close to the source of the data. In IoT-Fog computing, resource allocation and independent task scheduling aim to deliver the short response time services demanded by the IoT devices and performed by fog servers. The heterogeneity of the IoT-Fog resources and the huge amount of data that needs to be processed by the IoT-Fog tasks make scheduling fog computing tasks a challenging problem. This study proposes an Adaptive Firefly Algorithm (AFA) for dependent task scheduling in IoT-Fog computing. The proposed AFA is a modified version of the standard Firefly Algorithm (FA), considering the execution times of the submitted tasks, the impact of synchronization requirements, and the communication time between dependent tasks. As IoT-Fog computing depends mainly on distributed fog node servers that receive tasks in a dynamic manner, tackling the communication and synchronization issues between dependent tasks is becoming a challenging problem. The proposed AFA aims to address the dynamic nature of IoT-Fog computing environments. The proposed AFA mechanism considers a dynamic light absorption coefficient to control the decrease in attractiveness over iterations. The proposed AFA mechanism's performance was benchmarked against the standard Firefly Algorithm (FA), Puma Optimizer (PO), Genetic Algorithm (GA), and Ant Colony Optimization (ACO) through simulations under light, typical, and heavy workload scenarios. In heavy workloads, the proposed AFA mechanism obtained the shortest average execution time, 968.98 ms, compared to 970.96, 1352.87, 1247.28, and 1773.62 ms for FA, PO, GA, and ACO, respectively. The simulation results demonstrate the proposed AFA's ability to rapidly converge to optimal solutions, emphasizing its adaptability and efficiency in typical and heavy workloads.
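In the firefly algorithm, attractiveness decays with distance as beta = beta0 * exp(-gamma * r^2), and the adaptive variant described here lets the light absorption coefficient gamma change over iterations. The sketch below shows one plausible linearly decaying schedule for gamma and the resulting movement step; the decay schedule, parameter values, and continuous encoding are assumptions, not AFA's published settings.

```python
import numpy as np

def dynamic_gamma(t, t_max, gamma_start=1.0, gamma_end=0.1):
    """Light absorption coefficient decaying linearly over iterations (assumed schedule)."""
    return gamma_start + (gamma_end - gamma_start) * t / t_max

def move_firefly(x_i, x_j, t, t_max, beta0=1.0, alpha=0.2, rng=None):
    """Move firefly i toward brighter firefly j using distance-based attractiveness."""
    rng = rng or np.random.default_rng()
    r2 = np.sum((x_i - x_j) ** 2)
    beta = beta0 * np.exp(-dynamic_gamma(t, t_max) * r2)   # attractiveness at distance r
    return x_i + beta * (x_j - x_i) + alpha * (rng.random(x_i.shape) - 0.5)

if __name__ == "__main__":
    rng = np.random.default_rng(7)
    x_i, x_j = np.array([0.9, 0.1]), np.array([0.2, 0.4])  # toy task-priority vectors
    for t in range(3):
        x_i = move_firefly(x_i, x_j, t, t_max=100, rng=rng)
    print(np.round(x_i, 3))
```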
In this paper, we propose a joint power and frequency allocation algorithm considering interference protection in the integrated satellite and terrestrial network (ISTN). We efficiently utilize spectrum resources by allowing user equipment (UE) of terrestrial networks to share frequencies with satellite networks. In order to protect the satellite terminal (ST), the base station (BS) needs to control the transmit power and frequency resources of the UE. The optimization problem involves maximizing the achievable throughput while satisfying the interference protection constraints of the ST and the quality of service (QoS) of the UE. However, this problem is highly nonconvex, and we decompose it into power allocation and frequency resource scheduling subproblems. In the power allocation subproblem, we propose a power allocation algorithm based on interference probability (PAIP) to address channel uncertainty. We obtain the suboptimal power allocation solution through iterative optimization. In the frequency resource scheduling subproblem, we develop a heuristic algorithm to handle the non-convexity of the problem. The simulation results show that the combination of power allocation and frequency resource scheduling algorithms can improve spectrum utilization.
Before the dispatch of a carrier-based aircraft, a series of pre-flight preparation operations needs to be completed on the flight deck. The configuration of fixed aviation support resource stations on the flight deck has an important impact on operation efficiency and sortie rate. However, the resource station configuration is determined during the aircraft carrier design phase and is rarely modified as required, which may not be suitable for some pre-flight preparation missions. To address these limitations, the joint optimization of flight deck resource station configuration and aircraft carrier pre-flight preparation scheduling is studied in this paper and formulated as a two-tier optimization decision-making framework. An improved variable neighborhood search algorithm with four original neighborhood structures is presented. A dispatch mission experiment and an algorithm performance comparison experiment are carried out in the computational experiment section. The correlation between the pre-flight preparation time (makespan) and the flight deck cabin occupancy percentage is given, and the advantages of the proposed algorithm in solving the mathematical model are verified.
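Variable neighborhood search alternates "shaking" in progressively larger neighborhoods with a local search, restarting from the first neighborhood whenever an improvement is found. The skeleton below illustrates that loop on a toy permutation problem with two generic moves (swap and insertion); the paper's four purpose-built neighborhood structures and its two-tier model are not reproduced here.

```python
import random

def swap_move(perm):
    """Neighborhood 1: swap two positions."""
    p = perm[:]
    i, j = random.sample(range(len(p)), 2)
    p[i], p[j] = p[j], p[i]
    return p

def insert_move(perm):
    """Neighborhood 2: remove one element and reinsert it elsewhere."""
    p = perm[:]
    i, j = random.sample(range(len(p)), 2)
    p.insert(j, p.pop(i))
    return p

def local_search(sol, cost, tries=50):
    best = sol
    for _ in range(tries):
        cand = swap_move(best)
        if cost(cand) < cost(best):
            best = cand
    return best

def vns(initial, cost, neighborhoods, max_iter=100):
    best = initial
    for _ in range(max_iter):
        k = 0
        while k < len(neighborhoods):
            improved = local_search(neighborhoods[k](best), cost)  # shake, then refine
            if cost(improved) < cost(best):
                best, k = improved, 0        # success: restart from the first neighborhood
            else:
                k += 1                       # failure: move to a larger neighborhood
    return best

if __name__ == "__main__":
    random.seed(5)
    cost = lambda p: sum(abs(v - i) for i, v in enumerate(p))   # toy objective: sortedness
    start = random.sample(range(10), 10)
    best = vns(start, cost, [swap_move, insert_move])
    print(best, "cost:", cost(best))
```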
Cloud computing is a new paradigm in which dynamic and virtualized computing resources are provided as services over the Internet. However, because cloud resources are open and dynamically configured, resource allocation and scheduling are extremely important challenges in cloud infrastructure. Based on distributed agents, this paper presents a trusted data acquisition mechanism for efficiently scheduling cloud resources to satisfy various user requests. Our mechanism defines, collects, and analyzes multiple key trust targets of cloud service resources based on the historical information of servers in a cloud data center. As a result, using our trust computing mechanism, cloud providers can utilize their resources efficiently and also provide highly trusted resources and services to many users.
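The abstract does not spell out how the trust targets are combined, but trust computing schemes of this kind typically aggregate several normalized metrics from server history into a weighted score used at scheduling time. The sketch below is a generic weighted aggregation under assumed metric names and weights; it is not the paper's actual trust model.

```python
# Hypothetical trust targets collected from a server's history, each normalized to [0, 1].
WEIGHTS = {"task_success_rate": 0.4, "sla_compliance": 0.3,
           "availability": 0.2, "reported_vs_actual_capacity": 0.1}

def trust_score(metrics):
    """Weighted aggregation of historical trust metrics for one cloud server."""
    return sum(WEIGHTS[k] * metrics[k] for k in WEIGHTS)

def pick_most_trusted(servers):
    """Schedule onto the server with the highest aggregated trust score."""
    return max(servers, key=lambda name: trust_score(servers[name]))

if __name__ == "__main__":
    servers = {
        "node-a": {"task_success_rate": 0.98, "sla_compliance": 0.95,
                   "availability": 0.99, "reported_vs_actual_capacity": 0.90},
        "node-b": {"task_success_rate": 0.80, "sla_compliance": 0.99,
                   "availability": 0.97, "reported_vs_actual_capacity": 0.95},
    }
    print(pick_most_trusted(servers))   # node-a
```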
An improved differential evolution (IDE) algorithm that adopts a novel mutation strategy to speed up the convergence rate is introduced to solve the resource-constrained project scheduling problem (RCPSP) with the objective of minimizing project duration. Activity priorities for scheduling are represented by individual vectors, and a serial scheme is utilized to transform the individual-represented priorities into a feasible schedule according to the precedence and resource constraints so that it can be evaluated. To investigate the performance of the IDE-based approach for the RCPSP, it is compared against the meta-heuristic methods of hybrid genetic algorithm (HGA) and particle swarm optimization (PSO) and several well-selected heuristics. The results show that the proposed scheduling method is better than general heuristic rules and is able to obtain the same optimal result as the HGA and PSO approaches while being more efficient than the two algorithms.
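Standard differential evolution generates a trial vector for each individual by mutation (adding a scaled difference of two population members to a third) followed by binomial crossover; priority-vector encodings for the RCPSP are then decoded by a serial schedule generation scheme. The snippet below shows only the classic DE/rand/1/bin step on a priority vector; the paper's novel mutation strategy and the serial decoding are not reproduced, and the parameter values are illustrative.

```python
import numpy as np

def de_rand_1_bin(population, i, F=0.5, CR=0.9, rng=None):
    """Build a trial priority vector for individual i with DE/rand/1 mutation and binomial crossover."""
    rng = rng or np.random.default_rng()
    n_pop, dim = population.shape
    r1, r2, r3 = rng.choice([k for k in range(n_pop) if k != i], size=3, replace=False)
    mutant = population[r1] + F * (population[r2] - population[r3])

    cross = rng.random(dim) < CR
    cross[rng.integers(dim)] = True          # guarantee at least one gene from the mutant
    trial = np.where(cross, mutant, population[i])
    return trial

if __name__ == "__main__":
    rng = np.random.default_rng(42)
    # Each row is a priority vector over 6 activities (to be decoded by a serial scheme).
    pop = rng.random((8, 6))
    print(np.round(de_rand_1_bin(pop, i=0, rng=rng), 3))
```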