The exponential growth of Internet of Things (IoT) devices has created unprecedented challenges in data processing and resource management for time-critical applications. Traditional cloud computing paradigms cannot meet the stringent latency requirements of modern IoT systems, while pure edge computing faces resource constraints that limit processing capabilities. This paper addresses these challenges by proposing a novel Deep Reinforcement Learning (DRL)-enhanced priority-based scheduling framework for hybrid edge-cloud computing environments. Our approach integrates adaptive priority assignment with a two-level concurrency control protocol that ensures both optimal performance and data consistency. The framework introduces three key innovations: (1) a DRL-based dynamic priority assignment mechanism that learns from system behavior, (2) a hybrid concurrency control protocol combining local edge validation with global cloud coordination, and (3) an integrated mathematical model that formalizes sensor-driven transactions across edge-cloud architectures. Extensive simulations across diverse workload scenarios demonstrate significant quantitative improvements over state-of-the-art heuristic and meta-heuristic approaches: 40% latency reduction, 25% throughput increase, 85% resource utilization (compared to 60% for heuristic methods), 40% reduction in energy consumption (300 vs. 500 J per task), and 50% improvement in scalability factor (1.8 vs. 1.2 for EDF). These results establish the framework as a robust solution for large-scale IoT and autonomous applications requiring real-time processing with consistency guarantees.
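The paper's priority assignment uses a deep RL model; as a rough illustration of the underlying idea (learning priorities from observed system behavior), the following minimal tabular Q-learning sketch uses invented states, actions, and rewards — none of these come from the paper.

```python
import random

def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.9):
    # Tabular Q-learning step: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a)).
    best_next = max(Q[s_next].values())
    Q[s][a] += alpha * (r + gamma * best_next - Q[s][a])

# Invented toy problem: states are system-load levels, actions are priority classes,
# and the (hypothetical) reward favors assigning "urgent" priority under high load.
states = ["low", "high"]
actions = ["normal", "urgent"]
Q = {s: {a: 0.0 for a in actions} for s in states}

random.seed(0)
for _ in range(500):
    s = random.choice(states)
    a = random.choice(actions)          # pure exploration, enough for a sketch
    r = 1.0 if (s == "high" and a == "urgent") else 0.0
    q_update(Q, s, a, r, random.choice(states))

# The learned greedy policy prefers "urgent" in the high-load state.
policy = {s: max(Q[s], key=Q[s].get) for s in states}
```

In the full framework a neural network replaces the table, but the update target has the same shape.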
Workflow scheduling is critical for efficient cloud resource management. This paper proposes Tunicate Swarm-Highest Response Ratio Next, a novel scheduler that synergistically combines the Tunicate Swarm Algorithm with the Highest Response Ratio Next policy. The Tunicate Swarm Algorithm generates a cost-minimizing task-to-VM mapping scheme, while the Highest Response Ratio Next policy dynamically dispatches the highest-priority tasks in the ready queue. Experimental results demonstrate that Tunicate Swarm-Highest Response Ratio Next reduces costs by up to 94.8% compared to meta-heuristic baselines. It also achieves competitive cost efficiency vs. a learning-based method while offering superior operational simplicity and efficiency, establishing it as a highly practical solution for dynamic cloud environments.
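The Highest Response Ratio Next policy itself is standard: a task's priority is (waiting time + service time) / service time, so long-waiting tasks cannot starve. A minimal sketch (task tuples and timings are illustrative, not from the paper):

```python
def response_ratio(waiting_time, service_time):
    # HRRN priority: (waiting + service) / service; waiting raises priority over time.
    return (waiting_time + service_time) / service_time

def hrrn_pick(ready, now):
    """Pick the ready task with the highest response ratio.
    Each task is (name, arrival_time, service_time); 'now' is the scheduler clock."""
    return max(ready, key=lambda t: response_ratio(now - t[1], t[2]))

# At t = 10: A -> (10+10)/10 = 2.0, B -> (8+2)/2 = 5.0, C -> (6+6)/6 = 2.0.
tasks = [("A", 0, 10), ("B", 2, 2), ("C", 4, 6)]
picked = hrrn_pick(tasks, now=10)
```

Short task B wins here, but had it waited long enough, task A's ratio would eventually dominate — the anti-starvation property the scheduler relies on.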
We observe that the response speed of a linear time-invariant system to a step reference input depends not only on the system parameters but also on the magnitude of the step input. Based on this observation, we demonstrate a method to schedule the magnitude of the reference input to achieve a faster response.
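As a toy illustration of the observation, consider a discrete-time first-order plant y[k+1] = a·y[k] + (1−a)·u[k]: temporarily commanding an amplified reference and switching back near the target settles faster than commanding the target directly. The plant, gains, and switching rule below are invented for illustration and are not the authors' scheme.

```python
def simulate(a, ref, steps):
    # First-order LTI plant y[k+1] = a*y[k] + (1-a)*u[k]; ref(k, y) is the reference schedule.
    y, traj = 0.0, []
    for k in range(steps):
        y = a * y + (1 - a) * ref(k, y)
        traj.append(y)
    return traj

a = 0.9
target = 1.0
plain = simulate(a, lambda k, y: target, 60)
# Scheduled reference: command 3x the target until the output nears it, then switch back.
boosted = simulate(a, lambda k, y: 3 * target if y < 0.9 * target else target, 60)

def settle_step(traj, tol=0.05):
    # First step at which the output is within tol of the target.
    return next(k for k, y in enumerate(traj) if abs(y - target) <= tol)
```

With these numbers the scheduled reference reaches the 5% band in a few steps, versus roughly thirty for the plain step.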
In response to the challenges faced by unmanned swarms in mountain obstacle-breaching missions within complex terrains, such as poor task-resource coupling, lengthy solution generation times, and poor inter-platform collaboration, an unmanned swarm scheduling strategy tailored to mountain obstacle-breaching missions is proposed. Initially, by formalizing the descriptions of obstacle-breaching operations, the swarm, and obstacle targets, an optimization model is constructed with the objectives of expected global benefit, timeliness, and task completion degree. A meta-task decomposition and reassembly strategy is then introduced to more precisely match the capabilities of unmanned platforms with task requirements. Additionally, a meta-task decomposition optimization model and a meta-task allocation operator are incorporated to achieve efficient allocation of swarm resources and collaborative scheduling. Simulation results demonstrate that the model can accurately generate reasonable and feasible obstacle-breaching execution plans for unmanned swarms based on specific task requirements and environmental conditions. Moreover, compared to conventional strategies, the proposed strategy enhances task completion degree and expected returns while reducing the execution time of the plans.
Task scheduling in cloud computing is a multi-objective optimization problem, often involving conflicting objectives such as minimizing execution time, reducing operational cost, and maximizing resource utilization. However, traditional approaches frequently rely on single-objective optimization methods, which are insufficient for capturing the complexity of such problems. To address this limitation, we introduce MDMOSA (Multi-objective Dwarf Mongoose Optimization with Simulated Annealing), a hybrid algorithm for efficient multi-objective task scheduling in Infrastructure-as-a-Service (IaaS) cloud environments. MDMOSA harmonizes the exploration capabilities of the biologically inspired Dwarf Mongoose Optimization (DMO) with the exploitation strengths of Simulated Annealing (SA), achieving a balanced search process. The algorithm aims to optimize task allocation by reducing makespan and financial cost while improving system resource utilization. We evaluate MDMOSA through extensive simulations using the real-world Google Cloud Jobs (GoCJ) dataset within the CloudSim environment. Comparative analysis against benchmark algorithms such as SMOACO, MOTSGWO, and MFPAGWO reveals that MDMOSA consistently achieves superior performance in terms of scheduling efficiency, cost-effectiveness, and scalability. These results confirm the potential of MDMOSA as a robust and adaptable solution for resource scheduling in dynamic and heterogeneous cloud computing infrastructures.
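The SA component's core is the Metropolis acceptance rule: always accept improving moves, and accept worsening moves with probability exp(−Δ/T) under a cooling temperature T. The sketch below applies plain single-objective SA to an invented two-VM makespan instance; it omits MDMOSA's DMO exploration and multi-objective handling.

```python
import math
import random

def sa_accept(delta, T, rng):
    # Metropolis rule: accept improvements; accept a worse move with prob exp(-delta/T).
    return delta <= 0 or rng.random() < math.exp(-delta / T)

def anneal(cost, neighbor, x0, T0=10.0, cooling=0.95, iters=300, seed=1):
    rng = random.Random(seed)
    x = best = x0
    T = T0
    for _ in range(iters):
        cand = neighbor(x, rng)
        if sa_accept(cost(cand) - cost(x), T, rng):
            x = cand
        if cost(x) < cost(best):
            best = x
        T *= cooling                    # geometric cooling schedule
    return best

# Invented instance: assign 6 tasks to 2 VMs, minimizing the busier VM's finish time.
times = [4, 7, 2, 5, 3, 6]
def cost(assign):
    loads = [0, 0]
    for t, m in zip(times, assign):
        loads[m] += t
    return max(loads)
def neighbor(assign, rng):
    a = list(assign)
    a[rng.randrange(len(a))] ^= 1       # move one task to the other VM
    return tuple(a)

best = anneal(cost, neighbor, (0,) * len(times))
```

The cooling schedule gradually shifts the search from exploration to exploitation, which is exactly the role SA plays inside the hybrid.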
In the era of the Internet of Things, distributed computing alleviates the problem of insufficient terminal computing power by integrating the idle resources of heterogeneous devices. However, the imbalance between task execution delay and node energy consumption, together with the scheduling and adaptation challenges brought about by device heterogeneity, urgently needs to be addressed. To tackle this problem, this paper constructs a multi-objective real-time task scheduling model that considers task real-time performance, execution delay, system energy consumption, and node interests. The model aims to minimize the delay upper bound and total energy consumption while maximizing system satisfaction. A real-time task scheduling algorithm based on a bilateral matching game is proposed. By designing a bidirectional preference mechanism between tasks and computing nodes, combined with a multi-round stable matching strategy, accurate matching between tasks and nodes is achieved. Simulation results show that compared with the baseline scheme, the proposed algorithm significantly reduces the total execution cost, effectively balances task execution delay against the energy consumption of computing nodes, and takes into account the interests of each network computing node.
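The bilateral matching game can be illustrated with deferred acceptance (Gale-Shapley), the standard way to compute a stable matching from two-sided preferences; the paper's multi-round mechanism and preference construction are not reproduced, and the preference lists below are invented.

```python
def stable_match(task_prefs, node_prefs):
    """Deferred acceptance: tasks propose to nodes in preference order; each node
    keeps its most-preferred proposer so far. Returns a task -> node mapping."""
    rank = {n: {t: i for i, t in enumerate(prefs)} for n, prefs in node_prefs.items()}
    free = list(task_prefs)             # tasks still unmatched
    next_idx = {t: 0 for t in task_prefs}
    engaged = {}                        # node -> task
    while free:
        t = free.pop()
        n = task_prefs[t][next_idx[t]]  # next node on t's list
        next_idx[t] += 1
        cur = engaged.get(n)
        if cur is None:
            engaged[n] = t
        elif rank[n][t] < rank[n][cur]:
            engaged[n] = t              # node upgrades; the old task re-enters the pool
            free.append(cur)
        else:
            free.append(t)              # proposal rejected; t will try its next choice
    return {t: n for n, t in engaged.items()}

# Hypothetical preferences: tasks might rank nodes by delay, nodes rank tasks by energy fit.
task_prefs = {"t1": ["n1", "n2"], "t2": ["n1", "n2"]}
node_prefs = {"n1": ["t2", "t1"], "n2": ["t1", "t2"]}
match = stable_match(task_prefs, node_prefs)
```

Both tasks prefer n1, but n1 prefers t2, so the stable outcome sends t1 to n2 — no task-node pair would jointly deviate.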
The proliferation of carrier aircraft and the integration of unmanned aerial vehicles (UAVs) on aircraft carriers present new challenges to the automation of launch and recovery operations. This paper investigates a collaborative scheduling problem inherent to the operational processes of carrier aircraft, where launch and recovery tasks are conducted concurrently on the flight deck. The objective is to minimize the cumulative weighted waiting time in the air for recovering aircraft and the cumulative weighted delay time for launching aircraft. To tackle this challenge, a multiple population self-adaptive differential evolution (MPSADE) algorithm is proposed. This method features a self-adaptive parameter updating mechanism contingent upon population diversity, an asynchronous updating scheme, an individual migration operator, and a global crossover mechanism. Additionally, comprehensive experiments are conducted to validate the effectiveness of the proposed model and algorithm. Ultimately, a comparative analysis with existing operation modes confirms the enhanced efficiency of the collaborative operation mode.
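MPSADE builds on differential evolution; its multi-population structure and self-adaptive parameter updates are not reproduced below, but the underlying DE/rand/1/bin step — mutate with a scaled difference vector, binomially cross over, keep the trial if it is no worse — can be sketched on a toy continuous objective:

```python
import random

def de_minimize(f, dim, bounds, pop_size=15, iters=100, F=0.6, CR=0.9, seed=4):
    # Classic DE/rand/1/bin; F and CR are conventional defaults, not the paper's values.
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(iters):
        for i in range(pop_size):
            a, b, c = rng.sample([p for j, p in enumerate(pop) if j != i], 3)
            j_rand = rng.randrange(dim)         # guarantees at least one mutated gene
            trial = [
                min(hi, max(lo, a[d] + F * (b[d] - c[d])))
                if (rng.random() < CR or d == j_rand) else pop[i][d]
                for d in range(dim)
            ]
            if f(trial) <= f(pop[i]):           # greedy one-to-one survivor selection
                pop[i] = trial
    return min(pop, key=f)

quad = lambda x: sum((v - 1.0) ** 2 for v in x)  # toy objective, minimum at (1, 1, 1, 1)
best = de_minimize(quad, dim=4, bounds=(-5.0, 5.0))
```

MPSADE's contribution sits on top of this loop: multiple sub-populations evolve asynchronously, migrate individuals, and adapt F and CR from population diversity.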
With the rapid development of power Internet of Things (IoT) scenarios such as smart factories and smart homes, numerous intelligent terminal devices and real-time interactive applications impose higher demands on computing latency and resource supply efficiency. Multi-access edge computing technology deploys cloud computing capabilities at the network edge, constructs distributed computing nodes and multi-access systems, and offers infrastructure support for services with low latency and high reliability. Existing research relies on the strong assumption that the environmental state is fully observable and fails to thoroughly consider the continuous time-varying features of edge server load fluctuations, leading to insufficient adaptability in heterogeneous dynamic environments. Thus, this paper establishes a framework for end-edge collaborative task offloading based on a partially observable Markov decision process (POMDP) and proposes a method for end-edge collaborative task offloading in heterogeneous scenarios. It achieves time-series modeling of the historical load characteristics of edge servers and endows the agent with the ability to be aware of the load in dynamic environmental states. Moreover, by dynamically assessing the exploration value of historical trajectories in the central trajectory pool and adjusting the sample weight distribution, directional exploration and strategy optimization of high-value trajectories are realized. Experimental results indicate that the proposed method exhibits distinct advantages over existing methods in terms of average delay and task failure rate, and also verify the method's robustness in a dynamic environment.
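The POMDP machinery behind such a framework rests on Bayesian belief updates over the hidden state: b'(s') ∝ O(o | s') · Σ_s T(s' | s, a) · b(s). The sketch below applies this to an invented two-state "edge-server load" model; all transition and observation probabilities are illustrative, not from the paper.

```python
def belief_update(b, a, o, T, O):
    # Bayes filter over hidden states: b'(s') ∝ O[s'][a][o] * sum_s T[s][a][s'] * b(s).
    states = list(b)
    unnorm = {
        s2: O[s2][a][o] * sum(T[s][a][s2] * b[s] for s in states)
        for s2 in states
    }
    z = sum(unnorm.values())
    return {s2: v / z for s2, v in unnorm.items()}

# Toy model: hidden edge-server load ("light"/"heavy"), action "offload",
# observation = measured delay ("fast"/"slow"). All numbers are illustrative.
states = ["light", "heavy"]
T = {s: {"offload": {"light": 0.7 if s == "light" else 0.3,
                     "heavy": 0.3 if s == "light" else 0.7}} for s in states}
O = {"light": {"offload": {"fast": 0.8, "slow": 0.2}},
     "heavy": {"offload": {"fast": 0.1, "slow": 0.9}}}
b0 = {"light": 0.5, "heavy": 0.5}
b1 = belief_update(b0, "offload", "slow", T, O)
```

Observing a slow response shifts the belief sharply toward the heavy-load state (to about 0.82 here), which is the kind of load awareness the agent exploits when deciding whether to offload.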
Aircraft assembly is characterized by stringent precedence constraints, limited resource availability, spatial restrictions, and a high degree of manual intervention. These factors lead to considerable variability in operator workloads and significantly increase the complexity of scheduling. To address this challenge, this study investigates the Aircraft Pulsating Assembly Line Scheduling Problem (APALSP) under skilled operator allocation, with the objective of minimizing assembly completion time. A mathematical model considering skilled operator allocation is developed, and a Q-learning-improved Particle Swarm Optimization algorithm (QLPSO) is proposed. In the algorithm design, a reverse scheduling strategy is adopted to effectively manage large-scale precedence constraints. Moreover, a reverse sequence encoding method is introduced to generate operation sequences, while a time decoding mechanism is employed to determine completion times. The problem is further reformulated as a Markov Decision Process (MDP) with explicitly defined state and action spaces. Within QLPSO, the Q-learning mechanism adaptively adjusts inertia weights and learning factors, thereby achieving a balance between exploration capability and convergence performance. To validate the effectiveness of the proposed approach, extensive computational experiments are conducted on benchmark instances of different scales, including small, medium, large, and ultra-large cases. The results demonstrate that QLPSO consistently delivers stable and high-quality solutions across all scenarios. In ultra-large-scale instances, it improves the best solution by 25.2% compared with the Genetic Algorithm (GA) and enhances the average solution by 16.9% over the Q-learning algorithm, showing clear advantages over the comparative methods. These findings not only confirm the effectiveness of the proposed algorithm but also provide valuable theoretical references and practical guidance for the intelligent scheduling optimization of aircraft pulsating assembly lines.
The airplane refueling problem can be stated as follows. We are given n airplanes which can refuel one another during the flight. Each airplane has a reservoir volume wj (liters) and a consumption rate pj (liters per kilometer). As soon as an airplane runs out of fuel, it drops out of the flight. The problem asks for a refueling scheme such that the last plane in the air reaches the maximal distance. An equivalent version is the n-vehicle exploration problem. The computational complexity of this non-linear combinatorial optimization problem remains open. This paper employs the neighborhood exchange method of single-machine scheduling to study the precedence relations of jobs, so as to improve the necessary and sufficient conditions of optimal solutions, and establishes an efficient heuristic algorithm that generalizes several existing special algorithms.
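Under the standard formulation of the problem, if planes drop out in the order σ(1), …, σ(n), the fuel of plane σ(j) extends the flight by wσ(j) divided by the total consumption rate of the planes still flying, so the reachable distance is Σj wσ(j) / Σk≥j pσ(k). A brute-force sketch over drop-out orders (illustrative data; this exhaustive search is not the paper's heuristic and only works for small n):

```python
from itertools import permutations

def max_distance(order, w, p):
    """Distance for a drop-out order: when plane order[j] empties, planes order[j:]
    are still flying, so its fuel extends the flight by w / (sum of their rates)."""
    d = 0.0
    for j in range(len(order)):
        d += w[order[j]] / sum(p[k] for k in order[j:])
    return d

def best_order(w, p):
    # Exhaustive search over all drop-out orders; used here only to show the objective.
    return max(permutations(range(len(w))), key=lambda o: max_distance(o, w, p))

w = [60, 50, 40]   # reservoir volumes (liters), illustrative
p = [3, 2, 1]      # consumption rates (liters/km), illustrative
order = best_order(w, p)
```

On this instance the best order drops the thirstiest plane first, reaching 200/3 ≈ 66.7 km; the paper's neighborhood-exchange analysis characterizes when such adjacent swaps cannot improve the order.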
Advanced technologies like Cyber-Physical Systems (CPS) and the Internet of Things (IoT) have supported modernizing and automating the transportation sector through the introduction of Intelligent Transportation Systems (ITS). Integrating CPS-ITS and IoT provides real-time Vehicle-to-Infrastructure (V2I) communication, supporting better traffic management, safety, and efficiency. These technological innovations generate complex problems that need to be addressed, particularly around data routing and Task Scheduling (TS) in ITS. Previous attempts to solve those problems were primarily based on traditional and experimental methods, and the solutions were not very successful due to the dynamic nature of ITS. This is where Machine Learning (ML) and Swarm Intelligence (SI) have significantly impacted these challenges; in this line, this research paper presents a novel method for TS and data routing in CPS-ITS. The paper proposes a cutting-edge ML algorithm, Gated Linear Unit-approximated Reinforcement Learning (GLRL), for data transmission in CPS-ITS, and Greedy Iterative-Particle Swarm Optimization (GI-PSO), which extends Particle Swarm Optimization (PSO), for TS. The primary objective of this study is to enhance the security and effectiveness of ITS systems that utilize CPS-ITS. This study trained and validated the models using a network simulation dataset of 50 nodes from numerous ITS environments. The experiments demonstrate that the proposed GLRL reduces End-to-End Delay (EED) by 12%, enhances data size use from 83.6% to 88.6%, and achieves higher bandwidth allocation, particularly in high-demand scenarios such as multimedia data streams, where adherence improved to 98.15%. Furthermore, GLRL reduced Network Congestion (NC) by 5.5%, demonstrating its efficiency in managing complex traffic conditions across several environments. The model passed simulation tests in three different environments: urban (UE), suburban (SE), and rural (RE). It met the high bandwidth requirements, made task scheduling more efficient, and increased network throughput (NT), proving it robust and flexible enough for scalable ITS applications. These innovations provide robust, scalable solutions for real-time traffic management, ultimately improving safety, reducing NC, and increasing overall NT. This study can advance ITS by making it more responsive, safe, and effective across UE, SE, and RE deployments.
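GI-PSO extends canonical PSO, whose velocity update is v ← w·v + c1·r1·(pbest − x) + c2·r2·(gbest − x). The baseline sketch below minimizes a toy objective and omits the paper's greedy-iterative refinement; all parameters are conventional defaults, not values from the paper.

```python
import random

def pso_minimize(f, dim, bounds, swarm=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=2):
    # Canonical PSO: inertia pulls along the old velocity; cognitive and social terms
    # pull toward the particle's personal best and the swarm's global best.
    rng = random.Random(seed)
    lo, hi = bounds
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(swarm)]
    V = [[0.0] * dim for _ in range(swarm)]
    P = [x[:] for x in X]                       # personal bests
    g = min(P, key=f)[:]                        # global best
    for _ in range(iters):
        for i in range(swarm):
            for d in range(dim):
                V[i][d] = (w * V[i][d]
                           + c1 * rng.random() * (P[i][d] - X[i][d])
                           + c2 * rng.random() * (g[d] - X[i][d]))
                X[i][d] = min(hi, max(lo, X[i][d] + V[i][d]))
            if f(X[i]) < f(P[i]):
                P[i] = X[i][:]
                if f(P[i]) < f(g):
                    g = P[i][:]
    return g

sphere = lambda x: sum(v * v for v in x)        # toy objective, minimum at the origin
best = pso_minimize(sphere, dim=3, bounds=(-5.0, 5.0))
```

For task scheduling, a particle would instead encode a task-to-resource assignment, with the greedy-iterative step repairing or improving decoded schedules.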
In this paper, we propose a new privacy-aware transmission scheduling algorithm for 6G ad hoc networks. This system enables end nodes to select the optimal time and scheme to transmit private data safely. In 6G dynamic heterogeneous infrastructures, unstable links and non-uniform hardware capabilities create critical issues regarding security and privacy. Traditional protocols are often too computationally heavy to allow 6G services to achieve their expected Quality-of-Service (QoS). As the transport network is built of ad hoc nodes, there is no guarantee about their trustworthiness or behavior, and transversal functionalities are delegated to the extreme nodes. However, while security can be guaranteed in extreme-to-extreme solutions, privacy cannot, as all intermediate nodes still have to handle the data packets they are transporting. Besides, traditional schemes for private anonymous ad hoc communications are vulnerable to modern intelligent attacks based on learning models. The proposed scheme fills this gap. Findings show that the probability of a successful intelligent attack is reduced by up to 65% compared to ad hoc networks with no privacy protection strategy when the proposed technology is used, while congestion probability remains below 0.001%, as required by 6G services.
To address the issue that hybrid flow shop production struggles to handle order disturbance events, a dynamic scheduling model was constructed. The model takes minimizing the maximum makespan, delivery time deviation, and scheme deviation degree as the optimization objectives. An adaptive dynamic scheduling strategy based on the degree of order disturbance is proposed. An improved multi-objective Grey Wolf Optimization (IMOGWO) algorithm is designed for problem solving by combining a “job-machine” two-layer encoding strategy, a timing-driven two-stage decoding strategy, an opposition-based learning population initialization strategy, a POX crossover strategy, a dual-operation dynamic mutation strategy, and a variable neighborhood search strategy. A variety of test cases of different scales were designed, and ablation experiments were conducted to verify the effectiveness of the improved strategies. The results show that each improved strategy can effectively enhance the performance of IMOGWO. Additionally, performance analysis was conducted by comparing the proposed algorithm with three mature and classical algorithms. The results demonstrate that the proposed algorithm exhibits superior performance in solving the hybrid flow-shop scheduling problem (HFSP). Case validations were conducted for different types of order disturbance scenarios. The results demonstrate that the proposed adaptive dynamic scheduling strategy and the IMOGWO algorithm can effectively address order disturbance events, enabling rapid response to order disturbance while ensuring the stability of the production system.
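One of the listed components, opposition-based learning initialization, is simple to sketch: for each random individual x, also evaluate its opposite lo + hi − x, then keep the fitter half of the combined pool. The continuous toy fitness below is illustrative; IMOGWO applies the idea to its own job-machine encoding.

```python
import random

def obl_init(pop_size, dim, lo, hi, fitness, seed=3):
    """Opposition-based initialization: generate pop_size random individuals plus
    their opposites, then keep the pop_size fittest of the combined pool."""
    rng = random.Random(seed)
    pool = []
    for _ in range(pop_size):
        x = [rng.uniform(lo, hi) for _ in range(dim)]
        pool.append(x)
        pool.append([lo + hi - v for v in x])   # the opposite point
    pool.sort(key=fitness)                      # lower fitness = better here
    return pool[:pop_size]

f = lambda x: sum(v * v for v in x)             # toy minimization fitness
pop = obl_init(10, 4, -2.0, 2.0, f)
```

The intuition is that if a random guess is far from the optimum, its mirror image often is not, so the starting population covers the search space better at no extra sampling cost.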
The cloud-fog computing paradigm has emerged as a novel hybrid computing model that integrates computational resources at both fog nodes and cloud servers to address the challenges posed by dynamic and heterogeneous computing networks. Finding an optimal computational resource for task offloading, and then executing tasks efficiently, is a critical issue in achieving a trade-off between energy consumption and transmission delay. In this network, processing a task at fog nodes reduces transmission delay but increases energy consumption, while routing tasks to the cloud server saves energy at the cost of higher communication delay. Moreover, the order in which offloaded tasks are executed affects the system's efficiency. For instance, executing lower-priority tasks before higher-priority jobs can disturb the reliability and stability of the system. Therefore, an efficient strategy of optimal computation offloading and task scheduling is required for operational efficacy. In this paper, we introduce a multi-objective and enhanced version of the Cheetah Optimizer (CO), namely MoECO, to jointly optimize computation offloading and task scheduling in cloud-fog networks and minimize two competing objectives, i.e., energy consumption and communication delay. MoECO first assigns tasks to the optimal computational nodes, and the allocated tasks are then scheduled for processing based on task priority. The mathematical modelling of CO needs improvement in computation time and convergence speed. Therefore, MoECO is proposed to increase the search capability of agents by controlling the search strategy based on a leader's location. The adaptive step length operator is adjusted to diversify the solution and thus improve the exploration phase, i.e., the global search strategy. Consequently, this prevents the algorithm from getting trapped in local optimal solutions. Moreover, the interaction factor during the exploitation phase is also adjusted based on the location of the prey instead of the adjacent cheetah, which increases the exploitation capability of agents, i.e., local search capability. Furthermore, MoECO employs a multi-objective Pareto-optimal front to simultaneously minimize the designated objectives. Comprehensive simulations in MATLAB demonstrate that the proposed algorithm obtains multiple solutions via a Pareto-optimal front and achieves an efficient trade-off between optimization objectives compared to baseline methods.
With the increasing number of geosynchronous orbit satellites approaching the end of their lifetime, spacecraft refueling is crucial in enhancing the economic benefits of on-orbit services. Existing studies tend to be based on predetermined refueling durations; however, precise mission scheduling solutions are difficult to apply in practice because of uncertain refueling durations caused by orbital transfer deviations and stochastic actuator faults during actual on-orbit service. Therefore, this paper proposes a robust mission scheduling strategy for geosynchronous orbit spacecraft on-orbit refueling missions with uncertain refueling duration. Firstly, a robust mission scheduling model is constructed by introducing a budget uncertainty set to describe the uncertain refueling duration. Secondly, a hybrid Harris hawks optimization algorithm is designed to explore the optimal mission allocation and refueling sequences; it combines cubic chaotic mapping to initialize the population and introduces the crossover operator from the genetic algorithm to enhance global convergence. Finally, typical simulation examples are constructed from real mission scenarios and analyzed in three aspects: performance comparisons with various algorithms; robustness analyses via comparisons of different on-orbit refueling durations; and investigations into the impact of different initial population strategies on algorithm performance. Comparison with exact mission scheduling demonstrates the robustness and effectiveness of the proposed mission scheduling framework.
With the large-scale integration of new energy sources, various resources such as energy storage, electric vehicles (EVs), and photovoltaics (PV) have come to participate in the scheduling of active distribution networks (ADNs), posing new challenges to the operation and scheduling of distribution networks. Aiming at the uncertainty of PV and EVs, an optimal scheduling model for ADNs is constructed based on multi-scenario fuzzy-set-based forecasting of charging station resources. To address the scheduling uncertainties caused by PV and load forecasting errors, a day-ahead optimal scheduling model based on conditional value at risk (CVaR) for cost assessment is established, with the optimization objectives of minimizing the operation cost of distribution networks and the risk cost caused by forecasting errors. An improved subtractive optimizer algorithm is proposed to solve the model and formulate day-ahead optimization schemes. Secondly, a forecasting model for dispatchable resources in charging stations is constructed based on event-based fuzzy set theory. On this basis, an intraday scheduling model is built to comprehensively utilize the dispatchable resources of charging stations in coordination with the output of distributed power sources, achieving optimal scheduling with the goal of minimizing operation costs. Finally, an experimental scenario based on the IEEE 33-node system is designed for simulation verification. The comparison of optimal scheduling results shows that the proposed method can fully exploit the potential scheduling resources of charging stations, improving the operational stability of ADNs and the accommodation capacity of new energy.
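CVaR at level α is the expected cost in the worst (1 − α) tail of the scenario distribution, beyond the α-quantile (VaR). A minimal empirical sketch over equally likely scenario costs (the numbers are invented, not from the paper's case study):

```python
def cvar(costs, alpha=0.95):
    """Empirical conditional value-at-risk: the mean of the worst (1 - alpha)
    fraction of scenario costs. VaR is the alpha-quantile; CVaR averages beyond it."""
    s = sorted(costs)
    k = int(len(s) * alpha)          # index of the alpha-quantile (simple empirical rule)
    tail = s[k:] or [s[-1]]          # guard the alpha = 1 edge case
    return sum(tail) / len(tail)

# 20 equally likely scenarios: 18 nominal days plus 2 bad forecast-error days.
costs = [100] * 18 + [1050, 1200]
```

Unlike the plain expectation (about 207 here), CVaR at α = 0.9 averages only the two worst scenarios, which is why it suits risk-cost terms in the day-ahead objective.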
A solution is imperatively needed to provide efficient contention resolution schemes for managing simultaneous access requests to the communication resources of a Network on Chip (NoC). Based on the ideas of conflict-free transmission, priority-based service, and dynamic self-adaptation to load, this paper presents a novel scheduling algorithm for Medium Access Control (MAC) in NoC, drawing on the communication structure features of the 2D mesh. The algorithm gives priority to guaranteeing the Quality of Service (QoS) of the local input port, while dynamically adjusting the performance of the other ports as the input load changes. The theoretical model of this algorithm is established with a Markov chain and probability generating functions. Mathematical analysis is performed on the mean queue length and the mean inquiry cycle time of the system. Simulation experiments are conducted to test the accuracy of the model, and the findings from the theoretical analysis correspond well with those from the simulations. Furthermore, the analysis of system performance demonstrates that the algorithm effectively strengthens the fairness and stability of data transmissions in NoC.
In this paper, a bilevel optimization model of an integrated energy operator (IEO) and load aggregator (LA) is constructed to address the coordinated optimization challenge of a multi-stakeholder island integrated energy system (IIES). The upper level represents the integrated energy operator, and the lower level is the electricity-heat-gas load aggregator. Owing to the benefit conflict between the upper and lower levels of the IIES, a dynamic pricing mechanism for coordinating the interests of the two levels is proposed, incorporating factors such as the carbon emissions of the IIES and the load interruption power at the lower level. In this mechanism, the price of energy sold to the lower-level LA can be dynamically adjusted according to information on carbon emissions and load interruption power, achieving mutual benefits and a win-win situation among the upper and lower stakeholders. Finally, CPLEX is used to iteratively solve the bilevel optimization model, and the optimal solution is selected according to the joint optimal discrimination mechanism. The simulation results indicate that the coordinated source-load operation can reduce the operation costs of both levels. Using the proposed pricing mechanism, the carbon emissions and load interruption power of the IEO-LA are reduced by 9.78% and 70.19%, respectively, and the capture power of the carbon capture equipment is improved by 36.24%. The validity of the proposed model and method is verified.
Safe and efficient sortie scheduling on the confined flight deck is crucial for maintaining the high combat effectiveness of an aircraft carrier. The primary difficulty lies in spatiotemporal coordination, i.e., allocation of limited supporting resources and collision avoidance between heterogeneous dispatch entities. In this paper, the problem is investigated from the perspective of the hybrid flow-shop scheduling problem (HFSP) by synthesizing precedence, space, and resource constraints. Specifically, eight processing procedures are abstracted, where tractors, preparing spots, catapults, and launching are virtualized as machines. By analyzing the constraints in sortie scheduling, a mixed-integer programming model is constructed. In particular, the constraint on preparing spot occupancy is improved to further enhance sortie efficiency. A basic trajectory library for each dispatch entity is generated, and a delay strategy is integrated to address the collision-avoidance issue. To efficiently solve the formulated HFSP, which is essentially a combinatorial problem with tightly coupled constraints, a chaos-initialized genetic algorithm is developed. The solution framework is validated in a simulation environment modeled on the Fort-class carrier, exhibiting higher sortie efficiency than existing strategies. An animation of the simulation results is available at www.bilibili.com/video/BV14t421A7Tt/. The study presents a promising supporting technique for autonomous flight deck operation in the foreseeable future, and can be easily extended to other supporting scenarios, e.g., ammunition delivery and aircraft maintenance.
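Chaos initialization typically replaces uniform random sampling with iterates of a chaotic map scaled into the search range; the logistic map x ← μx(1 − x) with μ = 4 is the common choice, though the paper does not specify which map it uses. A sketch:

```python
def logistic_chaos_init(pop_size, dim, lo, hi, x0=0.37, mu=4.0):
    """Chaos-based initialization: iterate the logistic map x <- mu*x*(1-x), whose
    orbit (for mu = 4) spreads over (0, 1), and scale each iterate into [lo, hi].
    x0 and mu are illustrative defaults, not values from the paper."""
    x = x0
    pop = []
    for _ in range(pop_size):
        ind = []
        for _ in range(dim):
            x = mu * x * (1.0 - x)          # one chaotic iterate
            ind.append(lo + (hi - lo) * x)  # scale into the decision range
        pop.append(ind)
    return pop

pop = logistic_chaos_init(pop_size=5, dim=3, lo=0.0, hi=10.0)
```

The attraction over plain random seeding is the orbit's ergodic spread: successive iterates are deterministic yet non-repeating, giving a well-dispersed initial population for the genetic search.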
A centralized-distributed scheduling strategy for distribution networks based on multi-temporal and hierarchical cooperative games is proposed to address the difficulty of operation control and energy optimization interaction in distribution network transformer areas, as well as the significant photovoltaic curtailment caused by the inability to consume photovoltaic power locally. A scheduling architecture combining multi-temporal scales with a three-level decision-making hierarchy is established: the overall approach adopts a centralized-distributed method, analyzing the operational characteristics and interaction relationships of the distribution network center layer, cluster layer, and transformer area layer, providing a “spatial foundation” for subsequent optimization. The optimization process is divided into two stages on the temporal scale: in the first stage, based on forecasted electricity load and demand response characteristics, time-of-use electricity prices are utilized to formulate day-ahead optimization strategies; in the second stage, based on the charging and discharging characteristics of energy storage vehicles and multi-agent cooperative game relationships, rolling electricity prices and optimal interactive energy solutions are determined among clusters and transformer areas using Nash bargaining theory. Finally, a distributed optimization algorithm using the bisection method is employed to solve the constructed model. Simulation results demonstrate that the proposed optimization strategy can facilitate photovoltaic consumption in the distribution network and enhance grid economy.
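A bisection-based solve of this kind can be illustrated on a single coordination price: if excess demand is decreasing in price, halving the price bracket converges to the clearing point. The linear demand and supply curves below are invented for illustration and are not the paper's model.

```python
def bisect_price(excess, lo, hi, tol=1e-6):
    """Bisection on a price: 'excess(p)' is demand minus supply, assumed strictly
    decreasing in p with a sign change on [lo, hi]. Halve the bracket until it
    is narrower than tol, then return its midpoint."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if excess(mid) > 0:
            lo = mid      # demand still exceeds supply: raise the price floor
        else:
            hi = mid      # supply covers demand: lower the price ceiling
    return 0.5 * (lo + hi)

# Illustrative linear market: demand 100 - 2p, supply 20 + 3p, clearing at p = 16.
p = bisect_price(lambda q: (100 - 2 * q) - (20 + 3 * q), lo=0.0, hi=50.0)
```

Each iteration needs only one aggregate imbalance value from the subproblems, which is what makes the scheme attractive for distributed coordination between layers.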
Funding: supported by Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2025R909), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.
Abstract: The exponential growth of Internet of Things (IoT) devices has created unprecedented challenges in data processing and resource management for time-critical applications. Traditional cloud computing paradigms cannot meet the stringent latency requirements of modern IoT systems, while pure edge computing faces resource constraints that limit processing capabilities. This paper addresses these challenges by proposing a novel Deep Reinforcement Learning (DRL)-enhanced priority-based scheduling framework for hybrid edge-cloud computing environments. Our approach integrates adaptive priority assignment with a two-level concurrency control protocol that ensures both optimal performance and data consistency. The framework introduces three key innovations: (1) a DRL-based dynamic priority assignment mechanism that learns from system behavior, (2) a hybrid concurrency control protocol combining local edge validation with global cloud coordination, and (3) an integrated mathematical model that formalizes sensor-driven transactions across edge-cloud architectures. Extensive simulations across diverse workload scenarios demonstrate significant quantitative improvements: 40% latency reduction, 25% throughput increase, 85% resource utilization (compared to 60% for heuristic methods), 40% reduction in energy consumption (300 vs. 500 J per task), and 50% improvement in scalability factor (1.8 vs. 1.2 for EDF) compared to state-of-the-art heuristic and meta-heuristic approaches. These results establish the framework as a robust solution for large-scale IoT and autonomous applications requiring real-time processing with consistency guarantees.
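The abstract does not give the DRL mechanism in code. As a hedged illustration of the learned-priority idea only, the sketch below uses tabular Q-learning (a stand-in for the paper's deep network); the states (discretized queue-load buckets), actions (priority levels), and reward are hypothetical.

```python
import random

random.seed(0)

N_STATES, N_ACTIONS = 3, 3          # queue-load buckets x priority levels (illustrative)
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1   # learning rate, discount factor, exploration rate

Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]

def reward(state, action):
    # Hypothetical reward: matching the priority level to the load bucket
    # minimizes a latency penalty; mismatches are penalized by distance.
    return 1.0 if action == state else -abs(action - state)

def step(state):
    # Epsilon-greedy action selection over priority levels.
    if random.random() < EPS:
        action = random.randrange(N_ACTIONS)
    else:
        action = max(range(N_ACTIONS), key=lambda a: Q[state][a])
    next_state = random.randrange(N_STATES)   # stand-in for system dynamics
    # Standard Q-learning update toward r + gamma * max_a' Q(s', a').
    Q[state][action] += ALPHA * (reward(state, action)
                                 + GAMMA * max(Q[next_state]) - Q[state][action])
    return next_state

s = 0
for _ in range(2000):
    s = step(s)

learned = [max(range(N_ACTIONS), key=lambda a: Q[st][a]) for st in range(N_STATES)]
print(learned)
```

After training, the greedy policy assigns the priority level that tracks the load bucket, which is the behavior the DRL mechanism learns at larger scale.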
Funding: supported by the National Natural Science Foundation of China under Grant 62472264 and the Natural Science Distinguished Youth Foundation of Shandong Province under Grant ZR2025QA13.
Abstract: Workflow scheduling is critical for efficient cloud resource management. This paper proposes Tunicate Swarm-Highest Response Ratio Next, a novel scheduler that synergistically combines the Tunicate Swarm Algorithm with the Highest Response Ratio Next policy. The Tunicate Swarm Algorithm generates a cost-minimizing task-to-VM mapping scheme, while the Highest Response Ratio Next policy dynamically dispatches the highest-priority task in the ready queue. Experimental results demonstrate that Tunicate Swarm-Highest Response Ratio Next reduces costs by up to 94.8% compared to meta-heuristic baselines. It also achieves competitive cost efficiency vs. a learning-based method while offering superior operational simplicity and efficiency, establishing it as a highly practical solution for dynamic cloud environments.
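Highest Response Ratio Next is a classic dispatching rule: the response ratio of a waiting task is (waiting time + service time) / service time, so the ratio of every task grows as it waits, preventing starvation of long jobs. A minimal sketch (task names and times are illustrative, not from the paper):

```python
def response_ratio(waiting, service):
    # HRRN ratio: (W + S) / S. Short jobs start high; long jobs catch up as they wait.
    return (waiting + service) / service

def hrrn_pick(now, ready):
    # ready: list of (name, arrival_time, service_time); dispatch the highest ratio.
    return max(ready, key=lambda task: response_ratio(now - task[1], task[2]))

ready = [("short", 8, 2), ("long", 0, 10), ("mid", 4, 5)]
chosen = hrrn_pick(10, ready)
print(chosen[0])
```

At time 10 the ratios are 2.0, 2.0, and 2.2, so "mid" is dispatched even though "short" has the smallest service time.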
Abstract: We observe that the response speed of a linear time-invariant system to a step reference input depends not only on the system parameters but also on the magnitude of the step input. Based on this observation, we demonstrate a method to schedule the magnitude of the reference input to achieve a faster response.
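The effect is easy to see numerically. Below is a hedged sketch (not the authors' method) that Euler-simulates a hypothetical first-order plant dy/dt = -y + u: stepping to a temporarily larger reference reaches the 95% band of the true setpoint much sooner than stepping directly to it, after which the reference would be switched back.

```python
def time_to_band(u_ref, target=1.0, band=0.95, dt=0.001, t_max=20.0):
    # Euler-simulate dy/dt = -y + u_ref from rest and return the first time
    # y reaches band * target (the reference is switched back to `target`
    # at that point in a magnitude-scheduling scheme).
    y, t = 0.0, 0.0
    while y < band * target:
        if t >= t_max:
            return t_max
        y += dt * (-y + u_ref)
        t += dt
    return t

t_direct = time_to_band(1.0)   # step straight to the setpoint: t ~ ln(20) ~ 3.0
t_boost  = time_to_band(2.0)   # temporarily doubled reference magnitude
print(t_direct, t_boost)
```

For this plant the direct step needs about three time constants, while the boosted reference reaches the band in roughly 0.64 time constants.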
Funding: supported by the National Natural Science Foundation of China (61374186).
Abstract: In response to the challenges faced by unmanned swarms in mountain obstacle-breaching missions within complex terrains, such as poor task-resource coupling, lengthy solution generation times, and poor inter-platform collaboration, an unmanned swarm scheduling strategy tailored for mountain obstacle-breaching missions is proposed. Initially, by formalizing the descriptions of obstacle-breaching operations, the swarm, and obstacle targets, an optimization model is constructed with the objectives of expected global benefit, timeliness, and task completion degree. A meta-task decomposition and reassembly strategy is then introduced to more precisely match the capabilities of unmanned platforms with task requirements. Additionally, a meta-task decomposition optimization model and a meta-task allocation operator are incorporated to achieve efficient allocation of swarm resources and collaborative scheduling. Simulation results demonstrate that the model can accurately generate reasonable and feasible obstacle-breaching execution plans for unmanned swarms based on specific task requirements and environmental conditions. Moreover, compared to conventional strategies, the proposed strategy enhances task completion degree and expected returns while reducing the execution time of the plans.
Abstract: Task scheduling in cloud computing is a multi-objective optimization problem, often involving conflicting objectives such as minimizing execution time, reducing operational cost, and maximizing resource utilization. However, traditional approaches frequently rely on single-objective optimization methods, which are insufficient for capturing the complexity of such problems. To address this limitation, we introduce MDMOSA (Multi-objective Dwarf Mongoose Optimization with Simulated Annealing), a hybrid algorithm that integrates multi-objective optimization for efficient task scheduling in Infrastructure-as-a-Service (IaaS) cloud environments. MDMOSA harmonizes the exploration capabilities of the biologically inspired Dwarf Mongoose Optimization (DMO) with the exploitation strengths of Simulated Annealing (SA), achieving a balanced search process. The algorithm aims to optimize task allocation by reducing makespan and financial cost while improving system resource utilization. We evaluate MDMOSA through extensive simulations using the real-world Google Cloud Jobs (GoCJ) dataset within the CloudSim environment. Comparative analysis against benchmark algorithms such as SMOACO, MOTSGWO, and MFPAGWO reveals that MDMOSA consistently achieves superior performance in terms of scheduling efficiency, cost-effectiveness, and scalability. These results confirm the potential of MDMOSA as a robust and adaptable solution for resource scheduling in dynamic and heterogeneous cloud computing infrastructures.
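The SA component that supplies MDMOSA's exploitation strength rests on the Metropolis acceptance criterion. The sketch below shows only that standard criterion (not the full hybrid): improvements are always accepted, and worse moves are accepted with probability exp(-delta/T), which shrinks as the temperature cools.

```python
import math
import random

def sa_accept(delta, temp, rng=random.random):
    # Metropolis criterion: accept any improvement (delta <= 0);
    # accept a worsening move with probability exp(-delta / temp).
    return delta <= 0 or rng() < math.exp(-delta / temp)

random.seed(1)
improving = sa_accept(-5.0, temp=10.0)                        # always accepted
hot  = sum(sa_accept(1.0, temp=100.0) for _ in range(1000))   # high temp: ~99% accepted
cold = sum(sa_accept(1.0, temp=0.01)  for _ in range(1000))   # low temp: essentially never
print(improving, hot, cold)
```

This temperature-controlled tolerance for worse solutions is what lets the hybrid escape local optima early while converging late in the run.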
Funding: supported by the National Program on Key Basic Research Project (2020YFA0713600) and the National Natural Science Foundation of China (62272214).
Abstract: In the era of the Internet of Things, distributed computing alleviates the problem of insufficient terminal computing power by integrating the idle resources of heterogeneous devices. However, the imbalance between task execution delay and node energy consumption, together with the scheduling and adaptation challenges brought about by device heterogeneity, urgently needs to be addressed. To tackle this problem, this paper constructs a multi-objective real-time task scheduling model that considers task real-time performance, execution delay, system energy consumption, and node interests. The model aims to minimize the delay upper bound and total energy consumption while maximizing system satisfaction. A real-time task scheduling algorithm based on a bilateral matching game is proposed. By designing a bidirectional preference mechanism between tasks and computing nodes, combined with a multi-round stable matching strategy, accurate matching between tasks and nodes is achieved. Simulation results show that, compared with the baseline scheme, the proposed algorithm significantly reduces the total execution cost, effectively balances task execution delay and the energy consumption of compute nodes, and takes into account the interests of each network compute node.
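Multi-round stable matching with bidirectional preferences is the territory of deferred acceptance (Gale-Shapley). The paper's exact mechanism is not given here; as an illustrative sketch, tasks propose to nodes in preference order and nodes keep only their best proposer, yielding a stable task-node matching (the two-task, two-node instance is hypothetical):

```python
def deferred_acceptance(task_pref, node_rank):
    # task_pref: task -> ordered list of preferred nodes.
    # node_rank: node -> {task: rank}, lower rank = more preferred.
    free = list(task_pref)              # unmatched tasks still proposing
    next_i = {t: 0 for t in task_pref}  # index of the next node each task tries
    engaged = {}                        # node -> currently held task
    while free:
        t = free.pop(0)
        n = task_pref[t][next_i[t]]
        next_i[t] += 1
        cur = engaged.get(n)
        if cur is None:
            engaged[n] = t
        elif node_rank[n][t] < node_rank[n][cur]:
            engaged[n] = t              # node upgrades to a preferred task
            free.append(cur)            # displaced task re-enters the pool
        else:
            free.append(t)              # proposal rejected; task tries its next node
    return {t: n for n, t in engaged.items()}

task_pref = {"t1": ["n1", "n2"], "t2": ["n1", "n2"]}
node_rank = {"n1": {"t1": 1, "t2": 0}, "n2": {"t1": 0, "t2": 1}}
match = deferred_acceptance(task_pref, node_rank)
print(match)
```

Both tasks prefer n1, but n1 prefers t2, so t1 is displaced to n2, which prefers it anyway: the resulting matching is stable for both sides.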
Abstract: The proliferation of carrier aircraft and the integration of unmanned aerial vehicles (UAVs) on aircraft carriers present new challenges to the automation of launch and recovery operations. This paper investigates a collaborative scheduling problem inherent to the operational processes of carrier aircraft, where launch and recovery tasks are conducted concurrently on the flight deck. The objective is to minimize the cumulative weighted waiting time in the air for recovering aircraft and the cumulative weighted delay time for launching aircraft. To tackle this challenge, a multiple-population self-adaptive differential evolution (MPSADE) algorithm is proposed. This method features a self-adaptive parameter updating mechanism contingent upon population diversity, an asynchronous updating scheme, an individual migration operator, and a global crossover mechanism. Additionally, comprehensive experiments are conducted to validate the effectiveness of the proposed model and algorithm. Ultimately, a comparative analysis with existing operation modes confirms the enhanced efficiency of the collaborative operation mode.
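MPSADE's multi-population and self-adaptive machinery builds on the canonical differential evolution loop. The sketch below shows only that base, DE/rand/1/bin with greedy selection on a toy sphere objective; MPSADE's diversity-driven parameter adaptation and migration operators are not reproduced.

```python
import random

random.seed(2)

def de_trial(pop, i, F=0.5, CR=0.9):
    # DE/rand/1/bin: mutant = x_r1 + F * (x_r2 - x_r3), then binomial crossover
    # with the target vector; jrand guarantees at least one mutant gene survives.
    r1, r2, r3 = random.sample([j for j in range(len(pop)) if j != i], 3)
    target, dim = pop[i], len(pop[i])
    mutant = [pop[r1][d] + F * (pop[r2][d] - pop[r3][d]) for d in range(dim)]
    jrand = random.randrange(dim)
    return [mutant[d] if (random.random() < CR or d == jrand) else target[d]
            for d in range(dim)]

def sphere(x):
    return sum(v * v for v in x)

pop = [[random.uniform(-5, 5) for _ in range(3)] for _ in range(10)]
for _ in range(200):
    for i in range(len(pop)):                 # greedy selection: keep trial if no worse
        trial = de_trial(pop, i)
        if sphere(trial) <= sphere(pop[i]):
            pop[i] = trial

best = min(sphere(x) for x in pop)
print(best)
```

Even this plain variant drives the 3-D sphere objective to near zero in 200 generations; the paper's contribution is in adapting F/CR per population and exchanging individuals between populations.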
Funding: funded by the State Grid Corporation Science and Technology Project "Research and Application of Key Technologies for Integrated Sensing and Computing for Intelligent Operation of Power Grid" (Grant No. 5700-202318596A-3-2-ZN).
Abstract: With the rapid development of power Internet of Things (IoT) scenarios such as smart factories and smart homes, numerous intelligent terminal devices and real-time interactive applications impose higher demands on computing latency and resource supply efficiency. Multi-access edge computing technology deploys cloud computing capabilities at the network edge, constructs distributed computing nodes and multi-access systems, and offers infrastructure support for services with low latency and high reliability. Existing research relies on the strong assumption that the environmental state is fully observable and fails to thoroughly consider the continuous time-varying features of edge server load fluctuations, leading to insufficient adaptability of the model in heterogeneous dynamic environments. Thus, this paper establishes a framework for end-edge collaborative task offloading based on a partially observable Markov decision process (POMDP) and proposes a method for end-edge collaborative task offloading in heterogeneous scenarios. It achieves time-series modeling of the historical load characteristics of edge servers and endows the agent with the ability to be aware of the load in dynamic environmental states. Moreover, by dynamically assessing the exploration value of historical trajectories in the central trajectory pool and adjusting the sample weight distribution, directional exploration and strategy optimization of high-value trajectories are realized. Experimental results indicate that the proposed method exhibits distinct advantages over existing methods in terms of average delay and task failure rate, and also verify the method's robustness in a dynamic environment.
Funding: supported by the National Natural Science Foundation of China (Grant No. 52475543), the Natural Science Foundation of Henan (Grant No. 252300421101), the Henan Province University Science and Technology Innovation Talent Support Plan (Grant No. 24HASTIT048), and the Science and Technology Innovation Team Project of Zhengzhou University of Light Industry (Grant No. 23XNKJTD0101).
Abstract: Aircraft assembly is characterized by stringent precedence constraints, limited resource availability, spatial restrictions, and a high degree of manual intervention. These factors lead to considerable variability in operator workloads and significantly increase the complexity of scheduling. To address this challenge, this study investigates the Aircraft Pulsating Assembly Line Scheduling Problem (APALSP) under skilled operator allocation, with the objective of minimizing assembly completion time. A mathematical model considering skilled operator allocation is developed, and a Q-learning-improved Particle Swarm Optimization algorithm (QLPSO) is proposed. In the algorithm design, a reverse scheduling strategy is adopted to effectively manage large-scale precedence constraints. Moreover, a reverse sequence encoding method is introduced to generate operation sequences, while a time decoding mechanism is employed to determine completion times. The problem is further reformulated as a Markov Decision Process (MDP) with explicitly defined state and action spaces. Within QLPSO, the Q-learning mechanism adaptively adjusts inertia weights and learning factors, thereby achieving a balance between exploration capability and convergence performance. To validate the effectiveness of the proposed approach, extensive computational experiments are conducted on benchmark instances of different scales, including small, medium, large, and ultra-large cases. The results demonstrate that QLPSO consistently delivers stable and high-quality solutions across all scenarios. In ultra-large-scale instances, it improves the best solution by 25.2% compared with the Genetic Algorithm (GA) and enhances the average solution by 16.9% over the Q-learning algorithm, showing clear advantages over the comparative methods. These findings not only confirm the effectiveness of the proposed algorithm but also provide valuable theoretical references and practical guidance for the intelligent scheduling optimization of aircraft pulsating assembly lines.
Funding: supported by the Natural Science Foundation of Henan Province (Grant Nos. 232300421218 and 252300421483).
Abstract: The airplane refueling problem can be stated as follows. We are given n airplanes which can refuel one another during flight. Each airplane has a reservoir volume w_j (liters) and a consumption rate p_j (liters per kilometer). As soon as an airplane runs out of fuel, it drops out of the flight. The problem asks for a refueling scheme such that the last plane in the air reaches a maximal distance. An equivalent version is the n-vehicle exploration problem. The computational complexity of this non-linear combinatorial optimization problem is open so far. This paper employs the neighborhood exchange method of single-machine scheduling to study the precedence relations of jobs, so as to improve the necessary and sufficient conditions of optimal solutions, and establishes an efficient heuristic algorithm which is a generalization of several existing special algorithms.
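To make the neighborhood-exchange idea concrete, here is a hedged sketch. It assumes one common formulation of the n-vehicle version (the fleet flies together; when the next vehicle in the chosen order empties, its fuel has been consumed at the summed rate of the vehicles still flying); the paper's formulation may differ. Adjacent pairwise interchange, the single-machine move the abstract refers to, then improves a given order.

```python
def total_distance(order, w, p):
    # Assumed distance model: while vehicle order[i]'s fuel w[i] is burned,
    # all not-yet-dropped vehicles consume at the summed rate.
    d, rate = 0.0, sum(p[i] for i in order)
    for i in order:
        d += w[i] / rate
        rate -= p[i]          # vehicle i drops out of the flight
    return d

def adjacent_exchange(order, w, p):
    # Neighborhood-exchange local search: swap adjacent pairs while distance improves.
    order = list(order)
    improved = True
    while improved:
        improved = False
        for k in range(len(order) - 1):
            cand = order[:k] + [order[k + 1], order[k]] + order[k + 2:]
            if total_distance(cand, w, p) > total_distance(order, w, p):
                order, improved = cand, True
    return order

w, p = [1.0, 1.0], [1.0, 2.0]      # reservoirs (L) and consumption rates (L/km)
best = adjacent_exchange([0, 1], w, p)
print(best, total_distance(best, w, p))
```

With equal reservoirs, dropping the thirstier vehicle first (order [1, 0]) stretches the distance from 5/6 to 4/3 in this model, which the adjacent swap discovers immediately.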
Funding: funded by Taif University, Taif, Saudi Arabia, project number (TU-DSPP-2024-17).
Abstract: Advanced technologies like Cyber-Physical Systems (CPS) and the Internet of Things (IoT) have supported modernizing and automating the transportation sector through the introduction of Intelligent Transportation Systems (ITS). Integrating CPS-ITS and IoT provides real-time Vehicle-to-Infrastructure (V2I) communication, supporting better traffic management, safety, and efficiency. These technological innovations generate complex problems that need to be addressed, particularly data routing and Task Scheduling (TS) in ITS. Attempts to solve these problems were primarily based on traditional and experimental methods, and the solutions were not very successful due to the dynamic nature of ITS. This is where Machine Learning (ML) and Swarm Intelligence (SI) have significantly impacted dealing with these challenges; along this line, this research paper presents a novel method for TS and data routing in CPS-ITS. This paper proposes a cutting-edge ML algorithm, Gated Linear Unit-approximated Reinforcement Learning (GLRL), for data transmission in CPS-ITS. Greedy Iterative Particle Swarm Optimization (GI-PSO) is recommended to extend Particle Swarm Optimization (PSO) for TS. The primary objective of this study is to enhance the security and effectiveness of ITS systems that utilize CPS-ITS. This study trained and validated the models using a network simulation dataset of 50 nodes from numerous ITS environments. The experiments demonstrate that the proposed GLRL reduces End-to-End Delay (EED) by 12%, enhances data size use from 83.6% to 88.6%, and achieves higher bandwidth allocation, particularly in high-demand scenarios such as multimedia data streams, where adherence improved to 98.15%. Furthermore, the GLRL reduced Network Congestion (NC) by 5.5%, demonstrating its efficiency in managing complex traffic conditions across several environments. The model passed simulation tests in three different environments: urban (UE), suburban (SE), and rural (RE). It met the high bandwidth requirements, made task scheduling more efficient, and increased network throughput (NT). This proved that it was robust and flexible enough for scalable ITS applications. These innovations provide robust, scalable solutions for real-time traffic management, ultimately improving safety, reducing NC, and increasing overall NT. This study can affect ITS by developing it to be more responsive, safe, and effective, and by creating a suitable deployment method for UE, SE, and RE.
Funding: funding from the European Commission through the Ruralities project (grant agreement no. 101060876).
Abstract: In this paper, we propose a new privacy-aware transmission scheduling algorithm for 6G ad hoc networks. This system enables end nodes to select the optimum time and scheme to transmit private data safely. In 6G dynamic heterogeneous infrastructures, unstable links and non-uniform hardware capabilities create critical issues regarding security and privacy. Traditional protocols are often too computationally heavy to allow 6G services to achieve their expected Quality of Service (QoS). As the transport network is built of ad hoc nodes, there is no guarantee about their trustworthiness or behavior, and transversal functionalities are delegated to the extreme nodes. However, while security can be guaranteed in extreme-to-extreme solutions, privacy cannot, as all intermediate nodes still have to handle the data packets they are transporting. Besides, traditional schemes for private anonymous ad hoc communications are vulnerable to modern intelligent attacks based on learning models. The proposed scheme fills this gap. Findings show that, when the proposed technology is used, the probability of a successful intelligent attack is reduced by up to 65% compared to ad hoc networks with no privacy protection strategy, while the congestion probability can remain below 0.001%, as required by 6G services.
Funding: funded by the National Key Research and Development Program Projects of China under Grant No. 2020YFB1713500.
Abstract: To address the issue that hybrid flow-shop production struggles to handle order disturbance events, a dynamic scheduling model was constructed. The model takes minimizing the maximum makespan, delivery time deviation, and scheme deviation degree as the optimization objectives. An adaptive dynamic scheduling strategy based on the degree of order disturbance is proposed. An improved multi-objective Grey Wolf optimization (IMOGWO) algorithm is designed by combining the "job-machine" two-layer encoding strategy, the timing-driven two-stage decoding strategy, the opposition-based learning population initialization strategy, the POX crossover strategy, the dual-operation dynamic mutation strategy, and the variable neighborhood search strategy for problem solving. A variety of test cases of different scales were designed, and ablation experiments were conducted to verify the effectiveness of the improved strategies. The results show that each improved strategy can effectively enhance the performance of IMOGWO. Additionally, performance analysis was conducted by comparing the proposed algorithm with three mature and classical algorithms. The results demonstrate that the proposed algorithm exhibits superior performance in solving the hybrid flow-shop scheduling problem (HFSP). Case validations were conducted for different types of order disturbance scenarios. The results demonstrate that the proposed adaptive dynamic scheduling strategy and the IMOGWO algorithm can effectively address order disturbance events, enabling rapid response to order disturbance while ensuring the stability of the production system.
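The POX (precedence operation crossover) strategy named above has a standard form for job-sequence chromosomes: choose a job subset, keep those genes at their positions from one parent, and fill the remaining positions left-to-right with the other parent's genes. A hedged sketch of that standard operator (the paper's exact variant may differ; the two-job-three-operation chromosomes are illustrative):

```python
def pox(parent1, parent2, job_set):
    # POX crossover: genes whose job id is in job_set keep parent1's positions;
    # remaining slots are filled left-to-right with parent2's other genes,
    # preserving each job's occurrence count (i.e., its operation count).
    fill = iter(g for g in parent2 if g not in job_set)
    return [g if g in job_set else next(fill) for g in parent1]

p1 = [1, 2, 3, 1, 2, 3]     # operation sequence: job id appears once per operation
p2 = [3, 1, 2, 3, 2, 1]
child = pox(p1, p2, {1})
print(child)
```

The child inherits job 1's positions from p1 and the relative order of jobs 2 and 3 from p2, so precedence-feasibility of the encoding is preserved without repair.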
Funding: appreciation to the Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2025R384), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.
Abstract: The cloud-fog computing paradigm has emerged as a novel hybrid computing model that integrates computational resources at both fog nodes and cloud servers to address the challenges posed by dynamic and heterogeneous computing networks. Finding an optimal computational resource for task offloading and then executing tasks efficiently is a critical issue in achieving a trade-off between energy consumption and transmission delay. In this network, processing a task at fog nodes reduces transmission delay but increases energy consumption, while routing tasks to the cloud server saves energy at the cost of higher communication delay. Moreover, the order in which offloaded tasks are executed affects the system's efficiency. For instance, executing lower-priority tasks before higher-priority jobs can disturb the reliability and stability of the system. Therefore, an efficient strategy for optimal computation offloading and task scheduling is required for operational efficacy. In this paper, we introduce a multi-objective and enhanced version of the Cheetah Optimizer (CO), namely MoECO, to jointly optimize computation offloading and task scheduling in cloud-fog networks, minimizing two competing objectives, i.e., energy consumption and communication delay. MoECO first assigns tasks to the optimal computational nodes, and the allocated tasks are then scheduled for processing based on task priority. The mathematical modelling of CO needs improvement in computation time and convergence speed. Therefore, MoECO is proposed to increase the search capability of agents by controlling the search strategy based on a leader's location. The adaptive step-length operator is adjusted to diversify the solutions and thus improve the exploration phase, i.e., the global search strategy. Consequently, this prevents the algorithm from getting trapped in a local optimal solution. Moreover, the interaction factor during the exploitation phase is adjusted based on the location of the prey instead of the adjacent cheetah. This increases the exploitation capability of agents, i.e., local search capability. Furthermore, MoECO employs a multi-objective Pareto-optimal front to simultaneously minimize the designated objectives. Comprehensive simulations in MATLAB demonstrate that the proposed algorithm obtains multiple solutions via a Pareto-optimal front and achieves an efficient trade-off between the optimization objectives compared to baseline methods.
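The Pareto-optimal front that MoECO returns is defined by the standard dominance relation: for minimization, a solution dominates another if it is no worse in every objective and strictly better in at least one. A minimal filter over hypothetical (energy, delay) pairs:

```python
def dominates(a, b):
    # Minimization: a dominates b if a is no worse in every objective
    # and strictly better in at least one.
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    # Keep exactly the points not dominated by any other point.
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

# Hypothetical (energy consumption, communication delay) pairs.
points = [(1, 5), (3, 3), (5, 1), (2, 6)]
front = pareto_front(points)
print(front)
```

Here (2, 6) is dominated by (1, 5) and drops out; the surviving three points are the energy-delay trade-off curve the scheduler would present.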
Funding: co-supported by the National Natural Science Foundation of China (Nos. 62473110 and 62403166), the Fundamental Research Funds for the Central Universities, China (No. 2023FRFK02043), the Natural Science Foundation of Heilongjiang Province, China (No. LH2022F023), and the National Key Laboratory of Space Intelligent Control Foundation, China (No. 2023-JCJQ-LB-006-19).
Abstract: With the increasing number of geosynchronous orbit satellites approaching the end of their lifetime, spacecraft refueling is crucial for enhancing the economic benefits of on-orbit services. Existing studies tend to be based on a predetermined refueling duration; however, a precise mission scheduling solution is difficult to apply due to the uncertain refueling duration caused by orbital transfer deviations and stochastic actuator faults during actual on-orbit service. Therefore, this paper proposes a robust mission scheduling strategy for geosynchronous orbit spacecraft on-orbit refueling missions with uncertain refueling duration. Firstly, a robust mission scheduling model is constructed by introducing a budget uncertainty set to describe the uncertain refueling duration. Secondly, a hybrid Harris hawks optimization algorithm is designed to explore the optimal mission allocation and refueling sequences, which combines cubic chaotic mapping to initialize the population and introduces the crossover operator of the genetic algorithm to enhance global convergence. Finally, typical simulation examples are constructed from real-mission scenarios and analyzed in three aspects: performance comparisons with various algorithms; robustness analyses via comparisons of different on-orbit refueling durations; and investigations into the impacts of different initial population strategies on algorithm performance, demonstrating the robustness and effectiveness of the proposed mission scheduling framework by comparison with exact mission scheduling.
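Chaotic-map initialization is a common way to spread an initial population more evenly than uniform sampling. The sketch below assumes one common cubic chaotic map, x <- 4x^3 - 3x, which is chaotic on [-1, 1] (the paper's exact map and seed are not given; x0 = 0.3141 is an arbitrary non-fixed-point seed):

```python
def cubic_chaos_population(n, dim, x0=0.3141, lo=-1.0, hi=1.0):
    # Iterate the cubic chaotic map x <- 4x^3 - 3x (chaotic on [-1, 1]) and
    # rescale each iterate into the decision-variable range [lo, hi].
    x, pop = x0, []
    for _ in range(n):
        row = []
        for _ in range(dim):
            x = 4 * x ** 3 - 3 * x
            row.append(lo + (hi - lo) * (x + 1) / 2)
        pop.append(row)
    return pop

pop = cubic_chaos_population(5, 4)
print(pop[0])
```

Because the map is deterministic, the initial population is reproducible run-to-run while still covering the range with ergodic, low-repetition samples.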
Funding: supported by the Technology Project of State Grid Corporation Headquarters (No. 5100-202322029A-1-1-ZN) and the 2024 Youth Science Foundation Project of China (No. 62303006).
Abstract: With the large-scale integration of new energy sources, various resources such as energy storage, electric vehicles (EVs), and photovoltaics (PV) have participated in the scheduling of active distribution networks (ADNs), posing new challenges to the operation and scheduling of distribution networks. To address the uncertainty of PV and EVs, an optimal scheduling model for ADNs based on multi-scenario, fuzzy-set-based charging station resource forecasting is constructed. To address the scheduling uncertainties caused by PV and load forecasting errors, a day-ahead optimal scheduling model based on conditional value at risk (CVaR) for cost assessment is established, with the optimization objectives of minimizing the operation cost of distribution networks and the risk cost caused by forecasting errors. An improved subtractive optimizer algorithm is proposed to solve the model and formulate day-ahead optimization schemes. Secondly, a forecasting model for dispatchable resources in charging stations is constructed based on event-based fuzzy set theory. On this basis, an intraday scheduling model is built to comprehensively utilize the dispatchable resources of charging stations in coordination with the output of distributed power sources, achieving optimal scheduling with the goal of minimizing operation costs. Finally, an experimental scenario based on the IEEE 33-node system is designed for simulation verification. The comparison of optimal scheduling results shows that the proposed method can fully exploit the potential scheduling resources of charging stations, improving the operational stability of ADNs and the accommodation capacity of new energy.
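CVaR, used above for risk-cost assessment, has a simple empirical form: the mean of the worst (1 - alpha) fraction of the loss distribution. A hedged sketch using one common discrete convention (tail size rounded; the paper's convention may differ):

```python
def cvar(losses, alpha=0.8):
    # Empirical CVaR_alpha: average of the worst (1 - alpha) fraction of losses.
    s = sorted(losses, reverse=True)        # worst losses first
    k = max(1, round(len(s) * (1 - alpha))) # tail size under this convention
    return sum(s[:k]) / k

losses = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]   # e.g., risk costs from forecast-error scenarios
print(cvar(losses, alpha=0.8))
```

For ten scenario costs and alpha = 0.8, CVaR averages the two largest losses, (10 + 9) / 2 = 9.5; unlike a plain expected cost, it penalizes schedules whose worst forecast-error scenarios are expensive.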
Funding: supported by the National Natural Science Foundation of China (No. 61072079).
Abstract: A solution is imperatively expected to provide efficient contention resolution schemes for managing simultaneous access requests to the communication resources on the Network on Chip (NoC). Based on the ideas of conflict-free transmission, priority-based service, and dynamic self-adaptation to loading, this paper presents a novel scheduling algorithm for Medium Access Control (MAC) in NoC, informed by a study of the communication structure features of the 2D mesh. The algorithm gives priority to guaranteeing the Quality of Service (QoS) for the local input port, as well as dynamically adjusting the performance of the other ports along with input load changes. The theoretical model of this algorithm is established with a Markov chain and probability generating functions. Mathematical analysis is made of the mean queue length and the mean inquiry cyclic time of the system. Simulated experiments are conducted to test the accuracy of the model. It turns out that the findings from the theoretical analysis correspond well with those from the simulated experiments. Furthermore, the analytical findings on the system performance demonstrate that the algorithm can effectively strengthen the fairness and stability of data transmissions in NoC.
Funding: supported by the Central Government Guides Local Science and Technology Development Fund Project (2023ZY0020), the Key R&D and Achievement Transformation Project in Inner Mongolia Autonomous Region (2022YFHH0019), the Fundamental Research Funds for Inner Mongolia University of Science & Technology (2022053), the Natural Science Foundation of Inner Mongolia (2022LHQN05002), the National Natural Science Foundation of China (52067018), the Metallurgical Engineering First-Class Discipline Construction Project in Inner Mongolia University of Science and Technology, and the Control Science and Engineering Quality Improvement and Cultivation Discipline Project in Inner Mongolia University of Science and Technology.
Abstract: In this paper, a bilevel optimization model of an integrated energy operator (IEO)-load aggregator (LA) is constructed to address the coordinated optimization challenge of a multi-stakeholder island integrated energy system (IIES). The upper level represents the integrated energy operator, and the lower level is the electricity-heat-gas load aggregator. Owing to the benefit conflict between the upper and lower levels of the IIES, a dynamic pricing mechanism for coordinating the interests of the upper and lower levels is proposed, combined with factors such as the carbon emissions of the IIES as well as the lower-level load interruption power. In this mechanism, the price of energy sold to the lower-level LA can be dynamically adjusted according to information on carbon emissions and load interruption power. Mutual benefits and win-win situations are achieved between the upper- and lower-level stakeholders. Finally, CPLEX is used to iteratively solve the bilevel optimization model. The optimal solution is selected according to the joint optimal discrimination mechanism. The simulation results indicate that the coordinated source-load operation can reduce the upper- and lower-level operation costs. Using the proposed pricing mechanism, the carbon emissions and load interruption power of the IEO-LA are reduced by 9.78% and 70.19%, respectively, and the capture power of the carbon capture equipment is improved by 36.24%. The validity of the proposed model and method is verified.
Funding: the financial support of the National Key Research and Development Plan (2021YFB3302501) and the National Natural Science Foundation of China (12102077).
Abstract: Safe and efficient sortie scheduling on the confined flight deck is crucial for maintaining the high combat effectiveness of an aircraft carrier. The primary difficulty lies in spatiotemporal coordination, i.e., the allocation of limited supporting resources and collision avoidance between heterogeneous dispatch entities. In this paper, the problem is investigated from the perspective of the hybrid flow-shop scheduling problem (HFSP) by synthesizing the precedence, space, and resource constraints. Specifically, eight processing procedures are abstracted, where tractors, preparing spots, catapults, and launching are virtualized as machines. By analyzing the constraints in sortie scheduling, a mixed-integer programming model is constructed. In particular, the constraint on preparing spot occupancy is improved to further enhance sortie efficiency. A basic trajectory library for each dispatch entity is generated, and a delay strategy is integrated to address the collision-avoidance issue. To efficiently solve the formulated HFSP, which is essentially a combinatorial problem with tightly coupled constraints, a chaos-initialized genetic algorithm is developed. The solution framework is validated in a simulation environment referring to the Fort-class carrier, exhibiting higher sortie efficiency when compared to existing strategies. An animation of the simulation results is available at www.bilibili.com/video/BV14t421A7Tt/. The study presents a promising supporting technique for autonomous flight deck operation in the foreseeable future and can be easily extended to other supporting scenarios, e.g., ammunition delivery and aircraft maintenance.
Funding: funded by the Jilin Province Science and Technology Development Plan Project (20230101344JC).
Abstract: A centralized-distributed scheduling strategy for distribution networks based on a multi-temporal and hierarchical cooperative game is proposed to address the issues of difficult operation control and energy optimization interaction in distribution network transformer areas, as well as the problem of significant photovoltaic curtailment due to the inability to consume photovoltaic power locally. A scheduling architecture combining multi-temporal scales with a three-level decision-making hierarchy is established: the overall approach adopts a centralized-distributed method, analyzing the operational characteristics and interaction relationships of the distribution network center layer, cluster layer, and transformer area layer, providing a "spatial foundation" for subsequent optimization. The optimization process is divided into two stages on the temporal scale: in the first stage, based on forecasted electricity load and demand response characteristics, time-of-use electricity prices are utilized to formulate day-ahead optimization strategies; in the second stage, based on the charging and discharging characteristics of energy storage vehicles and multi-agent cooperative game relationships, rolling electricity prices and optimal interactive energy solutions are determined among clusters and transformer areas using Nash bargaining theory. Finally, a distributed optimization algorithm using the bisection method is employed to solve the constructed model. Simulation results demonstrate that the proposed optimization strategy can facilitate photovoltaic consumption in the distribution network and enhance grid economy.
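Nash bargaining, used above to set interactive energy prices between clusters and transformer areas, maximizes the product of the parties' gains over their disagreement points. For a transferable surplus the two-party solution has a closed form, which a brute-force search confirms (the surplus and disagreement values below are hypothetical, not from the paper):

```python
def nash_split(surplus, d1, d2):
    # Two-party Nash bargaining over a transferable surplus: maximize
    # (x - d1) * (surplus - x - d2). The maximizer splits the excess
    # surplus above the disagreement points equally.
    return d1 + (surplus - d1 - d2) / 2

def grid_argmax(surplus, d1, d2, steps=100000):
    # Brute-force check of the closed form over a fine grid of allocations x.
    best_x, best_v = None, float("-inf")
    for i in range(steps + 1):
        x = surplus * i / steps
        v = (x - d1) * (surplus - x - d2)
        if v > best_v:
            best_x, best_v = x, v
    return best_x

S, d1, d2 = 10.0, 2.0, 3.0      # total cooperative surplus, disagreement payoffs
print(nash_split(S, d1, d2), grid_argmax(S, d1, d2))
```

Each side keeps its disagreement payoff plus half of the cooperative gain (here 4.5 and 5.5), which is the "mutual benefit" property that makes the bargaining outcome acceptable to both levels.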