Fast and accurate forecasting of the schedulable capacity of electric vehicles (EVs) plays an important role in enabling the integration of EVs into future smart grids as distributed energy storage systems. Traditional methods are insufficient to deal with large-scale actual schedulable capacity data. This paper proposes forecasting models for the schedulable capacity of EVs through the parallel gradient boosting decision tree algorithm and big data analysis over multiple time scales. These time scales comprise a real-time scale of one minute, an ultra-short-term scale of one hour, and a day-ahead scale of 24 hours. The predicted results for the different time scales can be used for various ancillary services. The proposed algorithm is validated using field operation data from 521 EVs. The results show that, compared with other machine learning methods such as the parallel random forest algorithm and the parallel k-nearest neighbor algorithm, the proposed algorithm requires less training time while delivering better forecasting accuracy and analytical processing ability in a big data environment.
We observe that the response speed of a linear time-invariant system to a step reference input depends not only on the system parameters but also on the magnitude of the step input. Based on this observation, we demonstrate a method to schedule the magnitude of the reference input to achieve a faster response.
Workflow scheduling is critical for efficient cloud resource management. This paper proposes Tunicate Swarm-Highest Response Ratio Next, a novel scheduler that synergistically combines the Tunicate Swarm Algorithm with the Highest Response Ratio Next policy. The Tunicate Swarm Algorithm generates a cost-minimizing task-to-VM mapping scheme, while Highest Response Ratio Next dynamically dispatches the task in the ready queue with the highest priority. Experimental results demonstrate that Tunicate Swarm-Highest Response Ratio Next reduces costs by up to 94.8% compared to meta-heuristic baselines. It also achieves competitive cost efficiency versus a learning-based method while offering superior operational simplicity and efficiency, establishing it as a highly practical solution for dynamic cloud environments.
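The Highest Response Ratio Next policy above selects the ready task that maximizes (waiting time + service time) / service time, so both long-waiting and short tasks are favoured. A minimal sketch, with made-up task data (the tuple layout is an assumption, not from the paper):

```python
def hrrn_pick(ready, now):
    """Pick the ready task under Highest Response Ratio Next.

    ready: list of (name, arrival_time, service_time) tuples.
    Response ratio = (waiting + service) / service.
    """
    def ratio(task):
        _, arrival, service = task
        waiting = now - arrival
        return (waiting + service) / service
    return max(ready, key=ratio)

tasks = [("a", 0.0, 10.0), ("b", 2.0, 3.0), ("c", 5.0, 6.0)]
print(hrrn_pick(tasks, now=8.0)[0])  # → b  (ratio 3.0 beats 1.8 and 1.5)
```

Note how task "b" wins despite arriving after "a": its short service time lifts its ratio, which is the policy's built-in guard against starvation of short jobs.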
One of the key research focuses in quantum annealing is the design and optimization of annealing schedules to enhance computational efficiency and enable large-scale applications. QuantumZero (QZero) pioneered the integration of Monte Carlo Tree Search (MCTS) with neural networks to autonomously design annealing schedules within a hybrid quantum-classical framework. This approach is distinguished by its ability to enhance MCTS performance through the integration of neural networks, enabling the efficient design of annealing paths even with limited annealing time. This paper presents an optimized QZero method based on intuitive reasoning theory and MindSpore, which further enhances QZero's ability to conserve computational resources and resist noise. In terms of learning efficiency, the optimized QZero algorithm improves the convergence speed of the neural network by 93% compared to the original algorithm. Notably, the average number of quantum annealing queries required to achieve 99% fidelity is reduced by 45.09%. Regarding noise resistance, the optimized QZero algorithm requires 34.27% fewer quantum annealing queries to reach 99% fidelity than the original algorithm. The optimized QZero algorithm demonstrates strong competitiveness in optimizing quantum annealing schedules.
Task scheduling in cloud computing is a multi-objective optimization problem, often involving conflicting objectives such as minimizing execution time, reducing operational cost, and maximizing resource utilization. However, traditional approaches frequently rely on single-objective optimization methods, which are insufficient for capturing the complexity of such problems. To address this limitation, we introduce MDMOSA (Multi-objective Dwarf Mongoose Optimization with Simulated Annealing), a hybrid algorithm for efficient multi-objective task scheduling in Infrastructure-as-a-Service (IaaS) cloud environments. MDMOSA harmonizes the exploration capabilities of the biologically inspired Dwarf Mongoose Optimization (DMO) with the exploitation strengths of Simulated Annealing (SA), achieving a balanced search process. The algorithm aims to optimize task allocation by reducing makespan and financial cost while improving system resource utilization. We evaluate MDMOSA through extensive simulations using the real-world Google Cloud Jobs (GoCJ) dataset within the CloudSim environment. Comparative analysis against benchmark algorithms such as SMOACO, MOTSGWO, and MFPAGWO reveals that MDMOSA consistently achieves superior performance in terms of scheduling efficiency, cost-effectiveness, and scalability. These results confirm the potential of MDMOSA as a robust and adaptable solution for resource scheduling in dynamic and heterogeneous cloud computing infrastructures.
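The Simulated Annealing side of such a hybrid rests on the Metropolis acceptance rule: improving moves are always accepted, while worsening moves are accepted with probability exp(-Δ/T), which shrinks as the temperature cools. A sketch under illustrative costs and temperatures (not MDMOSA's actual parameterization):

```python
import math
import random

def sa_accept(current_cost, candidate_cost, temperature, rng=random.random):
    """Metropolis acceptance rule used in Simulated Annealing.

    Always accept improvements; accept a worse candidate with
    probability exp(-delta / T), so late in the cooling schedule
    the search becomes nearly greedy.
    """
    delta = candidate_cost - current_cost
    if delta <= 0:
        return True
    return rng() < math.exp(-delta / temperature)

print(sa_accept(10, 8, 1.0))  # → True (improvement, accepted unconditionally)
```

At a high temperature a move from cost 10 to cost 12 is accepted almost surely; near zero temperature it is essentially always rejected, which is what lets SA escape local optima early while converging late.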
Ride-hailing electric vehicles are mobile resources with the dispatch potential to improve resilience. However, they have not been well investigated, because their charging and order-serving are affected or managed by both the power grid dispatching center and the ride-hailing platform. Effective pre-event strategies can improve the ability to prevent high-impact, low-probability (HILP) events and provide the foundation for measures in the response and restoration stages. First, this paper proposes a resilience reserve to expand existing research on power system resilience. Second, it puts forward an interactive deep reinforcement learning method that considers the interests of both the power grid dispatching center and the ride-hailing platform, improving the resilience reserve through order dispatch, orderly charging management of ride-hailing electric vehicles, and the pricing strategy of charging stations. Finally, the paper uses a practical example covering about 107.32 km² of central Chengdu to verify that the proposed method improves the resilience reserve of the power system without obviously damaging the interests of the ride-hailing platform.
For mixed-integer programming (MIP) problems in new power systems with uncertainties, existing studies tend to address uncertainty modeling or MIP solution methods in isolation. They overlook the core bottlenecks arising from the coupling of the two, such as variable-dimension explosion, disrupted constraint separability, and conflicts in solution logic. To address this gap, this paper focuses on these coupling effects and systematically conducts work in three areas. First, it summarizes the uncertainty optimization methods suitable for addressing uncertainty-related issues in power systems, along with their respective advantages and disadvantages, and clarifies the specific forms and operational mechanisms through which these methods are integrated into MIP models; based on the application scenarios of new power systems, it also delineates the applicable boundaries of the different optimization methods. Second, it organizes three categories of solution methods: exact methods, decomposition-based methods, and meta-heuristic algorithms, focusing on the improvement paths by which each resolves the coupling bottlenecks, as well as their applicability to different types of power system optimization problems. Finally, it provides a summary and an outlook on future directions: artificial-intelligence-enabled optimization, dedicated solvers for extreme scenarios, and dynamic modeling of multi-source uncertainties. This study aims to help researchers in the field of new power systems quickly grasp uncertainty optimization methods and core solution methods, bridge existing research gaps, and promote the development of this field.
In response to the challenges faced by unmanned swarms in mountain obstacle-breaching missions within complex terrain, such as poor task-resource coupling, lengthy solution-generation times, and poor inter-platform collaboration, an unmanned swarm scheduling strategy tailored to mountain obstacle-breaching missions is proposed. Initially, by formalizing the descriptions of obstacle-breaching operations, the swarm, and obstacle targets, an optimization model is constructed with the objectives of expected global benefit, timeliness, and task completion degree. A meta-task decomposition and reassembly strategy is then introduced to more precisely match the capabilities of unmanned platforms with task requirements. Additionally, a meta-task decomposition optimization model and a meta-task allocation operator are incorporated to achieve efficient allocation of swarm resources and collaborative scheduling. Simulation results demonstrate that the model can accurately generate reasonable and feasible obstacle-breaching execution plans for unmanned swarms based on specific task requirements and environmental conditions. Moreover, compared with conventional strategies, the proposed strategy enhances task completion degree and expected returns while reducing the execution time of the plans.
A virtual power plant (VPP) integrates a variety of distributed renewable energy and energy storage resources to participate in electricity market transactions, promote the consumption of renewable energy, and improve economic efficiency. In this paper, aiming at the uncertainty of distributed wind power and photovoltaic output, and considering the coupling relationships among the power, carbon trading, and green certificate markets, an optimal operation model and bidding scheme for a VPP in the spot, carbon trading, and green certificate markets are established. On this basis, through the Shapley value and independent risk contribution theory from cooperative game theory, a quantitative analysis of the total income and risk contribution of the various distributed resources in the virtual power plant is realized. Moreover, the scheduling strategies of virtual power plants under different risk preferences are systematically compared, and the feasibility and accuracy of combining the Shapley value with independent risk contribution theory to ensure fair income distribution and reasonable risk assessment are demonstrated. A comprehensive solution for virtual power plants in a multi-market environment is constructed, integrating the operation strategy, income distribution mechanism, and risk control system into a unified analysis framework. Through the simulation of multi-scenario examples, the CPLEX solver in MATLAB is used to optimize the model. The proposed joint optimization scheme can increase the profit of the VPP from participating in the carbon trading and green certificate markets by 29%, and the total revenue of the distributed resources managed by the VPP is 9% higher than that of individual participation.
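The Shapley value behind such an income distribution averages each member's marginal contribution over every order in which the coalition could form. A brute-force sketch for a toy two-resource VPP; the characteristic function below is invented purely for illustration and is not the paper's revenue model:

```python
from itertools import permutations

def shapley_values(players, coalition_value):
    """Exact Shapley values by averaging each player's marginal
    contribution over all join orders (tractable for small coalitions)."""
    orders = list(permutations(players))
    shapley = {p: 0.0 for p in players}
    for order in orders:
        coalition = []
        for p in order:
            before = coalition_value(frozenset(coalition))
            coalition.append(p)
            after = coalition_value(frozenset(coalition))
            shapley[p] += after - before
    return {p: v / len(orders) for p, v in shapley.items()}

# Hypothetical characteristic function: wind and PV each earn 3 alone,
# plus a synergy bonus of 2 when aggregated together in the VPP.
value = lambda s: 3 * len(s) + (2 if len(s) == 2 else 0)
print(shapley_values(["wind", "pv"], value))  # → {'wind': 4.0, 'pv': 4.0}
```

Here the aggregation bonus of 2 is split evenly, illustrating the fairness property that motivates using Shapley allocations for VPP revenue sharing; for more than a handful of resources, sampling over permutations replaces the exact enumeration.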
The proliferation of carrier aircraft and the integration of unmanned aerial vehicles (UAVs) on aircraft carriers present new challenges to the automation of launch and recovery operations. This paper investigates a collaborative scheduling problem inherent in the operational processes of carrier aircraft, where launch and recovery tasks are conducted concurrently on the flight deck. The objective is to minimize the cumulative weighted airborne waiting time of recovering aircraft and the cumulative weighted delay time of launching aircraft. To tackle this challenge, a multiple-population self-adaptive differential evolution (MPSADE) algorithm is proposed. This method features a self-adaptive parameter-updating mechanism contingent on population diversity, an asynchronous updating scheme, an individual migration operator, and a global crossover mechanism. Additionally, comprehensive experiments are conducted to validate the effectiveness of the proposed model and algorithm. Ultimately, a comparative analysis with existing operation modes confirms the enhanced efficiency of the collaborative operation mode.
Aircraft assembly is characterized by stringent precedence constraints, limited resource availability, spatial restrictions, and a high degree of manual intervention. These factors lead to considerable variability in operator workloads and significantly increase the complexity of scheduling. To address this challenge, this study investigates the Aircraft Pulsating Assembly Line Scheduling Problem (APALSP) under skilled-operator allocation, with the objective of minimizing assembly completion time. A mathematical model considering skilled-operator allocation is developed, and a Q-learning-improved Particle Swarm Optimization algorithm (QLPSO) is proposed. In the algorithm design, a reverse scheduling strategy is adopted to effectively manage large-scale precedence constraints. Moreover, a reverse-sequence encoding method is introduced to generate operation sequences, while a time-decoding mechanism is employed to determine completion times. The problem is further reformulated as a Markov Decision Process (MDP) with explicitly defined state and action spaces. Within QLPSO, the Q-learning mechanism adaptively adjusts inertia weights and learning factors, thereby balancing exploration capability against convergence performance. To validate the effectiveness of the proposed approach, extensive computational experiments are conducted on benchmark instances of different scales, including small, medium, large, and ultra-large cases. The results demonstrate that QLPSO consistently delivers stable and high-quality solutions across all scenarios. In ultra-large-scale instances, it improves the best solution by 25.2% compared with the Genetic Algorithm (GA) and the average solution by 16.9% over the Q-learning algorithm, showing clear advantages over the comparative methods. These findings not only confirm the effectiveness of the proposed algorithm but also provide valuable theoretical references and practical guidance for the intelligent scheduling optimization of aircraft pulsating assembly lines.
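The Q-learning mechanism that tunes QLPSO's inertia weights rests on the standard tabular update, which moves Q(s, a) toward the bootstrapped target r + γ·max Q(s', ·). A generic sketch; the states, actions, and rewards below are illustrative placeholders, not the paper's PSO-parameter MDP:

```python
def q_update(Q, s, a, r, s_next, alpha=0.5, gamma=0.9):
    """One tabular Q-learning step:
    Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(Q[s_next].values()) if Q.get(s_next) else 0.0
    Q[s][a] += alpha * (r + gamma * best_next - Q[s][a])
    return Q[s][a]

# Two hypothetical states with two actions each.
Q = {"s0": {"up": 0.0, "down": 0.0}, "s1": {"up": 1.0, "down": 0.0}}
print(q_update(Q, "s0", "up", r=1.0, s_next="s1"))  # → 0.95
```

In a QLPSO-style design the "actions" would be discrete choices of inertia weight and learning factors, and the reward would reflect the resulting improvement in the swarm's best makespan.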
In the era of the Internet of Things, distributed computing alleviates the problem of insufficient terminal computing power by integrating the idle resources of heterogeneous devices. However, the imbalance between task execution delay and node energy consumption, along with the scheduling and adaptation challenges brought about by device heterogeneity, urgently needs to be addressed. To tackle this problem, this paper constructs a multi-objective real-time task scheduling model that considers task real-time performance, execution delay, system energy consumption, and node interests. The model aims to minimize the delay upper bound and total energy consumption while maximizing system satisfaction. A real-time task scheduling algorithm based on a bilateral matching game is proposed: by designing a bidirectional preference mechanism between tasks and computing nodes, combined with a multi-round stable matching strategy, accurate matching between tasks and nodes is achieved. Simulation results show that, compared with baseline schemes, the proposed algorithm significantly reduces the total execution cost, effectively balances task execution delay against the energy consumption of computing nodes, and takes into account the interests of each network computing node.
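Bilateral matching games with bidirectional preferences are typically resolved by deferred acceptance: one side proposes down its preference list while the other side provisionally keeps its best proposer. A minimal one-to-one sketch (task and node names are hypothetical, and the paper's multi-round, capacity-aware variant would extend this):

```python
def stable_match(task_prefs, node_prefs):
    """One-to-one deferred acceptance: tasks propose to nodes in
    preference order; each node keeps its best proposer so far.
    Returns a stable task -> node assignment."""
    rank = {n: {t: i for i, t in enumerate(prefs)}
            for n, prefs in node_prefs.items()}
    free = list(task_prefs)                 # tasks still unmatched
    next_choice = {t: 0 for t in task_prefs}
    engaged = {}                            # node -> task
    while free:
        t = free.pop()
        n = task_prefs[t][next_choice[t]]
        next_choice[t] += 1
        holder = engaged.get(n)
        if holder is None:
            engaged[n] = t
        elif rank[n][t] < rank[n][holder]:  # node prefers new proposer
            engaged[n] = t
            free.append(holder)
        else:
            free.append(t)
    return {t: n for n, t in engaged.items()}

prefs_t = {"t1": ["n1", "n2"], "t2": ["n1", "n2"]}
prefs_n = {"n1": ["t2", "t1"], "n2": ["t1", "t2"]}
print(stable_match(prefs_t, prefs_n))  # stable assignment of tasks to nodes
```

The result is stable in the game-theoretic sense: no task-node pair would both prefer each other over their assigned partners, which is the property the paper leverages to respect node interests.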
The cloud-fog computing paradigm has emerged as a hybrid computing model that integrates computational resources at both fog nodes and cloud servers to address the challenges posed by dynamic and heterogeneous computing networks. Finding an optimal computational resource for task offloading, and then executing tasks efficiently, is critical to achieving a trade-off between energy consumption and transmission delay. In such a network, processing a task at a fog node reduces transmission delay but increases energy consumption, while routing the task to the cloud server saves energy at the cost of higher communication delay. Moreover, the order in which offloaded tasks are executed affects system efficiency: for instance, executing lower-priority tasks before higher-priority jobs can disturb the reliability and stability of the system. Therefore, an efficient strategy for joint computation offloading and task scheduling is required for operational efficacy. In this paper, we introduce a multi-objective, enhanced version of the Cheetah Optimizer (CO), named MoECO, to jointly optimize computation offloading and task scheduling in cloud-fog networks and minimize two competing objectives: energy consumption and communication delay. MoECO first assigns tasks to the optimal computational nodes, and the allocated tasks are then scheduled for processing according to task priority. The mathematical model of CO needs improvement in computation time and convergence speed; MoECO therefore increases the search capability of agents by controlling the search strategy based on a leader's location. The adaptive step-length operator is adjusted to diversify the solutions, improving the exploration phase (global search strategy) and preventing the algorithm from getting trapped in local optima. Moreover, the interaction factor in the exploitation phase is adjusted based on the location of the prey instead of the adjacent cheetah, increasing the exploitation (local search) capability of agents. Furthermore, MoECO employs a multi-objective Pareto-optimal front to simultaneously minimize the designated objectives. Comprehensive simulations in MATLAB demonstrate that the proposed algorithm obtains multiple solutions via a Pareto-optimal front and achieves an efficient trade-off between the optimization objectives compared to baseline methods.
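A Pareto-optimal front for two minimization objectives keeps only the non-dominated trade-offs: a point dominates another if it is no worse in every objective and strictly better in at least one. A minimal dominance filter, with made-up (energy, delay) pairs standing in for candidate offloading plans:

```python
def pareto_front(solutions):
    """Return the non-dominated points of a minimization problem."""
    def dominates(a, b):
        return (all(x <= y for x, y in zip(a, b))
                and any(x < y for x, y in zip(a, b)))
    return [s for s in solutions
            if not any(dominates(o, s) for o in solutions if o != s)]

# Hypothetical (energy, delay) costs of five candidate offloading plans.
plans = [(5, 9), (3, 7), (6, 4), (4, 6), (7, 8)]
print(pareto_front(plans))  # → [(3, 7), (6, 4), (4, 6)]
```

Plan (5, 9) drops out because (3, 7) beats it on both objectives; the three survivors are the genuine energy-versus-delay trade-offs a decision-maker would choose among.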
In this paper, we propose a new privacy-aware transmission scheduling algorithm for 6G ad hoc networks. The system enables end nodes to select the optimal time and scheme for transmitting private data safely. In 6G dynamic heterogeneous infrastructures, unstable links and non-uniform hardware capabilities create critical security and privacy issues. Traditional protocols are often too computationally heavy to allow 6G services to achieve their expected Quality of Service (QoS). As the transport network is built of ad hoc nodes, there is no guarantee about their trustworthiness or behavior, and transversal functionalities are delegated to the extreme nodes. However, while security can be guaranteed in extreme-to-extreme solutions, privacy cannot, as all intermediate nodes still have to handle the data packets they transport. Besides, traditional schemes for private anonymous ad hoc communications are vulnerable to modern intelligent attacks based on learning models. The proposed scheme fills this gap. Findings show that, when the proposed technology is used, the probability of a successful intelligent attack is reduced by up to 65% compared to ad hoc networks with no privacy protection strategy, while the congestion probability remains below 0.001%, as required for 6G services.
The airplane refueling problem can be stated as follows. We are given n airplanes that can refuel one another during flight. Each airplane has a reservoir volume w_j (liters) and a consumption rate p_j (liters per kilometer). As soon as an airplane runs out of fuel, it drops out of the flight. The problem asks for a refueling scheme such that the last plane in the air reaches the maximal distance. An equivalent version is the n-vehicle exploration problem. The computational complexity of this non-linear combinatorial optimization problem is still open. This paper employs the neighborhood exchange method of single-machine scheduling to study the precedence relations of jobs, so as to improve the necessary and sufficient conditions for optimal solutions and to establish an efficient heuristic algorithm that generalizes several existing special-purpose algorithms.
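The neighborhood (adjacent pairwise) exchange method from single-machine scheduling can be sketched generically: swap neighbouring jobs whenever the swap lowers the objective, until no swap helps. The weighted-completion-time objective below is a classical stand-in where this local search recovers the optimal WSPT order; it is not the refueling objective itself, and the job data are invented:

```python
def adjacent_exchange(seq, cost):
    """Local search by adjacent pairwise interchange: swap neighbouring
    jobs whenever the swap strictly lowers the cost, until stable."""
    seq = list(seq)
    improved = True
    while improved:
        improved = False
        for i in range(len(seq) - 1):
            trial = seq[:]
            trial[i], trial[i + 1] = trial[i + 1], trial[i]
            if cost(trial) < cost(seq):
                seq, improved = trial, True
    return seq

# Toy single-machine objective: total weighted completion time.
jobs = {"a": (3, 1), "b": (1, 4), "c": (2, 2)}   # name: (proc_time, weight)
def twct(order):
    t, total = 0, 0
    for j in order:
        p, w = jobs[j]
        t += p
        total += w * t
    return total

print(adjacent_exchange(["a", "b", "c"], twct))  # → ['b', 'c', 'a']
```

The exchange argument behind this neighbourhood, comparing the cost of two adjacent jobs in either order, is exactly the proof device the paper adapts to derive precedence relations between airplanes in the refueling sequence.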
To address the low survival rates and limited data collection efficiency of current virtual probe deployments, which result from the anomaly detection mechanisms of location-based service (LBS) applications, this paper proposes a novel virtual probe deployment method based on user behavioral feature analysis. The core idea is to circumvent LBS anomaly detection by mimicking real-user behavior patterns. First, we design an automated data extraction algorithm that recognizes graphical user interface (GUI) elements to collect spatio-temporal behavior data. Then, by analyzing the automatically collected user data, we identify normal users' spatio-temporal patterns and extract features such as high-activity time windows and spatial clustering characteristics. Subsequently, an anti-detection scheduling strategy is developed, integrating spatial clustering optimization, load-balanced allocation, and time-window control to generate probe scheduling schemes. Additionally, a self-correction mechanism based on an exponential backoff strategy is implemented to rectify anomalous behaviors and maintain system stability. Experiments in real-world environments demonstrate that the proposed method significantly outperforms baseline methods in terms of both probe ban rate and task completion rate, while maintaining high time efficiency. This study provides a more reliable and clandestine solution for geosocial data collection and lays the foundation for building more robust virtual probe systems.
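An exponential backoff strategy, as used by the self-correction mechanism above, doubles the waiting time after each consecutive anomaly, caps it, and optionally jitters it so many probes do not retry in lockstep. A sketch with illustrative base and cap values (the paper does not specify these constants):

```python
import random

def backoff_delays(base=1.0, cap=60.0, attempts=6, jitter=None):
    """Exponential backoff schedule: wait min(cap, base * 2**k) seconds
    after the k-th consecutive anomaly; `jitter` (e.g. random.random)
    optionally spreads retries to avoid synchronized bursts."""
    delays = []
    for k in range(attempts):
        d = min(cap, base * (2 ** k))
        if jitter is not None:
            d *= 1 + jitter()
        delays.append(d)
    return delays

print(backoff_delays())  # → [1.0, 2.0, 4.0, 8.0, 16.0, 32.0]
```

The cap keeps a long anomaly streak from pushing a probe into hour-long silences, while the doubling quickly reduces request pressure the moment behaviour starts looking suspicious to the LBS detector.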
With the rapid development of power Internet of Things (IoT) scenarios such as smart factories and smart homes, numerous intelligent terminal devices and real-time interactive applications impose higher demands on computing latency and resource supply efficiency. Multi-access edge computing deploys cloud computing capabilities at the network edge, constructs distributed computing nodes and multi-access systems, and offers infrastructure support for services requiring low latency and high reliability. Existing research relies on the strong assumption that the environmental state is fully observable and fails to thoroughly consider the continuously time-varying features of edge server load fluctuations, leading to insufficient model adaptability in heterogeneous dynamic environments. Thus, this paper establishes a framework for end-edge collaborative task offloading based on a partially observable Markov decision process (POMDP) and proposes a method for end-edge collaborative task offloading in heterogeneous scenarios. It achieves time-series modeling of the historical load characteristics of edge servers and endows the agent with the ability to perceive the load under dynamic environmental states. Moreover, by dynamically assessing the exploration value of historical trajectories in the central trajectory pool and adjusting the sample weight distribution, directional exploration and strategy optimization over high-value trajectories are realized. Experimental results indicate that the proposed method exhibits distinct advantages over existing methods in terms of average delay and task failure rate, and also verify the method's robustness in a dynamic environment.
Diffusion model-based solvers have shown significant promise for Combinatorial Optimization (CO) problems, particularly for Non-deterministic Polynomial-time hard (NP-hard) problems such as the Traveling Salesman Problem (TSP). However, existing diffusion model-based solvers typically employ a fixed, uniform noise schedule (e.g., linear or cosine annealing) across all training instances, failing to fully account for the unique characteristics of each problem instance. To address this challenge, we present Graph-Guided Diffusion Solvers (GGDS), an enhanced method for improving graph-based diffusion models. GGDS leverages Graph Neural Networks (GNNs) to capture the graph structural information embedded in node coordinates and adjacency matrices, dynamically adjusting the noise levels of the diffusion model. This study investigates the TSP with two distinct time-step noise generation strategies: cosine annealing and a Neural Network (NN)-based approach. We evaluate their performance across different problem scales, particularly after integrating graph structural information. Experimental results indicate that GGDS outperforms previous methods with average performance improvements of 18.7%, 6.3%, and 88.7% on TSP-500, TSP-100, and TSP-50, respectively. Specifically, GGDS demonstrates superior performance on TSP-500 and TSP-50, while its performance on TSP-100 is comparable to or slightly better than that of previous methods, depending on the chosen noise schedule and decoding strategy.
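The cosine annealing schedule referenced above is commonly defined via a squared-cosine cumulative signal level ᾱ(t). A sketch following the widely used form; the offset s = 0.008 is the conventional default and an assumption here, not a value taken from the paper:

```python
import math

def cosine_alpha_bar(t, T, s=0.008):
    """Cosine noise schedule: the cumulative signal level alpha_bar(t)
    follows a squared cosine in t/T, normalized so alpha_bar(0) = 1."""
    f = lambda u: math.cos((u / T + s) / (1 + s) * math.pi / 2) ** 2
    return f(t) / f(0)

T = 1000
bars = [cosine_alpha_bar(t, T) for t in range(T + 1)]
print(bars[0], bars[-1] < 1e-4)  # → 1.0 True (signal decays smoothly to ~0)
```

A fixed schedule like this applies the same noise trajectory to every instance; GGDS's contribution is precisely to let a GNN modulate these levels per instance instead.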
Advanced technologies such as Cyber-Physical Systems (CPS) and the Internet of Things (IoT) have supported modernizing and automating the transportation sector through the introduction of Intelligent Transportation Systems (ITS). Integrating CPS-ITS and IoT provides real-time Vehicle-to-Infrastructure (V2I) communication, supporting better traffic management, safety, and efficiency. These technological innovations also generate complex problems that need to be addressed, particularly in data routing and Task Scheduling (TS) for ITS. Previous attempts to solve these problems were primarily based on traditional and experimental methods, and the solutions were not very successful owing to the dynamic nature of ITS. This is where Machine Learning (ML) and Swarm Intelligence (SI) can significantly impact these challenges; along this line, this paper presents a novel method for TS and data routing in CPS-ITS. It proposes a cutting-edge ML algorithm, Gated Linear Unit-approximated Reinforcement Learning (GLRL), for data transmission in CPS-ITS, and recommends Greedy Iterative-Particle Swarm Optimization (GI-PSO), a development of Particle Swarm Optimization (PSO), for TS. The primary objective of this study is to enhance the security and effectiveness of ITS systems that utilize CPS-ITS. The models were trained and validated using a network simulation dataset of 50 nodes drawn from numerous ITS environments. The experiments demonstrate that the proposed GLRL reduces End-to-End Delay (EED) by 12%, increases data-size utilization from 83.6% to 88.6%, and achieves higher bandwidth allocation, particularly in high-demand scenarios such as multimedia data streams, where adherence improved to 98.15%. Furthermore, GLRL reduced Network Congestion (NC) by 5.5%, demonstrating its efficiency in managing complex traffic conditions across several environments. The model passed simulation tests in three different environments: urban (UE), suburban (SE), and rural (RE). It met the high bandwidth requirements, made task scheduling more efficient, and increased network throughput (NT), proving that it is robust and flexible enough for scalable ITS applications. These innovations provide robust, scalable solutions for real-time traffic management, ultimately improving safety, reducing NC, and increasing overall NT, and can help make ITS more responsive, safe, and effective across UE, SE, and RE deployments.
The increasing popularity of quantum computing has led to a considerable rise in demand for cloud quantum computing in recent years. Nevertheless, the rapid surge in demand for cloud-based quantum computing resources has led to scarcity. To meet the needs of a growing number of researchers, it is imperative to facilitate efficient and flexible access to computing resources in a cloud environment. In this paper, we propose a novel quantum computing paradigm, the Virtual QPU (VQPU), which addresses this issue and enhances quantum cloud throughput with guaranteed circuit fidelity. The proposal introduces three innovative concepts: (1) the integration of virtualization technology into quantum computing to enhance quantum cloud throughput; (2) an asynchronous circuit execution methodology to improve quantum computing flexibility; (3) a virtual QPU allocation scheme for quantum tasks in a cloud environment to improve circuit fidelity. The concepts have been validated on a self-built simulated quantum cloud platform.
Funding: supported by the National Natural Science Foundation of China (No. 51577047) and an International Collaboration Project supported by the Bureau of Science and Technology, Anhui Province (No. 1604b0602015).
Abstract: Fast and accurate forecasting of the schedulable capacity of electric vehicles (EVs) plays an important role in enabling the integration of EVs into future smart grids as distributed energy storage systems. Traditional methods are insufficient to deal with large-scale actual schedulable capacity data. This paper proposes forecasting models for the schedulable capacity of EVs based on the parallel gradient boosting decision tree algorithm and big data analysis over multiple time scales. The time scales comprise real time (one minute), ultra-short term (one hour), and day-ahead (24 hours). The predicted results for the different time scales can be used for various ancillary services. The proposed algorithm is validated using field operation data from 521 EVs. The results show that, compared with other machine learning methods such as the parallel random forest algorithm and the parallel k-nearest neighbor algorithm, the proposed algorithm requires less training time while providing better forecasting accuracy and analytical processing ability in a big data environment.
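The core learner behind this abstract can be illustrated with a minimal, pure-Python boosted-stump regressor on a single lagged feature. This is only a toy stand-in for the paper's parallel GBDT (which runs on large-scale field data); all names below are hypothetical.

```python
# Illustrative gradient boosting with depth-1 regression stumps.
# Not the authors' implementation; a sketch of the underlying technique.

def fit_stump(xs, residuals):
    """Pick the threshold on one feature that minimizes squared error."""
    best = None
    for thr in sorted(set(xs))[:-1]:  # keep both sides non-empty
        left = [r for x, r in zip(xs, residuals) if x <= thr]
        right = [r for x, r in zip(xs, residuals) if x > thr]
        lmean, rmean = sum(left) / len(left), sum(right) / len(right)
        err = (sum((r - lmean) ** 2 for r in left)
               + sum((r - rmean) ** 2 for r in right))
        if best is None or err < best[0]:
            best = (err, thr, lmean, rmean)
    return best[1:]

def boost(xs, ys, rounds=25, lr=0.3):
    """Fit an additive model: mean prediction plus shrunken stumps."""
    base = sum(ys) / len(ys)
    pred = [base] * len(ys)
    stumps = []
    for _ in range(rounds):
        resid = [y - p for y, p in zip(ys, pred)]
        thr, lmean, rmean = fit_stump(xs, resid)
        stumps.append((thr, lmean, rmean))
        pred = [p + lr * (lmean if x <= thr else rmean)
                for x, p in zip(xs, pred)]
    return base, lr, stumps

def predict(model, x):
    base, lr, stumps = model
    return base + sum(lr * (l if x <= t else r) for t, l, r in stumps)
```

Each round fits a stump to the current residuals and adds it with shrinkage, which is the same additive-model idea the parallel GBDT scales up across workers.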
Abstract: We observe that the response speed of a linear time-invariant system to a step reference input depends not only on the system parameters but also on the magnitude of the step input. Based on this observation, we demonstrate a method to schedule the magnitude of the reference input to achieve a faster response.
Funding: supported by the National Natural Science Foundation of China under Grant 62472264 and the Natural Science Distinguished Youth Foundation of Shandong Province under Grant ZR2025QA13.
Abstract: Workflow scheduling is critical for efficient cloud resource management. This paper proposes Tunicate Swarm-Highest Response Ratio Next, a novel scheduler that synergistically combines the Tunicate Swarm Algorithm with the Highest Response Ratio Next policy. The Tunicate Swarm Algorithm generates a cost-minimizing task-to-VM mapping scheme, while Highest Response Ratio Next dynamically dispatches the highest-priority tasks in the ready queue. Experimental results demonstrate that Tunicate Swarm-Highest Response Ratio Next reduces costs by up to 94.8% compared to meta-heuristic baselines. It also achieves competitive cost efficiency versus a learning-based method while offering superior operational simplicity and efficiency, establishing it as a highly practical solution for dynamic cloud environments.
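The dispatching half of the scheduler described above follows the classical HRRN rule, which can be sketched in a few lines (field names here are hypothetical; the paper couples this rule with the swarm-derived task-to-VM mapping):

```python
def hrrn_next(ready, now):
    """Classical Highest Response Ratio Next: among ready tasks, pick the one
    maximizing (waiting_time + service_time) / service_time.
    Each task is a tuple (name, arrival_time, service_time)."""
    def response_ratio(task):
        _, arrival, service = task
        return ((now - arrival) + service) / service
    return max(ready, key=response_ratio)
```

The ratio favors short jobs but grows with waiting time, so long-waiting tasks cannot starve, which is why HRRN is a natural dynamic dispatch policy for the ready queue.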
Funding: supported by the Defense Innovation Special Zone Project and the CAAI-Huawei MindSpore Open Fund.
Abstract: One of the key research focuses in quantum annealing is the design and optimization of annealing schedules to enhance computational efficiency, enabling large-scale applications. QuantumZero (QZero) pioneered the integration of Monte Carlo Tree Search (MCTS) with neural networks to autonomously design annealing schedules within a hybrid quantum-classical framework. This approach is distinguished by its ability to enhance the performance of Monte Carlo Tree Search through the integration of neural networks, enabling the efficient design of annealing paths even with limited annealing time. The paper presents an optimized QZero method based on intuitive reasoning theory and MindSpore, which further enhances QZero's ability to conserve computational resources and resist noise. In terms of learning efficiency, the optimized QZero algorithm improves the convergence speed of the neural network by 93% compared to the original algorithm. Notably, the average number of quantum annealing queries required to achieve 99% fidelity is reduced by 45.09%. Regarding noise resistance, the optimized QZero algorithm requires 34.27% fewer quantum annealing queries to reach 99% fidelity compared to the original algorithm. The optimized QZero algorithm demonstrates strong competitiveness in optimizing quantum annealing schedules.
Abstract: Task scheduling in cloud computing is a multi-objective optimization problem, often involving conflicting objectives such as minimizing execution time, reducing operational cost, and maximizing resource utilization. However, traditional approaches frequently rely on single-objective optimization methods, which are insufficient for capturing the complexity of such problems. To address this limitation, we introduce MDMOSA (Multi-objective Dwarf Mongoose Optimization with Simulated Annealing), a hybrid that integrates multi-objective optimization for efficient task scheduling in Infrastructure-as-a-Service (IaaS) cloud environments. MDMOSA harmonizes the exploration capabilities of the biologically inspired Dwarf Mongoose Optimization (DMO) with the exploitation strengths of Simulated Annealing (SA), achieving a balanced search process. The algorithm aims to optimize task allocation by reducing makespan and financial cost while improving system resource utilization. We evaluate MDMOSA through extensive simulations using the real-world Google Cloud Jobs (GoCJ) dataset within the CloudSim environment. Comparative analysis against benchmark algorithms such as SMOACO, MOTSGWO, and MFPAGWO reveals that MDMOSA consistently achieves superior performance in terms of scheduling efficiency, cost-effectiveness, and scalability. These results confirm the potential of MDMOSA as a robust and adaptable solution for resource scheduling in dynamic and heterogeneous cloud computing infrastructures.
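The SA component's role in such a hybrid can be illustrated with a minimal simulated-annealing scheduler for a single objective (makespan of a task-to-VM assignment). This is a sketch under simplified assumptions, not MDMOSA itself, whose search is multi-objective and DMO-driven.

```python
import math
import random

def makespan(assign, durations, n_vms):
    """Largest total load over all VMs for a task->VM assignment list."""
    loads = [0.0] * n_vms
    for task, vm in enumerate(assign):
        loads[vm] += durations[task]
    return max(loads)

def sa_schedule(durations, n_vms=2, iters=3000, t0=5.0, seed=1):
    """Metropolis acceptance with geometric cooling; tracks the best state."""
    rng = random.Random(seed)
    assign = [rng.randrange(n_vms) for _ in durations]
    cost = makespan(assign, durations, n_vms)
    best_assign, best_cost = assign[:], cost
    temp = t0
    for _ in range(iters):
        cand = assign[:]
        cand[rng.randrange(len(durations))] = rng.randrange(n_vms)
        c = makespan(cand, durations, n_vms)
        # Accept improvements always; worse moves with prob exp(-delta / T).
        if c <= cost or rng.random() < math.exp((cost - c) / max(temp, 1e-12)):
            assign, cost = cand, c
            if cost < best_cost:
                best_assign, best_cost = assign[:], cost
        temp *= 0.995
    return best_assign, best_cost
```

The temperature-controlled acceptance of worse moves is exactly the exploitation mechanism that complements a population-based explorer such as DMO.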
Abstract: Ride-hailing electric vehicles are mobile resources with dispatch potential to improve resilience. However, they have not been well investigated because their charging and order-serving are affected or managed by the power grid dispatching center and the ride-hailing platform. Effective pre-strategies can improve the prevention ability for high-impact and low-probability (HILP) events and provide the foundation for measures in the response and restoration stages. First, this paper proposes a resilience reserve to expand the existing research on power system resilience. Secondly, this paper puts forward an interactive method of deep reinforcement learning, which considers the interests of both the power grid dispatching center and the ride-hailing platform. It improves the resilience reserve by achieving order dispatch, orderly charging management of ride-hailing electric vehicles, and the pricing strategy of charging stations. Finally, this paper uses a practical example covering about 107.32 km² in the center of Chengdu to verify that the proposed method improves the resilience reserve of the power system without obviously damaging the interests of the ride-hailing platform.
Funding: supported by the National Key R&D Program of China under Grant 2022YFB2403500.
Abstract: For mixed-integer programming (MIP) problems in new power systems with uncertainties, existing studies tend to address uncertainty modeling or MIP solution methods in isolation. They overlook core bottlenecks arising from their coupling, such as variable dimension explosion, disrupted constraint separability, and conflicts in solution logic. To address this gap, this paper focuses on the coupling effects between the two and systematically conducts work in three areas. First, the paper summarizes the uncertainty optimization methods suitable for addressing uncertainty-related issues in power systems, along with their respective advantages and disadvantages. It also clarifies the specific forms and operational mechanisms through which these uncertainty optimization methods are integrated into MIP models. Meanwhile, based on the application scenarios of new power systems, the paper delineates the applicable boundaries of the different optimization methods. Second, the paper organizes three categories of solution methods: exact solution methods, decomposition-based methods, and meta-heuristic algorithms. It analyzes the improvement paths of the various solution methods for resolving coupling bottlenecks, as well as their applicability to different types of power system optimization problems. Finally, the paper provides a summary and an outlook on future directions: artificial intelligence-enabled optimization, the development of dedicated solvers for extreme scenarios, and the dynamic modeling of multi-source uncertainties. This study aims to help researchers in the field of new power systems quickly grasp uncertainty optimization methods and core solution methods, bridge existing research gaps, and promote the development of this field.
Funding: supported by the National Natural Science Foundation of China (61374186).
Abstract: In response to the challenges faced by unmanned swarms in mountain obstacle-breaching missions within complex terrains, such as poor task-resource coupling, lengthy solution generation times, and poor inter-platform collaboration, a scheduling strategy tailored for mountain obstacle-breaching missions is proposed for unmanned swarms. Initially, by formalizing the descriptions of obstacle-breaching operations, the swarm, and obstacle targets, an optimization model is constructed with the objectives of expected global benefit, timeliness, and task completion degree. A meta-task decomposition and reassembly strategy is then introduced to more precisely match the capabilities of unmanned platforms with task requirements. Additionally, a meta-task decomposition optimization model and a meta-task allocation operator are incorporated to achieve efficient allocation of swarm resources and collaborative scheduling. Simulation results demonstrate that the model can generate reasonable and feasible obstacle-breaching execution plans for unmanned swarms based on specific task requirements and environmental conditions. Moreover, compared to conventional strategies, the proposed strategy enhances task completion degree and expected returns while reducing the execution time of the plans.
Funding: supported by the Basic Scientific Research Project of the Department of Education of Liaoning Province (Grant Nos. LJ222411632051 and LJKQZ2021085) and the Natural Science Foundation Project of Liaoning Province (Grant No. 2022-BS-222).
Abstract: A virtual power plant (VPP) integrates a variety of distributed renewable energy and energy storage resources to participate in electricity market transactions, promote the consumption of renewable energy, and improve economic efficiency. In this paper, aiming at the uncertainty of distributed wind power and photovoltaic output, and considering the coupling relationships among the power, carbon trading, and green certificate markets, the optimal operation model and bidding scheme of a VPP in the spot, carbon trading, and green certificate markets are established. On this basis, through the Shapley value and independent risk contribution theory from cooperative game theory, a quantitative analysis of the total income and risk contribution of the various distributed resources in the virtual power plant is realized. Moreover, the scheduling strategies of virtual power plants under different risk preferences are systematically compared, and the feasibility and accuracy of combining the Shapley value with independent risk contribution theory in ensuring fair income distribution and reasonable risk assessment are emphasized. A comprehensive solution for virtual power plants in the multi-market environment is constructed, which integrates the operation strategy, income distribution mechanism, and risk control system into a unified analysis framework. Through the simulation of multi-scenario examples, the CPLEX solver in MATLAB is used to optimize the model. The proposed joint optimization scheme can increase the profit of the VPP participating in the carbon trading and green certificate markets by 29%. The total revenue of the distributed resources managed by the VPP is 9% higher than that of individual participation.
Abstract: The proliferation of carrier aircraft and the integration of unmanned aerial vehicles (UAVs) on aircraft carriers present new challenges to the automation of launch and recovery operations. This paper investigates a collaborative scheduling problem inherent to the operational processes of carrier aircraft, where launch and recovery tasks are conducted concurrently on the flight deck. The objective is to minimize the cumulative weighted waiting time in the air for recovering aircraft and the cumulative weighted delay time for launching aircraft. To tackle this challenge, a multiple population self-adaptive differential evolution (MPSADE) algorithm is proposed. This method features a self-adaptive parameter updating mechanism contingent upon population diversity, an asynchronous updating scheme, an individual migration operator, and a global crossover mechanism. Additionally, comprehensive experiments are conducted to validate the effectiveness of the proposed model and algorithm. Ultimately, a comparative analysis with existing operation modes confirms the enhanced efficiency of the collaborative operation mode.
Funding: supported by the National Natural Science Foundation of China (Grant No. 52475543), the Natural Science Foundation of Henan (Grant No. 252300421101), the Henan Province University Science and Technology Innovation Talent Support Plan (Grant No. 24HASTIT048), and the Science and Technology Innovation Team Project of Zhengzhou University of Light Industry (Grant No. 23XNKJTD0101).
Abstract: Aircraft assembly is characterized by stringent precedence constraints, limited resource availability, spatial restrictions, and a high degree of manual intervention. These factors lead to considerable variability in operator workloads and significantly increase the complexity of scheduling. To address this challenge, this study investigates the Aircraft Pulsating Assembly Line Scheduling Problem (APALSP) under skilled operator allocation, with the objective of minimizing assembly completion time. A mathematical model considering skilled operator allocation is developed, and a Q-Learning improved Particle Swarm Optimization algorithm (QLPSO) is proposed. In the algorithm design, a reverse scheduling strategy is adopted to effectively manage large-scale precedence constraints. Moreover, a reverse sequence encoding method is introduced to generate operation sequences, while a time decoding mechanism is employed to determine completion times. The problem is further reformulated as a Markov Decision Process (MDP) with explicitly defined state and action spaces. Within QLPSO, the Q-learning mechanism adaptively adjusts inertia weights and learning factors, thereby achieving a balance between exploration capability and convergence performance. To validate the effectiveness of the proposed approach, extensive computational experiments are conducted on benchmark instances of different scales, including small, medium, large, and ultra-large cases. The results demonstrate that QLPSO consistently delivers stable and high-quality solutions across all scenarios. In ultra-large-scale instances, it improves the best solution by 25.2% compared with the Genetic Algorithm (GA) and enhances the average solution by 16.9% over the Q-learning algorithm, showing clear advantages over the comparative methods. These findings not only confirm the effectiveness of the proposed algorithm but also provide valuable theoretical references and practical guidance for the intelligent scheduling optimization of aircraft pulsating assembly lines.
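The adaptive-parameter idea in a Q-learning-improved PSO can be sketched with a tiny tabular Q-learning loop that chooses between candidate inertia weights. The weight values and reward signal below are hypothetical; QLPSO's actual state and action design follows the paper's MDP formulation.

```python
def q_update(q, state, action, reward, next_state, alpha=0.5, gamma=0.9):
    """One tabular Q-learning step on a nested-list Q table."""
    target = reward + gamma * max(q[next_state])
    q[state][action] += alpha * (target - q[state][action])

# Two candidate inertia weights for the swarm; assume (hypothetically) that
# the larger weight yields fitness improvement in the current search state.
inertia_options = [0.4, 0.9]
q_table = [[0.0, 0.0]]  # one state, two actions
for _ in range(50):
    q_update(q_table, 0, 1, reward=1.0, next_state=0)  # w = 0.9 helped
    q_update(q_table, 0, 0, reward=0.0, next_state=0)  # w = 0.4 did not
best_w = inertia_options[q_table[0].index(max(q_table[0]))]
```

Replacing the reward with a measured fitness-improvement signal per PSO iteration turns this toy loop into the adaptive parameter controller the abstract describes.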
Funding: supported by the National Program on Key Basic Research Project (2020YFA0713600) and the National Natural Science Foundation of China (62272214).
Abstract: In the era of the Internet of Things, distributed computing alleviates the problem of insufficient terminal computing power by integrating the idle resources of heterogeneous devices. However, the imbalance between task execution delay and node energy consumption, and the scheduling and adaptation challenges brought about by device heterogeneity, urgently need to be addressed. To tackle this problem, this paper constructs a multi-objective real-time task scheduling model that considers task real-time performance, execution delay, system energy consumption, and node interests. The model aims to minimize the delay upper bound and total energy consumption while maximizing system satisfaction. A real-time task scheduling algorithm based on a bilateral matching game is proposed. By designing a bidirectional preference mechanism between tasks and computing nodes, combined with a multi-round stable matching strategy, accurate matching between tasks and nodes is achieved. Simulation results show that compared with the baseline scheme, the proposed algorithm significantly reduces the total execution cost, effectively balances task execution delay and the energy consumption of compute nodes, and takes into account the interests of each network compute node.
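A bilateral matching with bidirectional preferences can be sketched via deferred acceptance, the classical Gale-Shapley-style procedure. This one-to-one toy (hypothetical task and node names) only illustrates the stable-matching core; the paper's multi-round strategy handles richer capacity and cost constraints.

```python
def stable_match(task_prefs, node_prefs):
    """Deferred acceptance between tasks and nodes (one-to-one sketch).
    task_prefs / node_prefs map each side to its ranked preference list."""
    rank = {n: {t: i for i, t in enumerate(p)} for n, p in node_prefs.items()}
    free = list(task_prefs)          # tasks still proposing
    nxt = {t: 0 for t in task_prefs}  # next node each task will propose to
    engaged = {}                      # node -> currently held task
    while free:
        t = free.pop()
        n = task_prefs[t][nxt[t]]
        nxt[t] += 1
        cur = engaged.get(n)
        if cur is None:
            engaged[n] = t                      # node was free: accept
        elif rank[n][t] < rank[n][cur]:
            engaged[n] = t                      # node prefers new task
            free.append(cur)                    # displaced task re-proposes
        else:
            free.append(t)                      # rejected: try next node
    return {t: n for n, t in engaged.items()}
```

The result is stable: no task-node pair would both prefer each other over their assigned partners, which is the property that lets such schedulers balance the interests of both sides.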
Funding: the authors express their appreciation to the Princess Nourah bint Abdulrahman University Researchers Supporting Project, number (PNURSP2025R384), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.
Abstract: The cloud-fog computing paradigm has emerged as a novel hybrid computing model that integrates computational resources at both fog nodes and cloud servers to address the challenges posed by dynamic and heterogeneous computing networks. Finding an optimal computational resource for task offloading and then executing efficiently is a critical issue in achieving a trade-off between energy consumption and transmission delay. In this network, a task processed at fog nodes reduces transmission delay but increases energy consumption, while routing tasks to the cloud server saves energy at the cost of higher communication delay. Moreover, the order in which offloaded tasks are executed affects the system's efficiency. For instance, executing lower-priority tasks before higher-priority jobs can disturb the reliability and stability of the system. Therefore, an efficient strategy for optimal computation offloading and task scheduling is required for operational efficacy. In this paper, we introduce a multi-objective and enhanced version of the Cheetah Optimizer (CO), namely MoECO, to jointly optimize computation offloading and task scheduling in cloud-fog networks and minimize two competing objectives, i.e., energy consumption and communication delay. MoECO first assigns tasks to the optimal computational nodes, and the allocated tasks are then scheduled for processing based on task priority. The mathematical modelling of CO needs improvement in computation time and convergence speed. Therefore, MoECO is proposed to increase the search capability of agents by controlling the search strategy based on a leader's location. The adaptive step-length operator is adjusted to diversify the solutions and thus improve the exploration phase, i.e., the global search strategy. Consequently, this prevents the algorithm from getting trapped in a local optimal solution. Moreover, the interaction factor during the exploitation phase is also adjusted based on the location of the prey instead of the adjacent cheetah, which increases the exploitation capability of agents, i.e., the local search capability. Furthermore, MoECO employs a multi-objective Pareto-optimal front to simultaneously minimize the designated objectives. Comprehensive simulations in MATLAB demonstrate that the proposed algorithm obtains multiple solutions via a Pareto-optimal front and achieves an efficient trade-off between the optimization objectives compared to baseline methods.
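The Pareto-optimal-front step reduces, in its simplest form, to filtering non-dominated (energy, delay) pairs. The sketch below shows only that filter, not the MoECO internals.

```python
def pareto_front(points):
    """Keep (energy, delay) pairs not dominated by any other point;
    lower is better in both objectives. A point q dominates p when q is
    no worse in both coordinates and differs from p."""
    return [p for p in points
            if not any(q != p and q[0] <= p[0] and q[1] <= p[1]
                       for q in points)]
```

The scheduler can then present this set of trade-off solutions and let an operator (or a secondary rule) pick the preferred energy/delay balance.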
Funding: funded by the European Commission through the Ruralities project (grant agreement no. 101060876).
Abstract: In this paper, we propose a new privacy-aware transmission scheduling algorithm for 6G ad hoc networks. This system enables end nodes to select the optimum time and scheme to transmit private data safely. In 6G dynamic heterogeneous infrastructures, unstable links and non-uniform hardware capabilities create critical issues regarding security and privacy. Traditional protocols are often too computationally heavy to allow 6G services to achieve their expected Quality-of-Service (QoS). As the transport network is built of ad hoc nodes, there is no guarantee about their trustworthiness or behavior, and transversal functionalities are delegated to the extreme nodes. However, while security can be guaranteed in extreme-to-extreme solutions, privacy cannot, as all intermediate nodes still have to handle the data packets they are transporting. Besides, traditional schemes for private anonymous ad hoc communications are vulnerable to modern intelligent attacks based on learning models. The proposed scheme fills this gap. Findings show that, when the proposed technology is used, the probability of a successful intelligent attack is reduced by up to 65% compared to ad hoc networks with no privacy protection strategy, while the congestion probability can remain below 0.001%, as required in 6G services.
Funding: supported by the Natural Science Foundation of Henan Province (Grant Nos. 232300421218 and 252300421483).
Abstract: The airplane refueling problem can be stated as follows. We are given n airplanes which can refuel one another during flight. Each airplane has a reservoir volume wj (liters) and a consumption rate pj (liters per kilometer). As soon as an airplane runs out of fuel, it drops out of the flight. The problem asks for a refueling scheme such that the last plane in the air reaches a maximal distance. An equivalent version is the n-vehicle exploration problem. The computational complexity of this non-linear combinatorial optimization problem is open so far. This paper employs the neighborhood exchange method of single-machine scheduling to study the precedence relations of jobs, so as to improve the necessary and sufficient conditions for optimal solutions, and establishes an efficient heuristic algorithm that generalizes several existing special-purpose algorithms.
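Assuming the standard drop-out formulation of this problem (planes drop out in a chosen order, and each plane's reservoir is consumed while it is still flying, so the leg before drop k covers w[k] divided by the combined rate of the remaining planes), the objective and an adjacent-exchange move can be sketched as follows. This is an illustrative neighborhood-exchange heuristic in the spirit of the abstract, not the paper's full algorithm.

```python
def distance(order, w, p):
    """Total distance for a drop-out order: while planes order[k:] remain,
    the next plane to drop contributes its whole reservoir to that leg."""
    total = 0.0
    for k in range(len(order)):
        total += w[order[k]] / sum(p[j] for j in order[k:])
    return total

def local_search(w, p):
    """Adjacent-exchange heuristic: swap neighboring drop positions
    while doing so increases the total distance."""
    order = list(range(len(w)))
    improved = True
    while improved:
        improved = False
        for i in range(len(order) - 1):
            cand = order[:]
            cand[i], cand[i + 1] = cand[i + 1], cand[i]
            if distance(cand, w, p) > distance(order, w, p) + 1e-12:
                order, improved = cand, True
    return order
```

With equal consumption rates the exchange condition degenerates to sorting by reservoir size, which is one of the known special cases the paper's heuristic generalizes.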
Funding: supported by the National Natural Science Foundation of China (No. U23A20305), the National Key Research and Development Program of China (No. 2022YFB3102900), the Innovation Scientists and Technicians Troop Construction Projects of Henan Province, China (No. 254000510007), and the Key Research and Development Project of Henan Province (No. 221111321200).
Abstract: To address the challenge of low survival rates and limited data collection efficiency in current virtual probe deployments, which result from anomaly detection mechanisms in location-based service (LBS) applications, this paper proposes a novel virtual probe deployment method based on user behavioral feature analysis. The core idea is to circumvent LBS anomaly detection by mimicking real-user behavior patterns. First, we design an automated data extraction algorithm that recognizes graphical user interface (GUI) elements to collect spatio-temporal behavior data. Then, by analyzing the automatically collected user data, we identify normal users' spatio-temporal patterns and extract features such as high-activity time windows and spatial clustering characteristics. Subsequently, an anti-detection scheduling strategy is developed, integrating spatial clustering optimization, load-balanced allocation, and time window control to generate probe scheduling schemes. Additionally, a self-correction mechanism based on an exponential backoff strategy is implemented to rectify anomalous behaviors and maintain system stability. Experiments in real-world environments demonstrate that the proposed method significantly outperforms baseline methods in terms of both probe ban rate and task completion rate, while maintaining high time efficiency. This study provides a more reliable and clandestine solution for geosocial data collection and lays the foundation for building more robust virtual probe systems.
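The exponential-backoff self-correction mentioned above follows a generic pattern; a minimal sketch with full jitter is shown below (the base, cap, and jitter choices here are assumptions, as the paper's parameters are not given in the abstract).

```python
import random

def backoff_delays(attempts, base=1.0, cap=60.0, seed=0):
    """Exponential backoff with full jitter: the delay before retry k is
    drawn uniformly from [0, min(cap, base * 2**k)] seconds."""
    rng = random.Random(seed)
    return [rng.uniform(0.0, min(cap, base * 2 ** k)) for k in range(attempts)]
```

Doubling the ceiling after each failed attempt spaces out retries, which is what lets a probe flagged as anomalous cool down instead of immediately repeating the behavior that triggered detection.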
Funding: funded by the State Grid Corporation Science and Technology Project "Research and Application of Key Technologies for Integrated Sensing and Computing for Intelligent Operation of Power Grid" (Grant No. 5700-202318596A-3-2-ZN).
Abstract: With the rapid development of power Internet of Things (IoT) scenarios such as smart factories and smart homes, numerous intelligent terminal devices and real-time interactive applications impose higher demands on computing latency and resource supply efficiency. Multi-access edge computing technology deploys cloud computing capabilities at the network edge, constructs distributed computing nodes and multi-access systems, and offers infrastructure support for services with low latency and high reliability. Existing research relies on the strong assumption that the environmental state is fully observable and fails to thoroughly consider the continuously time-varying features of edge server load fluctuations, leading to insufficient adaptability in heterogeneous dynamic environments. Thus, this paper establishes a framework for end-edge collaborative task offloading based on a partially observable Markov decision process (POMDP) and proposes a method for end-edge collaborative task offloading in heterogeneous scenarios. It achieves time-series modeling of the historical load characteristics of edge servers and endows the agent with the ability to be aware of the load in dynamic environmental states. Moreover, by dynamically assessing the exploration value of historical trajectories in the central trajectory pool and adjusting the sample weight distribution, directional exploration and strategy optimization of high-value trajectories are realized. Experimental results indicate that the proposed method exhibits distinct advantages over existing methods in terms of average delay and task failure rate, and also verify the method's robustness in a dynamic environment.
Funding: supported by the National Science and Technology Council, Taiwan, under grant no. NSTC 114-2221-E-197-005-MY3.
Abstract: With the development of technology, diffusion model-based solvers have shown significant promise in solving Combinatorial Optimization (CO) problems, particularly in tackling Non-deterministic Polynomial-time hard (NP-hard) problems such as the Traveling Salesman Problem (TSP). However, existing diffusion model-based solvers typically employ a fixed, uniform noise schedule (e.g., linear or cosine annealing) across all training instances, failing to fully account for the unique characteristics of each problem instance. To address this challenge, we present Graph-Guided Diffusion Solvers (GGDS), an enhanced method for improving graph-based diffusion models. GGDS leverages Graph Neural Networks (GNNs) to capture graph structural information embedded in node coordinates and adjacency matrices, dynamically adjusting the noise levels in the diffusion model. This study investigates the TSP by examining two distinct time-step noise generation strategies: cosine annealing and a Neural Network (NN)-based approach. We evaluate their performance across different problem scales, particularly after integrating graph structural information. Experimental results indicate that GGDS outperforms previous methods with average performance improvements of 18.7%, 6.3%, and 88.7% on TSP-500, TSP-100, and TSP-50, respectively. Specifically, GGDS demonstrates superior performance on TSP-500 and TSP-50, while its performance on TSP-100 is either comparable to or slightly better than that of previous methods, depending on the chosen noise schedule and decoding strategy.
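The fixed cosine-annealing baseline mentioned in the abstract follows the standard cumulative-signal schedule from the diffusion literature; a minimal sketch is shown below. (The NN-based and graph-guided schedules in the paper are learned and instance-dependent, not closed-form like this.)

```python
import math

def cosine_alpha_bar(t, T, s=0.008):
    """Cumulative signal level alpha_bar(t) under the cosine noise schedule
    (Nichol & Dhariwal); the noise level at step t is 1 - alpha_bar(t)."""
    def f(u):
        return math.cos((u / T + s) / (1 + s) * math.pi / 2) ** 2
    return f(t) / f(0)
```

The schedule decays smoothly from 1 at t = 0 to (numerically) 0 at t = T, and it is exactly this fixed, instance-agnostic curve that graph-guided noise adjustment replaces with per-instance levels.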
Funding: funded by Taif University, Taif, Saudi Arabia, project number (TU-DSPP-2024-17).
Abstract: Advanced technologies like Cyber-Physical Systems (CPS) and the Internet of Things (IoT) have supported modernizing and automating the transportation sector through the introduction of Intelligent Transportation Systems (ITS). Integrating CPS-ITS and IoT provides real-time Vehicle-to-Infrastructure (V2I) communication, supporting better traffic management, safety, and efficiency. These technological innovations generate complex problems that need to be addressed, notably data routing and Task Scheduling (TS) in ITS. Attempts to solve these problems were primarily based on traditional and experimental methods, and the solutions were not successful due to the dynamic nature of ITS. This is where Machine Learning (ML) and Swarm Intelligence (SI) have significantly impacted dealing with these challenges; in this line, this research paper presents a novel method for TS and data routing in CPS-ITS. This paper proposes using a cutting-edge ML algorithm, Gated Linear Unit-approximated Reinforcement Learning (GLRL), for data transmission in CPS-ITS. Greedy Iterative-Particle Swarm Optimization (GI-PSO), an extension of Particle Swarm Optimization (PSO), is recommended for TS. The primary objective of this study is to enhance the security and effectiveness of ITS systems that utilize CPS-ITS. This study trained and validated the models using a network simulation dataset of 50 nodes from numerous ITS environments. The experiments demonstrate that the proposed GLRL reduces End-to-End Delay (EED) by 12%, enhances data size utilization from 83.6% to 88.6%, and achieves higher bandwidth allocation, particularly in high-demand scenarios such as multimedia data streams, where adherence improved to 98.15%. Furthermore, GLRL reduced Network Congestion (NC) by 5.5%, demonstrating its efficiency in managing complex traffic conditions across several environments. The model passed simulation tests in three different environments: urban (UE), suburban (SE), and rural (RE). It met the high bandwidth requirements, made task scheduling more efficient, and increased network throughput (NT), proving it robust and flexible enough for scalable ITS applications. These innovations provide robust, scalable solutions for real-time traffic management, ultimately improving safety, reducing NC, and increasing overall NT. This study can advance ITS by making it more responsive, safer, and more effective across urban, suburban, and rural deployments.
Abstract: The increasing popularity of quantum computing has resulted in a considerable rise in demand for cloud quantum computing usage in recent years. Nevertheless, the rapid surge in demand for cloud-based quantum computing resources has led to a scarcity. In order to meet the needs of an increasing number of researchers, it is imperative to facilitate efficient and flexible access to computing resources in a cloud environment. In this paper, we propose a novel quantum computing paradigm, the Virtual QPU (VQPU), which addresses this issue and enhances quantum cloud throughput with guaranteed circuit fidelity. The proposal introduces three innovative concepts: (1) the integration of virtualization technology into the field of quantum computing to enhance quantum cloud throughput; (2) the introduction of an asynchronous circuit-execution methodology to improve quantum computing flexibility; (3) the development of a virtual QPU allocation scheme for quantum tasks in a cloud environment to improve circuit fidelity. The concepts have been validated using a self-built simulated quantum cloud platform.