To fulfill the requirements of hybrid real-time system scheduling, a long-release-interval-first (LRIF) real-time scheduling algorithm is proposed. The algorithm adopts both fixed and dynamic priorities to assign priorities to tasks. By assigning higher priorities to the aperiodic soft real-time jobs with longer release intervals, it guarantees the execution of periodic hard real-time tasks and further probabilistically guarantees the execution of aperiodic soft real-time tasks. A schedulability test approach for the LRIF algorithm is presented, and implementation issues of the algorithm are also discussed. Simulation results show that LRIF obtains better schedulability performance than the maximum urgency first (MUF) algorithm, the earliest deadline first (EDF) algorithm, and EDF for hybrid tasks. LRIF is well suited to scheduling both periodic hard real-time and aperiodic soft real-time tasks.
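The priority rule can be illustrated with a small sketch (task and job names are hypothetical, and this is a simplified reading of the LRIF rule rather than the paper's implementation): hard periodic tasks keep fixed priorities, rate-monotonic here, and always outrank the aperiodic soft jobs, which are ordered dynamically with longer release intervals first.

```python
# Illustrative LRIF-style priority assignment (hypothetical names).
# Periodic hard tasks receive fixed priorities (rate-monotonic here);
# aperiodic soft jobs are ordered dynamically, longer release interval first.

def assign_priorities(periodic_tasks, aperiodic_jobs):
    """Return a list of (name, priority); a lower number means higher priority.

    periodic_tasks: list of (name, period)
    aperiodic_jobs: list of (name, release_interval)
    """
    # Fixed priorities for hard tasks: shorter period -> higher priority.
    hard = sorted(periodic_tasks, key=lambda t: t[1])
    # Dynamic priorities for soft jobs: longer release interval -> higher priority.
    soft = sorted(aperiodic_jobs, key=lambda j: -j[1])
    ordered = hard + soft  # hard tasks always outrank soft jobs
    return [(name, prio) for prio, (name, _) in enumerate(ordered)]

ranking = assign_priorities([("T1", 10), ("T2", 5)], [("J1", 3), ("J2", 7)])
print(ranking)  # [('T2', 0), ('T1', 1), ('J2', 2), ('J1', 3)]
```

The sketch captures only the ordering policy; the paper's schedulability test and run-time mechanics are not reproduced here.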
With notably few exceptions, existing satellite mission operations cannot provide schedulability prediction; this includes the latest satellite planning service (SPS) standard, the Sensor Planning Service Interface Standard 2.0 Earth Observation Satellite Tasking Extension (EO SPS) approved by the Open Geospatial Consortium (OGC). The requestor can do nothing but wait for the results of time-consuming batch scheduling, and it is often too late to adjust the request upon receiving a scheduling failure. A supervised learning algorithm based on a robust decision tree and a bagging support vector machine (Bagging SVM) is proposed to solve this problem. The Bagging SVM is applied to improve classification accuracy, and the robust decision tree is utilized to reduce the error mean and error variance. Simulations and analysis show that a prediction can be accomplished in near real time with high accuracy. This means decision makers can maximize the probability of successful scheduling by changing request parameters, or take action in time to accommodate scheduling failures.
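The bagging idea behind the classifier can be sketched in a few lines. This is a stand-in, not the paper's Bagging SVM: each base learner here is a 1-D threshold stump fit on a bootstrap sample, and the ensemble predicts by majority vote; the data and names are illustrative only.

```python
import random

# Minimal bagging sketch: bootstrap-resample the training set, fit one
# simple base learner per sample, predict by majority vote.

def fit_stump(xs, ys):
    # Pick the (threshold, polarity) with minimum training error,
    # trying candidate thresholds at the sample points themselves.
    best = (float("inf"), 0.0, 1)
    for t in xs:
        for polarity in (1, -1):
            preds = [polarity if x >= t else -polarity for x in xs]
            err = sum(p != y for p, y in zip(preds, ys))
            if err < best[0]:
                best = (err, t, polarity)
    _, t, polarity = best
    return lambda x: polarity if x >= t else -polarity

def bagging_fit(xs, ys, n_estimators=15, seed=0):
    rng = random.Random(seed)
    n, models = len(xs), []
    for _ in range(n_estimators):
        idx = [rng.randrange(n) for _ in range(n)]  # bootstrap sample
        models.append(fit_stump([xs[i] for i in idx], [ys[i] for i in idx]))
    return models

def bagging_predict(models, x):
    return 1 if sum(m(x) for m in models) >= 0 else -1

xs = [0.1, 0.3, 0.4, 0.6, 0.8, 0.9]
ys = [-1, -1, -1, 1, 1, 1]
models = bagging_fit(xs, ys)
print(bagging_predict(models, 0.95))  # -> 1
```

A real Bagging SVM replaces the stump with an SVM base learner; the resampling and voting structure is the same.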
Safety-critical avionics systems, which are becoming more complex and tend to integrate multiple functionalities with different levels of criticality for better cost and power efficiency, are subject to certification at various levels of rigor. To simultaneously guarantee temporal constraints at all levels of assurance mandated by different criticalities, novel scheduling techniques are needed. In this paper, a mixed-criticality sporadic task model with multiple virtual deadlines is built, and a certification-cognizant dynamic scheduling approach referred to as earliest virtual-deadline first with mixed criticality (EVDF-MC) is considered, which exploits different relative deadlines of tasks in different criticality modes. For the corresponding schedulability analysis problem, a sufficient and efficient schedulability test is proposed on the basis of demand-bound functions derived in the mixed-criticality scenario. In addition, a modified simulated annealing (MSA)-based heuristic approach is established for virtual deadline assignment. Experiments with randomly generated tasks indicate that the proposed approach is computationally efficient and competes well against existing approaches.
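The demand-bound-function machinery the test builds on can be sketched for the classical single-criticality case (the paper's version adds per-mode virtual deadlines, which are omitted here; the task set below is illustrative):

```python
import math

# Demand bound function for a sporadic task (C, D, T): the maximum execution
# demand that must complete in any interval of length t. A sufficient EDF
# check verifies total demand never exceeds the interval length.

def dbf(task, t):
    C, D, T = task  # worst-case execution time, relative deadline, min inter-arrival
    if t < D:
        return 0
    return (math.floor((t - D) / T) + 1) * C

def edf_schedulable(tasks, horizon):
    # Checking every integer t up to `horizon` is crude but sufficient for a
    # sketch; real analyses check only deadline points up to a derived bound.
    return all(sum(dbf(tk, t) for tk in tasks) <= t for t in range(1, horizon + 1))

tasks = [(1, 4, 4), (2, 6, 8)]  # (C, D, T)
print(edf_schedulable(tasks, 50))  # True for this light task set
```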
In hard real-time systems, schedulability analysis is not only one of the important means of guaranteeing the timeliness of embedded software but also one of the fundamental theories underlying other techniques, such as energy saving and fault tolerance. However, most existing schedulability analysis methods assume that schedulers use either preemptive or non-preemptive scheduling. In this paper, we present a schedulability analysis method, the worst-case hybrid scheduling (WCHS) algorithm, which considers the influence of release jitter of transactions and extends schedulability analysis theory to the timing analysis of linear transactions under fixed-priority hybrid scheduling. To the best of our knowledge, this is the first method for timing analysis of linear transactions under hybrid scheduling. An example is employed to demonstrate its use. Experiments show that the method has lower computational complexity while maintaining correctness, and that hybrid scheduling has little influence on the average worst-case response time (WCRT) but a negative impact on system schedulability.
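As background, the classical WCRT fixed-point iteration for preemptive fixed-priority tasks can be sketched as follows; the paper's WCHS analysis generalizes this style of recurrence to linear transactions under hybrid scheduling with release jitter, which this sketch does not attempt.

```python
import math

# Classical response-time analysis: iterate
#   R = C + sum over higher-priority tasks j of ceil(R / Tj) * Cj
# until a fixed point is reached (or the value diverges).

def wcrt(C, higher_priority):
    """C: execution time of the task; higher_priority: list of (Cj, Tj)."""
    R = C
    while True:
        nxt = C + sum(math.ceil(R / Tj) * Cj for Cj, Tj in higher_priority)
        if nxt == R:
            return R
        if nxt > 1000:  # crude divergence guard for unschedulable tasks
            return None
        R = nxt

# Example: a task (C=3) under interference from (C=1, T=4) and (C=2, T=6).
print(wcrt(3, [(1, 4), (2, 6)]))  # 10
```

The iteration converges here in four steps: 3, 6, 7, 9, then 10, which is stable.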
Satellite edge computing has garnered significant attention from researchers; however, processing a large volume of tasks within multi-node satellite networks still poses considerable challenges. The sharp increase in user demand for latency-sensitive tasks has inevitably led to offloading bottlenecks and insufficient computational capacity on individual satellite edge servers, making it necessary to implement effective task offloading scheduling to enhance user experience. In this paper, we propose a priority-based task scheduling strategy built on a Software-Defined Network (SDN) framework for satellite-terrestrial integrated networks, which clarifies the execution order of tasks based on their priority. Subsequently, we apply a Dueling Double Deep Q-Network (DDQN) algorithm enhanced with prioritized experience replay to derive a computation offloading strategy, improving the experience replay mechanism within the Dueling-DDQN framework. Next, we utilize the Deep Deterministic Policy Gradient (DDPG) algorithm to determine the optimal resource allocation strategy and reduce the processing latency of sub-tasks. Simulation results demonstrate that the proposed d3-DDPG algorithm outperforms other approaches, effectively reducing task processing latency and thus improving user experience and system efficiency.
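The prioritized experience replay mechanism mentioned above can be sketched independently of any deep-learning framework. This is a minimal proportional-sampling buffer, not the paper's implementation (full implementations typically use a sum-tree and importance-sampling weights, both omitted here); the transitions are illustrative placeholders.

```python
import random

# Minimal prioritized experience replay: transitions with larger TD error
# get proportionally larger sampling probability.

class PrioritizedReplay:
    def __init__(self, capacity=1000, alpha=0.6, seed=0):
        self.capacity, self.alpha = capacity, alpha
        self.buffer, self.priorities = [], []
        self.rng = random.Random(seed)

    def add(self, transition, td_error):
        if len(self.buffer) >= self.capacity:
            self.buffer.pop(0)       # drop the oldest transition
            self.priorities.pop(0)
        self.buffer.append(transition)
        # Small epsilon keeps zero-error transitions sampleable.
        self.priorities.append((abs(td_error) + 1e-5) ** self.alpha)

    def sample(self, batch_size):
        return self.rng.choices(self.buffer, weights=self.priorities, k=batch_size)

buf = PrioritizedReplay()
buf.add(("s0", "a0", 1.0, "s1"), td_error=5.0)
buf.add(("s1", "a1", 0.0, "s2"), td_error=0.01)
batch = buf.sample(10)  # the high-TD-error transition dominates the batch
```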
Safe and efficient sortie scheduling on the confined flight deck is crucial for maintaining the high combat effectiveness of an aircraft carrier. The primary difficulty lies in spatiotemporal coordination, i.e., the allocation of limited supporting resources and collision avoidance between heterogeneous dispatch entities. In this paper, the problem is investigated from the perspective of a hybrid flow-shop scheduling problem (HFSP) by synthesizing the precedence, space, and resource constraints. Specifically, eight processing procedures are abstracted, where tractors, preparing spots, catapults, and launching are virtualized as machines. By analyzing the constraints in sortie scheduling, a mixed-integer programming model is constructed. In particular, the constraint on preparing-spot occupancy is improved to further enhance sortie efficiency. A basic trajectory library for each dispatch entity is generated, and a delay strategy is integrated to address the collision-avoidance issue. To efficiently solve the formulated HFSP, which is essentially a combinatorial problem with tightly coupled constraints, a chaos-initialized genetic algorithm is developed. The solution framework is validated in a simulation environment referring to the Fort-class carrier, exhibiting higher sortie efficiency than existing strategies. An animation of the simulation results is available at www.bilibili.com/video/BV14t421A7Tt/. The study presents a promising supporting technique for autonomous flight deck operation in the foreseeable future and can be easily extended to other supporting scenarios, e.g., ammunition delivery and aircraft maintenance.
Ship outfitting is a key process in shipbuilding, and efficient, high-quality outfitting is a top priority for modern shipyards. These activities are conducted at different stations of a shipyard, and the outfitting plan is one of the crucial issues in shipbuilding. In this paper, production scheduling and material ordering with endogenous uncertainty in the outfitting process are investigated. The uncertain factors in outfitting equipment production are usually decision-related, which makes it difficult to address uncertainties in the outfitting production workshops before production is conducted according to plan. This uncertainty is regarded as endogenous and can be treated as non-anticipativity constraints in the model. To address this problem, a stochastic two-stage programming model with endogenous uncertainty is established to optimize the outfitting job scheduling and raw material ordering process. A practical case from the shipyard of China Merchants Heavy Industry Co., Ltd. is used to evaluate the performance of the proposed method. Satisfactory results are achieved at the lowest expected total cost, as the complete kit rate of outfitting equipment is improved and emergency replenishment is reduced.
Recently, one of the main challenges facing the smart grid has been insufficient computing resources and intermittent energy supply for various distributed components (such as monitoring systems for renewable energy power stations). To solve this problem, we propose an energy-harvesting-based task scheduling and resource management framework to provide robust and low-cost edge computing services for the smart grid. First, we formulate an energy consumption minimization problem with regard to task offloading, time switching, and resource allocation for mobile devices, which can be decoupled and transformed into a typical knapsack problem. Solutions are then derived by two different algorithms. Furthermore, we deploy renewable energy and energy storage units at edge servers to tackle intermittency and instability problems. Finally, we design an energy management algorithm based on sample average approximation for edge computing servers to derive the optimal charging/discharging strategies, number of energy storage units, and renewable energy utilization. Simulation results show the efficiency and superiority of the proposed framework.
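Since the offloading subproblem reduces to a 0/1 knapsack, the standard dynamic-programming solution is worth recalling (values, weights, and capacity below are illustrative, not taken from the paper):

```python
# Standard 0/1 knapsack DP: dp[c] holds the best value achievable with
# capacity c using the items processed so far.

def knapsack(values, weights, capacity):
    """Maximum total value subject to total weight <= capacity."""
    dp = [0] * (capacity + 1)
    for v, w in zip(values, weights):
        for c in range(capacity, w - 1, -1):  # reverse order: each item used once
            dp[c] = max(dp[c], dp[c - w] + v)
    return dp[capacity]

print(knapsack([60, 100, 120], [10, 20, 30], 50))  # 220
```

In the offloading setting, an item's "value" would correspond to energy saved by offloading a task and its "weight" to the resources the task consumes.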
In the rapidly evolving landscape of television advertising, optimizing ad schedules to maximize viewer engagement and revenue has become significant. Traditional methods often operate in silos, limiting the potential insights gained from broader data analysis due to concerns over privacy and data sharing. This article introduces a novel approach that leverages Federated Learning (FL) to enhance TV ad schedule optimization, combining the strengths of local optimization techniques with the power of global Machine Learning (ML) models to uncover actionable insights without compromising data privacy. It combines linear programming for initial ad scheduling optimization with ML, specifically a K-Nearest Neighbors (KNN) model, for predicting ad spot positions. Given the diversity and difficulty of the ad-scheduling problem, we propose a prescriptive-predictive approach: first, the positions of the ads are optimized (using Google's OR-Tools CP-SAT), and the scheduled position of every ad is the result of the optimization problem. Second, this output becomes the target of a predictive task that predicts the positions of new entries based on their characteristics, ensuring the scheduling can be implemented at large scale (using KNN, Light Gradient Boosting Machine, and Random Forest). Furthermore, we explore the integration of FL to enhance predictive accuracy and strategic insight across different broadcasting networks while preserving data privacy. The FL approach resulted in 8750 ads being precisely matched to their optimal category placements, showcasing alignment with the intended diversity objectives. Additionally, there was minimal deviation observed, with 1133 ads positioned within a one-category variance from their ideal placement in the original dataset.
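The predictive step, predicting a placement for a new entry from its characteristics, can be sketched with a minimal KNN classifier. The features and labels below are illustrative stand-ins for ad characteristics and optimized category placements, not the article's data.

```python
from collections import Counter

# Minimal k-nearest-neighbours classifier: find the k training points
# closest in squared Euclidean distance and take a majority vote.

def knn_predict(train_X, train_y, x, k=3):
    dists = sorted(
        (sum((a - b) ** 2 for a, b in zip(row, x)), label)
        for row, label in zip(train_X, train_y)
    )
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

train_X = [(1, 1), (1, 2), (8, 8), (9, 8), (8, 9)]
train_y = ["prime_time", "prime_time", "late_night", "late_night", "late_night"]
print(knn_predict(train_X, train_y, (8.5, 8.5)))  # late_night
```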
The deployment of the Internet of Things (IoT) with smart sensors has facilitated the emergence of fog computing as an important technology for delivering services to smart environments such as campuses, smart cities, and smart transportation systems. Fog computing tackles a range of challenges, including processing, storage, bandwidth, latency, and reliability, by locally distributing secure information through end nodes. Consisting of endpoints, fog nodes, and back-end cloud infrastructure, it provides advanced capabilities beyond traditional cloud computing. In smart environments, particularly within smart city transportation systems, the abundance of devices and nodes poses significant challenges related to power consumption and system reliability. To address the challenges of latency, energy consumption, and fault tolerance in these environments, this paper proposes a latency-aware, fault-tolerant framework for resource scheduling and data management, referred to as the FORD framework, for smart cities in fog environments. The framework is designed to meet the demands of time-sensitive applications, such as those in smart transportation systems. It incorporates latency-aware resource scheduling to optimize task execution in smart city environments, leveraging resources from both fog and cloud environments. Through simulation-based executions, tasks are allocated to the nearest available nodes with minimum latency. In the event of an execution failure, a fault-tolerant mechanism is employed to ensure the successful completion of tasks. Upon successful execution, data is efficiently stored in the cloud data center, ensuring data integrity and reliability within the smart city ecosystem.
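The allocation loop described above, place each task on the reachable node with minimum latency and fall back to the next-best node on failure, can be sketched as follows. Node names, latencies, and the failure model are illustrative assumptions, not the FORD framework itself.

```python
# Latency-aware placement with a simple fault-tolerant fallback:
# nodes are ranked by latency; a (task, node) pair in `failed` simulates
# an execution failure and triggers a retry on the next-best node.

def schedule(tasks, nodes, failed=frozenset()):
    """tasks: list of task ids; nodes: dict name -> latency (ms).
    Returns {task: node}."""
    ranked = sorted(nodes, key=nodes.get)  # lowest latency first
    placement = {}
    for task in tasks:
        for node in ranked:
            if (task, node) not in failed:  # fault-tolerant retry
                placement[task] = node
                break
    return placement

nodes = {"fog-1": 5, "fog-2": 12, "cloud": 40}
print(schedule(["t1", "t2"], nodes, failed={("t1", "fog-1")}))
# {'t1': 'fog-2', 't2': 'fog-1'}
```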
The distributed permutation flow shop scheduling problem (DPFSP) has received increasing attention in recent years. The iterated greedy algorithm (IGA) serves as a powerful optimizer for this problem because of its straightforward, single-solution evolution framework. However, a potential drawback of IGA is its lack of utilization of historical information, which can lead to an imbalance between exploration and exploitation, especially in large-scale DPFSPs. Consequently, this paper develops an IGA with memory and learning mechanisms (MLIGA) to efficiently solve the DPFSP with the objective of minimal makespan. In MLIGA, we incorporate a memory mechanism to make a more informed selection of the initial solution at each stage of the search by extending, reconstructing, and reinforcing information from previous solutions. In addition, we design a two-layer cooperative reinforcement learning approach to intelligently determine the key parameters of IGA and the operations of the memory mechanism. Meanwhile, to ensure that the experience generated by each perturbation operator is fully learned and to reduce the prior parameters of MLIGA, a probability-curve-based acceptance criterion is proposed by combining a cube root function with custom rules. Finally, a discrete adaptive learning rate is employed to enhance the stability of the memory and learning mechanisms. Ablation experiments verify the effectiveness of the memory mechanism, showing that it improves the performance of IGA to a large extent. Furthermore, comparative experiments involving MLIGA and five state-of-the-art algorithms on 720 benchmarks show that MLIGA demonstrates significant potential for solving large-scale DPFSPs, indicating that it is well suited for real-world distributed flow shop scheduling.
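The basic iterated greedy loop that MLIGA builds on can be sketched for a single-factory permutation flow shop: destruction removes a few random jobs, and construction re-inserts each at its best position by makespan. This sketch has none of MLIGA's memory or learning mechanisms, and the processing times are illustrative.

```python
import random

# Basic iterated greedy (IG) for a permutation flow shop.

def makespan(seq, proc):
    """proc[job][machine]: processing times; returns completion of the last job."""
    m = len(proc[0])
    finish = [0] * m  # finish[k]: when machine k finished its latest operation
    for job in seq:
        for k in range(m):
            start = max(finish[k], finish[k - 1] if k else 0)
            finish[k] = start + proc[job][k]
    return finish[-1]

def best_insert(seq, job, proc):
    cands = [seq[:i] + [job] + seq[i:] for i in range(len(seq) + 1)]
    return min(cands, key=lambda s: makespan(s, proc))

def iterated_greedy(proc, iters=50, d=2, seed=1):
    rng = random.Random(seed)
    seq = list(range(len(proc)))
    best = seq[:]
    for _ in range(iters):
        partial = seq[:]
        removed = [partial.pop(rng.randrange(len(partial))) for _ in range(d)]
        for job in removed:                     # greedy reconstruction
            partial = best_insert(partial, job, proc)
        if makespan(partial, proc) <= makespan(seq, proc):  # accept non-worsening
            seq = partial
        if makespan(seq, proc) < makespan(best, proc):
            best = seq[:]
    return best, makespan(best, proc)

proc = [[3, 4], [2, 2], [5, 1], [1, 3]]  # 4 jobs x 2 machines
seq, ms = iterated_greedy(proc)
print(seq, ms)
```

The distributed variant additionally assigns jobs to factories; the destruction-construction core is the same.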
In this paper, a bilevel optimization model of an integrated energy operator (IEO) and load aggregator (LA) is constructed to address the coordinated optimization challenge of a multi-stakeholder island integrated energy system (IIES). The upper level represents the integrated energy operator, and the lower level is the electricity-heat-gas load aggregator. Owing to the benefit conflict between the upper and lower levels of the IIES, a dynamic pricing mechanism for coordinating their interests is proposed, incorporating factors such as the carbon emissions of the IIES and the lower-level load interruption power. In this mechanism, the price of energy sold to the lower-level LA can be dynamically adjusted according to information on carbon emissions and load interruption power, achieving mutual benefits and a win-win situation between the upper- and lower-level stakeholders. Finally, CPLEX is used to iteratively solve the bilevel optimization model, and the optimal solution is selected according to the joint optimal discrimination mechanism. The simulation results indicate that source-load coordinated operation can reduce upper- and lower-level operation costs. Using the proposed pricing mechanism, the carbon emissions and load interruption power of the IEO-LA are reduced by 9.78% and 70.19%, respectively, and the capture power of the carbon capture equipment is improved by 36.24%. The validity of the proposed model and method is thus verified.
To improve the efficiency of cloud-based web services, an improved plant growth simulation algorithm scheduling model is proposed. The model first uses mathematical methods to describe the relationships between cloud-based web services and the constraints of system resources. Then, a light-induced plant growth simulation algorithm is established. The performance of the algorithm was compared across several plant types, and the best plant model was selected as the system setting. Experimental results show that when the number of test cloud-based web services reaches 2048, the model is 2.14 times faster than particle swarm optimization (PSO), 2.8 times faster than the ant colony algorithm, 2.9 times faster than the bee colony algorithm, and a remarkable 8.38 times faster than the genetic algorithm.
Recently, unmanned aerial vehicle (UAV)-aided free-space optical (FSO) communication has attracted widespread attention. However, most existing research focuses on communication performance only. The authors investigate the integrated scheduling of communication, sensing, and control for UAV-aided FSO communication systems. Initially, a sensing-control model is established via control theory. Moreover, an FSO communication channel model is established by considering the effects of atmospheric loss, atmospheric turbulence, geometrical loss, and angle-of-arrival fluctuation. Then, the relationship between the motion control of the UAV and radial displacement is obtained to link the control and communication aspects. Assuming that the base station has instantaneous channel state information (CSI) or statistical CSI, the thresholds for sensing-control pattern activation are designed, respectively. Finally, an integrated scheduling scheme for performing communication, sensing, and control is proposed. Numerical results indicate that, compared with the conventional time-triggered scheme, the proposed integrated scheduling scheme obtains comparable communication and control performance but reduces the power consumed by sensing by 52.46%.
Timing constraint Petri nets (TCPNs) can be used to model a real-time system specification and to verify the timing behavior of the system. This paper describes the limitations of the reachability analysis method in analyzing complex systems for existing TCPNs. Based on further research on the schedulability analysis method for various topology structures, a more general state reachability analysis method is proposed. To meet the varied requirements of timely response in actual systems, this paper puts forward a heuristic method for selecting decision spans of transitions and develops a heuristic algorithm for the schedulability analysis of TCPNs. Examples are given to show the practicality of the method in the schedulability analysis of real-time systems with various structures.
A Micro-nano Earth Observation Satellite (MEOS) constellation has the advantages of low construction cost, short revisit cycle, and high functional density, and is considered a promising solution for serving rapidly growing observation demands. The observation Scheduling Problem in the MEOS constellation (MEOSSP) is a challenging issue due to the large number of satellites and tasks, as well as complex observation constraints. To address the large-scale and complicated MEOSSP, we develop a Two-Stage Scheduling Algorithm based on the Pointer Network with Attention mechanism (TSSA-PNA). In TSSA-PNA, MEOS observation scheduling is decomposed into a task allocation stage and a single-MEOS scheduling stage. In the task allocation stage, an adaptive task allocation algorithm with four problem-specific allocation operators is proposed to reallocate the unscheduled tasks to new MEOSs. For the single-MEOS scheduling stage, we design a pointer network based on the encoder-decoder architecture to learn the optimal single-MEOS scheduling solution and introduce an attention mechanism into the encoder to improve learning efficiency. The Pointer Network with Attention mechanism (PNA) can generate the single-MEOS scheduling solution quickly in an end-to-end manner. These two stages are performed iteratively to search for a high-profit solution, and a greedy local search algorithm is developed to improve the profits further. The performance of PNA and TSSA-PNA on single-MEOS and multi-MEOS scheduling problems is evaluated experimentally. The results demonstrate that PNA can obtain an approximate solution for the single-MEOS scheduling problem in a short time, and that TSSA-PNA achieves higher observation profits than existing scheduling algorithms within acceptable computational time for the large-scale MEOS scheduling problem.
Making plans is a good idea, but everyone's schedule looks different. You may have to talk about your plans before you're able to make some. It could sound like this: You ask, "Do you have plans this Friday night?" If the person already has plans, they may say, "I do. But I'm free on Saturday." If that day doesn't work for you, you can say, "I'm not available that day. How about Sunday afternoon?" After you figure out the day and time, mark it on your calendar.
Traditional quantum circuit scheduling approaches underutilize the inherent parallelism of quantum computation in the Noisy Intermediate-Scale Quantum (NISQ) era, overlooking that inter-layer operations can be further parallelized. Accordingly, two quantum circuit scheduling optimization approaches are designed and integrated into the quantum circuit compilation process. First, we introduce the Layered Topology Scheduling Approach (LTSA), which employs a greedy algorithm and leverages the principles of topological sorting in graph theory. LTSA allocates quantum gates to a layered structure, maximizing the concurrent execution of quantum gate operations. Second, the Layerwise Conflict Resolution Approach (LCRA) is proposed, which focuses on utilizing directly executable quantum gates within layers. Through the insertion of SWAP gates and conflict resolution checks, it minimizes conflicts and enhances parallelism, thereby optimizing overall computational efficiency. Experimental findings indicate that LTSA and LCRA individually achieve noteworthy reductions of 51.1% and 53.2%, respectively, in the number of inserted SWAP gates, and contribute to decreases in hardware gate overhead of 14.7% and 15%, respectively. Considering the intricate nature of quantum circuits and the temporal dependencies among layers, combining both approaches leads to a remarkable 51.6% reduction in inserted SWAP gates and a 14.8% decrease in hardware gate overhead. These results underscore the efficacy of the combined LTSA and LCRA in optimizing quantum circuit compilation.
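The greedy layering idea behind LTSA can be illustrated with a simplified sketch: each gate is placed in the earliest layer where none of its qubits are already busy, so gates on disjoint qubits execute concurrently. Hardware connectivity and the SWAP insertion described in the paper are omitted; the circuit is illustrative.

```python
# Greedy ASAP layering of a gate list: gates are tuples of the qubits they
# act on, in program order. Each gate lands in the earliest layer where all
# of its qubits are free.

def layer_gates(gates):
    layers = []     # each layer: list of gates that can run concurrently
    earliest = {}   # qubit -> index of the first layer where it is free
    for gate in gates:
        idx = max(earliest.get(q, 0) for q in gate)
        while len(layers) <= idx:
            layers.append([])
        layers[idx].append(gate)
        for q in gate:
            earliest[q] = idx + 1
    return layers

circuit = [(0, 1), (2, 3), (1, 2), (0,), (3,)]
print(layer_gates(circuit))
# [[(0, 1), (2, 3)], [(1, 2), (0,), (3,)]]
```

Here the two disjoint two-qubit gates share layer 0, and the remaining three gates, which touch no common qubit, share layer 1.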
Submission: Papers appearing in the Journal comprise Editorials, Rapid Communications, Perspectives, Tutorials, Feature Articles, Reviews, and Research Articles, which should contain original information, theoretical or experimental, on any topic in the field of polymer science and polymer material science. Papers already published or scheduled to be published elsewhere should not be submitted and certainly will not be accepted.
Complex road conditions without signalized intersections, when the traffic flow is nearly saturated, result in heavy traffic congestion and accidents, reducing the traffic efficiency of intelligent vehicles. In such complex road traffic environments, smart vehicles and other vehicles frequently experience conflicting start-and-stop motion. The fine-grained scheduling of autonomous vehicles (AVs) at non-signalized intersections, a promising technique for exploring optimal driving paths both for assisted driving today and for driverless cars in the near future, has attracted significant attention owing to its high potential for improving road safety and traffic efficiency. Fine-grained scheduling has primarily focused on signalized intersection scenarios; applying it directly to non-signalized intersections is challenging because each AV can move freely without traffic signal control, which may cause frequent driving collisions and low road traffic efficiency. Therefore, this study proposes a novel algorithm to address this issue: fine-grained scheduling of automated vehicles at non-signalized intersections via dual reinforced training (FS-DRL). For FS-DRL, we first use a grid to describe the non-signalized intersection and propose a convolutional neural network (CNN)-based fast decision model that can rapidly yield a coarse-grained scheduling decision for each AV in a distributed manner. We then load these coarse-grained scheduling decisions onto a deep Q-learning network (DQN) for further evaluation. We use an adaptive learning rate to maximize the reward function and employ a parameter ε to trade off the fast speed of coarse-grained scheduling in the CNN against optimal fine-grained scheduling in the DQN. In addition, we prove that using this adaptive learning rate leads to a converged loss rate with an extremely small number of training loops. The simulation results show that, compared with Dijkstra-, RNN-, and ant-colony-based scheduling, FS-DRL yields a high accuracy of 96.5% on the sample, with improved performance of approximately 61.54%-85.37% in terms of average conflict and traffic efficiency.
Funding: The Natural Science Foundation of Jiangsu Province (No. BK2005408)
Funding: the National Natural Science Foundation of China (Nos. 61174159 and 61101184)
Funding: co-supported by the National Natural Science Foundation of China (No. 61073012), the Aeronautical Science Foundation of China (No. 20111951015), and the Fundamental Research Funds for the Central Universities of China (No. YWF-14-DZXY018)
Abstract: Safety-critical avionics systems, which are becoming more complex and tend to integrate multiple functionalities with different levels of criticality for better cost and power efficiency, are subject to certification at various levels of rigor. In order to simultaneously guarantee temporal constraints at all levels of assurance mandated by different criticalities, novel scheduling techniques are needed. In this paper, a mixed-criticality sporadic task model with multiple virtual deadlines is built and a certification-cognizant dynamic scheduling approach, referred to as earliest virtual-deadline first with mixed-criticality (EVDF-MC), is considered, which exploits different relative deadlines of tasks in different criticality modes. For the corresponding schedulability analysis problem, a sufficient and efficient schedulability test is proposed on the basis of demand-bound functions derived in the mixed-criticality scenario. In addition, a modified simulated annealing (MSA)-based heuristic approach is established for virtual deadline assignment. Experiments with randomly generated tasks indicate that the proposed approach is computationally efficient and competes well against existing approaches.
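The demand-bound-function machinery such tests build on is standard for sporadic tasks. A plain (non-mixed-criticality) sketch under EDF, with our own (C, D, T) tuple convention and without the paper's virtual deadlines, might look like:

```python
import math

def dbf(C, D, T, t):
    """Demand bound function of a sporadic task with WCET C, relative
    deadline D and minimum inter-release separation T: the maximum
    execution demand released and due within any interval of length t."""
    if t < D:
        return 0
    return (math.floor((t - D) / T) + 1) * C

def edf_schedulable(tasks, horizon):
    """Sufficient check up to `horizon`: at every absolute deadline
    point, total demand must not exceed the interval length.
    tasks is a list of (C, D, T) tuples."""
    points = sorted({D + k * T
                     for (C, D, T) in tasks
                     for k in range(int(horizon // T) + 1)
                     if D + k * T <= horizon})
    return all(sum(dbf(C, D, T, t) for (C, D, T) in tasks) <= t
               for t in points)
```

The mixed-criticality test in the paper derives analogous demand bounds per criticality mode; this sketch shows only the single-mode baseline.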
Funding: the National Natural Science Foundation of China (No. 60533040), the Hi-Tech Research and Development Program (863) of China (Nos. 2007AA010304 and 2007AA01Z129), and the Key Scientific and Technological Project of Hangzhou Technology Bureau, China (No. 20062412B01)
Abstract: In hard real-time systems, schedulability analysis is not only one of the important means of guaranteeing the timeliness of embedded software but also one of the fundamental theories for applying other new techniques, such as energy saving and fault tolerance. However, most existing schedulability analysis methods assume that schedulers use preemptive or non-preemptive scheduling. In this paper, we present a schedulability analysis method, the worst-case hybrid scheduling (WCHS) algorithm, which considers the influence of release jitters of transactions and extends schedulability analysis theory to the timing analysis of linear transactions under fixed-priority hybrid scheduling. To the best of our knowledge, this is the first method for timing analysis of linear transactions under hybrid scheduling. An example demonstrates the use of the method. Experiments show that it has lower computational complexity while remaining correct, and that hybrid scheduling has little influence on the average worst-case response time (WCRT) but a negative impact on system schedulability.
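For context, the classic fixed-priority response-time iteration that such analyses extend (here without the jitter or transaction structure that WCHS adds) can be sketched as:

```python
import math

def wcrt(task, higher):
    """Fixed-priority response-time analysis for one task:
    iterate R = C + sum_j ceil(R / T_j) * C_j over the set of
    higher-priority tasks given as (Cj, Tj) pairs.
    task = (C, D); returns None if R would exceed the deadline D."""
    C, D = task
    R = C
    while True:
        nxt = C + sum(math.ceil(R / Tj) * Cj for (Cj, Tj) in higher)
        if nxt == R:       # fixed point reached: R is the WCRT
            return R
        if nxt > D:        # demand already exceeds the deadline
            return None
        R = nxt
```

WCHS-style analyses add jitter terms and precedence (transaction) offsets to this recurrence; the fixed-point structure stays the same.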
Abstract: Satellite edge computing has garnered significant attention from researchers; however, processing a large volume of tasks within multi-node satellite networks still poses considerable challenges. The sharp increase in user demand for latency-sensitive tasks has inevitably led to offloading bottlenecks and insufficient computational capacity on individual satellite edge servers, making it necessary to implement effective task offloading scheduling to enhance user experience. In this paper, we propose a priority-based task scheduling strategy based on a Software-Defined Network (SDN) framework for satellite-terrestrial integrated networks, which clarifies the execution order of tasks based on their priority. We then apply a Dueling-Double Deep Q-Network (DDQN) algorithm enhanced with prioritized experience replay to derive a computation offloading strategy, improving the experience replay mechanism within the Dueling-DDQN framework. Next, we utilize the Deep Deterministic Policy Gradient (DDPG) algorithm to determine the optimal resource allocation strategy to reduce the processing latency of sub-tasks. Simulation results demonstrate that the proposed d3-DDPG algorithm outperforms other approaches, effectively reducing task processing latency and thus improving user experience and system efficiency.
Funding: the National Key Research and Development Plan (2021YFB3302501) and the National Natural Science Foundation of China (12102077)
Abstract: Safe and efficient sortie scheduling on the confined flight deck is crucial for maintaining the high combat effectiveness of an aircraft carrier. The primary difficulty lies in spatiotemporal coordination, i.e., allocating limited supporting resources and avoiding collisions between heterogeneous dispatch entities. In this paper, the problem is investigated from the perspective of the hybrid flow-shop scheduling problem (HFSP) by synthesizing precedence, space, and resource constraints. Specifically, eight processing procedures are abstracted, where tractors, preparing spots, catapults, and launching are virtualized as machines. By analyzing the constraints in sortie scheduling, a mixed-integer programming model is constructed. In particular, the constraint on preparing-spot occupancy is improved to further enhance sortie efficiency. A basic trajectory library for each dispatch entity is generated and a delayed strategy is integrated to address collision avoidance. To efficiently solve the formulated HFSP, which is essentially a combinatorial problem with tightly coupled constraints, a chaos-initialized genetic algorithm is developed. The solution framework is validated in a simulation environment modeled on the Fort-class carrier, exhibiting higher sortie efficiency than existing strategies; an animation of the simulation results is available at www.bilibili.com/video/BV14t421A7Tt/. The study presents a promising supporting technique for autonomous flight deck operation in the foreseeable future and can be easily extended to other supporting scenarios, e.g., ammunition delivery and aircraft maintenance.
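A chaos-initialized GA population could, for example, be generated from a logistic map with random-key decoding into job permutations. The map parameters and the decoding scheme below are assumptions for illustration, not necessarily the paper's construction.

```python
def chaos_population(pop_size, n_jobs, x0=0.37, mu=4.0):
    """Build GA individuals from a logistic-map chaotic sequence
    x_{k+1} = mu * x_k * (1 - x_k); each block of n_jobs chaotic
    values is decoded into a job permutation by ranking (random keys)."""
    x = x0
    population = []
    for _ in range(pop_size):
        keys = []
        for _ in range(n_jobs):
            x = mu * x * (1 - x)   # one logistic-map step
            keys.append(x)
        # argsort of the chaotic keys yields a permutation of the jobs
        population.append(sorted(range(n_jobs), key=lambda j: keys[j]))
    return population
```

Compared with uniform random initialization, chaotic sequences are often used to spread the initial population more evenly over the search space.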
Funding: supported in part by the High-tech Ship Scientific Research Project of the Ministry of Industry and Information Technology of the People's Republic of China, the National Natural Science Foundation of China (Grant No. 71671113), the Science and Technology Department of Shaanxi Province (No. 2020GY-219), and the Ministry of Education Collaborative Project of Production, Learning and Research (No. 201901024016)
Abstract: Ship outfitting is a key process in shipbuilding, and efficient, high-quality outfitting is a top priority for modern shipyards. These activities are conducted at different stations of the shipyard, and the outfitting plan is one of the crucial issues in shipbuilding. In this paper, production scheduling and material ordering under the endogenous uncertainty of the outfitting process are investigated. The uncertain factors in outfitting equipment production are usually decision-related, which makes it difficult to address uncertainties in the outfitting production workshops before production is conducted according to plan. This uncertainty is regarded as endogenous and can be treated via non-anticipativity constraints in the model. To address this problem, a stochastic two-stage programming model with endogenous uncertainty is established to optimize outfitting job scheduling and the raw material ordering process. A practical case from the shipyard of China Merchants Heavy Industry Co., Ltd. is used to evaluate the performance of the proposed method. Satisfactory results are achieved at the lowest expected total cost, as the complete-kit rate of outfitting equipment is improved and emergency replenishment is reduced.
Funding: supported in part by the National Natural Science Foundation of China under Grant No. 61473066, the Natural Science Foundation of Hebei Province under Grant No. F2021501020, the S&T Program of Qinhuangdao under Grant No. 202401A195, the Science Research Project of Hebei Education Department under Grant No. QN2025008, and the Innovation Capability Improvement Plan Project of Hebei Province under Grant No. 22567637H
Abstract: Recently, one of the main challenges facing the smart grid is insufficient computing resources and intermittent energy supply for various distributed components (such as monitoring systems for renewable energy power stations). To solve this problem, we propose an energy-harvesting-based task scheduling and resource management framework to provide robust and low-cost edge computing services for the smart grid. First, we formulate an energy consumption minimization problem with regard to task offloading, time switching, and resource allocation for mobile devices, which can be decoupled and transformed into a typical knapsack problem. Solutions are then derived by two different algorithms. Furthermore, we deploy renewable energy and energy storage units at edge servers to tackle intermittency and instability problems. Finally, we design an energy management algorithm based on sample average approximation for edge computing servers to derive the optimal charging/discharging strategies, number of energy storage units, and renewable energy utilization. The simulation results show the efficiency and superiority of our proposed framework.
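Once the offloading problem is decoupled into a typical knapsack problem, a textbook 0/1 knapsack dynamic program applies. The value/weight interpretation (e.g., energy saved per task versus its resource demand) is our illustrative mapping, not the paper's exact formulation.

```python
def knapsack(values, weights, capacity):
    """0/1 knapsack by dynamic programming: maximize total value
    subject to the capacity budget (integer weights).
    Returns (best_value, sorted list of chosen item indices)."""
    n = len(values)
    best = [[0] * (capacity + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for w in range(capacity + 1):
            best[i][w] = best[i - 1][w]              # skip item i-1
            if weights[i - 1] <= w:                  # or take it
                take = best[i - 1][w - weights[i - 1]] + values[i - 1]
                if take > best[i][w]:
                    best[i][w] = take
    # Backtrack to recover the chosen item set.
    chosen, w = [], capacity
    for i in range(n, 0, -1):
        if best[i][w] != best[i - 1][w]:
            chosen.append(i - 1)
            w -= weights[i - 1]
    return best[n][capacity], sorted(chosen)
```

The DP runs in O(n * capacity) time, which is why the decoupling into a knapsack structure makes the offloading subproblem tractable.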
Funding: supported by a grant of the Ministry of Research, Innovation and Digitization, CNCS/CCCDI-UEFISCDI, project number COFUND-DUT-OPEN4CEC-1, within PNCDI IV
Abstract: In the rapidly evolving landscape of television advertising, optimizing ad schedules to maximize viewer engagement and revenue has become significant. Traditional methods often operate in silos, limiting the potential insights gained from broader data analysis due to concerns over privacy and data sharing. This article introduces a novel approach that leverages Federated Learning (FL) to enhance TV ad schedule optimization, combining the strengths of local optimization techniques with the power of global Machine Learning (ML) models to uncover actionable insights without compromising data privacy. It combines linear programming for initial ad scheduling optimization with ML, specifically a K-Nearest Neighbors (KNN) model, for predicting ad spot positions. Taking into account the diversity and difficulty of the ad-scheduling problem, we propose a prescriptive-predictive approach in which the positions of the ads are first optimized (using Google's OR-Tools CP-SAT), so that the scheduled position of every ad is the result of the optimization problem. Second, this output becomes the target of a predictive task that predicts the position of new entries based on their characteristics, enabling implementation of the scheduling at large scale (using KNN, Light Gradient Boosting Machine, and Random Forest). Furthermore, we explore the integration of FL to enhance predictive accuracy and strategic insight across different broadcasting networks while preserving data privacy. The FL approach resulted in 8750 ads being precisely matched to their optimal category placements, showcasing alignment with the intended diversity objectives. Additionally, minimal deviation was observed, with 1133 ads positioned within a one-category variance from their ideal placement in the original dataset.
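The predictive stage learns ad positions with a KNN model. A dependency-free sketch of plain majority-vote KNN (Euclidean distance, k = 3, feature vectors and labels all chosen for illustration) is:

```python
from collections import Counter

def knn_predict(train, query, k=3):
    """Plain k-nearest-neighbours vote: `train` is a list of
    (feature_vector, label) pairs; returns the majority label among
    the k points closest to `query` under squared Euclidean distance."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = sorted(train, key=lambda pair: dist(pair[0], query))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]
```

In the article's pipeline, the labels would be the CP-SAT-optimized positions and the features the ad characteristics, so new ads can be slotted without re-running the solver.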
Funding: supported by the Deanship of Scientific Research and Graduate Studies at King Khalid University under research grant number (R.G.P.2/93/45)
Abstract: The deployment of the Internet of Things (IoT) with smart sensors has facilitated the emergence of fog computing as an important technology for delivering services to smart environments such as campuses, smart cities, and smart transportation systems. Fog computing tackles a range of challenges, including processing, storage, bandwidth, latency, and reliability, by locally distributing secure information through end nodes. Consisting of endpoints, fog nodes, and back-end cloud infrastructure, it provides advanced capabilities beyond traditional cloud computing. In smart environments, particularly within smart city transportation systems, the abundance of devices and nodes poses significant challenges related to power consumption and system reliability. To address the challenges of latency, energy consumption, and fault tolerance in these environments, this paper proposes a latency-aware, fault-tolerant framework for resource scheduling and data management, referred to as the FORD framework, for smart cities in fog environments. This framework is designed to meet the demands of time-sensitive applications, such as those in smart transportation systems. The FORD framework incorporates latency-aware resource scheduling to optimize task execution in smart city environments, leveraging resources from both fog and cloud environments. Through simulation-based executions, tasks are allocated to the nearest available nodes with minimum latency. In the event of execution failure, a fault-tolerant mechanism is employed to ensure the successful completion of tasks. Upon successful execution, data is efficiently stored in the cloud data center, ensuring data integrity and reliability within the smart city ecosystem.
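The allocate-to-nearest-node-with-fallback behavior described above can be sketched as a simple greedy loop. The node representation and the name `schedule_task` are hypothetical stand-ins, not the FORD framework's API.

```python
def schedule_task(task, nodes, execute):
    """Greedy latency-aware dispatch: try available nodes from lowest
    latency upward; on execution failure, fall back to the next-nearest
    node (a simple fault-tolerant retry)."""
    for node in sorted(nodes, key=lambda n: n["latency_ms"]):
        if not node["available"]:
            continue
        if execute(task, node):      # True when the node completes the task
            return node["name"]
    return None                      # every candidate failed
```

A typical ordering would put fog nodes (single-digit milliseconds) ahead of the cloud data center, so the cloud only runs the task when the nearby fog nodes fail.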
Funding: supported in part by the National Key Research and Development Program of China under Grant No. 2021YFF0901300 and in part by the National Natural Science Foundation of China under Grant Nos. 62173076 and 72271048
Abstract: The distributed permutation flow shop scheduling problem (DPFSP) has received increasing attention in recent years. The iterated greedy algorithm (IGA) serves as a powerful optimizer for this problem because of its straightforward, single-solution evolution framework. However, a potential drawback of IGA is its lack of utilization of historical information, which can lead to an imbalance between exploration and exploitation, especially in large-scale DPFSPs. As a consequence, this paper develops an IGA with memory and learning mechanisms (MLIGA) to efficiently solve the DPFSP with the objective of minimal makespan. In MLIGA, we incorporate a memory mechanism to make a more informed selection of the initial solution at each stage of the search, by extending, reconstructing, and reinforcing information from previous solutions. In addition, we design a two-layer cooperative reinforcement learning approach to intelligently determine the key parameters of IGA and the operations of the memory mechanism. Meanwhile, to ensure that the experience generated by each perturbation operator is fully learned and to reduce the prior parameters of MLIGA, a probability-curve-based acceptance criterion is proposed by combining a cube-root function with custom rules. Finally, a discrete adaptive learning rate is employed to enhance the stability of the memory and learning mechanisms. Complete ablation experiments verify the effectiveness of the memory mechanism, and the results show that it can improve the performance of IGA to a large extent. Furthermore, comparative experiments involving MLIGA and five state-of-the-art algorithms on 720 benchmarks show that MLIGA demonstrates significant potential for solving large-scale DPFSPs. This indicates that MLIGA is well suited for real-world distributed flow shop scheduling.
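The baseline iterated greedy framework that MLIGA builds on, destruction of d jobs followed by greedy best-position reinsertion, can be sketched for a single-factory permutation flow shop (the memory and learning mechanisms, and the distributed factory assignment, are omitted):

```python
import random

def makespan(perm, p):
    """Makespan of a permutation flow shop: p[job][machine] gives
    processing times; jobs visit machines 0..m-1 in order."""
    m = len(p[0])
    finish = [0] * m                        # rolling completion per machine
    for j in perm:
        for k in range(m):
            prev = finish[k - 1] if k else 0   # this job on machine k-1
            finish[k] = max(finish[k], prev) + p[j][k]
    return finish[-1]

def iterated_greedy(p, d=2, iters=200, seed=1):
    """Basic IG: remove d random jobs, greedily re-insert each at its
    best position, accept the new schedule if makespan does not worsen."""
    rng = random.Random(seed)
    best = list(range(len(p)))
    for _ in range(iters):
        cur = best[:]
        removed = [cur.pop(rng.randrange(len(cur))) for _ in range(d)]
        for j in removed:                   # greedy NEH-style reinsertion
            cur = min((cur[:i] + [j] + cur[i:] for i in range(len(cur) + 1)),
                      key=lambda s: makespan(s, p))
        if makespan(cur, p) <= makespan(best, p):
            best = cur
    return best, makespan(best, p)
```

MLIGA's contribution is precisely in replacing the blind restart and fixed parameters of this loop with memory-guided initial solutions and learned parameter control.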
Funding: supported by the Central Government Guides Local Science and Technology Development Fund Project (2023ZY0020), the Key R&D and Achievement Transformation Project in Inner Mongolia Autonomous Region (2022YFHH0019), the Fundamental Research Funds for Inner Mongolia University of Science & Technology (2022053), the Natural Science Foundation of Inner Mongolia (2022LHQN05002), the National Natural Science Foundation of China (52067018), the Metallurgical Engineering First-Class Discipline Construction Project in Inner Mongolia University of Science and Technology, and the Control Science and Engineering Quality Improvement and Cultivation Discipline Project in Inner Mongolia University of Science and Technology
Abstract: In this paper, a bilevel optimization model of an integrated energy operator (IEO) and load aggregator (LA) is constructed to address the coordinated optimization challenge of a multi-stakeholder island integrated energy system (IIES). The upper level represents the integrated energy operator, and the lower level is the electricity-heat-gas load aggregator. Owing to the conflict of interest between the upper and lower levels of the IIES, a dynamic pricing mechanism for coordinating the interests of the two levels is proposed, which incorporates factors such as the carbon emissions of the IIES and the lower-level load interruption power. Under this mechanism, the price of selling energy to the lower-level LA can be dynamically adjusted according to information on carbon emissions and load interruption power, achieving mutual benefits and a win-win situation among the upper and lower stakeholders. Finally, CPLEX is used to iteratively solve the bilevel optimization model, and the optimal solution is selected according to the joint optimal discrimination mechanism. The simulation results indicate that coordinated source-load operation can reduce the upper- and lower-level operation costs. Using the proposed pricing mechanism, the carbon emissions and load interruption power of the IEO-LA are reduced by 9.78% and 70.19%, respectively, and the capture power of the carbon capture equipment is improved by 36.24%. The validity of the proposed model and method is verified.
Funding: the Shanxi Province Higher Education Science and Technology Innovation Fund Project (2022-676) and the Shanxi Soft Science Program Research Fund Project (2016041008-6)
Abstract: In order to improve the efficiency of cloud-based web services, an improved plant growth simulation algorithm scheduling model is proposed. The model first uses mathematical methods to describe the relationships between cloud-based web services and the constraints of system resources. Then, a light-induced plant growth simulation algorithm is established. The performance of the algorithm is compared across several plant types, and the best plant model is selected as the setting for the system. Experimental results show that when the number of test cloud-based web services reaches 2048, the model is 2.14 times faster than particle swarm optimization (PSO), 2.8 times faster than the ant colony algorithm, 2.9 times faster than the bee colony algorithm, and a remarkable 8.38 times faster than the genetic algorithm.
Abstract: Recently, unmanned aerial vehicle (UAV)-aided free-space optical (FSO) communication has attracted widespread attention. However, most existing research focuses on communication performance only. The authors investigate the integrated scheduling of communication, sensing, and control for UAV-aided FSO communication systems. Initially, a sensing-control model is established via control theory. Moreover, an FSO communication channel model is established by considering the effects of atmospheric loss, atmospheric turbulence, geometrical loss, and angle-of-arrival fluctuation. Then, the relationship between the motion control of the UAV and radial displacement is obtained to link the control and communication aspects. Assuming that the base station has instantaneous channel state information (CSI) or statistical CSI, the thresholds for sensing-control pattern activation are designed, respectively. Finally, an integrated scheduling scheme for performing communication, sensing, and control is proposed. Numerical results indicate that, compared with a conventional time-triggered scheme, the proposed integrated scheduling scheme obtains comparable communication and control performance but reduces the power consumed by sensing by 52.46%.
Abstract: Timing constraint Petri nets (TCPNs) can be used to model a real-time system specification and to verify the timing behavior of the system. This paper describes the limitations of the reachability analysis method in analyzing complex systems for existing TCPNs. Based on further research on the schedulability analysis method with various topology structures, a more general state reachability analysis method is proposed. To meet various requirements of timely response for actual systems, this paper puts forward a heuristic method for selecting decision spans of transitions and develops a heuristic algorithm for the schedulability analysis of TCPNs. Examples show the practicality of the method in the schedulability analysis of real-time systems with various structures.
Funding: supported by the National Natural Science Foundation of China (No. 62101587) and the National Funded Postdoctoral Researcher Program of China (No. GZC20233578)
Abstract: The Micro-nano Earth Observation Satellite (MEOS) constellation has the advantages of low construction cost, short revisit cycle, and high functional density, and is considered a promising solution for serving rapidly growing observation demands. The observation Scheduling Problem in the MEOS constellation (MEOSSP) is challenging due to the large number of satellites and tasks, as well as complex observation constraints. To address the large-scale and complicated MEOSSP, we develop a Two-Stage Scheduling Algorithm based on the Pointer Network with Attention mechanism (TSSA-PNA). In TSSA-PNA, MEOS observation scheduling is decomposed into a task allocation stage and a single-MEOS scheduling stage. In the task allocation stage, an adaptive task allocation algorithm with four problem-specific allocation operators is proposed to reallocate unscheduled tasks to new MEOSs. For the single-MEOS scheduling stage, we design a pointer network based on the encoder-decoder architecture to learn the optimal single-MEOS scheduling solution and introduce the attention mechanism into the encoder to improve learning efficiency. The Pointer Network with Attention mechanism (PNA) can generate the single-MEOS scheduling solution quickly in an end-to-end manner. These two stages are performed iteratively to search for high-profit solutions, and a greedy local search algorithm is developed to further improve the profits. The performance of PNA and TSSA-PNA on single-MEOS and multi-MEOS scheduling problems is evaluated in the experiments. The results demonstrate that PNA can obtain an approximate solution for the single-MEOS scheduling problem in a short time. Besides, TSSA-PNA achieves higher observation profits than existing scheduling algorithms within acceptable computational time for the large-scale MEOS scheduling problem.
Abstract: Making plans is a good idea, but everyone's schedule looks different. You may have to talk about your plans before you're able to make some. It could sound like this: You ask, "Do you have plans this Friday night?" If the person already has plans, they may say, "I do. But I'm free on Saturday." If that day doesn't work for you, you can say, "I'm not available that day. How about Sunday afternoon?" After you figure out the day and time, mark it on your calendar.
Funding: funded by the Natural Science Foundation of Heilongjiang Province (Grant No. LH2022F035), the Cultivation Programme for Young Innovative Talents in Ordinary Higher Education Institutions of Heilongjiang Province (Grant No. UNPYSCT-2020212), and the Cultivation Programme for Young Innovative Talents in Scientific Research of Harbin University of Commerce (Grant No. 2023-KYYWF-0983)
Abstract: Traditional quantum circuit scheduling approaches underutilize the inherent parallelism of quantum computation in the Noisy Intermediate-Scale Quantum (NISQ) era, overlooking that inter-layer operations can be further parallelized. Based on this, two quantum circuit scheduling optimization approaches are designed and integrated into the quantum circuit compilation process. First, we introduce the Layered Topology Scheduling Approach (LTSA), which employs a greedy algorithm and leverages the principles of topological sorting in graph theory. LTSA allocates quantum gates to a layered structure, maximizing the concurrent execution of quantum gate operations. Second, the Layerwise Conflict Resolution Approach (LCRA) is proposed. LCRA focuses on utilizing directly executable quantum gates within layers. Through the insertion of SWAP gates and conflict resolution checks, it minimizes conflicts and enhances parallelism, thereby optimizing overall computational efficiency. Experimental findings indicate that LTSA and LCRA individually achieve noteworthy reductions of 51.1% and 53.2%, respectively, in the number of inserted SWAP gates, and contribute to decreases in hardware gate overhead of 14.7% and 15%, respectively. Considering the intricate nature of quantum circuits and the temporal dependencies among different layers, the combination of both approaches leads to a remarkable 51.6% reduction in inserted SWAP gates and a 14.8% decrease in hardware gate overhead. These results underscore the efficacy of the combined LTSA and LCRA in optimizing quantum circuit compilation.
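The greedy layering idea behind LTSA, each gate placed in the earliest layer where all of its qubits are free, can be sketched on an abstract circuit with gates given as qubit tuples in program order. This illustration ignores hardware connectivity and SWAP insertion, which the full approaches handle.

```python
from collections import defaultdict

def layer_gates(gates):
    """Greedy layering of a quantum circuit: a gate joins the earliest
    layer in which none of its qubits is still busy.  `gates` is a list
    of qubit tuples in program order; dependencies follow from shared
    qubits.  Returns a list of layers of gate indices."""
    next_free = defaultdict(int)   # earliest free layer per qubit
    layers = defaultdict(list)
    for g, qubits in enumerate(gates):
        layer = max(next_free[q] for q in qubits)
        layers[layer].append(g)
        for q in qubits:
            next_free[q] = layer + 1
    return [layers[i] for i in range(len(layers))]
```

Gates in the same layer touch disjoint qubit sets, so they can execute concurrently, which is the parallelism LTSA exploits.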
Abstract: Submission: Papers appearing in the Journal comprise Editorials, Rapid Communications, Perspectives, Tutorials, Feature Articles, Reviews, and Research Articles, which should contain original information, theoretical or experimental, on any topic in the field of polymer science and polymer material science. Papers already published or scheduled to be published elsewhere should not be submitted and certainly will not be accepted.
Funding: supported by the National Natural Science Foundation of China (Grant No. 61803206), the Jiangsu Provincial Natural Science Foundation (Grant No. 222300420468), and the Jiangsu Provincial Key R&D Program (Grant No. BE2017008-2)
Abstract: Complex road conditions at non-signalized intersections, when traffic flow is nearly saturated, result in heavy congestion and accidents, reducing the traffic efficiency of intelligent vehicles. In such environments, smart vehicles and other vehicles frequently experience conflicting start-and-stop motions. The fine-grained scheduling of autonomous vehicles (AVs) at non-signalized intersections, a promising technique both for assisted driving today and for driverless cars in the near future, has attracted significant attention owing to its high potential for improving road safety and traffic efficiency. Fine-grained scheduling has primarily focused on signalized intersection scenarios; applying it directly to non-signalized intersections is challenging because each AV can move freely without traffic signal control, which may cause frequent driving collisions and low road traffic efficiency. Therefore, this study proposes a novel algorithm to address this issue: fine-grained scheduling of automated vehicles at non-signalized intersections via dual reinforced training (FS-DRL). For FS-DRL, we first use a grid to describe the non-signalized intersection and propose a convolutional neural network (CNN)-based fast decision model that can rapidly yield a coarse-grained scheduling decision for each AV in a distributed manner. We then load these coarse-grained scheduling decisions onto a deep Q-learning network (DQN) for further evaluation. We use an adaptive learning rate to maximize the reward function and employ a parameter ε to trade off the fast speed of coarse-grained scheduling in the CNN against optimal fine-grained scheduling in the DQN. In addition, we prove that using this adaptive learning rate leads to a converged loss rate within an extremely small number of training loops. The simulation results show that, compared with Dijkstra-, RNN-, and ant-colony-based scheduling, FS-DRL yields a high accuracy of 96.5% on the sample, with improved performance of approximately 61.54%-85.37% in terms of average conflict and traffic efficiency.