To achieve high quality of service (QoS) on computational grids, QoS-aware job scheduling is investigated for a hierarchical, decentralized grid architecture that consists of multilevel schedulers. An integrated QoS-aware job dispatching policy is proposed, which correlates the priorities of incoming jobs used for job selection at the local scheduler of each grid node with the job dispatching policies at the global scheduler. A stochastic high-level Petri net (SHLPN) model of a two-level hierarchical computational grid architecture is presented, and a model refinement is made to reduce the complexity of the model solution. A performance analysis technique based on the SHLPN is proposed to investigate the QoS-aware job scheduling policy. Numerical results show that the QoS-aware job dispatching policy outperforms the QoS-unaware policy in balancing high-priority jobs and thus enables priority-based QoS.
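As a rough illustration of the two-level, priority-aware dispatching idea summarized above, the following Python sketch sends high-priority jobs to the grid node with the smallest high-priority backlog and other jobs to the least-loaded node. The Job/Node structures and the load metric are illustrative assumptions, not the paper's SHLPN-based formulation.

```python
# Illustrative sketch only: a toy two-level dispatcher in the spirit of the
# priority-aware policy described above. Node/Job fields and the load metric
# are assumptions, not the paper's actual SHLPN-based formulation.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Job:
    job_id: int
    priority: int          # higher value = higher priority

@dataclass
class Node:
    name: str
    queue: List[Job] = field(default_factory=list)

    def backlog(self, min_priority: int = 0) -> int:
        # Number of queued jobs at or above a given priority.
        return sum(1 for j in self.queue if j.priority >= min_priority)

def dispatch(job: Job, nodes: List[Node]) -> Node:
    """Send high-priority jobs to the node with the fewest queued
    high-priority jobs; low-priority jobs go to the least-loaded node."""
    if job.priority >= 2:
        target = min(nodes, key=lambda n: n.backlog(min_priority=2))
    else:
        target = min(nodes, key=lambda n: len(n.queue))
    target.queue.append(job)
    return target

nodes = [Node("grid-node-A"), Node("grid-node-B")]
for i, p in enumerate([2, 1, 2, 0]):
    print(dispatch(Job(i, p), nodes).name)
```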
Big data analytics in business intelligence does not provide effective data retrieval methods or job scheduling, which causes execution inefficiency and low system throughput. This paper aims to enhance data retrieval and job scheduling to speed up big data analytics and overcome these inefficiency and low-throughput problems. First, integrating a stacked sparse autoencoder with Elasticsearch indexing provides fast data searching and distributed indexing, which reduces the search scope of the database and dramatically speeds up data searching. Next, a deep neural network is exploited to predict the approximate execution time of a job, enabling prioritized job scheduling based on shortest-job-first, which reduces the average waiting time of job execution. As a result, the proposed data retrieval approach outperforms a previous method using a deep autoencoder and Solr indexing, improving the speed of data retrieval by up to 53% and increasing system throughput by 53%. In addition, the proposed job scheduling algorithm outperforms both first-in-first-out and memory-sensitive heterogeneous early-finish-time scheduling algorithms, shortening the average waiting time by up to 5% and the average weighted turnaround time by 19%, respectively.
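A minimal sketch of the shortest-predicted-job-first idea described in this abstract: jobs are ordered by an estimated runtime and dispatched shortest first. The `predict_runtime` function below is a stand-in for the paper's trained deep neural network; its features and coefficients are invented.

```python
# Minimal sketch of shortest-predicted-job-first scheduling, as described above.
# The runtime predictor here is a stand-in (a fixed linear formula); the paper
# uses a trained deep neural network, which is not reproduced here.
import heapq

def predict_runtime(job: dict) -> float:
    # Assumption: a pretrained model would map job features to seconds.
    return job["estimated_input_mb"] * 0.05 + job["stages"] * 2.0

def schedule_sjf(jobs: list[dict]) -> list[str]:
    """Order jobs by predicted execution time (shortest first)."""
    heap = [(predict_runtime(j), j["name"]) for j in jobs]
    heapq.heapify(heap)
    order = []
    while heap:
        _, name = heapq.heappop(heap)
        order.append(name)
    return order

jobs = [
    {"name": "report-agg", "estimated_input_mb": 400, "stages": 3},
    {"name": "ad-hoc-query", "estimated_input_mb": 20, "stages": 1},
    {"name": "etl-batch", "estimated_input_mb": 1500, "stages": 5},
]
print(schedule_sjf(jobs))   # shortest predicted job runs first
```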
A Dominant Resource Fairness (DRF) based scheme is proposed for job scheduling in distributed cloud computing systems, modeled as a coupled multi-job scheduling and multi-resource allocation problem in which the resource pool is constructed from a large number of distributed heterogeneous servers representing different points in the configuration space of resources such as processing, memory, storage and bandwidth. By introducing the dominant resource shares of jobs and virtual machines, the joint multi-job scheduling and multi-resource allocation mechanism significantly improves the cloud system's resource utilization while substantially reducing job completion times. Experiments and case studies show the superior performance of the algorithms in practice.
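Dominant Resource Fairness itself is well documented, so a simplified version of its core loop can illustrate the scheme: keep granting a task to the job whose dominant share is currently smallest. The capacities and per-task demands below are made up for the example.

```python
# Simplified illustration of the Dominant Resource Fairness idea: repeatedly
# grant one task to the job whose dominant share (its largest fractional use
# of any single resource) is currently smallest. Capacities and demands are
# invented for this example.
capacity = {"cpu": 18.0, "mem_gb": 36.0}
remaining = dict(capacity)
demands = {
    "job_a": {"cpu": 1.0, "mem_gb": 4.0},   # memory-heavy job
    "job_b": {"cpu": 3.0, "mem_gb": 1.0},   # cpu-heavy job
}
allocated = {j: {r: 0.0 for r in capacity} for j in demands}

def dominant_share(job):
    return max(allocated[job][r] / capacity[r] for r in capacity)

while True:
    job = min(demands, key=dominant_share)            # lowest dominant share first
    need = demands[job]
    if any(need[r] > remaining[r] for r in capacity):
        break                                         # the next task no longer fits
    for r in capacity:
        allocated[job][r] += need[r]
        remaining[r] -= need[r]

print({j: round(dominant_share(j), 3) for j in demands})   # shares end up roughly equal
```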
In this paper, we give a mathematical model for the earliness-tardiness job scheduling problem with a common due window on parallel, non-identical machines. Because this scheduling problem contains the problem of minimizing makespan, which is NP-complete on parallel uniform machines, a heuristic algorithm is presented to find an approximate solution after an important theorem is proved. Two numerical examples illustrate that the heuristic algorithm is useful and effective in obtaining near-optimal solutions.
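The common-due-window objective can be illustrated with a small, hedged example: jobs completing before the window incur earliness penalties and jobs completing after it incur tardiness penalties. The unit penalties `alpha` and `beta` are placeholders; the paper's exact model and weights may differ.

```python
# A generic earliness-tardiness cost against a common due window [e, d]:
# jobs finishing before e incur earliness, after d incur tardiness. The unit
# penalties alpha/beta are illustrative; the paper's exact model may differ.
def et_cost(completion_times, window_start, window_end, alpha=1.0, beta=2.0):
    total = 0.0
    for c in completion_times:
        earliness = max(0.0, window_start - c)
        tardiness = max(0.0, c - window_end)
        total += alpha * earliness + beta * tardiness
    return total

# Example: due window [10, 14]; jobs completing at 8, 12 and 17.
print(et_cost([8, 12, 17], 10, 14))   # 1*2 + 0 + 2*3 = 8
```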
One of the fundamental problems in parallel and distributed systems is deciding how to allocate jobs to processors. The goals of job scheduling in a parallel environment are to minimize the parallel execution time of a job and to balance the users' desires with the system's: users want their jobs completed as quickly as possible, while the system wants to service as many jobs as possible. In this paper, a dynamic job scheduling algorithm is introduced. The algorithm uses information about the running system to allocate jobs more evenly. Communication between a processor and the scheduler is overlapped with the processor's computation, so the communication overhead remains small. Scheduling is based on the desirability of each processor, and the scheduler does not allocate a new job to a processor that is already fully utilized, which increases the execution efficiency of the system. The algorithm can also be reused within other, more complex algorithms.
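A toy sketch of desirability-based placement, under the assumption that desirability is simply spare capacity: saturated processors are skipped and the job goes to the processor with the most headroom.

```python
# Illustrative sketch of desirability-based placement: skip processors that are
# already saturated and prefer the one with the most spare capacity. The
# desirability metric used here (spare capacity) is an assumption.
from dataclasses import dataclass

@dataclass
class Processor:
    name: str
    utilization: float   # 0.0 .. 1.0

def pick_processor(processors, max_utilization=0.9):
    candidates = [p for p in processors if p.utilization < max_utilization]
    if not candidates:
        return None                       # defer the job; every processor is busy
    return max(candidates, key=lambda p: 1.0 - p.utilization)

procs = [Processor("p0", 0.95), Processor("p1", 0.40), Processor("p2", 0.70)]
chosen = pick_processor(procs)
print(chosen.name if chosen else "queue the job for later")   # -> p1
```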
The flexible job shop scheduling problem (FJSP) is commonly encountered in practical manufacturing environments. A product is typically built by assembling multiple jobs, and AGVs are normally used to transport the jobs from the processing shop to the assembly shop, where they are assembled. Studying the integrated scheduling problem across its processing, transportation, and assembly stages is therefore highly beneficial and significant. This research studies the three-stage flexible job shop scheduling problem with assembly and AGV transportation (FJSP-T-A), which includes processing jobs, transporting them via AGVs, and assembling them. A mixed integer linear programming (MILP) model is established to obtain optimal solutions. As the MILP model is challenging to solve for large-scale problems, a novel co-evolutionary algorithm (NCEA) with two different decoding methods is proposed. In the NCEA, a restart operation is developed to improve population diversity, and a multiple-crossover strategy is designed to improve the quality of individuals. The validity of the MILP model is verified by analyzing its complexity. The effectiveness of the restart operator, the multiple crossovers, and the proposed algorithm is demonstrated by calculating and analyzing the RPI values of each algorithm's results within the time limit and by performing a paired t-test on the averages of each algorithm at the 95% confidence level. This paper studies FJSP-T-A with the objective of minimizing makespan for the first time and presents a MILP model and an NCEA with two different decoding methods.
With the development of economic globalization, distributed manufacturing is becoming more and more prevalent. Recently, integrated scheduling of distributed production and assembly has attracted much attention. This research studies a distributed flexible job shop scheduling problem with assembly operations. Firstly, a mixed integer programming model is formulated to minimize the maximum completion time. Secondly, a Q-learning-assisted coevolutionary algorithm is presented to solve the model: (1) multiple populations are evolved to seek the required decisions simultaneously; (2) an encoding and decoding method based on problem features is applied to represent individuals; (3) a hybrid of heuristic rules and random methods is employed to acquire a high-quality initial population; (4) three evolutionary strategies with crossover and mutation methods are adopted to enhance exploration; (5) three neighborhood structures based on problem features are constructed, and a Q-learning-based iterative local search method is devised to improve exploitation, with the Q-learning approach intelligently selecting better neighborhood structures. Finally, a group of instances is constructed for comparison experiments. The effectiveness of the Q-learning approach is verified by comparing the developed algorithm with a variant without Q-learning. Three renowned meta-heuristic algorithms are also compared with the developed algorithm, and the results demonstrate that the designed method performs better on the formulated problem.
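To make step (5) more concrete, here is a hedged sketch of an epsilon-greedy Q-learning loop that chooses among neighborhood structures and rewards moves that improve the makespan. The states, rewards and the neighborhood operators are stand-ins, not the paper's exact design.

```python
# Sketch of the Q-learning idea used to pick a neighborhood structure during
# local search: an epsilon-greedy agent keeps a Q-value per neighborhood and
# rewards moves that improve the makespan. Rewards and the neighborhood
# operators themselves are stand-ins, not the paper's exact design.
import random

N_NEIGHBORHOODS = 3
q_values = [0.0] * N_NEIGHBORHOODS
alpha, gamma, epsilon = 0.1, 0.9, 0.2

def select_neighborhood() -> int:
    if random.random() < epsilon:
        return random.randrange(N_NEIGHBORHOODS)                        # explore
    return max(range(N_NEIGHBORHOODS), key=lambda a: q_values[a])       # exploit

def update(action: int, reward: float) -> None:
    best_next = max(q_values)
    q_values[action] += alpha * (reward + gamma * best_next - q_values[action])

makespan = 100.0
for step in range(50):
    a = select_neighborhood()
    # Stand-in for "apply neighborhood a to the schedule and re-evaluate it".
    new_makespan = makespan - random.uniform(-1.0, 2.0 + a * 0.2)
    reward = 1.0 if new_makespan < makespan else -0.1
    update(a, reward)
    makespan = min(makespan, new_makespan)

print([round(q, 3) for q in q_values], round(makespan, 2))
```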
As one of the most classical scheduling problems, the flexible job shop scheduling problem (FJSP) finds widespread applications in modern intelligent manufacturing systems. However, the majority of meta-heuristic methods for solving FJSP in the literature are population-based evolutionary algorithms, which are complex and time-consuming. In this paper, we propose a fast and effective single-solution-based local search algorithm with an innovative adaptive weighting-based local search (AWLS) technique for solving FJSP. The adaptive weighting technique assigns a weight to each operation and adaptively updates it during exploration. AWLS integrates a tabu search strategy with the adaptive weighting technique to smooth the landscape of the search space and enhance exploration diversity. Computational experiments on 313 well-known benchmark instances demonstrate that AWLS is highly competitive with state-of-the-art algorithms in terms of both solution quality and computational efficiency, despite its simplicity. Specifically, AWLS improves the previous best-known results in the literature on 33 instances and matches the best-known results on all but one of the remaining instances under the same time limit of up to 300 s. For a strongly NP-hard problem that has been studied intensively for nearly half a century, breaking the records on these classic instances is an arduous task. Nevertheless, AWLS establishes new records on 8 challenging instances whose previous best records were set by a state-of-the-art meta-heuristic algorithm and a well-known industrial solver.
The job shop scheduling problem has been studied for decades and is known to be NP-hard. The flexible job shop scheduling problem is a generalization of the classical job shop scheduling problem that allows an operation to be processed on one machine out of a set of machines. The problem is to assign each operation to a machine and find a sequence for the operations on each machine so that the maximal completion time of all operations is minimized. A genetic algorithm is used to solve the flexible job shop scheduling problem. A novel gene coding method aimed at the job shop problem is introduced; it is intuitive and does not need a repair process to validate the genes. Computer simulations are carried out and the results show the effectiveness of the proposed algorithm.
Data-parallel computing platforms, such as Hadoop and Spark, are deployed in computing clusters for big data analytics. There is a general tendency for multiple users to share the same computing cluster, so scheduling multiple jobs becomes a serious challenge. For a long time, the Shortest-Job-First (SJF) method has been considered the optimal way to minimize the average job completion time. However, SJF leads to low system throughput when a small number of short jobs consume a large amount of resources, which in turn prolongs the average job completion time. We propose an improved heuristic job scheduling method, called the Densest-Job-Set-First (DJSF) method. DJSF schedules jobs by maximizing the number of completed jobs per unit time, aiming to decrease the average Job Completion Time (JCT) and improve system throughput. We perform extensive simulations based on Google cluster data. Compared with SJF, DJSF decreases the average JCT by 23.19% and enhances system throughput by 42.19%. Compared with Tetris, the job packing method improves job completion efficiency by 55.4%, so that the computing platform completes more jobs in a short time span.
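A toy version of the densest-job-set idea: among batches of pending jobs that fit the cluster capacity, prefer the batch that completes the most jobs per unit time. The jobs, the capacity, and the prefix-based batch construction below are simplifying assumptions.

```python
# Toy illustration of the "densest job set" idea: among candidate batches that
# fit the cluster capacity, prefer the batch completing the most jobs per unit
# time (batch size / longest runtime in the batch). Jobs, capacity and the
# batch-forming rule are simplified assumptions for this sketch.
def densest_batch(jobs, capacity):
    """jobs: list of (name, runtime, slots). Returns the chosen batch and its density."""
    jobs = sorted(jobs, key=lambda j: j[1])          # shortest runtime first
    best, best_density = [], 0.0
    batch, used = [], 0
    for name, runtime, slots in jobs:
        if used + slots > capacity:
            break
        batch.append((name, runtime, slots))
        used += slots
        density = len(batch) / max(r for _, r, _ in batch)
        if density > best_density:
            best, best_density = list(batch), density
    return best, best_density

jobs = [("q1", 2.0, 3), ("q2", 3.0, 2), ("etl", 20.0, 4), ("ml", 25.0, 6)]
batch, density = densest_batch(jobs, capacity=10)
print([b[0] for b in batch], round(density, 2))   # short jobs get packed together
```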
To diagnose the feasibility of a solution to a job-shop scheduling problem (JSSP), a test algorithm based on a digraph and heuristic search is developed and verified through a case study. Meanwhile, a new repair algorithm for modifying an infeasible JSSP solution into a feasible one is proposed for the general JSSP. The computational complexity of both the test algorithm and the repair algorithm is O(n) in the worst case, and O(2J+M) for the repair algorithm in the best case. The repair algorithm is not limited to specific optimization methods, such as local tabu search, genetic algorithms, or shifting bottleneck procedures for job shop scheduling, but is applicable to generic infeasible JSSP solutions to achieve feasibility.
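The feasibility test can be pictured as cycle detection on a digraph that combines job-precedence arcs with the machine sequences implied by the candidate solution; a cycle means the schedule is infeasible. The tiny instance below is invented for illustration.

```python
# Sketch of the digraph-based feasibility idea: combine job precedence arcs and
# machine-sequence arcs into one directed graph; the candidate schedule is
# feasible only if this graph is acyclic. The tiny instance below is invented.
def has_cycle(edges, nodes):
    adj = {n: [] for n in nodes}
    for u, v in edges:
        adj[u].append(v)
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {n: WHITE for n in nodes}

    def dfs(u):
        color[u] = GRAY
        for v in adj[u]:
            if color[v] == GRAY or (color[v] == WHITE and dfs(v)):
                return True
        color[u] = BLACK
        return False

    return any(color[n] == WHITE and dfs(n) for n in nodes)

ops = ["J1O1", "J1O2", "J2O1", "J2O2"]
precedence = [("J1O1", "J1O2"), ("J2O1", "J2O2")]        # within-job order
machine_order = [("J1O2", "J2O1"), ("J2O2", "J1O1")]     # a conflicting machine sequence
print("infeasible" if has_cycle(precedence + machine_order, ops) else "feasible")
```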
This paper proposes a prediction engine designed for non-dedicated clusters, which is able to estimate the turnaround time of parallel applications even in the presence of the serial workload of the workstation owner. The prediction engine can be configured to work with three different estimation kernels: a historical kernel, a simulation kernel based on analytical models, and an integration of both, named the hybrid kernel. These estimation proposals were integrated into a scheduling system, named CISNE, which can be executed in an on-line or off-line mode. The accuracy of the proposed estimation methods was evaluated against different job scheduling policies in a real and a simulated cluster environment. In both environments, the hybrid system gives the best results because it combines the ability of a simulation engine to capture the dynamism of a non-dedicated environment with the accuracy of historical methods for estimating application runtime given the state of the resources.
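A rough sketch of the hybrid-kernel idea: blend a historical average with an estimate from a simple analytical model of the owner's interference. The blending weight and the slowdown model are assumptions, not CISNE's actual kernels.

```python
# Rough sketch of blending a historical estimate with a model-based one, in the
# spirit of the hybrid kernel described above. The blending weight and the
# "simulation" (a simple analytical slowdown model) are assumptions.
def historical_estimate(past_runtimes):
    return sum(past_runtimes) / len(past_runtimes)

def simulated_estimate(base_runtime, owner_cpu_load):
    # Analytical stand-in: the parallel job slows down as the owner's serial
    # workload takes CPU away from it.
    return base_runtime / max(1e-6, 1.0 - owner_cpu_load)

def hybrid_estimate(past_runtimes, owner_cpu_load, weight=0.5):
    hist = historical_estimate(past_runtimes)
    sim = simulated_estimate(hist, owner_cpu_load)
    return weight * hist + (1.0 - weight) * sim

print(round(hybrid_estimate([120.0, 130.0, 125.0], owner_cpu_load=0.3), 1))
```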
The uncertainty of grid-site security is a main hurdle to making job scheduling secure, reliable and fault-tolerant. Most existing scheduling algorithms use a fixed number of job replications to provide fault tolerance and a high scheduling success rate, which either consumes excessive resources or cannot provide sufficient fault tolerance when grid security conditions change. In this paper a fuzzy-logic-based self-adaptive replication scheduling (FSARS) algorithm is proposed to handle the fuzziness or uncertainty of the job replication number, which is highly related to the trust factors behind grid sites and user jobs. Remote-sensing-based soil moisture extraction (RSBSME) workload experiments in a real grid environment are performed to evaluate the proposed approach, and the results show that a high scheduling success rate of up to 95% and lower grid resource utilization can be achieved through FSARS. Extensive experiments show that FSARS scales well as user jobs and grid sites increase.
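Purely as an illustration of adapting the replica count to site trust, the sketch below maps fuzzy memberships in low/medium/high trust to a number of replicas. The membership shapes and rule table are invented; the paper's actual fuzzy controller is not reproduced.

```python
# Very small stand-in for the idea of adapting the replication count to site
# trust: fuzzy membership in "low/medium/high trust" drives how many replicas
# of a job get scheduled. Membership shapes and the rule table are invented.
def memberships(trust):
    low = max(0.0, min(1.0, (0.5 - trust) / 0.5))
    high = max(0.0, min(1.0, (trust - 0.5) / 0.5))
    medium = max(0.0, 1.0 - abs(trust - 0.5) * 2.0)
    return {"low": low, "medium": medium, "high": high}

def replica_count(trust, max_replicas=4):
    m = memberships(trust)
    # Rule table: low trust -> many replicas, high trust -> a single replica.
    score = m["low"] * max_replicas + m["medium"] * 2 + m["high"] * 1
    return max(1, round(score))

for t in (0.1, 0.5, 0.9):
    print(t, "->", replica_count(t), "replica(s)")
```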
Cloud computing is attracting an increasing number of simulation applications running in virtualized cloud data centers. These applications are submitted to the cloud in the form of simulation jobs, and the management and scheduling of these jobs play an essential role in offering efficient, high-productivity computational services. In this paper, we design a management and scheduling service framework for simulation jobs in a two-tier, virtualization-based private cloud data center, named simulation execution as a service (SimEaaS). It aims at releasing users from complex simulation run settings while guaranteeing QoS requirements adaptively. Furthermore, a novel job scheduling algorithm named adaptive deadline-aware job size adjustment (ADaSA) is designed to achieve high job responsiveness under QoS requirements for SimEaaS. ADaSA tries to make full use of idle fragmented resources by adaptively tuning the number of processes requested by submitted jobs in the queue, while guaranteeing that jobs' deadline requirements are not violated. Extensive trace-driven simulation experiments are conducted to evaluate the performance of ADaSA. The results show that ADaSA outperforms both the cloud-based job scheduling algorithm KCEASY and traditional EASY in terms of response time (by up to 90%) and bounded slowdown (by up to 95%), while achieving an approximately equivalent deadline-miss rate. ADaSA also outperforms two representative moldable scheduling algorithms in terms of deadline-miss rate (by up to 60%).
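A toy version of the deadline-aware size-adjustment idea behind ADaSA: if a moldable job does not fit the idle fragment at its requested size, shrink its process count as long as the stretched runtime still meets the deadline. The perfect-speedup runtime model used here is an assumption.

```python
# Toy version of the deadline-aware size-adjustment idea: if a moldable job
# asking for p processes does not fit the currently idle fragment, shrink its
# process count as long as the stretched runtime still meets the deadline.
# The perfect-speedup runtime model (work / processes) is an assumption.
def adjust_job_size(requested_procs, work, deadline, now, idle_procs):
    if requested_procs <= idle_procs:
        return requested_procs                    # fits as requested
    for p in range(idle_procs, 0, -1):            # try progressively smaller sizes
        runtime = work / p
        if now + runtime <= deadline:
            return p                              # shrunk, but still deadline-safe
    return 0                                      # cannot start now without missing the deadline

# A job wants 8 processes for 80 units of work, deadline at t=30, only 5 idle.
print(adjust_job_size(8, 80.0, deadline=30.0, now=5.0, idle_procs=5))  # -> 5 (runtime 16)
```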
A modified bottleneck-based (MB) heuristic for large-scale job-shop scheduling problems with a well-defined bottleneck is suggested, which is simpler but more tailored than the shifting bottleneck (SB) procedure. In this algorithm, the bottleneck is first scheduled optimally, while the non-bottleneck machines are subordinated around the solutions of the bottleneck schedule by some effective dispatching rules. Computational results indicate that the MB heuristic achieves a better tradeoff between solution quality and computational time than the SB procedure for medium-size problems. Furthermore, it can obtain a good solution in a short time for large-scale job-shop scheduling problems.
The flexible job shop scheduling problem (FJSP), which is NP-hard, widely exists in many manufacturing industries and is very hard to solve. A multi-swarm collaborative genetic algorithm (MSCGA) based on the collaborative optimization algorithm is proposed for the FJSP. A multi-population structure is used to independently evolve the two sub-problems of the FJSP in the MSCGA, and well-designed operators are adopted to ensure good performance. Several well-known FJSP benchmarks are chosen to evaluate the effectiveness of the MSCGA. The adaptability and superiority of the proposed method are demonstrated by comparison with other reported algorithms.
The classical job shop scheduling problem (JSP) is the most popular machine scheduling model in practice and is known to be NP-hard. The formulation of the JSP is based on the assumption that for each part type or job there is only one process plan, which prescribes the sequence of operations and the machine on which each operation has to be performed. JSP with alternative machines for various operations is an extension of the classical JSP that allows an operation to be processed by any machine from a given set. Since this problem requires an additional machine-allocation decision during scheduling, it is much more complex than the JSP. We present a domain-independent genetic algorithm (GA) approach for the job shop scheduling problem with alternative machines. The GA is implemented in a spreadsheet environment. The performance of the proposed GA is analyzed by comparison with various problem instances taken from the literature, and the results show that the proposed GA is competitive with existing approaches. A simplified approach that would benefit both practitioners and researchers is presented for solving scheduling problems with alternative machines.
Existing local search methods mostly focus on how to reach an optimal solution. However, in some emergency situations, search time is a hard constraint for the job shop scheduling problem while an optimal solution is not necessary, and existing local search methods are not fast enough. This paper presents an emergency local search (ELS) approach which can reach a feasible and nearly optimal solution within a limited search time. The ELS approach is intended for such emergency situations, where search time is limited and a nearly optimal solution is sufficient, and it consists of three phases. Firstly, to reach a feasible and nearly optimal solution, infeasible solutions are repaired with a technique named group repair. Secondly, to save time, the number of local search moves is reduced with a fast search method named critical path search (CPS). Finally, because CPS sometimes stops at a solution far from the optimal one, a jump technique based on critical parts is used to escape this search dilemma. Furthermore, a scheduling system based on ELS has been developed, and experiments with it were run on an Intel Pentium(R) 2.93 GHz computer. The experimental results show that optimal solutions of small-scale instances are reached within 2 s, and nearly optimal solutions of large-scale instances within 4 s. The proposed ELS approach reliably reaches nearly optimal solutions within a manageable search time and can be applied in emergency situations.
A clonal selection based memetic algorithm is proposed in this paper for solving job shop scheduling problems. In the proposed algorithm, clonal selection and a local search mechanism are designed to enhance exploration and exploitation. In the clonal selection mechanism, clonal selection, hypermutation and receptor-editing theories are used to construct an evolutionary search mechanism for exploration. In the local search mechanism, a simulated annealing local search algorithm based on Nowicki and Smutnicki's neighborhood is presented to exploit local optima. The proposed algorithm is examined on well-known benchmark problems, and numerical results validate its effectiveness.
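A minimal sketch of the simulated-annealing acceptance rule used in local search of this kind: improving neighbors are always accepted, worse ones with probability exp(-delta/T), and the temperature is cooled geometrically. The neighborhood itself (e.g., Nowicki and Smutnicki moves) is not modeled.

```python
# Minimal simulated-annealing acceptance rule of the kind used in the local
# search phase described above: always accept improving neighbors, accept
# worse ones with probability exp(-delta / T), and cool the temperature.
import math
import random

def anneal(initial_cost, propose_neighbor_cost, t0=10.0, cooling=0.95, steps=100):
    cost, temperature = initial_cost, t0
    for _ in range(steps):
        candidate = propose_neighbor_cost(cost)
        delta = candidate - cost
        if delta <= 0 or random.random() < math.exp(-delta / temperature):
            cost = candidate
        temperature *= cooling
    return cost

# Stand-in neighbor generator: a random perturbation of the current makespan.
print(round(anneal(100.0, lambda c: c + random.uniform(-2.0, 1.5)), 2))
```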
Industry 4.0 production environments and smart manufacturing systems integrate both the physical and decision-making aspects of manufacturing operations into autonomous, decentralized systems. One of the key aspects of these systems is production planning, specifically the scheduling of operations on machines. To cope with this problem, this paper proposes deep reinforcement learning with an actor-critic algorithm (DRLAC). We model the job-shop scheduling problem (JSSP) as a Markov decision process (MDP), represent the state of a JSSP with graph isomorphism networks (GIN) to extract node features during scheduling, and derive the scheduling policy that maps the extracted node features to the best next scheduling action. In addition, we adopt an actor-critic (AC) network training algorithm based on reinforcement learning to learn the optimal scheduling policy. To demonstrate the proposed model's effectiveness, we first present a case study that illustrates a conflict between two job schedules, and then apply the proposed model to a known benchmark dataset and compare the results with traditional scheduling methods and recent approaches. The numerical results indicate that the proposed model adapts well to real-time production scheduling, with an average percentage deviation (APD) between 0.009 and 0.21 compared with heuristic methods and between 0.014 and 0.18 compared with other recent approaches.
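A bare-bones sketch of casting job-shop scheduling as an MDP, showing only the environment side a learned policy would interact with: the state tracks ready operations, an action schedules one of them, and the reward is the negative growth of the makespan. The GIN encoding and the actor-critic networks from the paper are not included.

```python
# Bare-bones sketch of a JSSP-as-MDP environment: an action picks a ready
# operation for its machine, and the reward is the negative makespan growth.
# The GIN state encoding and actor-critic networks are deliberately omitted.
class JsspEnv:
    def __init__(self, jobs):
        # jobs: list of operation lists, each operation is (machine, duration)
        self.jobs = jobs
        self.next_op = [0] * len(jobs)            # next operation index per job
        self.job_ready = [0.0] * len(jobs)        # time each job becomes free
        self.machine_ready = {}                   # time each machine becomes free
        self.makespan = 0.0

    def ready_actions(self):
        return [j for j, k in enumerate(self.next_op) if k < len(self.jobs[j])]

    def step(self, job):
        machine, duration = self.jobs[job][self.next_op[job]]
        start = max(self.job_ready[job], self.machine_ready.get(machine, 0.0))
        finish = start + duration
        self.job_ready[job] = self.machine_ready[machine] = finish
        self.next_op[job] += 1
        reward = -(max(self.makespan, finish) - self.makespan)
        self.makespan = max(self.makespan, finish)
        done = not self.ready_actions()
        return reward, done

env = JsspEnv([[("M1", 3), ("M2", 2)], [("M2", 4), ("M1", 1)]])
done = False
while not done:                                   # trivial policy: lowest job id first
    _, done = env.step(env.ready_actions()[0])
print(env.makespan)
```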