Formalizing complex processes and phenomena of a real-world problem may require a large number of variables and constraints, resulting in what is termed a large-scale optimization problem. Nowadays, such large-scale optimization problems are solved using computing machines, leading to an enormous computational time being required, which may delay deriving timely solutions. Decomposition methods, which partition a large-scale optimization problem into lower-dimensional subproblems, represent a key approach to addressing time-efficiency issues. There has been significant progress in both applied mathematics and emerging artificial intelligence approaches on this front. This work aims at providing an overview of the decomposition methods from both the mathematics and computer science points of view. We also remark on the state-of-the-art developments and recent applications of the decomposition methods, and discuss the future research and development perspectives.
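As a toy illustration of the decomposition idea surveyed above, the sketch below partitions the decision vector into low-dimensional blocks and improves each subproblem in turn, a block-coordinate scheme with a crude per-coordinate search. All names and parameters are illustrative, not taken from the paper.

```python
import numpy as np

def block_coordinate_descent(f, x0, n_blocks=4, iters=50, step=0.1):
    """Minimize f by cycling over blocks of variables.

    Each block defines a low-dimensional subproblem, improved here by a
    crude per-coordinate search while all other blocks stay fixed.
    """
    x = x0.copy()
    blocks = np.array_split(np.random.permutation(len(x)), n_blocks)
    for _ in range(iters):
        for block in blocks:
            for j in block:
                for delta in (step, -step):  # try a small move both ways
                    trial = x.copy()
                    trial[j] += delta
                    if f(trial) < f(x):
                        x = trial
        step *= 0.95  # shrink the step size over time
    return x

# usage: a separable quadratic in 20 dimensions
sphere = lambda v: float(np.sum(v ** 2))
x = block_coordinate_descent(sphere, np.ones(20))
print(sphere(x))  # far below the starting value of 20.0
```

Real decomposition methods exploit problem structure (separability, sparsity) when forming the blocks; the random partition above is the simplest possible choice.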
Sparse large-scale multi-objective optimization problems (SLMOPs) are common in science and engineering. However, the large scale of such problems implies high dimensionality of the decision space, requiring algorithms to traverse a vast expanse with limited computational resources. Furthermore, because of sparsity, most variables in Pareto optimal solutions are zero, making it difficult for algorithms to identify non-zero variables efficiently. This paper is dedicated to addressing the challenges posed by SLMOPs. To start, we introduce innovative objective functions customized to mine maximum and minimum candidate sets. This substantial enhancement dramatically improves the efficacy of frequent pattern mining. In this way, selecting candidate sets is no longer based on the quantity of non-zero variables they contain but on a higher proportion of non-zero variables within specific dimensions. Additionally, we unveil a novel approach to association rule mining, which delves into the intricate relationships between non-zero variables. This methodology aids in identifying sparse distributions that can potentially expedite reductions in the objective function value. We extensively tested our algorithm across eight benchmark problems and four real-world SLMOPs. The results demonstrate that our approach achieves competitive solutions across various challenges.
To solve large-scale optimization problems, the Fragrance coefficient and variant Particle Swarm local search Butterfly Optimization Algorithm (FPSBOA) is proposed. In the position update stage of the Butterfly Optimization Algorithm (BOA), the fragrance coefficient is designed to balance the exploration and exploitation of BOA. The variant particle swarm local search strategy is proposed to improve the local search ability of the current optimal butterfly and prevent the algorithm from falling into local optima. Nineteen 2000-dimensional functions and twenty 1000-dimensional CEC 2010 large-scale functions are used to verify FPSBOA on complex large-scale optimization problems. The experimental results are statistically analyzed by the Friedman test and the Wilcoxon rank-sum test. All attained results demonstrate that FPSBOA can better solve challenging scientific and industrial real-world problems with thousands of variables. Finally, four mechanical engineering problems and one ten-dimensional process synthesis and design problem are applied to FPSBOA, showing its feasibility and effectiveness in real-world application problems.
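For context, the position update of the basic BOA that FPSBOA modifies can be sketched as follows. This is the standard textbook update (fragrance f = c·I^a, switch probability p); the paper's own fragrance coefficient and particle-swarm local search are not reproduced here, and all parameter values are illustrative.

```python
import numpy as np

def boa_step(pop, fitness, best, c=0.01, a=0.1, p=0.8, rng=None):
    """One position update of the basic Butterfly Optimization Algorithm.

    fragrance = c * I**a with stimulus intensity I (here |fitness|).
    With probability p a butterfly moves toward the global best butterfly;
    otherwise it makes a local move between two random peers.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    n = len(pop)
    frag = c * np.abs(fitness) ** a
    new_pop = pop.copy()
    for i in range(n):
        r = rng.random()
        if rng.random() < p:   # global search phase
            new_pop[i] += (r ** 2 * best - pop[i]) * frag[i]
        else:                  # local search phase
            j, k = rng.choice(n, size=2, replace=False)
            new_pop[i] += (r ** 2 * pop[j] - pop[k]) * frag[i]
    return new_pop

# usage: minimize the 10-dimensional sphere function with 30 butterflies
rng = np.random.default_rng(0)
pop = rng.uniform(-5, 5, size=(30, 10))
f = lambda P: np.sum(P ** 2, axis=1)
start = f(pop).min()
for _ in range(200):
    fit = f(pop)
    best = pop[np.argmin(fit)].copy()
    pop = boa_step(pop, fit, best, rng=rng)
print(f(pop).min())  # noticeably below the starting best
```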
Traditional large-scale multi-objective evolutionary algorithms (LSMOEAs) encounter difficulties when dealing with sparse large-scale multi-objective optimization problems (SLMOPs) where most decision variables are zero. As a result, many algorithms use a two-layer encoding approach to optimize the binary variable Mask and the real variable Dec separately. Nevertheless, existing optimizers often focus on locating non-zero variable positions to optimize the binary variable Mask. However, approximating the sparse distribution of real Pareto optimal solutions does not necessarily mean that the objective function is optimized. In data mining, it is common to mine frequent itemsets appearing together in a dataset to reveal the correlation between data. Inspired by this, we propose a novel two-layer encoding learning swarm optimizer based on frequent itemsets (TELSO) to address these SLMOPs. TELSO mines the frequent items of multiple particles with better objective values to find Mask combinations that can obtain better objective values for fast convergence. Experimental results on five real-world problems and eight benchmark sets demonstrate that TELSO outperforms existing state-of-the-art sparse large-scale multi-objective evolutionary algorithms (SLMOEAs) in terms of performance and convergence speed.
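The frequent-itemset idea behind TELSO can be illustrated at its very simplest: count which non-zero positions co-occur in the masks of good solutions and keep the pairs above a support threshold. The function below is an illustrative sketch, not the authors' implementation.

```python
from collections import Counter
from itertools import combinations

def frequent_nonzero_pairs(masks, min_support=0.5):
    """Mine pairs of positions that are frequently non-zero together.

    masks: binary tuples, one per good solution. A pair of positions is
    frequent if both are 1 in at least min_support of the masks.
    """
    counts = Counter()
    for m in masks:
        on = [i for i, v in enumerate(m) if v]
        counts.update(combinations(on, 2))
    threshold = min_support * len(masks)
    return {pair for pair, c in counts.items() if c >= threshold}

# usage: positions 0 and 1 co-occur in 3 of 4 masks, positions 1 and 4 in 2 of 4
masks = [(1, 1, 0, 0, 1), (1, 1, 0, 1, 0), (1, 1, 0, 0, 0), (0, 1, 0, 0, 1)]
print(sorted(frequent_nonzero_pairs(masks)))  # [(0, 1), (1, 4)]
```

A full itemset miner (e.g. Apriori or FP-growth) generalizes this counting beyond pairs; the pair case is enough to show how frequent co-occurrence suggests Mask combinations.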
In this paper we report a sparse truncated Newton algorithm for handling large-scale simple-bound nonlinear constrained minimization problems. The truncated Newton method is used to update the variables with indices outside of the active set, while the projected gradient method is used to update the active variables. At each iteration, the search direction consists of three parts, one of which is a subspace truncated Newton direction; the other two are subspace gradient and modified gradient directions. The subspace truncated Newton direction is obtained by solving a sparse system of linear equations. The global convergence and quadratic convergence rate of the algorithm are proved, and some numerical tests are given.
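The projected gradient ingredient used for the active variables can be sketched as follows: take a plain gradient step, then project back onto the bounds. A minimal illustration, omitting the subspace truncated Newton part of the paper's method.

```python
import numpy as np

def projected_gradient(grad_f, x0, lo, hi, step=0.1, iters=100):
    """Projected gradient method for bound-constrained minimization.

    Takes a gradient step, then projects the iterate onto [lo, hi].
    """
    x = np.clip(x0, lo, hi)
    for _ in range(iters):
        x = np.clip(x - step * grad_f(x), lo, hi)
    return x

# usage: minimize (x - 3)^2 subject to 0 <= x <= 2 -> optimum at the bound x = 2
g = lambda x: 2 * (x - 3.0)
x = projected_gradient(g, np.array([0.5]), 0.0, 2.0)
print(x)  # [2.]
```

Note how the projection makes the bound constraint active: the unconstrained minimizer 3 is infeasible, so the iterates settle on the boundary.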
The article introduces the main practices and achievements of the Environment and Plant Protection Institute of the Chinese Academy of Tropical Agricultural Sciences in promoting the sharing of large-scale instruments and equipment in recent years. It analyzes the existing problems in the management system, management team, assessment incentives and maintenance guarantees, and proposes improvement measures and suggestions covering the sharing management system, management team building, sharing assessment and incentives, maintenance capabilities and external publicity, so as to further improve the sharing management of large-scale instruments and equipment.
This research develops and elaborates studies done for a contribution to the PIC International Conference 2019 in Malta about the decision-making process. Decision-making is the act of choosing between two or more courses of action. In the wider process of problem-solving, decision-making involves choosing between possible solutions to a problem, and these decisions can be made through either an intuitive or reasoned process, or a combination of the two. The study of decision-making processes, understood here through the role of human factors, becomes particularly interesting in complex organizations. This research aims to analyze how an effective team within organizations can develop more correct and effective decision-making in order to reach an optimal solution, overcoming the typical uncertainty. The paper describes the point of departure of decisions in complex, time-pressured, uncertain, ambiguous and changing environments. The use of a leading case (the Tenerife air accident, 1977) will lead us to the desired results, i.e. to demonstrate how an effective decisional process, including team dynamics, can be useful to reduce the risk present in all decisions, and to reduce errors. The case of the Tenerife air disaster confirms our research: in that case, the group dynamics proved not to have worked. Thus, we can state that if a team approach had been followed instead of a more individual one, the results would probably have been different.
The central belief of the research is that classic decision theory could benefit from a team approach, which reduces the risk that a decision may lead to undesirable consequences. As demonstrated with the case study, within organizations decision-making is not a solitary action. Decisions, in fact, are made within a team, and in order to function effectively in a group and manage group situations, certain essential skills are required. The team can then become a resource for the decisional process and problem-solving, but it is necessary to understand the dynamics.
Nanoparticles provide great advantages but also great risks. Risks associated with nanoparticles are a problem for all technologies, but they are magnified many times in nanotechnologies. Adequate methods of outgoing production inspection are necessary to solve the problem of risks, and the inspection must be based on a safety standard. The existing safety standard rests on the principle of "maximum permissible concentrations" (MPC). This principle is not applicable to nanoparticles, but a safety standard reflecting the risks inherent in nanoparticles does not exist. The essence of the risks is illustrated by an example from pharmacology, since its safety assurance is conceptually based on MPC and it has already come up against this problem. A possible formula for a nanoparticle safety standard is reflected in many publications, but conventional inspection methods cannot realize it, and this gap is an obstacle to the adoption of such formulas. Therefore, the development of the nanoparticle industry as a whole (and of pharmacology in particular) is impossible without the creation of an adequate inspection method. New inspection methods are suggested, founded on a new physical principle and satisfying an adequate safety standard for nanoparticles. These methods demonstrate that the creation of an adequate safety standard and outgoing production inspection in large-scale manufacturing of nanoparticles are solvable problems. However, there is a great distance between a physical principle and its hardware realization, and the transition from principle to hardware demands great intellectual and material costs.
It is therefore desirable to call the attention of the public at large to the necessity of urgently expanding investigations associated with outgoing inspections in nanoparticle technologies. It is also necessary to attract the attention, first, of representatives of the state structures that control approval of the adequate safety standard, since it is impossible to compel producers to provide safety without such a standard, and, second, of leaders of the pharmacological industry, since their industry has already entered the nanotechnology era and they have an interest in the forthcoming development of inspection methods.
This article offers an overview of the development of dairy cow breeding in China and analyzes its problems as follows: the breeding scale is small; pasture grass production cannot meet demand; the proportion of fine breeds is not high and the dairy cow yield per unit is low; an epidemic prevention and quarantine mechanism is lacking; and the economic benefits of large-scale breeding are not high. These problems have become a bottleneck in the development of dairy cow breeding. Finally, countermeasures are put forward for the development of dairy cow breeding in China: developing large-scale breeding and increasing subsidies; supporting the development of the grass industry and ensuring the supply of good feed; strengthening the cultivation and promotion of fine breeds and promoting the quality of fresh milk; and improving the accountability system for dairy products and giving play to the supervisory role of the news media.
This paper presents a heuristic polarity decision-making algorithm for solving Boolean satisfiability (SAT). The algorithm inherits many features of the current state-of-the-art SAT solvers, such as fast BCP, clause recording, restarts, etc. In addition, a preconditioning step that calculates the polarities of variables according to the cover distribution of the Karnaugh map is introduced into the DPLL procedure, which greatly reduces the number of conflicts in the search process. The proposed approach is implemented as a SAT solver named DiffSat. Experiments show that DiffSat can solve many "real-life" instances in a reasonable time while the best existing SAT solvers, such as Zchaff and MiniSat, cannot. In particular, DiffSat can solve every instance of the Bart benchmark suite in less than 0.03 s, while Zchaff and MiniSat fail under a 900 s time limit. Furthermore, DiffSat even outperforms the outstanding incomplete algorithm DLM on some instances.
Numerical quadrature methods for dealing with the singular and near-singular integrals caused by the Burton-Miller method are proposed, by which the conventional and fast multipole BEMs (boundary element methods) for 3D acoustic problems based on constant elements are improved. To solve the problem of singular integrals, a Hadamard finite-part integral method is presented, which is a simplified combination of the methods proposed by Kirkup and Wolf. The problem of near-singular integrals is overcome by the simple method of polar transformation and the more complex method of PART (Projection and Angular & Radial Transformation). The effectiveness of these methods for solving the singular and near-singular problems is validated by comparison with results computed by the analytical method and/or the commercial software LMS Virtual.Lab. In addition, the influence of the near-singular integral problem on computational precision is analyzed by computing errors relative to the exact solution. The computational complexities of the conventional and fast multipole BEM are analyzed and compared through numerical computations. A large-scale acoustic scattering problem, with about 340,000 degrees of freedom, is solved successfully. The results show that the near-singularity is primarily introduced by the hyper-singular kernel and has a great influence on the precision of the solution. The precision of the fast multipole BEM is the same as that of the conventional BEM, but its computational complexity is much lower.
Large-scale simulation optimization (SO) problems encompass both large-scale ranking-and-selection problems and high-dimensional discrete or continuous SO problems, presenting significant challenges to existing SO theories and algorithms. This paper begins by providing illustrative examples that highlight the differences between large-scale SO problems and those of a more moderate scale. Subsequently, it reviews several widely employed techniques for addressing large-scale SO problems, such as divide-and-conquer, dimension reduction, and gradient-based algorithms. Additionally, the paper examines parallelization techniques leveraging widely accessible parallel computing environments to facilitate the resolution of large-scale SO problems.
An NGTN method is proposed for solving large-scale sparse nonlinear programming (NLP) problems. It is a hybrid method of a truncated Newton direction and a modified negative gradient direction, which is suitable for handling sparse data structures and possesses a Q-quadratic convergence rate. The global convergence of this new method is proved, the convergence rate is further analysed, and the detailed implementation is discussed in this paper. Some numerical tests for solving truss optimization and large sparse problems are reported. The theoretical and numerical results show that the new method is efficient for solving large-scale sparse NLP problems.
First, the main procedures and the distinctive features of the most-obtuse-angle (MOA) row or column pivot rules are introduced for achieving primal or dual feasibility in linear programming. Then, two special auxiliary problems are constructed to prove that each of the rules can actually be considered as a simplex approach for solving the corresponding auxiliary problem. In addition, the nested pricing rule is also reviewed and its geometric interpretation is offered based on the heuristic characterization of an optimal solution.
Inspired by the success of the projected Barzilai-Borwein (PBB) method for large-scale box-constrained quadratic programming, we propose and analyze monotone projected gradient methods in this paper. We show by experiments and analyses that for the new methods, it is generally a bad option to compute steplengths based on the negative gradients. Thus, in our algorithms, some continuous or discontinuous projected gradients are used instead to compute the steplengths. Numerical experiments on a wide variety of test problems are presented, indicating that the new methods usually outperform the PBB method.
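For reference, the PBB baseline the paper compares against combines a Barzilai-Borwein steplength with projection onto the box. A minimal sketch (BB1 steplength only; safeguards and nonmonotone line search omitted), with an illustrative quadratic test problem:

```python
import numpy as np

def pbb(grad_f, x0, lo, hi, iters=50):
    """Projected Barzilai-Borwein method for box-constrained minimization.

    Steplength alpha_k = s^T s / s^T y with s = x_k - x_{k-1} and
    y = g_k - g_{k-1}; each step is projected back onto [lo, hi].
    """
    x = np.clip(x0, lo, hi)
    g = grad_f(x)
    alpha = 1e-3  # small initial steplength
    for _ in range(iters):
        x_new = np.clip(x - alpha * g, lo, hi)
        g_new = grad_f(x_new)
        s, y = x_new - x, g_new - g
        sy = s @ y
        if sy > 1e-12:
            alpha = (s @ s) / sy  # BB1 steplength
        x, g = x_new, g_new
    return x

# usage: quadratic with unconstrained minimizer [2, -1], box [0, 5]^2,
# so the constrained optimum is the projection [2, 0]
Q = np.diag([1.0, 10.0])
grad = lambda v: Q @ (v - np.array([2.0, -1.0]))
x = pbb(grad, np.zeros(2), 0.0, 5.0)
print(x)  # [2. 0.]
```

The BB steplength approximates the inverse curvature along the last step, which is why PBB is so effective on quadratics despite using only gradients.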
With the rapid development of computer technology, numerical simulation has become the third scientific research tool besides theoretical analysis and experimental research. As the core of numerical simulation, constructing efficient, accurate and stable numerical methods to simulate complex scientific and engineering problems has become a key issue in computational mechanics. The article outlines the application of the singular boundary method to large-scale and high-frequency acoustic problems. In practical applications, the key issue is to construct efficient and accurate numerical methodology to calculate the large-scale and high-frequency sound field. This article focuses on two research areas: how to discretize partial differential equations into more appropriate linear equations, and how to solve linear equations more efficiently. The bottleneck problems encountered in computational acoustics serve as the technical routes, i.e., efficient solution of dense linear systems composed of ill-conditioned matrices and stable simulation of wave propagation at low sampling frequencies. The article reviews recent advances in emerging applications of the singular boundary method for computational acoustics. This collection can provide a reference for simulating other, more complex wave propagation.
Reinforcement learning (RL) was introduced several decades ago and has become a paradigm in sequential decision-making and control. The development of reinforcement learning, especially in recent years, has enabled the algorithm to be applied in many industrial fields, such as robotics, medical intelligence, and games. This paper first introduces the history and background of reinforcement learning, and then illustrates industrial applications and open-source platforms. After that, the successful applications from AlphaGo to AlphaZero and future reinforcement learning techniques are discussed. Finally, artificial intelligence for complex interaction (e.g., stochastic environments, multiple players, selfish behavior, and distributed optimization) is considered, and the paper concludes with highlights and an outlook on future general artificial intelligence.
The localized method of fundamental solutions (LMFS) is a relatively new meshless boundary collocation method. In the LMFS, the global MFS approximation, which is expensive to evaluate, is replaced by local MFS formulations defined on a set of overlapping subdomains. The LMFS algorithm therefore converts differential equations into sparse rather than dense matrices, which are much cheaper to calculate. This paper makes the first attempt to apply the LMFS, in conjunction with a domain-decomposition technique, to the numerical solution of steady-state heat conduction problems in two-dimensional (2D) anisotropic layered materials. Here, the layered material is decomposed into several subdomains along the layer-layer interfaces, and in each subdomain the solution is approximated using the LMFS expansion. On the subdomain interfaces, compatibility of temperatures and heat fluxes is imposed. Preliminary numerical experiments illustrate that the proposed domain-decomposition LMFS algorithm is accurate, stable and computationally efficient for the numerical solution of large-scale multi-layered materials.
Some optimization problems in scientific research, such as robustness optimization for the Internet of Things and neural architecture search, are large-scale in decision space and expensive in objective evaluation. In order to get a good solution within a limited budget for large-scale expensive optimization, a random grouping strategy is adopted to divide the problem into low-dimensional sub-problems. A surrogate model is then trained for each sub-problem, using different strategies to select training data adaptively. After that, a dynamic infill criterion is proposed corresponding to the models currently used in the surrogate-assisted sub-problem optimization. Furthermore, an escape mechanism is proposed to keep the diversity of the population. The performance of the method is evaluated on the CEC'2013 benchmark functions. Experimental results show that the algorithm has better performance in solving expensive large-scale optimization problems.
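The random grouping strategy can be sketched as follows: repeatedly repartition the variables into low-dimensional groups and improve each group while the rest stay fixed. For brevity, the surrogate model and infill criterion are replaced here by a simple perturbation search; this is an illustrative sketch, not the paper's algorithm.

```python
import numpy as np

def random_grouping_optimize(f, dim, group_size, trials_per_group, rng=None):
    """Cooperative-coevolution-style random grouping.

    The decision vector is repeatedly repartitioned into random
    low-dimensional groups; each group is improved in turn by a simple
    perturbation search while the remaining variables stay fixed.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    x = rng.uniform(-1, 1, dim)
    for _ in range(10):  # regroup several times
        perm = rng.permutation(dim)
        for group in np.array_split(perm, dim // group_size):
            for _ in range(trials_per_group):
                trial = x.copy()
                trial[group] += rng.normal(0, 0.1, len(group))
                if f(trial) < f(x):  # greedy acceptance
                    x = trial
    return x

# usage: a 100-dimensional sphere handled in 10-dimensional sub-problems
sphere = lambda v: float(np.sum(v ** 2))
x = random_grouping_optimize(sphere, 100, 10, 30)
print(sphere(x))  # far below the random start (around 33)
```

In the surrogate-assisted setting, the inner perturbation search would be replaced by optimizing a cheap model of `f` restricted to the group, so that expensive evaluations are spent only on promising candidates.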
To provide the supplier with the minimum vehicle travel distance in the distribution of goods under three situations, namely new customer demand, customer cancellation of service, and change of customer delivery address, and based on the ideas of pre-optimization and real-time optimization, a two-stage planning model for the dynamic-demand vehicle routing problem with time windows was established. At the pre-optimization stage, an improved genetic algorithm was used to obtain the pre-optimized distribution route; a large-scale neighborhood search method was integrated into the mutation operation to improve the local optimization performance of the genetic algorithm, and a variety of operators were introduced to expand the search space of neighborhood solutions. At the real-time optimization stage, a periodic optimization strategy was adopted to transform a complex dynamic problem into several static problems, and four neighborhood search operators were used to quickly adjust the route. Two examples of different scales were designed for experiments. It is proved that the algorithm can plan better routes and adjust the distribution route in time under real-time constraints. Therefore, the proposed algorithm can provide theoretical guidance for suppliers to solve the dynamic-demand vehicle routing problem.
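The kind of real-time route adjustment described above can be illustrated with a cheapest-insertion move for a newly arriving customer (an illustrative sketch, not one of the paper's four operators):

```python
import math

def cheapest_insertion(route, new_customer, dist):
    """Insert a newly arriving customer where the route grows the least.

    Tries every insertion position in the existing route and keeps the
    one with the smallest extra travel distance.
    """
    best_route, best_cost = None, float("inf")
    for i in range(1, len(route)):
        cand = route[:i] + [new_customer] + route[i:]
        extra = (dist[cand[i - 1]][new_customer]
                 + dist[new_customer][cand[i + 1]]
                 - dist[cand[i - 1]][cand[i + 1]])
        if extra < best_cost:
            best_route, best_cost = cand, extra
    return best_route, best_cost

# usage: depot 0 at (0,0); customers 1 at (2,0), 2 at (2,2); new customer 3 at (1,0)
pts = {0: (0, 0), 1: (2, 0), 2: (2, 2), 3: (1, 0)}
dist = {a: {b: math.dist(pts[a], pts[b]) for b in pts} for a in pts}
route, extra = cheapest_insertion([0, 1, 2, 0], 3, dist)
print(route, extra)  # [0, 3, 1, 2, 0] 0.0
```

Here customer 3 lies exactly on the depot-to-customer-1 leg, so inserting it there adds no distance; cancellations and address changes would be handled by analogous removal and re-insertion moves, subject to the time-window checks of the full model.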
Funding: The Australian Research Council (DP200101197, DP230101107).
Funding: Supported by the Open Project of Xiangjiang Laboratory (22XJ02003), the University Fundamental Research Fund (23-ZZCX-JDZ-28, ZK21-07), the National Science Fund for Outstanding Young Scholars (62122093), the National Natural Science Foundation of China (72071205), the Hunan Graduate Research Innovation Project (CX20230074), the Hunan Natural Science Foundation Regional Joint Project (2023JJ50490), the Science and Technology Project for Young and Middle-aged Talents of Hunan (2023TJZ03), and the Science and Technology Innovation Program of Hunan Province (2023RC1002).
Funding: Funded by the National Natural Science Foundation of China (No. 72104069), the Science and Technology Department of Henan Province, China (No. 182102310886 and 162102110109), and the Postgraduate Meritocracy Scheme, China (No. SYL19060145).
Funding: Supported by the Scientific Research Project of Xiang Jiang Lab (22XJ02003), the University Fundamental Research Fund (23-ZZCX-JDZ-28), the National Science Fund for Outstanding Young Scholars (62122093), the National Natural Science Foundation of China (72071205), the Hunan Graduate Research Innovation Project (ZC23112101-10), the Hunan Natural Science Foundation Regional Joint Project (2023JJ50490), the Science and Technology Project for Young and Middle-aged Talents of Hunan (2023TJ-Z03), and the Science and Technology Innovation Program of Hunan Province (2023RC1002).
Abstract: Traditional large-scale multi-objective optimization evolutionary algorithms (LSMOEAs) encounter difficulties when dealing with sparse large-scale multi-objective optimization problems (SLMOPs), in which most decision variables are zero. As a result, many algorithms use a two-layer encoding approach to optimize the binary variable Mask and the real variable Dec separately. Nevertheless, existing optimizers often focus on locating non-zero variable positions when optimizing the binary Mask. However, approximating the sparse distribution of the real Pareto optimal solutions does not necessarily mean that the objective function is optimized. In data mining, it is common to mine frequent itemsets appearing together in a dataset to reveal correlations in the data. Inspired by this, we propose a novel two-layer encoding learning swarm optimizer based on frequent itemsets (TELSO) to address SLMOPs. TELSO mines the frequent items of multiple particles with better objective values to find Mask combinations that yield better objective values for fast convergence. Experimental results on five real-world problems and eight benchmark sets demonstrate that TELSO outperforms existing state-of-the-art sparse large-scale multi-objective evolutionary algorithms (SLMOEAs) in terms of performance and convergence speed.
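The Mask/Dec two-layer encoding the abstract refers to can be sketched in a few lines; this is an illustrative decoding only, not TELSO itself.

```python
def decode(mask, dec):
    """Two-layer encoding common to sparse LSMOEAs: a binary Mask selects
    which variables are active and a real-valued Dec supplies their
    magnitudes, so the decision vector is their element-wise product and
    zeros in Mask enforce sparsity exactly."""
    return [m * d for m, d in zip(mask, dec)]

mask = [1, 0, 0, 1, 0]           # optimised as binary variables
dec = [0.7, 2.3, 1.1, 0.4, 5.0]  # optimised as real variables
x = decode(mask, dec)
print(x)                         # → [0.7, 0.0, 0.0, 0.4, 0.0]
print(sum(m == 0 for m in mask)) # sparsity: three variables forced to zero
```

Because sparsity lives entirely in Mask, an optimizer can mine good Mask bit patterns (as TELSO does with frequent itemsets) independently of the real-valued search over Dec.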
Funding: The research was supported by the State Education Grant for Returned Scholars.
Abstract: In this paper we report a sparse truncated Newton algorithm for handling large-scale simple-bound nonlinear constrained minimization problems. The truncated Newton method is used to update the variables with indices outside of the active set, while the projected gradient method is used to update the active variables. At each iteration, the search direction consists of three parts: a subspace truncated Newton direction, a subspace gradient direction, and a modified gradient direction. The subspace truncated Newton direction is obtained by solving a sparse system of linear equations. The global convergence and the quadratic convergence rate of the algorithm are proved, and some numerical tests are given.
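A minimal sketch of the active-set split described above, using plain projected-gradient updates on a box-constrained quadratic; the subspace truncated Newton direction for the free variables is omitted.

```python
def project(x, lo, hi):
    """Project onto the box lo <= x <= hi."""
    return [min(max(v, l), h) for v, l, h in zip(x, lo, hi)]

def active_set(x, g, lo, hi, tol=1e-10):
    """Variables fixed at a bound with the gradient pushing outward.
    In the full algorithm these receive projected-gradient updates while
    the free variables receive the subspace truncated Newton direction."""
    return [i for i, (v, gi, l, h) in enumerate(zip(x, g, lo, hi))
            if (abs(v - l) < tol and gi > 0) or (abs(v - h) < tol and gi < 0)]

def projected_gradient_step(x, grad, lo, hi, alpha=0.1):
    return project([v - alpha * gi for v, gi in zip(x, grad(x))], lo, hi)

# Minimise f(x) = sum((x_i - 2)^2) subject to 0 <= x_i <= 1: the
# unconstrained minimiser (2, 2, 2) is infeasible, so every variable
# ends up active at its upper bound 1.
grad = lambda x: [2 * (v - 2) for v in x]
x = [0.0, 0.5, 1.0]
for _ in range(100):
    x = projected_gradient_step(x, grad, [0.0] * 3, [1.0] * 3)
print(x)                                             # → [1.0, 1.0, 1.0]
print(active_set(x, grad(x), [0.0] * 3, [1.0] * 3))  # → [0, 1, 2]
```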
Abstract: This article introduces the main practices and achievements of the Environment and Plant Protection Institute of the Chinese Academy of Tropical Agricultural Sciences in promoting the sharing of large-scale instruments and equipment in recent years. It analyzes the existing problems in the management system, management team, assessment incentives, and maintenance guarantees, and proposes improvement measures and suggestions in the areas of improving the sharing management system, strengthening management team building, strengthening sharing assessment and incentives, improving maintenance capabilities, and expanding external publicity, so as to further improve the sharing management of large-scale instruments and equipment.
Abstract: This research develops and elaborates studies done for the 2019 PIC International Conference in Malta about the decision-making process. Decision-making is the act of choosing between two or more courses of action. In the wider process of problem-solving, decision-making involves choosing between possible solutions to a problem, and these decisions can be made through an intuitive or a reasoned process, or a combination of the two. The study of decision-making processes, understood here as the role of human factors, becomes particularly interesting in complex organizations. This research aims to analyze how an effective team within an organization can develop a more correct and effective decision-making process, in order to reach an optimal solution and overcome the typical uncertainty. The paper describes the point of departure of decisions in complex, time-pressured, uncertain, ambiguous, and changing environments. The use of a leading case (the Tenerife air accident, 1977) leads us to the desired results, i.e., to demonstrate how an effective decisional process, including team dynamics, can be useful to reduce the risk present in all decisions and to reduce errors. The Tenerife air disaster confirms our thesis: in that case, the group dynamics prove not to have worked. Thus, we can state that if a team approach had been followed instead of a more individual one, the results would probably have been different. The central belief of the research is that classic decision theory could benefit from a team approach, which reduces the risk that a decision may lead to undesirable consequences.
As demonstrated with the case study, within organizations decision-making is not a solitary action. Decisions, in fact, are made within a team, and essential skills are needed to function effectively in a group and manage group situations. The team can then become a resource for the decisional process and problem solving, but it is necessary to understand its dynamics.
Abstract: Nanoparticles provide great advantages but also great risks. Risks associated with particles are a problem for all technologies, but they are magnified many times over in nanotechnology. Adequate methods of outgoing production inspection are necessary to manage these risks, and the inspection must be based on a safety standard. The existing safety standard rests on the principle of "maximum permissible concentrations" (MPC). This principle is not applicable to nanoparticles, yet a safety standard reflecting the risks inherent in nanoparticles does not exist. The essence of the risks is illustrated by an example from pharmacology, since its safety assurance is conceptually based on MPC and it has already run up against this problem. A possible formula for a nanoparticle safety standard is reflected in many publications, but conventional inspection methods cannot realize it, and this gap is an obstacle to the adoption of such formulas. Therefore the development of the nanoparticle industry as a whole (and of pharmacology in particular) is impossible without the creation of an adequate inspection method. New inspection methods are suggested, founded on a new physical principle and satisfying an adequate safety standard for nanoparticles. These methods demonstrate that creating an adequate safety standard and performing outgoing production inspection in large-scale manufacturing of nanoparticles are solvable problems. However, there is a great distance between a physical principle and its hardware realization, and the transition from principle to hardware demands great intellectual and material costs. It is therefore desirable to call the attention of the public at large to the necessity of urgently expanding investigations associated with outgoing inspection in nanoparticle technologies.
It is also necessary to attract the attention, first, of the representatives of state structures that control the approval of safety standards, since it is impossible to compel producers to provide safety without such a standard, and, second, of the leaders of the pharmacological industry, since their industry has already entered the nanotechnology era and they have an interest in the forthcoming development of inspection methods.
Funding: Key Technologies R&D Program of Heilongjiang Province (GC10D206), the Cooperation Project of Northeast Agricultural University and Inner Mongolia Mengniu Dairy (Group) Co., Ltd., the PhD Start-up Fund of Northeast Agricultural University (2009-RCW05), and the National Soft Science Research Plan Project (2010GXQ-5D330).
Abstract: This article offers an overview of the development of dairy cow breeding in China and analyzes the problems in its development as follows: the breeding scale is small; pasture grass production cannot meet demand; the proportion of fine breeds is not high and the per-unit dairy cow yield is low; an epidemic prevention and quarantine mechanism is lacking; and the economic benefits of large-scale breeding are not high. These problems have become a bottleneck in the development of dairy cow breeding. Finally, countermeasures are put forward for the development of dairy cow breeding in China: developing large-scale breeding and increasing subsidies; supporting the development of the grass industry and ensuring the supply of good feed; strengthening the cultivation and promotion of fine breeds and promoting the quality of fresh milk; and improving the accountability system for dairy products and giving play to the supervisory role of the news media.
Funding: the National Natural Science Foundation of China (Grant Nos. 90207002, 90307017, 60773125 and 60676018), the National Science Foundation (Grant No. CCR-0306298), the China Postdoctoral Science Foundation (Grant No. KLH1202005), and the Natural Science Foundation of Shanghai City (Grant No. 06ZR14016).
Abstract: This paper presents a heuristic polarity decision-making algorithm for solving Boolean satisfiability (SAT). The algorithm inherits many features of current state-of-the-art SAT solvers, such as fast BCP, clause recording, and restarts. In addition, a preconditioning step that calculates the polarities of variables according to the cover distribution of the Karnaugh map is introduced into the DPLL procedure, which greatly reduces the number of conflicts in the search process. The proposed approach is implemented as a SAT solver named DiffSat. Experiments show that DiffSat can solve many "real-life" instances in a reasonable time while the best existing SAT solvers, such as Zchaff and MiniSat, cannot. In particular, DiffSat can solve every instance of the Bart benchmark suite in less than 0.03 s, while Zchaff and MiniSat fail under a 900 s time limit. Furthermore, DiffSat even outperforms the outstanding incomplete algorithm DLM on some instances.
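For context, here is a toy DPLL search in which a caller-supplied polarity hint steers the branching order; DiffSat's Karnaugh-map preconditioning, fast BCP, clause recording, and restarts are not reproduced.

```python
def dpll(clauses, assignment=None, polarity=None):
    """Minimal DPLL with a polarity hint: when branching on a variable,
    try the preferred polarity first.  Clauses are lists of nonzero ints,
    DIMACS-style: variable v is the literal v, its negation is -v."""
    assignment = dict(assignment or {})
    polarity = polarity or {}
    # Simplify the clause set under the current assignment.
    simplified = []
    for clause in clauses:
        if any(assignment.get(abs(l)) == (l > 0) for l in clause):
            continue                  # clause already satisfied
        rest = [l for l in clause if abs(l) not in assignment]
        if not rest:
            return None               # clause falsified: conflict
        simplified.append(rest)
    if not simplified:
        return assignment             # all clauses satisfied
    # Unit propagation.
    for clause in simplified:
        if len(clause) == 1:
            assignment[abs(clause[0])] = clause[0] > 0
            return dpll(clauses, assignment, polarity)
    # Branch, trying the preferred polarity first.
    var = abs(simplified[0][0])
    pref = polarity.get(var, True)
    for value in (pref, not pref):
        assignment[var] = value
        result = dpll(clauses, assignment, polarity)
        if result is not None:
            return result
    return None

cnf = [[1, 2], [-1, 3], [-2, -3], [1, -3]]
model = dpll(cnf, polarity={1: True, 2: False, 3: True})
print(model is not None)  # → True: the formula is satisfiable
```

A good polarity table means the first branch tried is usually the one that survives, which is the conflict-count reduction the abstract attributes to the preconditioning step.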
Funding: supported by the National Natural Science Foundation of China (11304344, 11404364), the Project of Hubei Provincial Department of Education (D20141803), the Natural Science Foundation of Hubei Province (2014CFB378), and the Doctoral Scientific Research Foundation of Hubei University of Automotive Technology (BK201604).
Abstract: Numerical quadrature methods for dealing with the singular and near-singular integrals caused by the Burton-Miller method are proposed, by which the conventional and fast multipole BEMs (boundary element methods) for 3D acoustic problems based on constant elements are improved. To solve the problem of singular integrals, a Hadamard finite-part integral method is presented, which is a simplified combination of the methods proposed by Kirkup and Wolf. The problem of near-singular integrals is overcome by the simple method of polar transformation and the more complex method of PART (Projection and Angular & Radial Transformation). The effectiveness of these methods for solving the singular and near-singular problems is validated by comparison with results computed by the analytical method and/or the commercial software LMS Virtual.Lab. In addition, the influence of the near-singular integral problem on computational precision is analyzed by computing the errors relative to the exact solution. The computational complexities of the conventional and fast multipole BEM are analyzed and compared through numerical computations. A large-scale acoustic scattering problem with about 340,000 degrees of freedom is solved successfully. The results show that the near-singularity is primarily introduced by the hyper-singular kernel and has a great influence on the precision of the solution. The precision of the fast multipole BEM is the same as that of the conventional BEM, but its computational complexity is much lower.
Funding: supported by the National Natural Science Foundation of China (Nos. 72071146, 72091211, 72293562, and 72031006).
Abstract: Large-scale simulation optimization (SO) problems encompass both large-scale ranking-and-selection problems and high-dimensional discrete or continuous SO problems, presenting significant challenges to existing SO theories and algorithms. This paper begins by providing illustrative examples that highlight the differences between large-scale SO problems and those of a more moderate scale. Subsequently, it reviews several widely employed techniques for addressing large-scale SO problems, such as divide-and-conquer, dimension reduction, and gradient-based algorithms. Additionally, the paper examines parallelization techniques leveraging widely accessible parallel computing environments to facilitate the resolution of large-scale SO problems.
Funding: This research was supported by the National Natural Science Foundation of China, LSEC of CAS in Beijing, and the Natural Science Foundation
Abstract: An NGTN method was proposed for solving large-scale sparse nonlinear programming (NLP) problems. This is a hybrid method of a truncated Newton direction and a modified negative gradient direction, which is suitable for handling sparse data structures and possesses a Q-quadratic convergence rate. The global convergence of the new method is proved, the convergence rate is further analyzed, and the detailed implementation is discussed. Some numerical tests on truss optimization and large sparse problems are reported. The theoretical and numerical results show that the new method is efficient for solving large-scale sparse NLP problems.
Funding: The National Natural Science Foundation of China (No. 10371017).
Abstract: First, the main procedures and the distinctive features of the most-obtuse-angle (MOA) row and column pivot rules are introduced for achieving primal or dual feasibility in linear programming. Then, two special auxiliary problems are constructed to prove that each of the rules can actually be considered a simplex approach for solving the corresponding auxiliary problem. In addition, the nested pricing rule is also reviewed and its geometric interpretation is offered, based on a heuristic characterization of an optimal solution.
Funding: supported by the National Natural Science Foundation of China (Grant Nos. 10171104, 10571171 & 40233029).
Abstract: Inspired by the success of the projected Barzilai-Borwein (PBB) method for large-scale box-constrained quadratic programming, we propose and analyze monotone projected gradient methods in this paper. We show by experiments and analysis that for the new methods it is generally a bad option to compute steplengths based on the negative gradients. Thus in our algorithms some continuous or discontinuous projected gradients are used instead to compute the steplengths. Numerical experiments on a wide variety of test problems are presented, indicating that the new methods usually outperform the PBB method.
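A minimal sketch of the baseline PBB scheme on a small box-constrained quadratic; the paper's monotone variants and alternative steplength rules are not reproduced.

```python
import numpy as np

def projected_bb(grad, project, x0, n_iter=100, alpha0=1.0):
    """Projected gradient method with a Barzilai-Borwein steplength:
    alpha_k = s^T s / s^T y with s = x_k - x_{k-1}, y = g_k - g_{k-1}."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    alpha = alpha0
    for _ in range(n_iter):
        x_new = project(x - alpha * g)
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        sy = s @ y
        if sy > 1e-12:                 # keep the steplength well defined
            alpha = (s @ s) / sy
        x, g = x_new, g_new
    return x

# Box-constrained quadratic: min 0.5 x^T A x - b^T x  s.t.  0 <= x <= 1.
# The unconstrained minimiser (1.4, -0.2) is infeasible; the constrained
# solution is x* = (1, 0).
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([4.0, 1.0])
grad = lambda x: A @ x - b
project = lambda x: np.clip(x, 0.0, 1.0)
x_star = projected_bb(grad, project, np.zeros(2))
print(np.round(x_star, 4))
```

The BB steplength is what makes the method attractive at large scale: it needs only gradients and one projection per iteration, with no line search or Hessian factorization.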
Funding: supported by the China Postdoctoral Science Foundation (Grant No. 2020M682335) and the Key R&D and Promotion Special Projects (Scientific Problem Tackling) in Henan Province of China (Grant No. 212102210375).
Abstract: With the rapid development of computer technology, numerical simulation has become the third scientific research tool besides theoretical analysis and experimental research. As the core of numerical simulation, constructing efficient, accurate, and stable numerical methods to simulate complex scientific and engineering problems has become a key issue in computational mechanics. This article outlines the application of the singular boundary method to large-scale and high-frequency acoustic problems. In practical application, the key issue is to construct an efficient and accurate numerical methodology to calculate the large-scale and high-frequency sound field. This article focuses on two research areas: how to discretize partial differential equations into more appropriate linear equations, and how to solve linear equations more efficiently. The bottleneck problems encountered in computational acoustics set the technical routes, i.e., efficient solution of dense linear systems composed of ill-conditioned matrices and stable simulation of wave propagation at low sampling frequencies. The article reviews recent advances in emerging applications of the singular boundary method for computational acoustics. This collection can provide a reference for simulating other, more complex wave propagation.
Abstract: The reinforcement learning (RL) algorithm was introduced several decades ago and has become a paradigm in sequential decision-making and control. The development of reinforcement learning, especially in recent years, has enabled the algorithm to be applied in many industrial fields, such as robotics, medical intelligence, and games. This paper first introduces the history and background of reinforcement learning, and then illustrates its industrial applications and open-source platforms. After that, the successful applications from AlphaGo to AlphaZero and future reinforcement learning techniques are discussed. Finally, artificial intelligence for complex interaction (e.g., stochastic environments, multiple players, selfish behavior, and distributed optimization) is considered, and the paper concludes with highlights and an outlook on future general artificial intelligence.
Funding: The work described in this paper was supported by the National Natural Science Foundation of China (Nos. 11872220, 11772119), the Natural Science Foundation of Shandong Province of China (Nos. 2019KJI009, ZR2017JL004), the Six Talent Peaks Project in Jiangsu Province of China (Grant No. 2019-KTHY-009), and the Key Laboratory of Road Construction Technology and Equipment (Chang'an University, Grant No. 300102251505).
Abstract: The localized method of fundamental solutions (LMFS) is a relatively new meshless boundary collocation method. In the LMFS, the global MFS approximation, which is expensive to evaluate, is replaced by local MFS formulations defined on a set of overlapping subdomains. The LMFS algorithm therefore converts differential equations into sparse rather than dense matrices, which are much cheaper to calculate. This paper makes the first attempt to apply the LMFS, in conjunction with a domain-decomposition technique, to the numerical solution of steady-state heat conduction problems in two-dimensional (2D) anisotropic layered materials. Here, the layered material is decomposed into several subdomains along the layer-layer interfaces, and in each of the subdomains the solution is approximated using the LMFS expansion. On the subdomain interfaces, compatibility of temperatures and heat fluxes is imposed. Preliminary numerical experiments illustrate that the proposed domain-decomposition LMFS algorithm is accurate, stable, and computationally efficient for the numerical solution of large-scale multi-layered materials.
Funding: This work was supported in part by the National Natural Science Foundation of China (No. 61876123), the Shanxi Key Research and Development Program (No. 202102020101002), and the Natural Science Foundation of Shanxi Province (Nos. 201901D111264 and 201901D111262).
Abstract: Some optimization problems in scientific research, such as robustness optimization for the Internet of Things and neural architecture search, are large-scale in decision space and expensive in objective evaluation. In order to get a good solution within a limited budget for large-scale expensive optimization, a random grouping strategy is adopted to divide the problem into low-dimensional sub-problems. A surrogate model is then trained for each sub-problem, using different strategies to select training data adaptively. After that, a dynamic infill criterion is proposed corresponding to the models currently used in the surrogate-assisted sub-problem optimization. Furthermore, an escape mechanism is proposed to keep the diversity of the population. The performance of the method is evaluated on the CEC 2013 benchmark functions. Experimental results show that the algorithm has better performance in solving expensive large-scale optimization problems.
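The random grouping step can be sketched as follows: an illustrative decomposition with a shared context vector, not the paper's surrogate models or infill criterion.

```python
import random

def random_groups(dim, group_size, rng=None):
    """Random grouping for divide-and-conquer: shuffle the variable
    indices and cut them into disjoint low-dimensional sub-problems.
    Re-shuffling each cycle gives interacting variables a chance to
    land in the same group."""
    rng = rng or random.Random()
    idx = list(range(dim))
    rng.shuffle(idx)
    return [idx[i:i + group_size] for i in range(0, dim, group_size)]

def evaluate_subproblem(f, context, group, values):
    """Evaluate the full objective with only the group's variables
    replaced; the rest come from a shared context vector."""
    x = list(context)
    for j, v in zip(group, values):
        x[j] = v
    return f(x)

groups = random_groups(1000, 50, random.Random(42))
print(len(groups))  # 20 sub-problems of 50 variables each
print(evaluate_subproblem(sum, [0.0] * 4, [1, 3], [2.0, 5.0]))  # → 7.0
```

In the surrogate-assisted setting, `evaluate_subproblem` is the expensive call that the per-group surrogate models are trained to replace.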
Funding: supported by the Natural Science Foundation Project of the Gansu Provincial Science and Technology Department (No. 1506RJZA084) and the Gansu Provincial Education Department Scientific Research Fund Grant Project (No. 1204-13).
Abstract: To provide the supplier with the minimum vehicle travel distance in the distribution of goods under three situations, new customer demand, customer cancellation of service, and change of customer delivery address, and based on the ideas of pre-optimization and real-time optimization, a two-stage planning model of the dynamic-demand vehicle routing problem with time windows was established. At the pre-optimization stage, an improved genetic algorithm was used to obtain the pre-optimized distribution route; a large-scale neighborhood search method was integrated into the mutation operation to improve the local optimization performance of the genetic algorithm, and a variety of operators were introduced to expand the search space of neighborhood solutions. At the real-time optimization stage, a periodic optimization strategy was adopted to transform a complex dynamic problem into several static problems, and four neighborhood search operators were used to quickly adjust the routes. Two examples of different scales were designed for experiments. It is shown that the algorithm can plan better routes and adjust the distribution routes in time under real-time constraints. Therefore, the proposed algorithm can provide theoretical guidance for suppliers solving the dynamic-demand vehicle routing problem.
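A time-window feasibility walk, the basic check any such neighborhood operator must repeat, can be sketched as follows (illustrative node data, not the paper's instances).

```python
def route_feasible_and_time(route, travel, windows, service=0):
    """Walk one vehicle route (node 0 is the depot) and check time-window
    feasibility: arriving early means waiting until the window opens;
    arriving after the window closes makes the route infeasible.
    travel[i][j] is the travel time from node i to node j and
    windows[i] = (earliest, latest).  Returns (feasible, completion_time)."""
    t = 0.0
    prev = 0
    for node in route + [0]:          # visit customers, then return to depot
        t += travel[prev][node]
        early, late = windows[node]
        if t > late:
            return False, t           # window missed: route infeasible
        t = max(t, early) + (service if node != 0 else 0)
        prev = node
    return True, t

# Tiny example: depot 0 and customers 1, 2 on a symmetric 3-node network.
travel = [[0, 4, 6],
          [4, 0, 3],
          [6, 3, 0]]
windows = [(0, 100), (5, 10), (8, 20)]  # (earliest, latest) per node
print(route_feasible_and_time([1, 2], travel, windows, service=1))  # → (True, 16.0)
print(route_feasible_and_time([2, 1], travel, windows, service=1))  # → (False, 12.0)
```

Visiting order matters: serving customer 2 first pushes the arrival at customer 1 past its window, which is exactly the kind of violation the real-time adjustment stage must detect and repair.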