Funding: The Australian Research Council (DP200101197, DP230101107).
Abstract: Formalizing the complex processes and phenomena of a real-world problem may require a large number of variables and constraints, resulting in what is termed a large-scale optimization problem. Nowadays, such large-scale optimization problems are solved on computing machines, which can demand enormous computational time and delay the derivation of timely solutions. Decomposition methods, which partition a large-scale optimization problem into lower-dimensional subproblems, represent a key approach to addressing these time-efficiency issues. There has been significant progress on this front in both applied mathematics and emerging artificial intelligence approaches. This work provides an overview of decomposition methods from both the mathematics and computer science points of view. We also remark on state-of-the-art developments and recent applications of decomposition methods, and discuss future research and development perspectives.
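As a minimal illustration of the decomposition idea discussed in this overview (not any specific method it surveys), the sketch below splits a hypothetical additively separable objective into variable blocks and optimizes each lower-dimensional subproblem independently; the block structure, objective, and block size are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical additively separable objective: f(x) = sum_k f_k(x_k),
# so each block of variables can be optimized on its own.
def block_objective(x_block, shift):
    return np.sum((x_block - shift) ** 2)

def solve_by_decomposition(n_vars=1000, block_size=100):
    blocks = [np.arange(i, i + block_size) for i in range(0, n_vars, block_size)]
    x = np.zeros(n_vars)
    for k, idx in enumerate(blocks):
        # Solve the k-th low-dimensional subproblem independently.
        res = minimize(block_objective, x[idx], args=(k * 0.1,), method="L-BFGS-B")
        x[idx] = res.x
    return x

if __name__ == "__main__":
    x_star = solve_by_decomposition()
    print("first block solution (should be ~0.0):", x_star[:3])
```

Non-separable problems require interaction-aware grouping of variables or coordination between subproblems, which is where much of the surveyed work lies.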
Funding: Supported in part by the Central Government Guides Local Science and Technology Development Funds (Grant No. YDZJSX2021A038), in part by the National Natural Science Foundation of China (Grant No. 61806138), and in part by the China University Industry-University-Research Collaborative Innovation Fund (Future Network Innovation Research and Application Project) (Grant 2021FNA04014).
Abstract: The large-scale multi-objective optimization algorithm (LSMOA) based on the grouping of decision variables is an advanced method for handling high-dimensional decision variables. However, in practical problems the interactions among decision variables are intricate, leading to large group sizes and suboptimal optimization effects; hence, a large-scale multi-objective optimization algorithm based on weighted overlapping grouping of decision variables (MOEAWOD) is proposed in this paper. Initially, the decision variables are perturbed and categorized into convergence and diversity variables; subsequently, the convergence variables are subdivided into groups based on the interactions among different decision variables. If the size of a group surpasses the set threshold, that group undergoes a process of weighted and overlapping grouping. Specifically, the interaction strength is evaluated based on the interaction frequency and the number of objectives involved among the decision variables. The decision variable with the highest interaction in the group is identified and set aside, the remaining variables are reclassified into subgroups, and finally the variable with the strongest interaction is added to each subgroup. MOEAWOD minimizes the interactivity between different groups and maximizes the interactivity of decision variables within groups, which guides the convergence and diversity exploration carried out with the different groups. MOEAWOD was tested on 18 benchmark large-scale optimization problems, and the experimental results demonstrate the effectiveness of our method. Compared with the other algorithms, our method remains at an advantage.
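The interaction-based grouping step described above can be illustrated with a differential-grouping-style pairwise test, a standard way to detect variable interactions in large-scale optimization; this is a hedged sketch of the general idea, not MOEAWOD's weighting, overlapping, or convergence/diversity classification. The objective `f`, perturbation size, and threshold are hypothetical.

```python
import numpy as np

def interacts(f, x, i, j, delta=1.0, eps=1e-6):
    """Differential-grouping-style test: x_i and x_j interact if the effect of
    perturbing x_i changes when x_j is also perturbed (non-separability)."""
    base = f(x)
    xi = x.copy(); xi[i] += delta
    xj = x.copy(); xj[j] += delta
    xij = x.copy(); xij[i] += delta; xij[j] += delta
    return abs((f(xi) - base) - (f(xij) - f(xj))) > eps

def group_variables(f, n):
    """Greedy grouping: each variable joins the first group containing a
    variable it interacts with, otherwise it starts a new group."""
    groups = []
    x0 = np.zeros(n)
    for i in range(n):
        for g in groups:
            if any(interacts(f, x0, i, j) for j in g):
                g.append(i)
                break
        else:
            groups.append([i])
    return groups

# Hypothetical objective: x0 and x1 interact, x2 is separable from both.
f = lambda x: x[0] * x[1] + x[2] ** 2
print(group_variables(f, 3))   # expected: [[0, 1], [2]]
```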
Funding: Funded by the National Natural Science Foundation of China (No. 72104069), the Science and Technology Department of Henan Province, China (Nos. 182102310886 and 162102110109), and the Postgraduate Meritocracy Scheme, China (No. SYL19060145).
Abstract: To solve large-scale optimization problems, a Butterfly Optimization Algorithm with a fragrance coefficient and a variant particle swarm local search (FPSBOA) is proposed. In the position update stage of the Butterfly Optimization Algorithm (BOA), the fragrance coefficient is designed to balance the exploration and exploitation of BOA. The variant particle swarm local search strategy is proposed to improve the local search ability of the current optimal butterfly and prevent the algorithm from falling into local optima. Nineteen 2000-dimensional functions and twenty 1000-dimensional CEC 2010 large-scale functions are used to verify FPSBOA on complex large-scale optimization problems. The experimental results are statistically analyzed by the Friedman test and the Wilcoxon rank-sum test. All attained results demonstrate that FPSBOA can better solve challenging scientific and industrial real-world problems with thousands of variables. Finally, FPSBOA is applied to four mechanical engineering problems and one ten-dimensional process synthesis and design problem, which shows that FPSBOA is feasible and effective on real-world application problems.
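For context, the sketch below shows a canonical BOA position update in which a fragrance value couples each butterfly's move to its stimulus intensity; the specific fragrance-coefficient design and the variant particle swarm local search proposed in FPSBOA are not reproduced here, and the stimulus mapping, parameters `a`, `c`, `p`, and the sphere test function are illustrative assumptions.

```python
import numpy as np

def boa_step(pop, fitness, best, a=0.1, c=0.01, p=0.8, rng=None):
    """One canonical BOA-style iteration: fragrance f = c * I^a ties each
    butterfly's step length to its stimulus intensity I (derived from fitness)."""
    rng = rng or np.random.default_rng()
    n, _ = pop.shape
    new_pop = pop.copy()
    intensity = 1.0 / (1.0 + fitness)          # illustrative stimulus for minimization
    fragrance = c * intensity ** a
    for i in range(n):
        r = rng.random()
        if rng.random() < p:                    # global search toward the best butterfly
            new_pop[i] += (r * r * best - pop[i]) * fragrance[i]
        else:                                   # local search between two random butterflies
            j, k = rng.integers(0, n, size=2)
            new_pop[i] += (r * r * pop[j] - pop[k]) * fragrance[i]
    return new_pop

# Toy usage on a sphere function (assumed minimization problem).
sphere = lambda x: np.sum(x ** 2, axis=-1)
pop = np.random.default_rng(0).uniform(-5, 5, (20, 10))
fit = sphere(pop)
best = pop[np.argmin(fit)]
pop = boa_step(pop, fit, best)
```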
Funding: Supported by the Open Project of Xiangjiang Laboratory (22XJ02003), the University Fundamental Research Fund (23-ZZCX-JDZ-28, ZK21-07), the National Science Fund for Outstanding Young Scholars (62122093), the National Natural Science Foundation of China (72071205), the Hunan Graduate Research Innovation Project (CX20230074), the Hunan Natural Science Foundation Regional Joint Project (2023JJ50490), the Science and Technology Project for Young and Middle-aged Talents of Hunan (2023TJZ03), and the Science and Technology Innovation Program of Hunan Province (2023RC1002).
Abstract: Sparse large-scale multi-objective optimization problems (SLMOPs) are common in science and engineering. However, the large scale of the problem means a high-dimensional decision space, requiring algorithms to traverse a vast expanse with limited computational resources. Furthermore, because of sparsity, most variables in Pareto optimal solutions are zero, making it difficult for algorithms to identify non-zero variables efficiently. This paper is dedicated to addressing the challenges posed by SLMOPs. To start, we introduce innovative objective functions customized to mine maximum and minimum candidate sets. This substantial enhancement dramatically improves the efficacy of frequent pattern mining. In this way, selecting candidate sets is no longer based on the quantity of non-zero variables they contain but on a higher proportion of non-zero variables within specific dimensions. Additionally, we unveil a novel approach to association rule mining, which delves into the intricate relationships between non-zero variables. This methodology aids in identifying sparse distributions that can potentially expedite reductions in the objective function values. We extensively tested our algorithm across eight benchmark problems and four real-world SLMOPs. The results demonstrate that our approach achieves competitive solutions across various challenges.
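The frequent-pattern idea the abstract builds on can be sketched generically: count how often pairs of variable indices are non-zero together across candidate solutions and keep the pairs above a support threshold. This is not the paper's customized objective functions or association-rule procedure; `min_support` and the toy masks are assumptions.

```python
from itertools import combinations
from collections import Counter

def frequent_nonzero_pairs(masks, min_support=0.5):
    """Mine variable-index pairs that are non-zero together in at least
    min_support of the candidate solutions (generic frequent-itemset idea)."""
    counts = Counter()
    for mask in masks:                      # mask: binary vector, 1 = non-zero variable
        nz = [i for i, v in enumerate(mask) if v]
        counts.update(combinations(nz, 2))
    threshold = min_support * len(masks)
    return {pair for pair, c in counts.items() if c >= threshold}

# Toy candidate set: variables 0 and 2 are frequently non-zero together.
masks = [[1, 0, 1, 0], [1, 1, 1, 0], [1, 0, 1, 1], [0, 1, 0, 0]]
print(frequent_nonzero_pairs(masks))        # expected: {(0, 2)}
```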
Funding: Supported by the Scientific Research Project of Xiang Jiang Lab (22XJ02003), the University Fundamental Research Fund (23-ZZCX-JDZ-28), the National Science Fund for Outstanding Young Scholars (62122093), the National Natural Science Foundation of China (72071205), the Hunan Graduate Research Innovation Project (ZC23112101-10), the Hunan Natural Science Foundation Regional Joint Project (2023JJ50490), the Science and Technology Project for Young and Middle-aged Talents of Hunan (2023TJ-Z03), and the Science and Technology Innovation Program of Hunan Province (2023RC1002).
Abstract: Traditional large-scale multi-objective optimization algorithms (LSMOEAs) encounter difficulties when dealing with sparse large-scale multi-objective optimization problems (SLMOPs) where most decision variables are zero. As a result, many algorithms use a two-layer encoding approach to optimize the binary variable Mask and the real variable Dec separately. Nevertheless, existing optimizers often focus on locating non-zero variable positions to optimize the binary variable Mask. However, approximating the sparse distribution of the real Pareto optimal solutions does not necessarily mean that the objective function is optimized. In data mining, it is common to mine frequent itemsets appearing together in a dataset to reveal correlations in the data. Inspired by this, we propose a novel two-layer encoding learning swarm optimizer based on frequent itemsets (TELSO) to address these SLMOPs. TELSO mines the frequent items of multiple particles with better objective values to find Mask combinations that can obtain better objective values for fast convergence. Experimental results on five real-world problems and eight benchmark sets demonstrate that TELSO outperforms existing state-of-the-art sparse large-scale multi-objective evolutionary algorithms (SLMOEAs) in terms of performance and convergence speed.
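The two-layer encoding mentioned above is commonly decoded as an element-wise product of the binary Mask and the real Dec; the sketch below shows only this generic decoding, not TELSO's frequent-itemset learning. The sparsity level and dimensions are illustrative.

```python
import numpy as np

# Two-layer encoding used by many sparse LSMOEAs: a binary Mask marks which
# variables are non-zero, and a real-valued Dec carries their magnitudes.
def decode(mask, dec):
    return mask * dec        # element-wise product gives the actual solution

rng = np.random.default_rng(1)
dim = 10
mask = (rng.random(dim) < 0.2).astype(float)   # sparse: roughly 20% non-zero positions
dec = rng.uniform(-1, 1, dim)
x = decode(mask, dec)
print("non-zero count:", int(mask.sum()), "solution:", np.round(x, 2))
```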
Funding: The research was supported by the State Education Grant for Returned Scholars.
Abstract: In this paper we report a sparse truncated Newton algorithm for handling large-scale simple-bound nonlinear constrained minimization problems. The truncated Newton method is used to update the variables with indices outside of the active set, while the projected gradient method is used to update the active variables. At each iteration, the search direction consists of three parts: one is a subspace truncated Newton direction, and the other two are subspace gradient and modified gradient directions. The subspace truncated Newton direction is obtained by solving a sparse system of linear equations. The global convergence and quadratic convergence rate of the algorithm are proved, and some numerical tests are given.
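A schematic of the active-set split the abstract describes is sketched below: variables pinned at a bound with an outward-pushing gradient take a projected-gradient step, while the free variables take a CG-based (truncated Newton) step on the corresponding block of the Hessian. Step sizes, safeguards, the three-part direction, and the toy quadratic are simplified assumptions, not the paper's algorithm.

```python
import numpy as np
from scipy.sparse.linalg import cg

def active_set(x, g, lo, hi, tol=1e-8):
    """Indices sitting at a bound with the gradient pushing further outside it."""
    return ((np.abs(x - lo) < tol) & (g > 0)) | ((np.abs(x - hi) < tol) & (g < 0))

def bound_constrained_step(x, grad, hess, lo, hi, alpha=1e-2):
    g = grad(x)
    act = active_set(x, g, lo, hi)
    free = ~act
    d = np.zeros_like(x)
    # Free variables: truncated Newton direction from a (possibly inexact) CG solve
    # on the free-variable block of the sparse Hessian.
    H_ff = hess(x)[np.ix_(free, free)]
    d[free], _ = cg(H_ff, -g[free], maxiter=20)
    # Active variables: projected (negative) gradient step.
    d[act] = -alpha * g[act]
    return np.clip(x + d, lo, hi)

# Toy bound-constrained quadratic: min 0.5 x^T A x - b^T x, subject to 0 <= x <= 2.
A = np.diag([1.0, 2.0, 3.0])
b = np.array([1.0, -4.0, 3.0])
grad = lambda x: A @ x - b
hess = lambda x: A
x = np.full(3, 1.0)
for _ in range(20):
    x = bound_constrained_step(x, grad, hess, lo=0.0, hi=2.0)
print(np.round(x, 3))   # expected near [1, 0, 1], with the lower bound active at x[1]
```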
Abstract: The article introduces the main practices and achievements of the Environment and Plant Protection Institute of the Chinese Academy of Tropical Agricultural Sciences in promoting the sharing of large-scale instruments and equipment in recent years. It analyzes the existing problems in the management system, management team, assessment incentives, and maintenance guarantees, and proposes improvement measures and suggestions on improving the sharing management system, strengthening management team building, strengthening sharing assessment and incentives, improving maintenance capabilities, and expanding external publicity, so as to further improve the sharing management of large-scale instruments and equipment.
Abstract: Nanoparticles provide great advantages but also great risks. Risks associated with nanoparticles are a problem for all technologies, but they are magnified many times over in nanotechnologies. Adequate methods of outgoing production inspection are necessary to address these risks, and the inspection must be based on a safety standard. The existing safety standard rests on the principle of “maximum permissible concentrations” (MPC). This principle is not applicable to nanoparticles, yet a safety standard reflecting the risks inherent in nanoparticles does not exist. The essence of these risks is illustrated with an example from pharmacology, since its safety assurance is conceptually based on MPC and it has already run up against this problem. A possible formulation of a safety standard for nanoparticles is reflected in many publications, but conventional inspection methods cannot realize it, and this gap is an obstacle to the adoption of such formulations. Therefore, the development of the nanoparticle industry as a whole (and of pharmacology in particular) is impossible without the creation of an adequate inspection method. New inspection methods are suggested, founded on a new physical principle and satisfying an adequate safety standard for nanoparticles. These methods demonstrate that creating an adequate safety standard and performing outgoing production inspection in large-scale manufacturing of nanoparticles are solvable problems. However, there is a great distance between a physical principle and its hardware realization, and the transition from principle to hardware demands great intellectual and material costs. It is therefore desirable to call the attention of the public at large to the necessity of urgently expanding investigations associated with outgoing inspection in nanoparticle technologies. It is also necessary to attract the attention, first, of representatives of the state structures that control approval of an adequate safety standard, since it is impossible to compel producers to provide safety without such a standard, and, second, of leaders of the pharmacological industry, since that industry has already entered the nanotechnology era and has an interest in the forthcoming development of inspection methods.
Funding: Supported by "1 Decembrie 1918" University of Alba Iulia, 510009 Alba Iulia, and in part by the HEC-NRPU project under grant No. 14566.
Abstract: Multi-criteria decision-making (MCDM) is essential for handling complex decision problems under uncertainty, especially in fields such as criminal justice, healthcare, and environmental management. Traditional fuzzy MCDM techniques have failed to deal with problems where uncertainty or vagueness is involved. To address this issue, we propose a novel framework that integrates group and overlap functions with Aczel-Alsina (AA) operational laws in the intuitionistic fuzzy set (IFS) environment. Overlap functions capture the degree to which two inputs share common features and are used to determine how closely two values or criteria match in uncertain environments, while group functions are used to combine different expert opinions into a single collective result. This study introduces four new aggregation operators: the group overlap function-based intuitionistic fuzzy Aczel-Alsina (GOF-IFAA) weighted averaging (GOF-IFAAWA), weighted geometric (GOF-IFAAWG), ordered weighted averaging (GOF-IFAAOWA), and ordered weighted geometric (GOF-IFAAOWG) operators. These are rigorously defined and mathematically analyzed, and they offer improved flexibility in managing overlapping, uncertain, and hesitant information. The properties of these operators are discussed in detail. Further, to demonstrate their effectiveness, validity, and ability to capture uncertain information, the developed operators are applied to an AI-based criminal justice policy selection problem. Finally, a comparative analysis between prior and proposed studies is presented, followed by concluding remarks.
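For reference, the Aczel-Alsina t-norm and t-conorm underlying such operators, and the intuitionistic fuzzy AA weighted averaging commonly built from them in the literature, take the following form (parameter λ > 0); the paper's GOF-IFAA operators additionally incorporate group and overlap functions, which are not reproduced here, so this is only the standard backbone.

```latex
% Standard Aczel-Alsina t-norm/t-conorm and the intuitionistic fuzzy
% AA weighted averaging commonly built from them (lambda > 0).
\[
T_{AA}^{\lambda}(a,b) = e^{-\left[(-\ln a)^{\lambda} + (-\ln b)^{\lambda}\right]^{1/\lambda}},
\qquad
S_{AA}^{\lambda}(a,b) = 1 - e^{-\left[(-\ln(1-a))^{\lambda} + (-\ln(1-b))^{\lambda}\right]^{1/\lambda}}.
\]
\[
\mathrm{IFAAWA}(\alpha_1,\dots,\alpha_n) =
\left(
1 - e^{-\left[\sum_{i=1}^{n} w_i\,(-\ln(1-\mu_i))^{\lambda}\right]^{1/\lambda}},\;
e^{-\left[\sum_{i=1}^{n} w_i\,(-\ln \nu_i)^{\lambda}\right]^{1/\lambda}}
\right),
\quad \alpha_i = (\mu_i,\nu_i),\ \sum_{i=1}^{n} w_i = 1.
\]
```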
Funding: Key Technologies R&D Program of Heilongjiang Province (GC10D206); Cooperation Project of Northeast Agricultural University and Inner Mongolia Mengniu Dairy (Group) Co., Ltd.; the PhD Start-up Fund of Northeast Agricultural University (2009-RCW05); National Soft Science Research Plan Project (2010GXQ-5D330).
Abstract: This article offers an overview of the development of dairy cow breeding in China and analyzes the problems in its development as follows: the breeding scale is small; pasture grass production cannot meet demand; the proportion of fine breeds is not high and the dairy cow yield per unit is low; epidemic prevention and quarantine mechanisms are lacking; and the economic benefits of large-scale breeding are not high. These problems have become bottlenecks in the development of dairy cow breeding. Finally, countermeasures are put forward for the development of dairy cow breeding in China as follows: developing large-scale breeding and increasing subsidies; supporting the development of the grass industry and ensuring the supply of good feed; strengthening the cultivation and promotion of fine breeds and improving the quality of fresh milk; and improving the accountability system for dairy products and giving play to the supervisory role of the news media.
Funding: Supported by the National Natural Science Foundation of China (11304344, 11404364), the Project of Hubei Provincial Department of Education (D20141803), the Natural Science Foundation of Hubei Province (2014CFB378), and the Doctoral Scientific Research Foundation of Hubei University of Automotive Technology (BK201604).
Abstract: Numerical quadrature methods are proposed for dealing with the singular and near-singular integrals caused by the Burton-Miller method, by which the conventional and fast multipole BEMs (boundary element methods) for 3D acoustic problems based on constant elements are improved. To solve the problem of singular integrals, a Hadamard finite-part integral method is presented, which is a simplified combination of the methods proposed by Kirkup and Wolf. The problem of near-singular integrals is overcome by the simple method of polar transformation and the more complex method of PART (Projection and Angular & Radial Transformation). The effectiveness of these methods for solving the singular and near-singular problems is validated by comparison with results computed by the analytical method and/or the commercial software LMS Virtual.Lab. In addition, the influence of the near-singular integral problem on computational precision is analyzed by computing the errors relative to the exact solution. The computational complexities of the conventional and fast multipole BEM are analyzed and compared through numerical computations. A large-scale acoustic scattering problem, whose number of degrees of freedom is about 340,000, is solved successfully. The results show that the near singularity is primarily introduced by the hyper-singular kernel and has a great influence on the precision of the solution. The precision of the fast multipole BEM is the same as that of the conventional BEM, but its computational complexity is much lower.
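The role of the polar transformation mentioned above can be illustrated on a model weakly singular integral rather than the Burton-Miller kernels themselves: the polar Jacobian r cancels a 1/r kernel, so standard Gauss-Legendre quadrature sees a smooth integrand. The kernel, domain, and quadrature order below are illustrative.

```python
import numpy as np

# Integral of 1/sqrt(x^2 + y^2) over the unit square, singular at the origin.
# In polar coordinates the Jacobian r cancels the 1/r kernel.
def near_singular_polar(n=16):
    gt, wt = np.polynomial.legendre.leggauss(n)
    I = 0.0
    # Split the square into two triangles sharing the diagonal; by symmetry each
    # maps to theta in [0, pi/4] with r in [0, 1/cos(theta)].
    for _ in range(2):
        for ti, wi in zip(gt, wt):
            theta = (ti + 1) * np.pi / 8          # map [-1, 1] -> [0, pi/4]
            rmax = 1.0 / np.cos(theta)
            for rj, wj in zip(gt, wt):
                r = (rj + 1) * rmax / 2           # map [-1, 1] -> [0, rmax]
                kernel = 1.0 / r                  # weakly singular kernel
                jac = r                           # polar Jacobian cancels it
                I += wi * wj * kernel * jac * (np.pi / 8) * (rmax / 2)
    return I

print(near_singular_polar(), "vs exact", 2 * np.log(1 + np.sqrt(2)))
```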
Funding: Supported by the National Natural Science Foundation of China (Nos. 72071146, 72091211, 72293562, and 72031006).
Abstract: Large-scale simulation optimization (SO) problems encompass both large-scale ranking-and-selection problems and high-dimensional discrete or continuous SO problems, presenting significant challenges to existing SO theories and algorithms. This paper begins by providing illustrative examples that highlight the differences between large-scale SO problems and those of a more moderate scale. Subsequently, it reviews several widely employed techniques for addressing large-scale SO problems, such as divide-and-conquer, dimension reduction, and gradient-based algorithms. Additionally, the paper examines parallelization techniques leveraging widely accessible parallel computing environments to facilitate the resolution of large-scale SO problems.
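As a toy illustration of the parallelization theme, the sketch below runs independent simulation replications of each design in parallel and selects the design with the best sample mean; this equal-allocation scheme is a generic stand-in, not a specific ranking-and-selection procedure from the survey, and the simulation model, replication count, and seeds are assumptions.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def simulate(args):
    """One batch of noisy replications for a single design (hypothetical model)."""
    design, n_reps, seed = args
    rng = np.random.default_rng(seed)
    # hypothetical performance: true mean is -design^2, observed with unit noise
    return design, rng.normal(-design ** 2, 1.0, n_reps).mean()

def select_best(designs, n_reps=200):
    """Equal-allocation sample-mean selection, with replications run in parallel."""
    jobs = [(d, n_reps, i) for i, d in enumerate(designs)]
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(simulate, jobs))
    return max(results, key=lambda t: t[1])[0]

if __name__ == "__main__":
    print("selected design:", select_best(range(-10, 11)))   # true best is design 0
```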
Funding: This research was supported by the National Natural Science Foundation of China, LSEC of CAS in Beijing, and the Natural Science Foundation.
Abstract: An NGTN method is proposed for solving large-scale sparse nonlinear programming (NLP) problems. This is a hybrid method of a truncated Newton direction and a modified negative gradient direction, which is suitable for handling sparse data structures and possesses a Q-quadratic convergence rate. The global convergence of the new method is proved, the convergence rate is further analysed, and the detailed implementation is discussed in this paper. Some numerical tests on truss optimization and large sparse problems are reported. The theoretical and numerical results show that the new method is efficient for solving large-scale sparse NLP problems.
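The truncated Newton ingredient of such hybrid methods can be sketched as a conjugate-gradient solve of the Newton system that is cut off after a few iterations and falls back to the negative gradient direction when the quadratic model breaks down; the switching rule, tolerances, and toy quadratic below are assumptions and do not reproduce the paper's NGTN rule.

```python
import numpy as np

def truncated_newton_direction(H, g, max_cg=10, tol=1e-3):
    """Approximately solve H d = -g by conjugate gradient, truncated after a few
    iterations; fall back to the steepest-descent direction if CG encounters
    non-positive curvature or fails to produce a descent direction."""
    d = np.zeros_like(g)
    r = -g.copy()              # residual of H d = -g at d = 0
    p = r.copy()
    for _ in range(max_cg):
        Hp = H @ p
        curv = p @ Hp
        if curv <= 0:          # non-positive curvature: abandon the Newton model
            return -g if np.allclose(d, 0) else d
        alpha = (r @ r) / curv
        d += alpha * p
        r_new = r - alpha * Hp
        if np.linalg.norm(r_new) < tol * np.linalg.norm(g):
            break
        p = r_new + ((r_new @ r_new) / (r @ r)) * p
        r = r_new
    return d if d @ g < 0 else -g   # keep only descent directions

# Toy usage on a convex quadratic: the direction should approach -H^{-1} g.
H = np.array([[4.0, 1.0], [1.0, 3.0]])
g = np.array([1.0, 2.0])
print(truncated_newton_direction(H, g), "vs exact", -np.linalg.solve(H, g))
```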