Owing to their global search capabilities and gradient-free operation, metaheuristic algorithms are applied to a wide range of optimization problems. However, their computational demands become prohibitive when tackling high-dimensional optimization challenges. To address these challenges, this study introduces cooperative metaheuristics integrating dynamic dimension reduction (DR). Building upon particle swarm optimization (PSO) and differential evolution (DE), the cooperative methods C-PSO and C-DE are developed. In these methods, a modified principal component analysis (PCA) is used to reduce the dimension of the design variables, thereby decreasing computational costs. The dynamic DR strategy executes the modified PCA periodically, after a fixed number of iterations, so that the important dimensions are identified dynamically. Compared with a static strategy, dynamic DR identifies the important dimensions more precisely, enabling faster convergence toward optimal solutions. Furthermore, the influence of the cumulative contribution rate threshold on optimization problems of different dimensions is investigated. The standard metaheuristics (PSO, DE) and the cooperative metaheuristics (C-PSO, C-DE) are examined on 15 benchmark functions and two engineering design problems (a speed reducer and a composite pressure vessel). Comparative results demonstrate that the cooperative methods significantly outperform the standard methods in both solution accuracy and computational efficiency, reducing computational cost by at least 40%. The cooperative metaheuristics can thus be effectively applied to both high-dimensional unconstrained and constrained optimization problems.
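The dynamic DR step described above can be sketched as follows. This is a minimal illustration, not the paper's exact "modified PCA": the function name, the eigen-decomposition route, and the loading-based ranking of original dimensions are assumptions made here for clarity.

```python
import numpy as np

def important_dims_by_pca(population, threshold=0.9):
    """Identify 'important' original dimensions from the current swarm/population.

    Keep enough principal components to reach `threshold` cumulative
    contribution rate, then rank original dimensions by their absolute
    loadings on those components (illustrative sketch only)."""
    X = population - population.mean(axis=0)
    cov = np.cov(X, rowvar=False)                  # sample covariance of design vars
    vals, vecs = np.linalg.eigh(cov)
    order = np.argsort(vals)[::-1]                 # sort eigenpairs, largest first
    vals, vecs = vals[order], vecs[:, order]
    ratio = np.cumsum(vals) / vals.sum()           # cumulative contribution rate
    k = int(np.searchsorted(ratio, threshold) + 1) # components needed to hit threshold
    scores = np.abs(vecs[:, :k]).sum(axis=1)       # loading of each original dim
    return np.argsort(scores)[::-1][:k]            # indices of important dimensions
```

In a cooperative loop, this would be re-run every fixed number of iterations on the current population, and the search restricted to the returned dimensions until the next refresh.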
Objective Humans are exposed to complex mixtures of environmental chemicals and other factors that can affect their health. Analysis of these mixture exposures presents several key challenges for environmental epidemiology and risk assessment, including high dimensionality, correlated exposures, and subtle individual effects. Methods We propose a novel statistical approach, the generalized functional linear model (GFLM), to analyze the health effects of exposure mixtures. The GFLM treats the effect of mixture exposures as a smooth function by reordering exposures based on specific mechanisms and capturing internal correlations, providing meaningful estimation and interpretation. Its robustness and efficiency were evaluated under various scenarios through extensive simulation studies. Results We applied the GFLM to two datasets from the National Health and Nutrition Examination Survey (NHANES). In the first application, we examined the effects of 37 nutrients on BMI (2011-2016 cycles). The GFLM identified a significant mixture effect, with fiber and fat emerging as the nutrients with the greatest negative and positive effects on BMI, respectively. In the second application, we investigated the association between four per- and polyfluoroalkyl substances (PFAS) and gout risk (2007-2018 cycles). Unlike traditional methods, the GFLM indicated no significant association, demonstrating its robustness to multicollinearity. Conclusion The GFLM framework is a powerful tool for mixture exposure analysis, offering improved handling of correlated exposures and interpretable results. It performs robustly across various scenarios and real-world applications, advancing our understanding of complex environmental exposures and their health impacts in environmental epidemiology and toxicology.
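The core GFLM idea, expanding a coefficient function over reordered exposures in a smooth basis, can be sketched as below. The polynomial basis and the plain least-squares fit are simplifying assumptions made for illustration; the actual GFLM would use penalized spline bases.

```python
import numpy as np

def gflm_fit(X, y, n_basis=4):
    """Toy functional linear model.

    Exposures (columns of X, assumed already reordered by mechanism) are placed
    on a grid t in [0, 1]; the coefficient function beta(t) is expanded in a
    low-order polynomial basis and the basis weights are fit by least squares."""
    n, p = X.shape
    t = np.linspace(0.0, 1.0, p)
    B = np.vander(t, n_basis, increasing=True)   # p x n_basis basis matrix
    Z = X @ B                                    # collapse p exposures to n_basis regressors
    coef, *_ = np.linalg.lstsq(Z, y, rcond=None)
    return B @ coef                              # recovered coefficient curve beta(t)
```

Because the p correlated exposures are compressed into a handful of smooth basis coefficients, the fit stays stable under multicollinearity, which is the property the abstract highlights.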
Structural Reliability-Based Topology Optimization (RBTO), as an efficient design methodology, serves as a crucial means to ensure that modern engineering structures develop toward high performance, long service life, and high reliability. However, in practical design processes, topology optimization must not only account for the static performance of structures but also consider the impacts of various responses and uncertainties under complex dynamic conditions, which traditional methods often struggle to accommodate. Therefore, this study proposes an RBTO framework based on a Kriging-assisted level set function and a novel Dynamic Hybrid Particle Swarm Optimization (DHPSO) algorithm. By using the Kriging model as a surrogate, the high cost of repeatedly running finite element analyses is reduced when minimizing structural compliance. Meanwhile, the DHPSO algorithm better balances the population's exploitative and exploratory capabilities, significantly accelerating convergence and enhancing global convergence performance. Finally, the proposed method is validated through three different structural examples. The computational results show that, compared to the traditional Solid Isotropic Material with Penalization (SIMP) method, the proposed approach reduces the upper bound of structural compliance by approximately 30%. Additionally, the optimized results exhibit clear material interfaces without grayscale elements, and the stress concentration factor is reduced by approximately 42%. Consequently, the computational results from the different examples verify the effectiveness and superiority of the proposed method across various fields, providing more precise optimization results within a shorter timeframe.
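The surrogate step, replacing repeated finite element runs with a cheap Kriging prediction, can be sketched minimally as below. The fixed Gaussian correlation length is an assumption made here; a real Kriging model would also estimate its hyperparameters by maximum likelihood.

```python
import numpy as np

def kriging_predict(X_train, y_train, X_new, length=1.0, noise=1e-8):
    """Minimal Kriging-style predictor with a Gaussian correlation kernel.

    Stands in for the surrogate that replaces repeated finite element
    compliance evaluations (illustrative sketch, fixed hyperparameters)."""
    def corr(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)  # pairwise sq. distances
        return np.exp(-d2 / (2.0 * length ** 2))
    K = corr(X_train, X_train) + noise * np.eye(len(X_train))  # regularized Gram matrix
    k_star = corr(X_new, X_train)
    return k_star @ np.linalg.solve(K, y_train)                # BLUP-style mean prediction
```

Inside DHPSO, each particle's fitness would be read from this predictor instead of a full finite element solve, which is where the cost saving comes from.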
The Dynamical Density Functional Theory (DDFT) algorithm, derived by combining classical Density Functional Theory (DFT) with the fundamental Smoluchowski dynamical equation, describes the evolution of inhomogeneous fluid density distributions over time and plays a significant role in studying such evolution in inhomogeneous systems. The Sunway Bluelight II supercomputer, a new generation of China's domestically developed supercomputers, possesses powerful computational capabilities, and porting and optimizing industrial software on this platform is of significant importance. To optimize the DDFT algorithm for the Sunway Bluelight II supercomputer and the unique hardware architecture of its SW39000 processor, this work proposes three acceleration strategies to enhance computational efficiency and performance: direct parallel optimization, local-memory-constrained optimization for the CPEs, and multi-core-group collaboration and communication optimization. The approach combines the characteristics of the program's algorithm with the hardware architecture of the Sunway Bluelight II supercomputer, optimizing storage and transmission structures to achieve a closer integration of software and hardware. For the first time, this paper presents Sunway-Dynamical Density Functional Theory (SW-DDFT). Experimental results show that SW-DDFT achieves a speedup of 6.67x within a single core group compared to the original DDFT implementation; with six core groups (384 CPEs in total), the maximum speedup reaches 28.64x, and the parallel efficiency reaches 71%, demonstrating excellent acceleration performance.
The objective of reliability-based design optimization (RBDO) is to minimize the optimization objective while satisfying the corresponding reliability requirements. However, the nested-loop characteristic reduces the efficiency of RBDO algorithms, which hinders their application to high-dimensional engineering problems. To address these issues, this paper proposes an efficient decoupled RBDO method combining high-dimensional model representation (HDMR) and the weight-point estimation method (WPEM). First, we decouple the RBDO model using HDMR and WPEM. Second, Lagrange interpolation is used to approximate each univariate function. Finally, based on the results of the first two steps, the original nested-loop reliability optimization model is completely transformed into a deterministic design optimization model that can be solved by a series of mature constrained optimization methods without any additional calculations. Two numerical examples, a planar 10-bar structure and an aviation hydraulic piping system with 28 design variables, are analyzed to illustrate the performance and practicability of the proposed method.
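The univariate approximation step above uses standard Lagrange interpolation, which can be written directly; the function name and node layout below are illustrative.

```python
def lagrange_interp(nodes, values, x):
    """Evaluate the Lagrange interpolating polynomial through the points
    (nodes[i], values[i]) at a query point x -- the univariate approximation
    used when the decoupled model is built from HDMR component functions."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(nodes, values)):
        li = 1.0
        for j, xj in enumerate(nodes):
            if j != i:
                li *= (x - xj) / (xi - xj)  # i-th Lagrange basis polynomial
        total += yi * li
    return total
```

With n + 1 nodes the interpolant reproduces any polynomial of degree up to n exactly, so a handful of weighted points per variable suffices for smooth component functions.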
Bifunctional oxide-zeolite composites (OXZEO) have emerged as promising materials for the direct conversion of syngas to olefins. However, experimental screening and optimization of reaction parameters remain resource-intensive. To address this challenge, we implemented a three-stage framework integrating machine learning, Bayesian optimization, and experimental validation, utilizing a carefully curated dataset from the literature. Our ensemble-tree model (R^(2)>0.87) identified Zn-Zr and Cu-Mg binary mixed oxides as the most effective OXZEO systems, and their light-olefin space-time yields were confirmed experimentally by physically mixing them with HSAPO-34. Density functional theory calculations further elucidated the activity trends between the Zn-Zr and Cu-Mg mixed oxides. Among 16 catalyst and reaction-condition descriptors, the oxide/zeolite ratio, reaction temperature, and pressure emerged as the most significant factors. This interpretable, data-driven framework offers a versatile approach that can be applied to other catalytic processes, providing a powerful tool for experiment design and optimization in catalysis.
A multi-objective optimization of the backfill effect based on response surface methodology and the desirability function (RSM-DF) was conducted. First, the test results show that the uniaxial compressive strength (UCS) increases with the cement-sand ratio (CSR), slurry concentration (SC), and curing age (CA), while the flow resistance (FR) increases with SC and the backfill flow rate (BFR) and decreases with CSR. Regression models with UCS and FR as response values were then established through RSM. Multi-factor interaction analysis found that the CSR-CA interaction impacted UCS the most, while SC-BFR impacted FR the most. By introducing the desirability function, the optimal backfill parameters were obtained with RSM-DF (CSR of 1:6.25, SC of 69%, CA of 11.5 d, and BFR of 90 m^(3)/h), closely matching the Design-Expert results and showing high reliability for optimization. For a copper mine in China, RSM-DF optimization will reduce cement consumption by 4758 t per year, increase tailings consumption by about 6700 t, and reduce CO_(2) emissions by about 4758 t. Thus, RSM-DF provides a new approach for backfill parameter optimization, with important theoretical and practical value.
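The desirability-function step can be sketched as below in the common Derringer-Suich form: each response is mapped to [0, 1] and the geometric mean of the individual desirabilities is the single objective to maximize. The bounds and shape parameter are placeholders, not the paper's fitted values.

```python
def desirability_smaller_is_better(y, y_min, y_max, s=1.0):
    """Individual desirability for a response to be minimized (e.g. flow
    resistance): 1 at or below y_min, 0 at or above y_max, smooth in between."""
    if y <= y_min:
        return 1.0
    if y >= y_max:
        return 0.0
    return ((y_max - y) / (y_max - y_min)) ** s

def overall_desirability(ds):
    """Geometric mean of individual desirabilities -- the composite objective
    maximized over the design space in RSM-DF."""
    prod = 1.0
    for d in ds:
        prod *= d
    return prod ** (1.0 / len(ds))
```

A "larger is better" response such as UCS uses the mirrored map ((y - y_min) / (y_max - y_min)) ** s; the optimizer then searches CSR, SC, CA, and BFR for the maximum overall desirability.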
Dear Editor, This letter investigates predefined-time optimization problems (OPs) of multi-agent systems (MASs), where each agent of the MAS is subject to inequality constraints and the team objective function accounts for impulse effects. First, to address the inequality constraints, the penalty method is introduced. Then, a novel optimization strategy is developed, which only requires that the team objective function be strongly convex.
Ant colony optimization (ACO) is a random search algorithm based on probability calculation. However, its uninformed search strategy converges slowly. The Bayesian algorithm uses the historical information of already-searched points to determine the next search point, reducing the uncertainty in the random search process. Exploiting this ability to reduce uncertainty, a Bayesian ACO algorithm is proposed in this paper to increase the convergence speed of the conventional ACO algorithm for image edge detection. In addition, this paper makes two further modifications to the classical algorithm: first, random perturbations are added after the pheromone update is completed; second, adaptive pheromone heuristics are used. Experimental results illustrate that the proposed Bayesian ACO algorithm achieves faster convergence and higher precision and recall than the traditional ant colony algorithm, owing to the improved pheromone utilization rate. Moreover, the Bayesian ACO algorithm outperforms the other comparative methods on the edge detection task.
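The first modification, perturbing the pheromone field after each update, can be sketched as follows. The evaporation rate, perturbation scale, and clipping floor are illustrative assumptions, not the paper's tuned values.

```python
import numpy as np

def update_pheromone(tau, delta, rho=0.1, eps=0.01, rng=None):
    """Standard ACO pheromone update (evaporation then deposit) followed by
    the paper's modification: a small random perturbation after the update."""
    rng = rng or np.random.default_rng()
    tau = (1.0 - rho) * tau + delta                   # evaporate, then deposit
    tau += eps * rng.uniform(-1.0, 1.0, tau.shape)    # perturb to escape local optima
    return np.clip(tau, 1e-6, None)                   # keep pheromone positive
```

In edge detection, tau lives on the pixel grid and delta accumulates deposits from ants that traversed high-gradient pixels; the perturbation keeps weakly reinforced edges from being starved too early.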
Most material-distribution-based topology optimization methods work on a relaxed form of the optimization problem and then push the solution toward the binary limits. However, when benchmarking these methods, researchers use known solutions to only a single form of benchmark problem. This paper proposes a comparison platform for systematic benchmarking of topology optimization methods using both binary and relaxed forms. A greyness measure is implemented to evaluate how far a solution is from the desired binary form. The well-known Zhou-Rozvany (ZR) problem is selected as the benchmark problem here, making use of available global solutions for both its relaxed and binary forms. The recently developed non-penalization Smooth-edged Material Distribution for Optimizing Topology (SEMDOT) method, the well-established Solid Isotropic Material with Penalization (SIMP) method, and continuation methods are studied on this platform. Interestingly, in most cases, the grayscale solutions obtained by SEMDOT handle the ZR problem better than those of SIMP. The reasons are investigated and attributed to the use of two different regularization techniques: the Heaviside smooth function in SEMDOT and the power-law penalty in SIMP. More importantly, a simple-to-use benchmarking graph is proposed for evaluating newly developed topology optimization methods.
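A greyness measure of the kind described can be computed in one line; the particular form below (mean of 4x(1-x), which is 0 for a binary design and 1 when every density is 0.5) is a common choice and an assumption here, as the paper may define its measure differently.

```python
import numpy as np

def greyness(x):
    """Greyness of a density field: 0 for a fully binary (0/1) design,
    approaching 1 as all densities tend to 0.5 (illustrative definition)."""
    x = np.asarray(x, dtype=float)
    return float(np.mean(4.0 * x * (1.0 - x)))
```

On the comparison platform, this scalar lets relaxed-form solutions from different methods be ranked by how close they are to the desired binary form.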
Magnetic skyrmions are recognized as potential information carriers for building next-generation spintronic memory and logic devices. Toward functional device applications, efficient electrical detection of skyrmions at room temperature is one of the most important prerequisites.
The Wuding River Basin, situated in the Loess Plateau of northern China, is an ecologically fragile region facing severe soil erosion and imbalanced ecosystem service (ES) functions. However, the mechanisms driving the spatiotemporal evolution of ES functions, as well as the trade-offs and synergies among them, remain poorly understood, constraining effective watershed-scale management. To address this challenge, this study quantified four ES functions, i.e., water yield (WY), carbon storage (CS), habitat quality (HQ), and soil conservation (SC), in the Wuding River Basin from 1990 to 2020 using the Integrated Valuation of Ecosystem Services and Trade-offs (InVEST) model, and proposed an innovative integration of InVEST with a Bayesian Belief Network (BBN) to nonlinearly identify trade-off and synergy relationships among ES functions through probabilistic inference. A trade-off and synergy index (TSI) was developed to assess the spatial interaction intensity among ES functions, while sensitivity and scenario analyses were employed to determine key driving factors, followed by spatial optimization to delineate functional zones. Results revealed distinct spatiotemporal variations: WY increased from 98.69 to 120.52 mm; SC rose to an average of 3.05×10^(4) t/hm^(2); CS remained relatively stable (about 15.50 t/km^(2)); and HQ averaged 0.51 with localized declines. The BBN achieved a high accuracy of 81.9% and effectively identified strong synergies between WY and SC, as well as between CS and HQ, while clear trade-offs were observed between WY and SC versus CS and HQ. Sensitivity analysis indicated precipitation (variance reduction of 9.4%), land use (9.8%), and vegetation cover (9.1%) as the key driving factors. Spatial optimization further showed that core supply and ecological regulation zones are concentrated in the central-southern and southeastern basin, while ecological strengthening and optimization core zones dominate the central-northern and southeastern margins, highlighting strong spatial heterogeneity. Overall, this study advances ES research by combining process-based quantification with probabilistic modeling, offering a robust framework for studying nonlinear interactions, driving mechanisms, and optimization strategies, and providing a transferable paradigm for watershed-scale ES management and ecological planning in arid and semi-arid areas.
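The intuition behind a trade-off/synergy index can be illustrated with a simple linear stand-in: the sign of the association between two ES layers marks synergy (positive) or trade-off (negative). The paper's TSI is inferred through the BBN and is nonlinear; the Pearson-correlation form below is purely an illustrative assumption.

```python
import numpy as np

def tsi(es_a, es_b):
    """Toy trade-off/synergy index between two ecosystem-service layers:
    +1 = perfect synergy, -1 = perfect trade-off (linear stand-in for the
    BBN-based index used in the study)."""
    a = np.asarray(es_a, dtype=float).ravel()
    b = np.asarray(es_b, dtype=float).ravel()
    a = a - a.mean()
    b = b - b.mean()
    return float((a @ b) / np.sqrt((a @ a) * (b @ b)))  # Pearson correlation
```

Applied per spatial window, such an index yields the kind of synergy map (e.g. WY with SC positive, WY with CS negative) summarized in the results.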
This paper proposes a new libration-decoupling analytical speed function (LD-ASF) in lieu of the classic analytical speed function to control the climber's speed along a partial space elevator, improving libration stability during cargo transportation. The LD-ASF is further optimized for payload transportation efficiency by a novel coordinate game theory that balances competing control objectives, payload transport speed, end-body libration stability, and overall control input, via model predictive control. The transfer period is divided into several sections to reduce the computational burden. The validity and efficacy of the proposed LD-ASF and coordinate-game-based model predictive control are demonstrated by computer simulation. Numerical results reveal that the optimized LD-ASF yields higher transportation speed, more stable end-body libration, lower thruster fuel consumption, and a more flexible optimization space than the classic analytical speed function.
Speech emotion recognition (SER) uses acoustic analysis to find features for emotion recognition and examines variations in voice that are caused by emotions. The number of features acquired through acoustic analysis is extremely high, so we introduce a hybrid filter-wrapper feature selection algorithm based on an improved equilibrium optimizer for constructing an emotion recognition system. The proposed algorithm implements multi-objective emotion recognition with the minimum number of selected features and maximum accuracy. First, we use information gain and the Fisher score to sort the features extracted from the signals. Then, we employ a multi-objective ranking method to evaluate these features and assign them different importance: features with high rankings have a large probability of being selected. Finally, we propose a repair strategy to address the problem of duplicate solutions in multi-objective feature selection, which improves the diversity of solutions and avoids falling into local traps. Using random forest and K-nearest neighbor classifiers, four English speech emotion datasets are employed to test the proposed algorithm (MBEO) as well as other multi-objective emotion identification techniques. The results illustrate that it performs well in inverted generational distance, hypervolume, Pareto solutions, and execution time, and that MBEO is appropriate for high-dimensional English SER.
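The Fisher score used in the filter stage has a standard closed form, the ratio of between-class to within-class variance per feature, which can be sketched as follows (the small denominator guard is an implementation detail added here).

```python
import numpy as np

def fisher_score(X, y):
    """Per-feature Fisher score: between-class variance over within-class
    variance. Higher scores mark features that separate the emotion classes."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y)
    mu = X.mean(axis=0)
    num = np.zeros(X.shape[1])  # between-class scatter
    den = np.zeros(X.shape[1])  # within-class scatter
    for c in np.unique(y):
        Xc = X[y == c]
        nc = len(Xc)
        num += nc * (Xc.mean(axis=0) - mu) ** 2
        den += nc * Xc.var(axis=0)
    return num / (den + 1e-12)  # guard against zero within-class variance
```

In the hybrid algorithm, this ranking (combined with information gain) biases the wrapper's selection probabilities toward the highest-scoring acoustic features.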
With the rapid development of Network Function Virtualization (NFV), the problem of low resource utilization in traditional data centers is gradually being addressed. However, existing research does not optimize both local and global allocation of resources in data centers. Hence, we propose an adaptive hybrid optimization strategy that combines dynamic programming and neural networks to improve resource utilization and service quality in data centers. Our approach encompasses a service function chain simulation generator, a parallel-architecture service system, a dynamic programming strategy for maximizing the utilization of local server resources, a neural network for predicting the global resource utilization rate, and a global resource optimization strategy for bottleneck and redundant resources. With the implementation of our local and global resource allocation strategies, system performance is significantly optimized in simulation.
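The local step, a dynamic programming strategy maximizing one server's resource utilization, can be illustrated with a 0/1-knapsack-style DP; the single-resource model below is an assumption for clarity, as the paper's DP also tracks service quality.

```python
def best_local_packing(capacity, demands):
    """0/1 knapsack DP: pick a subset of service-function resource demands
    that maximizes the used fraction of one server's capacity."""
    best = [0] * (capacity + 1)            # best[c] = max utilization with budget c
    for d in demands:
        for c in range(capacity, d - 1, -1):  # reverse scan: each demand used once
            best[c] = max(best[c], best[c - d] + d)
    return best[capacity]
```

The global stage then takes these per-server packings, predicts overall utilization with the neural network, and reshuffles bottleneck and redundant resources across servers.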
A non-probabilistic reliability topology optimization method is proposed based on an aggregation function and matrix multiplication. The expression of the geometric stiffness matrix is derived, finite element linear buckling analysis is conducted, and the sensitivity of the linear buckling factor is obtained. For the specific problem of linear buckling topology optimization, a Heaviside projection function based on exponential smooth growth is developed to eliminate gray cells. The aggregation function method is used to account for higher-order eigenvalues, so as to obtain continuous sensitivity information and a refined structural design. With cyclic matrix programming, a fast topology optimization method is developed that efficiently performs element assembly and obtains the sensitivity solution. Two types of topology optimization columns are constructed: one maximizing the buckling load, and one under a given buckling load constraint. The variable density method is used to obtain the topology optimization solution together with the method of moving asymptotes. The vertex method and the matching point method are used to carry out uncertainty propagation analysis, and the non-probabilistic reliability topology optimization method considering buckling responses is developed based on the transformation of non-probabilistic reliability indices via the characteristic distance. Finally, the differences in the structural topology optimization results under different reliability degrees are illustrated by examples.
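A Heaviside projection that eliminates gray cells can be sketched with the widely used tanh form below; note this is an assumption for illustration, since the abstract's projection is based on exponential smooth growth rather than tanh.

```python
import numpy as np

def heaviside_projection(rho, beta=8.0, eta=0.5):
    """Smooth Heaviside projection of element densities: values below the
    threshold eta are pushed toward 0 and values above toward 1, with
    sharpness controlled by beta (tanh form, illustrative)."""
    num = np.tanh(beta * eta) + np.tanh(beta * (rho - eta))
    den = np.tanh(beta * eta) + np.tanh(beta * (1.0 - eta))
    return num / den
```

The projection is differentiable, so the buckling-factor sensitivities remain continuous; beta is typically increased gradually during the optimization so the design hardens toward 0/1 without destabilizing the moving-asymptote updates.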
High-dimensional and incomplete (HDI) matrices arise in all kinds of big-data-related practical applications. A latent factor analysis (LFA) model is capable of conducting efficient representation learning on an HDI matrix, and its hyper-parameter adaptation can be implemented through a particle swarm optimizer (PSO) to meet scalability requirements. However, conventional PSO is limited by premature convergence, which leads to accuracy loss in the resultant LFA model. To address this thorny issue, this study merges the information of each particle's state migration into its evolution process, following the principle of a generalized momentum method, to improve its search ability, thereby building a state-migration particle swarm optimizer (SPSO) whose convergence is rigorously proved in this study. The SPSO is then incorporated into an LFA model to implement efficient hyper-parameter adaptation without accuracy loss. Experiments on six HDI matrices indicate that an SPSO-incorporated LFA model outperforms state-of-the-art LFA models in prediction accuracy for the missing data of an HDI matrix, with competitive computational efficiency. Hence, SPSO ensures efficient and reliable hyper-parameter adaptation in an LFA model, and thus practical and accurate representation learning for HDI matrices.
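The state-migration idea, blending each particle's recent displacement into its velocity as a momentum-like term, can be sketched as below. The weight mu and the exact placement of the term are assumptions made here; the paper's formulation may differ.

```python
import numpy as np

def spso_velocity(v, x, x_prev, pbest, gbest,
                  w=0.7, c1=1.4, c2=1.4, mu=0.1, rng=None):
    """PSO velocity update extended with a state-migration term: the particle's
    last displacement (x - x_prev) is blended in with weight mu, following a
    generalized-momentum idea (illustrative sketch)."""
    rng = rng or np.random.default_rng()
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    return (w * v
            + c1 * r1 * (pbest - x)   # cognitive pull
            + c2 * r2 * (gbest - x)   # social pull
            + mu * (x - x_prev))      # state-migration (momentum) term
```

The extra term keeps particles moving along recently productive directions even when the cognitive and social pulls vanish, which is how it counters premature convergence.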
With continuous advances in topology optimization and additive manufacturing (AM) technology, the capability to fabricate functionally graded materials and intricate cellular structures with spatially varying microstructures has grown significantly. However, a critical challenge is encountered in the design of these structures: the absence of robust interface connections between adjacent microstructures, potentially resulting in diminished efficiency or macroscopic failure. A Hybrid Level Set Method (HLSM) is proposed, specifically designed to enhance connectivity among non-uniform microstructures and contribute to the design of functionally graded cellular structures. The HLSM introduces a novel algorithm for effectively blending heterogeneous microstructure interfaces. First, an interpolation algorithm is presented to construct transition microstructures seamlessly connected on both sides. Subsequently, the algorithm enables the morphing of non-uniform unit cells to seamlessly adapt to interconnected adjacent microstructures. The method, integrated into a multi-scale topology optimization framework using the level set method, demonstrates its efficacy through numerical examples of optimizing 2D and 3D functionally graded materials (FGM) and multi-scale topology optimization. In essence, the pressing issue of interface connections in complex structure design is addressed with a robust methodology, substantiated by numerical evidence, advancing optimization capabilities for functionally graded materials and cellular structures.
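The simplest version of the interface-blending idea is a convex interpolation between the level set fields of two neighboring unit cells, producing a transition cell whose boundary morphs from one to the other. The HLSM's actual interpolation algorithm is more elaborate; the linear blend below is only a stand-in to show the mechanism.

```python
import numpy as np

def blend_level_sets(phi_a, phi_b, t):
    """Linear interpolation between two level set fields: the zero contour of
    the result morphs from microstructure A (t=0) to microstructure B (t=1),
    so the two cells connect through a smooth transition cell."""
    return (1.0 - t) * phi_a + t * phi_b
```

Stacking a few such transition cells between heterogeneous neighbors is what guarantees material continuity across the interface in a graded cellular structure.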
With the increasing dimensionality of data, high-dimensional feature selection (HFS) becomes an increasingly difficult task. It is not simple to find the best subset of features, owing to the breadth of the search space and the intricacy of the interactions between features. Many of the feature selection (FS) approaches now in use perform significantly less well when faced with such intricate high-dimensional search spaces. Meta-heuristic algorithms, however, can provide near-optimal results in an acceptable amount of time. This paper presents a new binary boosted version of the Spider Wasp Optimizer (SWO), called Binary Boosted SWO (BBSWO), which combines a number of successful and promising strategies to deal with HFS. The shortcomings of the original binary SWO (BSWO), including early convergence, settling into local optima, limited exploration and exploitation, and lack of population diversity, are addressed by this new variant. The concept of chaos optimization is introduced into BSWO, where initialization is produced by utilizing the properties of the sine chaos map. A new convergence parameter is then incorporated to achieve a promising balance between exploration and exploitation. Multiple exploration mechanisms are applied in conjunction with several exploitation strategies to effectively enrich the search process within the search space. Finally, quantum-based optimization is added to enhance the diversity of the search agents. The proposed BBSWO not only locates the most suitable subset of features, but also lessens the redundancy structure of the data. BBSWO was evaluated using the k-nearest neighbor (k-NN) classifier on 23 HFS problems from the biomedical domain, taken from the UCI repository. The results were compared with those of the traditional BSWO and other well-known meta-heuristic-based FS methods. The findings indicate that, in comparison with competing techniques, the proposed BBSWO can, on average, identify the smallest subsets of features while retaining efficient classification accuracy with the k-NN classifier.
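The chaotic initialization step can be sketched with the sine chaos map; the seed value and map parameter below are illustrative assumptions, and a real implementation would scale the resulting values onto the binary search space.

```python
import numpy as np

def sine_chaos_init(n_agents, n_dims, x0=0.7, a=4.0):
    """Initialize a population from the sine chaotic map
    x_{k+1} = (a/4) * sin(pi * x_k), whose orbit covers (0, 1) more evenly
    than naive uniform seeding for many metaheuristics."""
    pop = np.empty((n_agents, n_dims))
    x = x0
    for i in range(n_agents):
        for j in range(n_dims):
            x = (a / 4.0) * np.sin(np.pi * x)  # iterate the chaotic map
            pop[i, j] = x
    return pop
```

For binary feature selection, each entry would then be thresholded (e.g. selected if above 0.5) to yield each agent's initial feature subset.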
The estimation of covariance matrices is very important in many fields, such as statistics. In real applications, data are frequently affected by high dimensionality and noise; however, most relevant studies are based on complete data. This paper studies the optimal estimation of high-dimensional covariance matrices based on missing and noisy samples under the norm. First, a model with sub-Gaussian additive noise is presented. The generalized sample covariance is then modified to define a hard thresholding estimator, and its minimax upper bound is derived. After that, the minimax lower bound is derived, and it is concluded that the proposed estimator is rate-optimal. Finally, numerical simulation analysis is performed. The results show that, for missing samples with sub-Gaussian noise, if the true covariance matrix is sparse, the hard thresholding estimator outperforms the traditional estimation method.
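The hard thresholding operation itself is simple to state in code: off-diagonal entries of the (generalized) sample covariance below the threshold are zeroed, exploiting the sparsity of the true covariance. Preserving the diagonal is a common convention and an assumption here; the theoretically calibrated threshold level is derived in the paper.

```python
import numpy as np

def hard_threshold_cov(S, lam):
    """Hard-thresholding covariance estimator: zero every entry of the sample
    covariance S whose magnitude falls below lam, keeping the diagonal."""
    T = np.where(np.abs(S) >= lam, S, 0.0)
    np.fill_diagonal(T, np.diag(S))  # variances are never thresholded away
    return T
```

With missing data, S would first be built from pairwise-complete observations (the "generalized sample covariance"), after which the same thresholding applies.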
Funding: National Natural Science Foundation of China (Nos. 12402142, 11832013 and 11572134); Natural Science Foundation of Hubei Province (No. 2024AFB235); Hubei Provincial Department of Education Science and Technology Research Project (No. Q20221714); the Opening Foundation of Hubei Key Laboratory of Digital Textile Equipment (Nos. DTL2023019 and DTL2022012).
Fund: Supported in part by the Young Scientists Fund of the National Natural Science Foundation of China (Grant Nos. 82304253 and 82273709), the Foundation for Young Talents in Higher Education of Guangdong Province (Grant No. 2022KQNCX021), and the PhD Starting Project of Guangdong Medical University (Grant No. GDMUB2022054).
Abstract: Objective: Humans are exposed to complex mixtures of environmental chemicals and other factors that can affect their health. Analysis of these mixture exposures presents several key challenges for environmental epidemiology and risk assessment, including high dimensionality, correlated exposures, and subtle individual effects. Methods: We proposed a novel statistical approach, the generalized functional linear model (GFLM), to analyze the health effects of exposure mixtures. GFLM treats the effect of mixture exposures as a smooth function by reordering exposures based on specific mechanisms and capturing internal correlations, providing meaningful estimation and interpretation. Its robustness and efficiency were evaluated under various scenarios through extensive simulation studies. Results: We applied the GFLM to two datasets from the National Health and Nutrition Examination Survey (NHANES). In the first application, we examined the effects of 37 nutrients on BMI (2011–2016 cycles). The GFLM identified a significant mixture effect, with fiber and fat emerging as the nutrients with the greatest negative and positive effects on BMI, respectively. In the second application, we investigated the association between four per- and polyfluoroalkyl substances (PFAS) and gout risk (2007–2018 cycles). Unlike traditional methods, the GFLM indicated no significant association, demonstrating its robustness to multicollinearity. Conclusion: The GFLM framework is a powerful tool for mixture exposure analysis, offering improved handling of correlated exposures and interpretable results. It performs robustly across various scenarios and real-world applications, advancing our understanding of complex environmental exposures and their health impacts in environmental epidemiology and toxicology.
Fund: Supported by the Sichuan Science and Technology Program (2025YFHZ0065).
Abstract: Structural Reliability-Based Topology Optimization (RBTO), as an efficient design methodology, serves as a crucial means of developing modern engineering structures toward high performance, long service life, and high reliability. However, in practical design processes, topology optimization must account not only for the static performance of structures but also for the impacts of various responses and uncertainties under complex dynamic conditions, which traditional methods often struggle to accommodate. Therefore, this study proposes an RBTO framework based on a Kriging-assisted level set function and a novel Dynamic Hybrid Particle Swarm Optimization (DHPSO) algorithm. By leveraging the Kriging model as a surrogate, the high cost of repeatedly running finite element analyses is reduced, addressing the problem of minimizing structural compliance. Meanwhile, the DHPSO algorithm better balances the population's exploitative and exploratory capabilities, significantly accelerating convergence and enhancing global convergence performance. Finally, the proposed method is validated through three structural examples, demonstrating its superior performance. The computational results show that, compared to the traditional Solid Isotropic Material with Penalization (SIMP) method, the proposed approach reduces the upper bound of structural compliance by approximately 30%. Additionally, the optimized results exhibit clear material interfaces without grayscale elements, and the stress concentration factor is reduced by approximately 42%. Consequently, the computational results from different examples verify the effectiveness and superiority of this approach across various fields, providing more precise optimization results within a shorter timeframe.
Fund: Supported by the National Key Research and Development Program of China under Grant 2024YFE0210800, the National Natural Science Foundation of China under Grant 62495062, and the Beijing Natural Science Foundation under Grant L242017.
Abstract: The Dynamical Density Functional Theory (DDFT) algorithm, derived by combining classical Density Functional Theory (DFT) with the fundamental Smoluchowski dynamical equation, describes the time evolution of inhomogeneous fluid density distributions and plays a significant role in studying inhomogeneous systems. The Sunway Bluelight II supercomputer, a new generation of China's domestically developed supercomputers, possesses powerful computational capabilities, and porting and optimizing industrial software on this platform is of significant importance. To optimize the DDFT algorithm for the Sunway Bluelight II supercomputer and the unique hardware architecture of the SW39000 processor, this work proposes three acceleration strategies to enhance computational efficiency and performance: direct parallel optimization, local-memory-constrained optimization for CPEs, and multi-core-group collaboration and communication optimization. The approach combines the characteristics of the program's algorithm with the hardware architecture of the Sunway Bluelight II supercomputer, optimizing storage and transmission structures to achieve a closer integration of software and hardware. For the first time, this paper presents Sunway-Dynamical Density Functional Theory (SW-DDFT). Experimental results show that SW-DDFT achieves a speedup of 6.67 times within a single core group compared to the original DDFT implementation; with six core groups (384 CPEs in total), the maximum speedup reaches 28.64 times and parallel efficiency reaches 71%, demonstrating excellent acceleration performance.
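As a sanity check on the reported figures, the ~71% parallel efficiency is consistent with treating six ideally scaled core groups as the baseline. That reading is an assumption; the paper may define efficiency differently:

```python
single_group_speedup = 6.67   # SW-DDFT vs. original DDFT, one core group
num_core_groups = 6           # 384 CPEs in total
measured_speedup = 28.64      # best observed speedup with six core groups

# Ideal scaling would multiply the single-group speedup by the group count.
ideal_speedup = single_group_speedup * num_core_groups
parallel_efficiency = measured_speedup / ideal_speedup
print(round(parallel_efficiency, 2))  # ~0.72, close to the reported 71%
```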
Fund: Supported by the Innovation Fund Project of the Gansu Education Department (Grant No. 2021B-099).
Abstract: The objective of reliability-based design optimization (RBDO) is to minimize the optimization objective while satisfying the corresponding reliability requirements. However, the nested-loop characteristic reduces the efficiency of RBDO algorithms, which hinders their application to high-dimensional engineering problems. To address these issues, this paper proposes an efficient decoupled RBDO method combining high-dimensional model representation (HDMR) and the weight-point estimation method (WPEM). First, we decouple the RBDO model using HDMR and WPEM. Second, Lagrange interpolation is used to approximate the univariate functions. Finally, based on the results of the first two steps, the original nested-loop reliability optimization model is completely transformed into a deterministic design optimization model that can be solved by a series of mature constrained optimization methods without any additional calculations. Two numerical examples, a planar 10-bar structure and an aviation hydraulic piping system with 28 design variables, are analyzed to illustrate the performance and practicability of the proposed method.
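The second step relies on standard Lagrange interpolation of a univariate function; a minimal generic sketch (not the paper's implementation):

```python
def lagrange_interpolate(xs, ys, x):
    """Evaluate the Lagrange interpolating polynomial through the
    points (xs[i], ys[i]) at location x."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)  # basis polynomial L_i(x)
        total += term
    return total

# A quadratic is reproduced exactly from three samples: y = x**2 + 1.
xs, ys = [0.0, 1.0, 2.0], [1.0, 2.0, 5.0]
print(lagrange_interpolate(xs, ys, 1.5))  # 3.25
```

Because an n-point Lagrange interpolant is exact for polynomials of degree n-1, a handful of samples per variable suffices when the univariate components are smooth, which is what makes the decoupling step cheap.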
Fund: Funded by the KRICT Project (KK2512-10) of the Korea Research Institute of Chemical Technology, the Ministry of Trade, Industry and Energy (MOTIE) and the Korea Institute for Advancement of Technology (KIAT) through the Virtual Engineering Platform Program (P0022334), and the Carbon Neutral Industrial Strategic Technology Development Program (RS-202300261088) funded by the Ministry of Trade, Industry & Energy (MOTIE, Korea). Further support was provided by the research fund of Chungnam National University.
Abstract: Bifunctional oxide-zeolite-based composites (OXZEO) have emerged as promising materials for the direct conversion of syngas to olefins. However, experimental screening and optimization of reaction parameters remain resource-intensive. To address this challenge, we implemented a three-stage framework integrating machine learning, Bayesian optimization, and experimental validation, utilizing a carefully curated dataset from the literature. Our ensemble-tree model (R² > 0.87) identified Zn-Zr and Cu-Mg binary mixed oxides as the most effective OXZEO systems, with their light-olefin space-time yields confirmed experimentally by physically mixing them with HSAPO-34. Density functional theory calculations further elucidated the activity trends between Zn-Zr and Cu-Mg mixed oxides. Among 16 catalyst and reaction-condition descriptors, the oxide/zeolite ratio, reaction temperature, and pressure emerged as the most significant factors. This interpretable, data-driven framework offers a versatile approach that can be applied to other catalytic processes, providing a powerful tool for experiment design and optimization in catalysis.
Fund: Funded by the Deep Underground National Science & Technology Major Project Program of China (No. 2024ZD1003704) and the National Natural Science Foundation of China (Nos. 51834001 and 52374111).
Abstract: A multi-objective optimization of the backfill effect based on response surface methodology and the desirability function (RSM-DF) was conducted. First, the test results show that the uniaxial compressive strength (UCS) increases with the cement-sand ratio (CSR), slurry concentration (SC), and curing age (CA), while the flow resistance (FR) increases with SC and backfill flow rate (BFR) and decreases with CSR. Then, regression models with UCS and FR as response values were established through RSM. Multi-factor interaction analysis found that the CSR-CA interaction impacted UCS most, while SC-BFR impacted FR most. By introducing the desirability function, the optimal backfill parameters were obtained based on RSM-DF (CSR of 1:6.25, SC of 69%, CA of 11.5 d, and BFR of 90 m³/h), showing results close to those of Design-Expert and high reliability for optimization. For a copper mine in China, RSM-DF optimization will reduce cement consumption by 4758 t per year, increase tailings consumption by about 6700 t, and reduce CO₂ emissions by about 4758 t. Thus, RSM-DF provides a new approach for backfill parameter optimization, with important theoretical and practical value.
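The desirability-function step typically follows the standard Derringer-Suich construction: each response is mapped to [0, 1] and the overall desirability is the geometric mean. Whether the paper uses exactly this form and these weights is an assumption; the sketch below is generic:

```python
def desirability_larger_is_better(y, lo, hi, weight=1.0):
    """Derringer-Suich desirability for a response to be maximized:
    0 below `lo`, 1 above `hi`, and a power ramp in between."""
    if y <= lo:
        return 0.0
    if y >= hi:
        return 1.0
    return ((y - lo) / (hi - lo)) ** weight

def overall_desirability(ds):
    """Geometric mean of the individual desirabilities; a single zero
    (an unacceptable response) drives the overall score to zero."""
    prod = 1.0
    for d in ds:
        prod *= d
    return prod ** (1.0 / len(ds))

# E.g., combine a scaled UCS desirability with a scaled FR desirability.
print(round(overall_desirability([0.64, 0.81]), 4))  # 0.72
```

The optimizer then searches the factor space (CSR, SC, CA, BFR) for the point maximizing the overall desirability, which is how a single optimum emerges from competing responses.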
Fund: Supported in part by the National Natural Science Foundation of China (62276119), the Natural Science Foundation of Jiangsu Province (BK20241764), and the Postgraduate Research & Practice Innovation Program of Jiangsu Province (KYCX22_2860).
Abstract: Dear Editor, this letter investigates predefined-time optimization problems (OPs) of multi-agent systems (MASs), where the agents are subject to inequality constraints and the team objective function accounts for impulse effects. Firstly, the penalty method is introduced to address the inequality constraints. Then, a novel optimization strategy is developed that only requires the team objective function to be strongly convex.
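The penalty method mentioned in the letter converts an inequality-constrained OP into an unconstrained one by charging for constraint violation. A generic exterior-penalty sketch, illustrative only and not the letter's exact distributed formulation:

```python
def penalized(f, gs, rho):
    """Exterior penalty: add rho * sum(max(0, g(x))**2) for each
    inequality constraint g(x) <= 0, yielding an unconstrained
    objective whose minimizer approaches the constrained one as
    rho grows."""
    def fp(x):
        return f(x) + rho * sum(max(0.0, g(x)) ** 2 for g in gs)
    return fp

# Minimize x**2 subject to x >= 1, i.e. g(x) = 1 - x <= 0.
f = lambda x: x * x
g = lambda x: 1.0 - x
fp = penalized(f, [g], rho=100.0)

# With a large penalty weight the unconstrained minimizer approaches x = 1.
xs = [i / 1000.0 for i in range(0, 2001)]
x_best = min(xs, key=fp)
print(round(x_best, 2))  # 0.99
```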
Fund: Supported by the National Natural Science Foundation of China (62276055).
Abstract: Ant colony optimization (ACO) is a random search algorithm based on probability calculation. However, its uninformed search strategy leads to a slow convergence speed. The Bayesian algorithm uses the historical information of already-searched points to determine the next search point, reducing the uncertainty of the random search process. Exploiting this ability, a Bayesian ACO algorithm is proposed in this paper to increase the convergence speed of the conventional ACO algorithm for image edge detection. In addition, this paper introduces two innovations to the classical algorithm: first, random perturbations are added after the pheromone update; second, adaptive pheromone heuristics are used. Experimental results illustrate that the proposed Bayesian ACO algorithm has faster convergence and higher precision and recall than the traditional ant colony algorithm, owing to the improved pheromone utilization rate. Moreover, the Bayesian ACO algorithm outperforms the other comparative methods in the edge detection task.
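The first innovation, perturbing trails after the pheromone update, can be sketched generically. The constants and the uniform perturbation below are illustrative assumptions; the paper's exact scheme may differ:

```python
import random

def update_pheromone(tau, delta, rho=0.1, eps=0.01, rng=random.Random(42)):
    """Standard evaporation-and-deposit pheromone update followed by a
    small random perturbation on every trail, which can help the colony
    escape premature convergence. `tau` maps edges to pheromone levels,
    `delta` to the deposits from the current iteration."""
    out = {}
    for edge, t in tau.items():
        t = (1.0 - rho) * t + delta.get(edge, 0.0)  # evaporate + deposit
        t += rng.uniform(-eps, eps)                 # random perturbation
        out[edge] = max(t, 1e-9)                    # keep trails positive
    return out

tau = {("a", "b"): 1.0, ("b", "c"): 0.5}
tau = update_pheromone(tau, {("a", "b"): 0.2})
```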
Abstract: Most material-distribution-based topology optimization methods work on a relaxed form of the optimization problem and then push the solution toward the binary limits. However, when benchmarking these methods, researchers use known solutions to only a single form of benchmark problem. This paper proposes a comparison platform for systematic benchmarking of topology optimization methods using both binary and relaxed forms. A greyness measure is implemented to evaluate how far a solution is from the desired binary form. The well-known Zhou-Rozvany (ZR) problem is selected as the benchmark problem here, making use of available global solutions for both its relaxed and binary forms. The recently developed non-penalization Smooth-edged Material Distribution for Optimizing Topology (SEMDOT) method, the well-established Solid Isotropic Material with Penalization (SIMP) method, and continuation methods are studied on this platform. Interestingly, in most cases, the grayscale solutions obtained by SEMDOT handle the ZR problem better than those of SIMP. The reasons are investigated and attributed to the use of two different regularization techniques, namely, the Heaviside smooth function in SEMDOT and the power-law penalty in SIMP. More importantly, a simple-to-use benchmarking graph is proposed for evaluating newly developed topology optimization methods.
基金supported by the National Key R&D Program of China(Grant No.2022YFA1405100)the NSFC distinguished Young Scholar program(Grant No.12225409)+6 种基金the Basic Science Center Project of National Natural Science Foundation of China(NSFC)(Grant No.52388201)the NSFC general program(Grant Nos.52271181,51831005,and 12421004)the Innovation Program for Quantum Science and Technology(Grant No.2023ZD0300500)Beijing Natural Science Foundation(Grant No.Z240006)supported by the KAUST Office of Sponsored Research(OSR)under Award Nos.ORA-CRG102021-4665 and ORA-CRG11-2022-5031supported by the National Key Research and Development Program of China(No.2024YFA1408503)Sichuan Province Science and Technology Support Program(No.2025YFHZ0147)。
Abstract: Magnetic skyrmions are recognized as potential information carriers for building next-generation spintronic memory and logic devices. Toward functional device applications, efficient electrical detection of skyrmions at room temperature is one of the most important prerequisites.
Fund: Supported by the Science and Technology Project of Shaanxi Province Water Conservancy, China (2025slkj-10) and the Natural Science Basic Research Program of Shaanxi Province, China (S2025-JC-QN-2416).
Abstract: The Wuding River Basin, situated in the Loess Plateau of northern China, is an ecologically fragile region facing severe soil erosion and imbalanced ecosystem service (ES) functions. However, the mechanisms driving the spatiotemporal evolution of ES functions, as well as the trade-offs and synergies among these functions, remain poorly understood, constraining effective watershed-scale management. To address this challenge, this study quantified four ES functions, i.e., water yield (WY), carbon storage (CS), habitat quality (HQ), and soil conservation (SC), in the Wuding River Basin from 1990 to 2020 using the Integrated Valuation of Ecosystem Services and Tradeoffs (InVEST) model, and proposed an innovative integration of InVEST with a Bayesian Belief Network (BBN) to nonlinearly identify trade-off and synergy relationships among ES functions through probabilistic inference. A trade-off and synergy index (TSI) was developed to assess the spatial interaction intensity among ES functions, while sensitivity and scenario analyses were employed to determine key driving factors, followed by spatial optimization to delineate functional zones. The results revealed distinct spatiotemporal variations: WY increased from 98.69 to 120.52 mm; SC rose to an average of 3.05×10⁴ t/hm²; CS remained relatively stable (about 15.50 t/km²); and HQ averaged 0.51 with localized declines. The BBN achieved a high accuracy of 81.9% and effectively identified strong synergies between WY and SC, as well as between CS and HQ, while clear trade-offs were observed between WY and SC versus CS and HQ. Sensitivity analysis indicated precipitation (variance reduction of 9.4%), land use (9.8%), and vegetation cover (9.1%) as key driving factors. Spatial optimization further showed that core supply and ecological regulation zones are concentrated in the central-southern and southeastern basin, while ecological strengthening and optimization core zones dominate the central-northern and southeastern margins, highlighting strong spatial heterogeneity. Overall, this study advances ES research by combining process-based quantification with probabilistic modeling, offering a robust framework for studying nonlinear interactions, driving mechanisms, and optimization strategies, and providing a transferable paradigm for watershed-scale ES management and ecological planning in arid and semi-arid areas.
Fund: Funded by the National Natural Science Foundation of China (12102487), the Basic and Applied Basic Research Foundation of Guangdong Province, China (2023A1515012339), the Shenzhen Science and Technology Program (ZDSYS20210623091808026), and the Discovery Grant (RGPIN-2024-06290) of the Natural Sciences and Engineering Research Council of Canada.
Abstract: This paper proposes a new libration-decoupling analytical speed function (LD-ASF) in lieu of the classic analytical speed function to control the climber's speed along a partial space elevator and improve libration stability in cargo transportation. The LD-ASF is further optimized for payload transportation efficiency by a novel coordinate game theory that balances competing control objectives, namely payload transport speed, stable end-body libration, and overall control input, via model predictive control. The transfer period is divided into several sections to reduce the computational burden. The validity and efficacy of the proposed LD-ASF and coordinate-game-based model predictive control are demonstrated by computer simulation. Numerical results reveal that the optimized LD-ASF yields higher transportation speed, stable end-body libration, lower thrust fuel consumption, and a more flexible optimization space than the classic analytical speed function.
Abstract: Speech emotion recognition (SER) uses acoustic analysis to find features for emotion recognition and examines variations in voice that are caused by emotions. The number of features acquired through acoustic analysis is extremely high, so we introduce a hybrid filter-wrapper feature selection algorithm based on an improved equilibrium optimizer for constructing an emotion recognition system. The proposed algorithm implements multi-objective emotion recognition with the minimum number of selected features and maximum accuracy. First, we use the information gain and Fisher score to sort the features extracted from the signals. Then, we employ a multi-objective ranking method to evaluate these features and assign different importance to them; features with high rankings have a large probability of being selected. Finally, we propose a repair strategy to address the problem of duplicate solutions in multi-objective feature selection, which can improve the diversity of solutions and avoid falling into local traps. Using random forest and K-nearest neighbor classifiers, four English speech emotion datasets are employed to test the proposed algorithm (MBEO) as well as other multi-objective emotion identification techniques. The results illustrate that it performs well in inverted generational distance, hypervolume, Pareto solutions, and execution time, and that MBEO is appropriate for high-dimensional English SER.
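The Fisher score used in the filter stage ranks each feature by between-class separation of class means relative to within-class spread; a minimal generic sketch of that definition (not MBEO's code):

```python
import numpy as np

def fisher_score(X, y):
    """Fisher score per feature: sum over classes of n_c * (class mean -
    overall mean)^2, divided by the summed within-class variance.
    Higher scores mean better class separation for that feature."""
    classes = np.unique(y)
    overall = X.mean(axis=0)
    num = np.zeros(X.shape[1])
    den = np.zeros(X.shape[1])
    for c in classes:
        Xc = X[y == c]
        nc = len(Xc)
        num += nc * (Xc.mean(axis=0) - overall) ** 2
        den += nc * Xc.var(axis=0)
    return num / (den + 1e-12)  # small constant guards against zero variance

# Feature 0 separates the two classes; feature 1 is noise.
X = np.array([[0.0, 0.3], [0.1, 0.9], [1.0, 0.2], [1.1, 0.8]])
y = np.array([0, 0, 1, 1])
scores = fisher_score(X, y)
```

Sorting features by this score (together with information gain) gives the filter ranking that biases which features the wrapper search is likely to select.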
基金the Fundamental Research Program of Guangdong,China,under Grants 2020B1515310023 and 2023A1515011281in part by the National Natural Science Foundation of China under Grant 61571005.
Abstract: With the rapid development of Network Function Virtualization (NFV), the problem of low resource utilization in traditional data centers is gradually being addressed. However, existing research does not optimize both local and global allocation of resources in data centers. Hence, we propose an adaptive hybrid optimization strategy that combines dynamic programming and neural networks to improve resource utilization and service quality in data centers. Our approach encompasses a service function chain simulation generator, a parallel-architecture service system, a dynamic programming strategy for maximizing the utilization of local server resources, a neural network for predicting the global utilization rate of resources, and a global resource optimization strategy for bottleneck and redundant resources. With the implementation of our local and global resource allocation strategies, the system performance is significantly optimized, as shown through simulation.
Fund: Project supported by the National Natural Science Foundation of China (Nos. 12072007, 12072006, 12132001, and 52192632), the Ningbo Natural Science Foundation of Zhejiang Province of China (No. 202003N4018), and the Defense Industrial Technology Development Program of China (Nos. JCKY2019205A006, JCKY2019203A003, and JCKY2021204A002).
Abstract: A non-probabilistic reliability topology optimization method is proposed based on an aggregation function and matrix multiplication. The expression of the geometric stiffness matrix is derived, finite element linear buckling analysis is conducted, and the sensitivity of the linear buckling factor is obtained. For a specific problem in linear buckling topology optimization, a Heaviside projection function based on exponential smooth growth is developed to eliminate gray cells. The aggregation function method is used to account for the higher-order eigenvalues, so as to obtain continuous sensitivity information and a refined structural design. With cyclic matrix programming, a fast topology optimization method is constructed that efficiently carries out element assembly and the sensitivity solution. To maximize the buckling load under a given buckling-load constraint, two types of topology-optimized columns are constructed. The variable density method is used to obtain the topology optimization solution together with the moving asymptote optimization algorithm. The vertex method and the matching point method are used to carry out an uncertainty propagation analysis, and a non-probabilistic reliability topology optimization method considering buckling responses is developed based on the transformation of non-probabilistic reliability indices via the characteristic distance. Finally, the differences in structural topology optimization under different reliability degrees are illustrated by examples.
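A commonly used exponential smooth Heaviside projection for suppressing gray cells has the form ρ̄ = 1 − e^(−βρ) + ρe^(−β); whether the paper uses exactly this form is an assumption, so the sketch below is illustrative:

```python
import math

def heaviside_projection(rho, beta):
    """Smooth, differentiable approximation of the Heaviside step:
    maps a density rho in [0, 1] toward 0/1 as beta grows, while
    preserving the endpoints rho = 0 and rho = 1 exactly."""
    return 1.0 - math.exp(-beta * rho) + rho * math.exp(-beta)

# Intermediate densities are pushed toward 1 as beta increases,
# which is what removes gray cells during a continuation on beta.
for beta in (1.0, 8.0, 64.0):
    print(round(heaviside_projection(0.3, beta), 3))
```

In practice beta is increased gradually (a continuation scheme) so the projection stays differentiable early on and only hardens to a near-step once the layout has stabilized.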
Fund: Supported in part by the National Natural Science Foundation of China (62372385, 62272078, 62002337), the Chongqing Natural Science Foundation (CSTB2022NSCQ-MSX1486, CSTB2023NSCQ-LZX0069), and the Deanship of Scientific Research at King Abdulaziz University, Jeddah, Saudi Arabia (RG-12-135-43).
Abstract: High-dimensional and incomplete (HDI) matrices arise in all kinds of big-data-related practical applications. A latent factor analysis (LFA) model is capable of conducting efficient representation learning on an HDI matrix, and its hyper-parameter adaptation can be implemented through a particle swarm optimizer (PSO) to meet scalability requirements. However, conventional PSO is limited by premature convergence, which leads to accuracy loss in the resultant LFA model. To address this thorny issue, this study merges the information of each particle's state migration into its evolution process, following the principle of a generalized momentum method, to improve its search ability, thereby building a state-migration particle swarm optimizer (SPSO) whose theoretical convergence is rigorously proved in this study. It is then incorporated into an LFA model for efficient hyper-parameter adaptation without accuracy loss. Experiments on six HDI matrices indicate that an SPSO-incorporated LFA model outperforms state-of-the-art LFA models in prediction accuracy for the missing data of an HDI matrix, with competitive computational efficiency. Hence, SPSO ensures efficient and reliable hyper-parameter adaptation in an LFA model, enabling practical and accurate representation learning for HDI matrices.
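The generalized-momentum idea, folding each particle's past state migration back into its velocity, can be roughly sketched on top of the classic PSO update. The parameter names and the λ-weighted momentum term below are illustrative assumptions, not the authors' exact SPSO rule:

```python
import random

def pso_step(x, v, pbest, gbest, m, w=0.7, c1=1.5, c2=1.5, lam=0.3,
             rng=random.Random(1)):
    """One particle update: classic inertia/cognitive/social terms plus
    a state-migration term `m` (the particle's previous position change)
    weighted by lam, sketching the generalized-momentum principle."""
    new_x, new_v, new_m = [], [], []
    for xi, vi, pi, gi, mi in zip(x, v, pbest, gbest, m):
        vel = (w * vi
               + c1 * rng.random() * (pi - xi)   # cognitive pull
               + c2 * rng.random() * (gi - xi)   # social pull
               + lam * mi)                       # momentum of past migration
        new_v.append(vel)
        new_x.append(xi + vel)
        new_m.append(vel)                        # migration carried forward
    return new_x, new_v, new_m
```

Carrying the migration term forward damps oscillations that cause premature stagnation, which is the motivation the abstract gives for SPSO.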
基金the National Key Research and Development Program of China(Grant Number 2021YFB1714600)the National Natural Science Foundation of China(Grant Number 52075195)the Fundamental Research Funds for the Central Universities,China through Program No.2172019kfyXJJS078.
Abstract: With the continuous advancement of topology optimization and additive manufacturing (AM) technology, the capability to fabricate functionally graded materials and intricate cellular structures with spatially varying microstructures has grown significantly. However, a critical challenge is encountered in the design of these structures: the absence of robust interface connections between adjacent microstructures, potentially resulting in diminished efficiency or macroscopic failure. A Hybrid Level Set Method (HLSM) is proposed, specifically designed to enhance connectivity among non-uniform microstructures and contribute to the design of functionally graded cellular structures. The HLSM introduces a pioneering algorithm for effectively blending heterogeneous microstructure interfaces. Initially, an interpolation algorithm is presented to construct transition microstructures seamlessly connected on both sides. Subsequently, the algorithm enables the morphing of non-uniform unit cells to seamlessly adapt to interconnected adjacent microstructures. The method, integrated into a multi-scale topology optimization framework using the level set method, demonstrates its efficacy through numerical examples of optimizing 2D and 3D functionally graded materials (FGM) and multi-scale topologies. In essence, the pressing issue of interface connections in complex structure design is addressed, and a robust methodology, substantiated by numerical evidence, advances optimization capabilities in the realm of functionally graded materials and cellular structures.
Fund: Supported by the Deanship of Research and Graduate Studies (DRG) at Ajman University, Ajman, UAE (Grant No. 2023-IRG-ENIT-34).
Abstract: With the increasing dimensionality of data, high-dimensional feature selection (HFS) becomes an increasingly difficult task. It is not simple to find the best subset of features, owing to the breadth of the search space and the intricacy of the interactions between features. Many of the feature selection (FS) approaches now in use perform significantly worse when faced with such intricate, high-dimensional search spaces, whereas meta-heuristic algorithms have been demonstrated to provide sub-optimal results in an acceptable amount of time. This paper presents a new binary boosted version of the Spider Wasp Optimizer (SWO), called Binary Boosted SWO (BBSWO), which combines a number of successful and promising strategies to deal with HFS. The shortcomings of the original binary SWO (BSWO), including early convergence, settling into local optima, limited exploration and exploitation, and lack of population diversity, are addressed by this new variant. The concept of chaos optimization is introduced, where initialization is consistently produced by utilizing the properties of sine chaos mapping. A new convergence parameter is then incorporated to achieve a promising balance between exploration and exploitation. Multiple exploration mechanisms are applied in conjunction with several exploitation strategies to effectively enrich the search process within the search space. Finally, quantum-based optimization is added to enhance the diversity of the search agents. The proposed BBSWO not only finds the most suitable subset of features but also lessens the redundancy structure of the data. BBSWO was evaluated using the k-nearest neighbor (k-NN) classifier on 23 HFS problems from the biomedical domain taken from the UCI repository. The results were compared with those of traditional BSWO and other well-known meta-heuristics-based FS methods. The findings indicate that, in comparison to other competing techniques, the proposed BBSWO can, on average, identify the smallest significant subsets of features while maintaining efficient classification accuracy with the k-NN classifier.
Abstract: The estimation of covariance matrices is very important in many fields, such as statistics. In real applications, data are frequently affected by high dimensionality and noise. However, most relevant studies are based on complete data. This paper studies the optimal estimation of high-dimensional covariance matrices based on missing and noisy samples under the norm. First, a model with sub-Gaussian additive noise is presented. The generalized sample covariance is then modified to define a hard thresholding estimator, and its minimax upper bound is derived. After that, the minimax lower bound is derived, and it is concluded that the estimator presented in this article is rate-optimal. Finally, numerical simulation analysis is performed. The results show that, for missing samples with sub-Gaussian noise, if the true covariance matrix is sparse, the hard thresholding estimator outperforms the traditional estimation method.
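The hard thresholding idea, zeroing out small sample-covariance entries when the true matrix is sparse, can be sketched on fully observed data as follows. The paper's generalized estimator additionally corrects for missingness and noise, which this minimal version omits:

```python
import numpy as np

def hard_threshold_cov(X, lam):
    """Hard-thresholding covariance estimator: compute the sample
    covariance of X (rows are observations) and zero out off-diagonal
    entries whose magnitude falls below lam, exploiting sparsity of
    the true covariance. Diagonal variances are kept untouched."""
    S = np.cov(X, rowvar=False)
    T = np.where(np.abs(S) >= lam, S, 0.0)
    np.fill_diagonal(T, np.diag(S))  # never threshold the variances
    return T
```

Choosing lam on the order of sqrt(log p / n) (up to noise- and missingness-dependent constants) is the standard tuning that delivers the minimax rates discussed in the abstract.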