In recent years, the development of new types of nuclear reactors, such as transportable, marine, and space reactors, has presented new challenges for the optimization of reactor radiation-shielding design. Shielding structures typically need to be lightweight, miniaturized, and radiation-protective, which is a multi-parameter, multi-objective optimization problem. Conventional multi-objective (two or three objectives) optimization methods for radiation-shielding design struggle as the number of optimization objectives and variable parameters grows and often fail to reach a globally optimal solution, so they cannot meet the shielding-optimization requirements of newly developed reactors. In this study, genetic and artificial bee-colony algorithms are combined with a reference-point-selection strategy and applied to the many-objective (four or more objectives) optimal design of reactor radiation shielding. To validate the reliability of the methods, an optimization simulation is conducted on three-dimensional shielding structures and on a further complicated shielding-optimization problem. The numerical results demonstrate that the proposed algorithms outperform conventional shielding-design methods in optimization performance and are reliable in practical engineering problems. The many-objective optimization algorithms developed in this study are shown to search efficiently and consistently for Pareto-front shielding schemes. The proposed algorithms therefore offer novel insights into improving the shielding-design performance and shielding quality of new reactor types.
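The abstract does not include an implementation, but the many-objective notion it builds on, Pareto non-dominance over four or more minimized objectives, is standard and easy to sketch. The following Python fragment (names and sample numbers are illustrative, not from the paper) filters candidate shielding schemes down to their Pareto front:

```python
import numpy as np

def dominates(a: np.ndarray, b: np.ndarray) -> bool:
    """True if objective vector a Pareto-dominates b (all objectives minimized)."""
    return bool(np.all(a <= b) and np.any(a < b))

def pareto_front(objs: np.ndarray) -> np.ndarray:
    """Indices of non-dominated rows of an (n_solutions, n_objectives) array."""
    n = objs.shape[0]
    keep = np.ones(n, dtype=bool)
    for i in range(n):
        for j in range(n):
            if i != j and dominates(objs[j], objs[i]):
                keep[i] = False
                break
    return np.where(keep)[0]

# Four hypothetical shielding objectives per candidate: mass, volume, dose rate, cost.
scores = np.array([[1.0, 2.0, 3.0, 4.0],
                   [1.5, 1.8, 3.2, 3.9],
                   [2.0, 3.0, 4.0, 5.0]])  # last row is dominated by the first
print(pareto_front(scores))  # -> [0 1]
```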
The decoherence of high-dimensional orbital angular momentum (OAM) entanglement in the weak scintillation regime has been investigated. In this study, we simulate atmospheric turbulence with multiple phase screens imprinted with anisotropic non-Kolmogorov turbulence. Entanglement negativity and fidelity are introduced to quantify the entanglement of a high-dimensional OAM state. The numerical results indicate that entanglement negativity and fidelity persist longer for a high-dimensional OAM state when the azimuthal mode has a lower value. Additionally, the evolution of higher-dimensional OAM entanglement is significantly influenced by the OAM beam parameters and the turbulence parameters. Compared with isotropic atmospheric turbulence, anisotropic turbulence has a weaker influence on high-dimensional OAM entanglement.
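Entanglement negativity, one of the two measures quoted above, has the standard definition N(ρ) = (‖ρ^{T_A}‖₁ − 1)/2, where T_A is the partial transpose. A minimal numpy computation under that assumption (the two-qubit Bell-state example is illustrative, not from the paper):

```python
import numpy as np

def negativity(rho: np.ndarray, dim_a: int, dim_b: int) -> float:
    """Entanglement negativity N = (||rho^{T_A}||_1 - 1) / 2 of a bipartite state."""
    r = rho.reshape(dim_a, dim_b, dim_a, dim_b)
    # Partial transpose on subsystem A: swap the two A indices.
    rho_pt = r.transpose(2, 1, 0, 3).reshape(dim_a * dim_b, dim_a * dim_b)
    eigvals = np.linalg.eigvalsh(rho_pt)        # rho^{T_A} is Hermitian
    trace_norm = np.sum(np.abs(eigvals))
    return (trace_norm - 1.0) / 2.0

# Maximally entangled two-qubit Bell state: negativity = 0.5.
psi = np.array([1, 0, 0, 1]) / np.sqrt(2)
rho = np.outer(psi, psi.conj())
print(negativity(rho, 2, 2))
```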
As Internet of Things (IoT) applications expand, Mobile Edge Computing (MEC) has emerged as a promising architecture to overcome the real-time processing limitations of mobile devices. Edge-side computation offloading plays a pivotal role in MEC performance but remains challenging due to complex task topologies, conflicting objectives, and limited resources. This paper addresses high-dimensional multi-objective offloading for serial heterogeneous tasks in MEC. We jointly consider task heterogeneity, high-dimensional objectives, and flexible resource scheduling, modeling the problem as a many-objective optimization problem. To solve it, we propose a flexible framework integrating an improved cooperative co-evolutionary algorithm based on decomposition (MOCC/D) with a flexible scheduling strategy. Experimental results on benchmark functions and simulation scenarios show that the proposed method outperforms existing approaches in both convergence and solution quality.
It is known that monotone recurrence relations can induce a class of twist homeomorphisms on the high-dimensional cylinder, which extends the class of monotone twist maps on the annulus, or two-dimensional cylinder. By constructing a bounded solution of the monotone recurrence relation, we obtain the main conclusion of this paper: the induced homeomorphism has Birkhoff orbits provided there is a compact forward-invariant set. This generalizes Angenent's results in the low-dimensional case.
Owing to their global search capabilities and gradient-free operation, metaheuristic algorithms are applied to a wide range of optimization problems. However, their computational demands become prohibitive for high-dimensional optimization challenges. To address these challenges effectively, this study introduces cooperative metaheuristics integrating dynamic dimension reduction (DR). Building upon particle swarm optimization (PSO) and differential evolution (DE), the cooperative methods C-PSO and C-DE are developed. In the proposed methods, a modified principal component analysis (PCA) is used to reduce the dimension of the design variables, thereby decreasing computational costs. The dynamic DR strategy executes the modified PCA periodically, after a fixed number of iterations, so that the important dimensions are identified dynamically. Compared with a static strategy, the dynamic DR strategy identifies the important dimensions more precisely, enabling accelerated convergence toward optimal solutions. Furthermore, the influence of the cumulative contribution-rate threshold on optimization problems of different dimensions is investigated. The metaheuristic algorithms (PSO, DE) and cooperative metaheuristics (C-PSO, C-DE) are examined on 15 benchmark functions and two engineering design problems (a speed reducer and a composite pressure vessel). Comparative results demonstrate that the cooperative methods significantly outperform the standard methods in both solution accuracy and computational efficiency, reducing computational cost by at least 40%. The cooperative metaheuristics can be effectively used to tackle both high-dimensional unconstrained and constrained optimization problems.
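The abstract does not specify how the modified PCA scores dimensions, so the following Python sketch is only one plausible reading: run PCA on the current population, keep components up to the cumulative contribution-rate threshold, and rank variables by an (assumed) eigenvalue-weighted loading score.

```python
import numpy as np

def important_dimensions(population: np.ndarray, threshold: float = 0.9) -> np.ndarray:
    """Rank decision variables by their loading on the leading principal
    components, kept up to the cumulative contribution-rate threshold."""
    centered = population - population.mean(axis=0)
    cov = np.cov(centered, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1]              # descending variance
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    ratios = np.cumsum(eigvals) / np.sum(eigvals)
    k = int(np.searchsorted(ratios, threshold)) + 1  # number of components kept
    # Hypothetical scoring rule: eigenvalue-weighted absolute loadings.
    loading = np.sum(np.abs(eigvecs[:, :k]) * eigvals[:k], axis=1)
    return np.argsort(loading)[::-1]                 # most important first

pop = np.random.rand(50, 30)          # e.g. 50 particles, 30 design variables
print(important_dimensions(pop, 0.9)[:5])   # candidate "important" dimensions
```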
Objective Humans are exposed to complex mixtures of environmental chemicals and other factors that can affect their health. Analysis of these mixture exposures presents several key challenges for environmental epidemiology and risk assessment, including high dimensionality, correlated exposures, and subtle individual effects. Methods We propose a novel statistical approach, the generalized functional linear model (GFLM), to analyze the health effects of exposure mixtures. The GFLM treats the effect of mixture exposures as a smooth function by reordering exposures based on specific mechanisms and capturing internal correlations, providing meaningful estimation and interpretation. Its robustness and efficiency were evaluated under various scenarios through extensive simulation studies. Results We applied the GFLM to two datasets from the National Health and Nutrition Examination Survey (NHANES). In the first application, we examined the effects of 37 nutrients on BMI (2011–2016 cycles). The GFLM identified a significant mixture effect, with fiber and fat emerging as the nutrients with the greatest negative and positive effects on BMI, respectively. In the second application, we investigated the association between four per- and polyfluoroalkyl substances (PFAS) and gout risk (2007–2018 cycles). Unlike traditional methods, the GFLM indicated no significant association, demonstrating its robustness to multicollinearity. Conclusion The GFLM framework is a powerful tool for mixture-exposure analysis, offering improved handling of correlated exposures and interpretable results. It performs robustly across various scenarios and real-world applications, advancing our understanding of complex environmental exposures and their health impacts in environmental epidemiology and toxicology.
The objective of reliability-based design optimization (RBDO) is to minimize the optimization objective while satisfying the corresponding reliability requirements. However, the nested-loop characteristic reduces the efficiency of RBDO algorithms, which hinders their application to high-dimensional engineering problems. To address this issue, this paper proposes an efficient decoupled RBDO method combining high-dimensional model representation (HDMR) with the weight-point estimation method (WPEM). First, we decouple the RBDO model using HDMR and WPEM. Second, Lagrange interpolation is used to approximate a univariate function. Finally, based on the results of the first two steps, the original nested-loop reliability optimization model is completely transformed into a deterministic design optimization model that can be solved by a series of mature constrained optimization methods without any additional calculations. Two numerical examples, a planar 10-bar structure and an aviation hydraulic piping system with 28 design variables, are analyzed to illustrate the performance and practicality of the proposed method.
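The univariate approximation step is classical Lagrange interpolation; a self-contained Python sketch (the e^x example is illustrative, not one of the paper's component functions):

```python
import numpy as np

def lagrange_eval(x_nodes, y_nodes, x):
    """Evaluate the Lagrange interpolating polynomial through
    (x_nodes, y_nodes) at the point x."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(x_nodes, y_nodes)):
        li = 1.0                                   # i-th Lagrange basis polynomial
        for j, xj in enumerate(x_nodes):
            if j != i:
                li *= (x - xj) / (xi - xj)
        total += yi * li
    return total

# Approximate a univariate component function from three sample points.
nodes = np.array([-1.0, 0.0, 1.0])
values = np.exp(nodes)                              # samples of f(x) = e^x
print(lagrange_eval(nodes, values, 0.5), np.exp(0.5))  # interpolant vs. exact
```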
Seismic data are commonly acquired sparsely and irregularly, which necessitates regularization of the data with anti-aliasing and anti-leakage methods during processing. We propose a novel 4D anti-aliasing and anti-leakage Fourier transform method using a cube-removal strategy to address the combination of irregular sampling and aliasing in high-dimensional seismic data. We compute a weighting function by stacking the spectrum along radial lines, apply this function to suppress the aliasing energy, and then iteratively pick the dominant amplitude cube to construct the Fourier spectrum. The method is very efficient owing to the cube-removal strategy, which accelerates the convergence of the Fourier reconstruction, and a well-designed parallel architecture using CPU/GPU collaborative computing. To better fill acquisition holes in 5D seismic data while respecting the GPU memory limitation, we developed the anti-aliasing and anti-leakage Fourier transform method in 4D, looping over the remaining spatial dimension. The entire workflow comprises three steps: data splitting, 4D regularization, and data merging. Numerical tests on both synthetic and field data examples demonstrate the high efficiency and effectiveness of our approach.
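The cube-removal strategy generalizes the familiar anti-leakage Fourier transform, which greedily picks and subtracts the dominant spectral component of irregularly sampled data. A 1D toy version of that underlying loop (the sampling and frequency grid are made up for illustration; the paper's method adds 4D cubes, an anti-aliasing weight, and CPU/GPU parallelism):

```python
import numpy as np

def antileakage_ft_1d(t, data, freqs, n_iter=50):
    """Greedy anti-leakage Fourier reconstruction on an irregular grid:
    repeatedly estimate, pick, and subtract the dominant component."""
    residual = data.astype(complex).copy()
    spectrum = np.zeros(len(freqs), dtype=complex)
    for _ in range(n_iter):
        coeffs = np.array([np.sum(residual * np.exp(-2j * np.pi * f * t)) / len(t)
                           for f in freqs])        # DFT on the irregular grid
        k = np.argmax(np.abs(coeffs))              # dominant component
        spectrum[k] += coeffs[k]
        residual -= coeffs[k] * np.exp(2j * np.pi * freqs[k] * t)
    return spectrum

t = np.sort(np.random.rand(80)) * 4.0              # irregular sampling times
data = np.sin(2 * np.pi * 3.0 * t)                 # a 3 Hz signal
spec = antileakage_ft_1d(t, data, np.arange(0.0, 10.0, 0.5))
print(np.abs(spec).argmax())                       # index of the ~3 Hz bin
```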
Decomposition of a complex multi-objective optimisation problem (MOP) into multiple simple subMOPs, known as M2M for short, is an effective approach to multi-objective optimisation. However, M2M facilitates little communication or collaboration between subMOPs, which limits its use in complex optimisation scenarios. This paper extends the M2M framework into a unified algorithm for both multi-objective and many-objective optimisation. Through bilevel decomposition, an MOP is divided into multiple subMOPs at the upper level, each of which is further divided into a number of single-objective subproblems at the lower level. Neighbouring subMOPs are allowed to share some subproblems so that the knowledge gained from solving one subMOP can be transferred to another, and eventually to all the subMOPs. The bilevel decomposition is readily combined with some new mating selection and population update strategies, leading to a high-performance algorithm that competes effectively against a number of state-of-the-art algorithms studied in this paper for both multi- and many-objective optimisation. Parameter analysis and component analysis have also been carried out to further justify the proposed algorithm.
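The abstract does not say how the lower-level single-objective subproblems are formed; the Tchebycheff scalarization is the usual choice in decomposition-based algorithms and serves as a hedged illustration here:

```python
import numpy as np

def tchebycheff(f: np.ndarray, weight: np.ndarray, z_star: np.ndarray) -> float:
    """Tchebycheff scalarization g(x | w, z*) = max_i w_i * |f_i(x) - z*_i|,
    a standard way to reduce a (sub)MOP to single-objective subproblems."""
    return float(np.max(weight * np.abs(f - z_star)))

f_x = np.array([0.4, 0.7, 0.2])        # objective values of one candidate
w = np.array([0.2, 0.5, 0.3])          # weight vector of one subproblem
z = np.array([0.0, 0.0, 0.0])          # ideal point
print(tchebycheff(f_x, w, z))          # -> 0.35
```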
This paper studies the target controllability of multilayer complex networked systems in which the nodes are high-dimensional linear time-invariant (LTI) dynamical systems and the network topology is directed and weighted. The influence of inter-layer couplings on the target controllability of multilayer networks is discussed. It is found that even if one layer is not target controllable, the entire multilayer network can still be target controllable owing to the inter-layer couplings. For multilayer networks with a general structure, a necessary and sufficient condition for target controllability is given by establishing the relationship between the uncontrollable subspace and the output matrix. The derived condition shows that a system may be target controllable even if it is not state controllable. On this basis, two corollaries are derived that clarify the relationship among target controllability, state controllability, and output controllability. For multilayer networks whose inter-layer couplings are directed chains and directed stars, sufficient conditions for target controllability are given, respectively. These conditions are easier to verify than the classic criterion.
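For context, the classic criterion mentioned above is the output-controllability rank test, which treats the target nodes as outputs; a numpy sketch (the double-integrator example is illustrative, not from the paper):

```python
import numpy as np

def output_controllable(A, B, C) -> bool:
    """Classic test: the output (target) variables of x' = Ax + Bu, y = Cx are
    controllable iff rank [CB, CAB, ..., CA^{n-1}B] equals the number of outputs."""
    n = A.shape[0]
    blocks, M = [], B.copy()
    for _ in range(n):
        blocks.append(C @ M)
        M = A @ M
    return np.linalg.matrix_rank(np.hstack(blocks)) == C.shape[0]

A = np.array([[0.0, 1.0], [0.0, 0.0]])   # double integrator
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])               # target the first state only
print(output_controllable(A, B, C))      # True
```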
Speech emotion recognition (SER) uses acoustic analysis to find features for emotion recognition and examines variations in voice that are caused by emotions. The number of features acquired with acoustic analysis is extremely high, so we introduce a hybrid filter-wrapper feature selection algorithm based on an improved equilibrium optimizer for constructing an emotion recognition system. The proposed algorithm implements multi-objective emotion recognition with the minimum number of selected features and maximum accuracy. First, we use information gain and the Fisher score to sort the features extracted from the signals. Then, we employ a multi-objective ranking method to evaluate these features and assign different importance to them; features with high rankings have a large probability of being selected. Finally, we propose a repair strategy to address the problem of duplicate solutions in multi-objective feature selection, which improves the diversity of solutions and avoids falling into local traps. Using random forest and k-nearest neighbor classifiers, four English speech emotion datasets are employed to test the proposed algorithm (MBEO) against other multi-objective emotion identification techniques. The results illustrate that it performs well in inverted generational distance, hypervolume, Pareto solutions, and execution time, and that MBEO is appropriate for high-dimensional English SER.
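The Fisher score used in the filter stage has a standard per-feature form: between-class variance over within-class variance. A minimal sketch under that assumption (the dataset shapes are made up):

```python
import numpy as np

def fisher_score(X: np.ndarray, y: np.ndarray) -> np.ndarray:
    """Fisher score per feature: sum_c n_c (mu_c - mu)^2 / sum_c n_c var_c."""
    overall_mean = X.mean(axis=0)
    num = np.zeros(X.shape[1])
    den = np.zeros(X.shape[1])
    for c in np.unique(y):
        Xc = X[y == c]
        num += len(Xc) * (Xc.mean(axis=0) - overall_mean) ** 2
        den += len(Xc) * Xc.var(axis=0)
    return num / np.maximum(den, 1e-12)

X = np.random.rand(100, 20)            # 100 utterances, 20 acoustic features
y = np.random.randint(0, 4, size=100)  # 4 emotion classes
ranking = np.argsort(fisher_score(X, y))[::-1]
print(ranking[:5])                     # top-ranked features for the filter stage
```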
In this paper, we introduce the censored composite conditional quantile coefficient (cCCQC) to rank the relative importance of each predictor in high-dimensional censored regression. The cCCQC takes advantage of all useful information across quantiles and can effectively detect nonlinear effects, including interactions and heterogeneity. Furthermore, the proposed screening method based on the cCCQC is robust to the existence of outliers and enjoys the sure screening property. Simulation results demonstrate that the proposed method performs competitively on survival datasets with high-dimensional predictors, particularly when the variables are highly correlated.
With the increasing dimensionality of data, high-dimensional feature selection (HFS) becomes an increasingly difficult task. It is not simple to find the best subset of features, owing to the breadth of the search space and the intricacy of the interactions between features. Many of the feature selection (FS) approaches now in use perform significantly less well when faced with such intricate, high-dimensional search spaces. Meta-heuristic algorithms, however, can provide sub-optimal results in an acceptable amount of time. This paper presents a new binary boosted version of the Spider Wasp Optimizer (SWO), called the Binary Boosted SWO (BBSWO), which combines a number of successful and promising strategies to deal with HFS. The shortcomings of the original binary SWO (BSWO), including early convergence, stagnation in local optima, limited exploration and exploitation, and lack of population diversity, are addressed by this new variant. The concept of chaos optimization is introduced into BSWO, where initialization is consistently produced by utilizing the properties of sine chaos mapping. A new convergence parameter is then incorporated into BSWO to achieve a promising balance between exploration and exploitation. Multiple exploration mechanisms are applied in conjunction with several exploitation strategies to effectively enrich the search process within the search space. Finally, quantum-based optimization is added to enhance the diversity of the search agents. The proposed BBSWO not only locates the most suitable subset of features but also reduces the redundancy structure of the data. BBSWO was evaluated using the k-nearest neighbor (k-NN) classifier on 23 HFS problems from the biomedical domain taken from the UCI repository. The results were compared with those of the traditional BSWO and other well-known meta-heuristics-based FS methods. The findings indicate that, in comparison with competing techniques, the proposed BBSWO can, on average, identify the smallest subsets of features while retaining efficient k-NN classification accuracy.
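The abstract names sine chaos mapping for initialization; a common form of that map is x_{k+1} = μ sin(πx_k). A Python sketch under that assumption (the exact variant used in BBSWO may differ):

```python
import numpy as np

def sine_chaos_init(n_agents: int, n_dims: int, mu: float = 1.0) -> np.ndarray:
    """Population initialization from the sine chaotic map
    x_{k+1} = mu * sin(pi * x_k), iterated independently per dimension."""
    pop = np.empty((n_agents, n_dims))
    x = np.random.rand(n_dims) * 0.98 + 0.01   # seed away from the fixed point 0
    for i in range(n_agents):
        x = mu * np.sin(np.pi * x)             # one chaotic iterate per agent
        pop[i] = x
    return pop

# Chaotic samples in (0, 1], e.g. thresholded later into binary feature masks.
print(sine_chaos_init(5, 3))
```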
High-dimensional and incomplete (HDI) matrices are generated in all kinds of big-data-related practical applications. A latent factor analysis (LFA) model is capable of conducting efficient representation learning on an HDI matrix, and its hyper-parameter adaptation can be implemented through a particle swarm optimizer (PSO) to meet scalability requirements. However, conventional PSO is limited by premature convergence, which leads to accuracy loss in the resultant LFA model. To address this thorny issue, this study merges the information of each particle's state migration into its evolution process, following the principle of a generalized momentum method, to improve its search ability, thereby building a state-migration particle swarm optimizer (SPSO) whose theoretical convergence is rigorously proved in this study. The SPSO is then incorporated into an LFA model to implement efficient hyper-parameter adaptation without accuracy loss. Experiments on six HDI matrices indicate that an SPSO-incorporated LFA model outperforms state-of-the-art LFA models in prediction accuracy for the missing data of an HDI matrix, with competitive computational efficiency. Hence, SPSO ensures efficient and reliable hyper-parameter adaptation in an LFA model, ensuring practicality and accurate representation learning for HDI matrices.
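The abstract describes folding each particle's state migration into its evolution via a generalized momentum principle; below is a hedged sketch of one PSO step with such a term (the coefficient β and the exact update form are assumptions, not the paper's rule):

```python
import numpy as np

def spso_step(x, v, pbest, gbest, x_prev, w=0.7, c1=1.5, c2=1.5, beta=0.1):
    """One PSO step augmented with a momentum term beta*(x - x_prev) that
    injects each particle's recent state migration into its velocity."""
    r1, r2 = np.random.rand(*x.shape), np.random.rand(*x.shape)
    v_new = (w * v
             + c1 * r1 * (pbest - x)          # cognitive pull
             + c2 * r2 * (gbest - x)          # social pull
             + beta * (x - x_prev))           # state-migration (momentum) term
    return x + v_new, v_new

x = np.random.rand(20, 2)                     # 20 particles tuning 2 hyper-parameters
v, x_prev = np.zeros_like(x), x.copy()
pbest, gbest = x.copy(), x[0]
x_new, v = spso_step(x, v, pbest, gbest, x_prev)
```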
It is still a huge challenge for traditional Pareto-dominance-based many-objective optimization algorithms to solve many-objective optimization problems, because these algorithms can hardly maintain the balance between convergence and diversity and can only find a group of solutions focused on a small area of the Pareto front, resulting in poor performance. For this reason, we propose a reference-vector-assisted algorithm with an adaptive niche dominance relation, MaOEA-AR for short. The new dominance relation forms a niche based on the angle between candidate solutions. By comparing these solutions, the solution with the best convergence is identified as the non-dominated solution, improving the selection pressure. In reproduction, a mutation strategy of k-bit crossover and hybrid mutation is used to generate high-quality offspring. On 23 test problems with up to 15 objectives, we compared the proposed algorithm with five state-of-the-art algorithms. The experimental results verify that the proposed algorithm is competitive.
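The niche construction rests on the angle between candidate solutions in objective space; a minimal sketch of that angle test (the 0.1-radian threshold and sample vectors are illustrative):

```python
import numpy as np

def objective_angle(f1: np.ndarray, f2: np.ndarray, ideal: np.ndarray) -> float:
    """Angle (radians) between two solutions' translated objective vectors,
    used to decide whether they fall into the same niche."""
    a, b = f1 - ideal, f2 - ideal
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

f1 = np.array([0.2, 0.8, 0.1])
f2 = np.array([0.25, 0.75, 0.12])
theta = objective_angle(f1, f2, ideal=np.zeros(3))
print(theta < 0.1)   # small angle -> same niche -> compare by convergence
```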
The estimation of covariance matrices is very important in many fields, such as statistics. In real applications, data are frequently affected by high dimensionality and noise, yet most relevant studies are based on complete data. This paper studies the optimal estimation of high-dimensional covariance matrices from missing and noisy samples under a matrix norm. First, a model with sub-Gaussian additive noise is presented. The generalized sample covariance is then modified to define a hard thresholding estimator, and its minimax upper bound is derived. After that, the minimax lower bound is derived, leading to the conclusion that the proposed estimator is rate-optimal. Finally, a numerical simulation analysis is performed. The results show that, for missing samples with sub-Gaussian noise, if the true covariance matrix is sparse, the hard thresholding estimator outperforms the traditional estimation method.
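Generic hard thresholding of a sample covariance is easy to sketch; note that the paper's estimator additionally corrects the generalized sample covariance for missingness and noise, which is omitted here:

```python
import numpy as np

def hard_threshold_cov(X: np.ndarray, tau: float) -> np.ndarray:
    """Hard-thresholding covariance estimator: zero out off-diagonal sample
    covariances whose magnitude falls below tau."""
    S = np.cov(X, rowvar=False)
    T = np.where(np.abs(S) >= tau, S, 0.0)
    np.fill_diagonal(T, np.diag(S))        # keep the diagonal untouched
    return T

X = np.random.randn(200, 50)               # n = 200 samples, p = 50 dimensions
Sigma_hat = hard_threshold_cov(X, tau=0.15)
print(np.count_nonzero(Sigma_hat))          # far sparser than the full 50 x 50
```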
Guaranteed cost consensus analysis and design problems for high-dimensional multi-agent systems with time-varying delays are investigated. The idea of guaranteed cost control is introduced into consensus problems for high-dimensional multi-agent systems with time-varying delays, where a cost function is defined based on the state errors among neighboring agents and the control inputs of all the agents. By the state-space decomposition approach and the linear matrix inequality (LMI), sufficient conditions for guaranteed cost consensus and consensualization are given. Moreover, a guaranteed cost upper bound on the cost function is determined. It should be mentioned that these LMI criteria depend on the change rate of the time delays and the maximum time delay, while the guaranteed cost upper bound depends only on the maximum time delay and is independent of the Laplacian matrix. Finally, numerical simulations are given to demonstrate the theoretical results.
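A typical cost functional of the kind described, penalizing neighboring state errors and all agents' control inputs, can be written as follows (the weights and exact structure are assumptions, since the abstract does not give the formula):

```latex
J = \int_{0}^{\infty} \Bigl(
      \sum_{i=1}^{N} \sum_{j \in \mathcal{N}_i} w_{ij}
        \bigl(x_i(t)-x_j(t)\bigr)^{\top} Q \,\bigl(x_i(t)-x_j(t)\bigr)
      + \sum_{i=1}^{N} u_i(t)^{\top} R \, u_i(t)
    \Bigr)\, dt ,
\qquad Q \succeq 0, \; R \succ 0 ,
```

where \(\mathcal{N}_i\) denotes the neighbor set of agent \(i\) and \(w_{ij}\) the edge weights; guaranteed cost design then seeks a protocol ensuring consensus together with an upper bound on \(J\).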
Parallel multi-thread processing in advanced intelligent processors is the core of realizing high-speed, high-capacity signal processing systems. Optical neural networks (ONNs) have the native advantages of high parallelization, large bandwidth, and low power consumption to meet the demands of big data. Here, we demonstrate a dual-layer ONN with a Mach-Zehnder interferometer (MZI) network and a nonlinear layer, where the nonlinear activation function is achieved by optical-electronic signal conversion. Two frequency components from a microcomb source carrying digit datasets are simultaneously imposed and intelligently recognized through the ONN. We successfully achieve digit classification of the different frequency components by demultiplexing the output signal and testing the power distribution. Efficient parallelization with wavelength-division multiplexing is demonstrated in our high-dimensional ONN. This work provides a high-performance architecture for future parallel high-capacity optical analog computing.
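The MZI building block of such a network is a 2×2 unitary assembled from two 50:50 couplers and phase shifters; a numpy sketch under one common convention (not necessarily the device layout used in this work):

```python
import numpy as np

def mzi_unitary(theta: float, phi: float) -> np.ndarray:
    """2x2 transfer matrix of an MZI: coupler, internal phase, coupler,
    external phase (one convention of several; unitary by construction)."""
    bs = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)   # 50:50 directional coupler
    ps = lambda a: np.diag([np.exp(1j * a), 1.0])    # single-arm phase shifter
    return bs @ ps(theta) @ bs @ ps(phi)

U = mzi_unitary(0.3, 1.1)
print(np.allclose(U @ U.conj().T, np.eye(2)))        # unitary check: True

# A mesh of such MZIs implements a programmable NxN unitary; toy 2-mode layer:
x = np.array([0.8 + 0j, 0.6 + 0j])                   # input optical amplitudes
print(np.abs(U @ x) ** 2)                            # detected output powers
```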
The two-archive 2 algorithm (Two_Arch2) is a many-objective evolutionary algorithm that balances convergence, diversity, and complexity using a diversity archive (DA) and a convergence archive (CA). However, the individuals in the DA are selected based on traditional Pareto dominance, which decreases the selection pressure in high-dimensional problems; the traditional algorithm may even fail to converge due to the weak selection pressure. Meanwhile, Two_Arch2 adopts the DA as the output of the algorithm, which makes it hard to maintain the diversity and coverage of the final solutions simultaneously and increases the complexity of the algorithm. To increase the evolutionary pressure of the algorithm and improve the distribution and convergence of the final solutions, an ε-domination-based Two_Arch2 algorithm (ε-Two_Arch2) for many-objective problems (MaOPs) is proposed in this paper. In ε-Two_Arch2, to decrease the computational complexity and speed up convergence, a novel evolutionary framework with a fast update strategy is proposed; to increase the selection pressure, ε-domination is used to update the individuals in the DA; to guarantee a uniform distribution of the solutions, a boundary protection strategy based on the I_(ε+) indicator is designed as a two-step selection strategy to update the individuals in the CA. To evaluate the performance of the proposed algorithm, a series of benchmark functions with different numbers of objectives is solved. The results demonstrate that the proposed method is competitive with state-of-the-art multi-objective evolutionary algorithms and that its efficiency is significantly improved compared with Two_Arch2.
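ε-domination relaxes Pareto dominance so that otherwise incomparable solutions can be ordered, restoring selection pressure in many-objective spaces. A sketch of the additive variant (definitions of ε-dominance vary across the literature; this is one common form, with illustrative numbers):

```python
import numpy as np

def eps_dominates(a: np.ndarray, b: np.ndarray, eps: float) -> bool:
    """Additive epsilon-dominance for minimization: a eps-dominates b if
    a_i - eps <= b_i in every objective, strictly in at least one."""
    return bool(np.all(a - eps <= b) and np.any(a - eps < b))

a = np.array([0.50, 0.52, 0.49, 0.51])
b = np.array([0.52, 0.50, 0.51, 0.53])   # Pareto-incomparable with a
print(eps_dominates(a, b, eps=0.05))     # True under eps-dominance
```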