In this paper, a low-dimensional multiple-input and multiple-output (MIMO) model predictive control (MPC) configuration is presented for partial differential equation (PDE) unknown spatially-distributed systems ...In this paper, a low-dimensional multiple-input and multiple-output (MIMO) model predictive control (MPC) configuration is presented for partial differential equation (PDE) unknown spatially-distributed systems (SDSs). First, the dimension reduction with principal component analysis (PCA) is used to transform the high-dimensional spatio-temporal data into a low-dimensional time domain. The MPC strategy is proposed based on the online correction low-dimensional models, where the state of the system at a previous time is used to correct the output of low-dimensional models. Sufficient conditions for closed-loop stability are presented and proven. Simulations demonstrate the accuracy and efficiency of the proposed methodologies.展开更多
This paper presents a new dimension reduction strategy for medium and large-scale linear programming problems. The proposed method uses a subset of the original constraints and combines two algorithms: the weighted av...This paper presents a new dimension reduction strategy for medium and large-scale linear programming problems. The proposed method uses a subset of the original constraints and combines two algorithms: the weighted average and the cosine simplex algorithm. The first approach identifies binding constraints by using the weighted average of each constraint, whereas the second algorithm is based on the cosine similarity between the vector of the objective function and the constraints. These two approaches are complementary, and when used together, they locate the essential subset of initial constraints required for solving medium and large-scale linear programming problems. After reducing the dimension of the linear programming problem using the subset of the essential constraints, the solution method can be chosen from any suitable method for linear programming. The proposed approach was applied to a set of well-known benchmarks as well as more than 2000 random medium and large-scale linear programming problems. The results are promising, indicating that the new approach contributes to the reduction of both the size of the problems and the total number of iterations required. A tree-based classification model also confirmed the need for combining the two approaches. A detailed numerical example, the general numerical results, and the statistical analysis for the decision tree procedure are presented.展开更多
The dimensional reduction theory is applied to an A_(2)O-P_(2)O_(5)-Bi_(2)O_(3) system where P_(2)O_(5) serves as a binary parent while A_(2)O and Bi_(2)O_(3) are regarded as dimensional reduction agents.Thus,three no...The dimensional reduction theory is applied to an A_(2)O-P_(2)O_(5)-Bi_(2)O_(3) system where P_(2)O_(5) serves as a binary parent while A_(2)O and Bi_(2)O_(3) are regarded as dimensional reduction agents.Thus,three novel phosphates ACs_(5)Bi_(4)(PO_(4))_(2)(P_(2)O_(7))_(3)(A=K,Rb and Cs)have been prepared through the high-temperature solution method.Single-crystal X-ray diffraction measurements reveal that they all crystallize in the monoclinic space group P_(2)1/c(no.14)and in their structures,two different kinds of P-O units,i.e.isolated PO_(4) tetrahedra and P_(2)O_(7) dimers,are interconnected by Bi-O groups to construct a ^(3)_(∞)[Bi_(4)(PO_(4))_(2)(P_(2)O_(7))_(3)^(6-)]framework.Note that with the change of cationic size in their structures,the Bi^(3+)cations also exhibit flexible coordination,i.e.BiO_(5) and BiO_(6) polyhedra in KCs_(5)Bi_(4)(PO_(4))_(2)(P_(2)O_(7))_(3) and only BiO_(6) polyhedra in Cs_(6)Bi_(4)(PO_(4))_(2)(P_(2)O_(7))_(3),suggesting that combining the flexible coordination of Bi^(3+)cations with phosphates based on dimensional reduction theory is an effective strategy to guide the synthesis of new compounds.展开更多
Owing to their global search capabilities and gradient-free operation,metaheuristic algorithms are widely applied to a wide range of optimization problems.However,their computational demands become prohibitive when ta...Owing to their global search capabilities and gradient-free operation,metaheuristic algorithms are widely applied to a wide range of optimization problems.However,their computational demands become prohibitive when tackling high-dimensional optimization challenges.To effectively address these challenges,this study introduces cooperative metaheuristics integrating dynamic dimension reduction(DR).Building upon particle swarm optimization(PSO)and differential evolution(DE),the proposed cooperative methods C-PSO and C-DE are developed.In the proposed methods,the modified principal components analysis(PCA)is utilized to reduce the dimension of design variables,thereby decreasing computational costs.The dynamic DR strategy implements periodic execution of modified PCA after a fixed number of iterations,resulting in the important dimensions being dynamically identified.Compared with the static one,the dynamic DR strategy can achieve precise identification of important dimensions,thereby enabling accelerated convergence toward optimal solutions.Furthermore,the influence of cumulative contribution rate thresholds on optimization problems with different dimensions is investigated.Metaheuristic algorithms(PSO,DE)and cooperative metaheuristics(C-PSO,C-DE)are examined by 15 benchmark functions and two engineering design problems(speed reducer and composite pressure vessel).Comparative results demonstrate that the cooperative methods achieve significantly superior performance compared to standard methods in both solution accuracy and computational efficiency.Compared to standard metaheuristic algorithms,cooperative metaheuristics achieve a reduction in computational cost of at least 40%.The cooperative metaheuristics can be effectively used to tackle both high-dimensional unconstrained and constrained optimization problems.展开更多
The current application of porous catalytic materials for organic synthesis is always confined to comparatively simple small substrates because of the diffusion barrier.Therefore,in this study,dimensional reduction an...The current application of porous catalytic materials for organic synthesis is always confined to comparatively simple small substrates because of the diffusion barrier.Therefore,in this study,dimensional reduction and active site addition strategies were employed for preparing unique porous{RE_(9)}-clusterbased rare-earth metal-organic frameworks(MOFs){[Me_(2)NH_(2)]4[RE_(9)(pddb)_(6)(μ_(3)-O)_(2)(μ_(3)-OH)_(12)(H_(2)O)_(1.5)(HCO_(2))_(3)]·6.5DMF·11H_(2)O}_(n)(MOF-RE,RE=Tb,Y,and Dy)with high-density multiple active sites.It was found that MOF-RE are rare{RE_(9)}-based two-dimensional(2D)networks including triangularnanoporous(1.3 nm)and triangular-microporous(0.8 nm)channels decorated by abundant Lewis acid-base sites(open RE_((III))sites and Npyridine atoms)on the inner surface.As anticipated,due to the coexistence of Lewis acid-base sites,activated samples exhibited better catalytic activity(a yield of 96%,and a TON value of 768 for styrene oxide)than most previously reported 3D MOF materials for the cycloaddition of CO_(2) and multifarious epoxides under moderate conditions.Moreover,as a heterogeneous catalyst,MOF-Tb,has excellent catalytic performance(with a TON value of 396 for benzaldehyde)for the Knoevenagel condensation reaction of malononitrile and aldehydes with high catalytic stability and recoverability.In addition,both reactions possessed high turnover numbers and frequencies.These dimensional reduction and active site addition tactics may permit the exploitation of new nanoporous MOF catalysts based on rare-earth clusters for useful and intricate organic conversions.展开更多
Gas turbine rotors are complex dynamic systems with high-dimensional,discrete,and multi-source nonlinear coupling characteristics.Significant amounts of resources and time are spent during the process of solving dynam...Gas turbine rotors are complex dynamic systems with high-dimensional,discrete,and multi-source nonlinear coupling characteristics.Significant amounts of resources and time are spent during the process of solving dynamic characteristics.Therefore,it is necessary to design a lowdimensional model that can well reflect the dynamic characteristics of high-dimensional system.To build such a low-dimensional model,this study developed a dimensionality reduction method considering global order energy distribution by modifying the proper orthogonal decomposition theory.First,sensitivity analysis of key dimensionality reduction parameters to the energy distribution was conducted.Then a high-dimensional rotor-bearing system considering the nonlinear stiffness and oil film force was reduced,and the accuracy and the reusability of the low-dimensional model under different operating conditions were examined.Finally,the response results of a multi-disk rotor-bearing test bench were reduced using the proposed method,and spectrum results were then compared experimentally.Numerical and experimental results demonstrate that,during the dimensionality reduction process,the solution period of dynamic response results has the most significant influence on the accuracy of energy preservation.The transient signal in the transformation matrix mainly affects the high-order energy distribution of the rotor system.The larger the proportion of steady-state signals is,the closer the energy tends to accumulate towards lower orders.The low-dimensional rotor model accurately reflects the frequency response characteristics of the original high-dimensional system with an accuracy of up to 98%.The proposed dimensionality reduction method exhibits significant application potential in the dynamic analysis of highdimensional systems coupled with strong nonlinearities under variable operating conditions.展开更多
The inversion of large sparse matrices poses a major challenge in geophysics,particularly in Bayesian seismic inversion,significantly limiting computational efficiency and practical applicability to largescale dataset...The inversion of large sparse matrices poses a major challenge in geophysics,particularly in Bayesian seismic inversion,significantly limiting computational efficiency and practical applicability to largescale datasets.Existing dimensionality reduction methods have achieved partial success in addressing this issue.However,they remain limited in terms of the achievable degree of dimensionality reduction.An incremental deep dimensionality reduction approach is proposed herein to significantly reduce matrix size and is applied to Bayesian linearized inversion(BLI),a stochastic seismic inversion approach that heavily depends on large sparse matrices inversion.The proposed method first employs a linear transformation based on the discrete cosine transform(DCT)to extract the matrix's essential information and eliminate redundant components,forming the foundation of the dimensionality reduction framework.Subsequently,an innovative iterative DCT-based dimensionality reduction process is applied,where the reduction magnitude is carefully calibrated at each iteration to incrementally reduce dimensionality,thereby effectively eliminating matrix redundancy in depth.This process is referred to as the incremental discrete cosine transform(IDCT).Ultimately,a linear IDCT-based reduction operator is constructed and applied to the kernel matrix inversion in BLI,resulting in a more efficient BLI framework.The proposed method was evaluated through synthetic and field data tests and compared with conventional dimensionality reduction methods.The IDCT approach significantly improves the dimensionality reduction efficiency of the core inversion matrix while preserving inversion accuracy,demonstrating prominent advantages in solving Bayesian inverse problems more efficiently.展开更多
The proliferation of high-dimensional data and the widespread use of complex models present central challenges in contemporary statistics and data science.Dimension reduction and model checking,as two foundational pil...The proliferation of high-dimensional data and the widespread use of complex models present central challenges in contemporary statistics and data science.Dimension reduction and model checking,as two foundational pillars supporting scientific inference and data-driven decisionmaking,have evolved through the collective wisdom of generations of statisticians.This special issue,titled"Recent Developments in Dimension Reduction and Model Checking for regressions",not only aims to showcase cutting-edge advances in the field but also carries a distinct sense of academic homage to honor the groundbreaking and enduring contributions of Professor Lixing Zhu,a leading scholar whose work has profoundly shaped both areas.展开更多
In this note,the authors revisit the envelope dimension reduction,which was first introduced for estimating a sufficient dimension reduction subspace without inverting the sample covariance.Motivated by the recent dev...In this note,the authors revisit the envelope dimension reduction,which was first introduced for estimating a sufficient dimension reduction subspace without inverting the sample covariance.Motivated by the recent developments in envelope methods and algorithms,the authors refresh the envelope inverse regression as a flexible alternative to the existing inverse regression methods in dimension reduction.The authors discuss the versatility of the envelope approach and demonstrate the advantages of the envelope dimension reduction through simulation studies.展开更多
In this paper,the authors propose a nonlinear dimension reduction technique based on Fréchet inverse regression to achieve sufficient dimension reduction for responses in metric spaces and predictors in Riemannia...In this paper,the authors propose a nonlinear dimension reduction technique based on Fréchet inverse regression to achieve sufficient dimension reduction for responses in metric spaces and predictors in Riemannian manifolds.The authors rigorously establish statistical properties of the estimators,providing formal proofs of their consistency and asymptotic behaviors.The effectiveness of our method is demonstrated through extensive simulations and applications to real-world datasets which highlight its practical utility for complex data with non-Euclidean structures.展开更多
Classical linear discriminant analysis(LDA)(Fisher,1936)implicitly assumes the classification boundary depends on only one linear combination of the predictors.This restriction can lead to poor classification in appli...Classical linear discriminant analysis(LDA)(Fisher,1936)implicitly assumes the classification boundary depends on only one linear combination of the predictors.This restriction can lead to poor classification in applications where the decision boundary depends on multiple linear combinations of the predictors.To overcome this challenge,the authors first project the predictors onto an envelope central space and then perform LDA based on the sufficient predictor.The performance of the proposed method in improving classification accuracy is demonstrated in both synthetic data and real applications.展开更多
In order to accurately identify speech emotion information, the discriminant-cascading effect in dimensionality reduction of speech emotion recognition is investigated. Based on the existing locality preserving projec...In order to accurately identify speech emotion information, the discriminant-cascading effect in dimensionality reduction of speech emotion recognition is investigated. Based on the existing locality preserving projections and graph embedding framework, a novel discriminant-cascading dimensionality reduction method is proposed, which is named discriminant-cascading locality preserving projections (DCLPP). The proposed method specifically utilizes supervised embedding graphs and it keeps the original space for the inner products of samples to maintain enough information for speech emotion recognition. Then, the kernel DCLPP (KDCLPP) is also proposed to extend the mapping form. Validated by the experiments on the corpus of EMO-DB and eNTERFACE'05, the proposed method can clearly outperform the existing common dimensionality reduction methods, such as principal component analysis (PCA), linear discriminant analysis (LDA), locality preserving projections (LPP), local discriminant embedding (LDE), graph-based Fisher analysis (GbFA) and so on, with different categories of classifiers.展开更多
Some dimensionality reduction (DR) approaches based on support vector machine (SVM) are proposed. But the acquirement of the projection matrix in these approaches only considers the between-class margin based on S...Some dimensionality reduction (DR) approaches based on support vector machine (SVM) are proposed. But the acquirement of the projection matrix in these approaches only considers the between-class margin based on SVM while ignoring the within-class information in data. This paper presents a new DR approach, call- ed the dimensionality reduction based on SVM and LDA (DRSL). DRSL considers the between-class margins from SVM and LDA, and the within-class compactness from LDA to obtain the projection matrix. As a result, DRSL can realize the combination of the between-class and within-class information and fit the between-class and within-class structures in data. Hence, the obtained projection matrix increases the generalization ability of subsequent classification techniques. Experiments applied to classification techniques show the effectiveness of the proposed method.展开更多
Comprehensive Summary Developing new catalysts for highly selectivity and conversion of saturated C(sp^(3))-H bonds is of great significance.In order to obtain catalysts with high catalytic performance,six Eu-based MO...Comprehensive Summary Developing new catalysts for highly selectivity and conversion of saturated C(sp^(3))-H bonds is of great significance.In order to obtain catalysts with high catalytic performance,six Eu-based MOFs with different structural characteristics were obtained by using europium ions and different organic acid ligands,namely Eu-1~Eu-6.Eu-1,Eu-2 and Eu-3 featured three-dimensional structures,while Eu-4 and Eu-5 featured two-dimensional structures.展开更多
We present a new algorithm for manifold learning and nonlinear dimensionality reduction. Based on a set of unorganized data points sampled with noise from a parameterized manifold, the local geometry of the manifold i...We present a new algorithm for manifold learning and nonlinear dimensionality reduction. Based on a set of unorganized data points sampled with noise from a parameterized manifold, the local geometry of the manifold is learned by constructing an approximation for the tangent space at each point, and those tangent spaces are then aligned to give the global coordinates of the data points with respect to the underlying manifold. We also present an error analysis of our algorithm showing that reconstruction errors can be quite small in some cases. We illustrate our algorithm using curves and surfaces both in 2D/3D Euclidean spaces and higher dimensional Euclidean spaces. We also address several theoretical and algorithmic issues for further research and improvements.展开更多
In aerodynamic optimization, global optimization methods such as genetic algorithms are preferred in many cases because of their advantage on reaching global optimum. However,for complex problems in which large number...In aerodynamic optimization, global optimization methods such as genetic algorithms are preferred in many cases because of their advantage on reaching global optimum. However,for complex problems in which large number of design variables are needed, the computational cost becomes prohibitive, and thus original global optimization strategies are required. To address this need, data dimensionality reduction method is combined with global optimization methods, thus forming a new global optimization system, aiming to improve the efficiency of conventional global optimization. The new optimization system involves applying Proper Orthogonal Decomposition(POD) in dimensionality reduction of design space while maintaining the generality of original design space. Besides, an acceleration approach for samples calculation in surrogate modeling is applied to reduce the computational time while providing sufficient accuracy. The optimizations of a transonic airfoil RAE2822 and the transonic wing ONERA M6 are performed to demonstrate the effectiveness of the proposed new optimization system. In both cases, we manage to reduce the number of design variables from 20 to 10 and from 42 to 20 respectively. The new design optimization system converges faster and it takes 1/3 of the total time of traditional optimization to converge to a better design, thus significantly reducing the overall optimization time and improving the efficiency of conventional global design optimization method.展开更多
In the need of some real applications, such as text categorization and image classification, the multi-label learning gradually becomes a hot research point in recent years. Much attention has been paid to the researc...In the need of some real applications, such as text categorization and image classification, the multi-label learning gradually becomes a hot research point in recent years. Much attention has been paid to the research of multi-label classification algorithms. Considering the fact that the high dimensionality of the multi-label datasets may cause the curse of dimensionality and wil hamper the classification process, a dimensionality reduction algorithm, named multi-label kernel discriminant analysis (MLKDA), is proposed to reduce the dimensionality of multi-label datasets. MLKDA, with the kernel trick, processes the multi-label integrally and realizes the nonlinear dimensionality reduction with the idea similar with linear discriminant analysis (LDA). In the classification process of multi-label data, the extreme learning machine (ELM) is an efficient algorithm in the premise of good accuracy. MLKDA, combined with ELM, shows a good performance in multi-label learning experiments with several datasets. The experiments on both static data and data stream show that MLKDA outperforms multi-label dimensionality reduction via dependence maximization (MDDM) and multi-label linear discriminant analysis (MLDA) in cases of balanced datasets and stronger correlation between tags, and ELM is also a good choice for multi-label classification.展开更多
Graph learning is an effective manner to analyze the intrinsic properties of data.It has been widely used in the fields of dimensionality reduction and classification for data.In this paper,we focus on the graph learn...Graph learning is an effective manner to analyze the intrinsic properties of data.It has been widely used in the fields of dimensionality reduction and classification for data.In this paper,we focus on the graph learning-based dimensionality reduction for a hyperspectral image.Firstly,we review the development of graph learning and its application in a hyperspectral image.Then,we mainly discuss several representative graph methods including two manifold learning methods,two sparse graph learning methods,and two hypergraph learning methods.For manifold learning,we analyze neighborhood preserving embedding and locality preserving projections which are two classic manifold learning methods and can be transformed into the form of a graph.For sparse graph,we introduce sparsity preserving graph embedding and sparse graph-based discriminant analysis which can adaptively reveal data structure to construct a graph.For hypergraph learning,we review binary hypergraph and discriminant hyper-Laplacian projection which can represent the high-order relationship of data.展开更多
Dimensionality reduction and data visualization are useful and important processes in pattern recognition. Many techniques have been developed in the recent years. The self-organizing map (SOM) can be an efficient m...Dimensionality reduction and data visualization are useful and important processes in pattern recognition. Many techniques have been developed in the recent years. The self-organizing map (SOM) can be an efficient method for this purpose. This paper reviews recent advances in this area and related approaches such as multidimensional scaling (MDS), nonlinear PC A, principal manifolds, as well as the connections of the SOM and its recent variant, the visualization induced SOM (ViSOM), with these approaches. The SOM is shown to produce a quantized, qualitative scaling and while the ViSOM a quantitative or metric scaling and approximates principal curve/surface. The SOM can also be regarded as a generalized MDS to relate two metric spaces by forming a topological mapping between them. The relationships among various recently proposed techniques such as ViSOM, Isomap, LLE, and eigenmap are discussed and compared.展开更多
Image feature optimization is an important means to deal with high-dimensional image data in image semantic understanding and its applications. We formulate image feature optimization as the establishment of a mapping...Image feature optimization is an important means to deal with high-dimensional image data in image semantic understanding and its applications. We formulate image feature optimization as the establishment of a mapping between highand low-dimensional space via a five-tuple model. Nonlinear dimensionality reduction based on manifold learning provides a feasible way for solving such a problem. We propose a novel globular neighborhood based locally linear embedding (GNLLE) algorithm using neighborhood update and an incremental neighbor search scheme, which not only can handle sparse datasets but also has strong anti-noise capability and good topological stability. Given that the distance measure adopted in nonlinear dimensionality reduction is usually based on pairwise similarity calculation, we also present a globular neighborhood and path clustering based locally linear embedding (GNPCLLE) algorithm based on path-based clustering. Due to its full consideration of correlations between image data, GNPCLLE can eliminate the distortion of the overall topological structure within the dataset on the manifold. Experimental results on two image sets show the effectiveness and efficiency of the proposed algorithms.展开更多
基金supported by National High Technology Research and Development Program of China (863 Program)(No. 2009AA04Z162)National Nature Science Foundation of China(No. 60825302, No. 60934007, No. 61074061)+1 种基金Program of Shanghai Subject Chief Scientist,"Shu Guang" project supported by Shang-hai Municipal Education Commission and Shanghai Education Development FoundationKey Project of Shanghai Science and Technology Commission, China (No. 10JC1403400)
文摘In this paper, a low-dimensional multiple-input and multiple-output (MIMO) model predictive control (MPC) configuration is presented for partial differential equation (PDE) unknown spatially-distributed systems (SDSs). First, the dimension reduction with principal component analysis (PCA) is used to transform the high-dimensional spatio-temporal data into a low-dimensional time domain. The MPC strategy is proposed based on the online correction low-dimensional models, where the state of the system at a previous time is used to correct the output of low-dimensional models. Sufficient conditions for closed-loop stability are presented and proven. Simulations demonstrate the accuracy and efficiency of the proposed methodologies.
文摘This paper presents a new dimension reduction strategy for medium and large-scale linear programming problems. The proposed method uses a subset of the original constraints and combines two algorithms: the weighted average and the cosine simplex algorithm. The first approach identifies binding constraints by using the weighted average of each constraint, whereas the second algorithm is based on the cosine similarity between the vector of the objective function and the constraints. These two approaches are complementary, and when used together, they locate the essential subset of initial constraints required for solving medium and large-scale linear programming problems. After reducing the dimension of the linear programming problem using the subset of the essential constraints, the solution method can be chosen from any suitable method for linear programming. The proposed approach was applied to a set of well-known benchmarks as well as more than 2000 random medium and large-scale linear programming problems. The results are promising, indicating that the new approach contributes to the reduction of both the size of the problems and the total number of iterations required. A tree-based classification model also confirmed the need for combining the two approaches. A detailed numerical example, the general numerical results, and the statistical analysis for the decision tree procedure are presented.
基金supported by the National Natural Science Foundation of China(Grant No.51802217,22071179,51972230,51890864,and 51890865)Natural Science Foundation of Tianjin(Grant No.20JCJQJC00060 and 19JCZDJC38200)the National Key R&D Program(Grant No.2016YFB0402103).
文摘The dimensional reduction theory is applied to an A_(2)O-P_(2)O_(5)-Bi_(2)O_(3) system where P_(2)O_(5) serves as a binary parent while A_(2)O and Bi_(2)O_(3) are regarded as dimensional reduction agents.Thus,three novel phosphates ACs_(5)Bi_(4)(PO_(4))_(2)(P_(2)O_(7))_(3)(A=K,Rb and Cs)have been prepared through the high-temperature solution method.Single-crystal X-ray diffraction measurements reveal that they all crystallize in the monoclinic space group P_(2)1/c(no.14)and in their structures,two different kinds of P-O units,i.e.isolated PO_(4) tetrahedra and P_(2)O_(7) dimers,are interconnected by Bi-O groups to construct a ^(3)_(∞)[Bi_(4)(PO_(4))_(2)(P_(2)O_(7))_(3)^(6-)]framework.Note that with the change of cationic size in their structures,the Bi^(3+)cations also exhibit flexible coordination,i.e.BiO_(5) and BiO_(6) polyhedra in KCs_(5)Bi_(4)(PO_(4))_(2)(P_(2)O_(7))_(3) and only BiO_(6) polyhedra in Cs_(6)Bi_(4)(PO_(4))_(2)(P_(2)O_(7))_(3),suggesting that combining the flexible coordination of Bi^(3+)cations with phosphates based on dimensional reduction theory is an effective strategy to guide the synthesis of new compounds.
基金funded by National Natural Science Foundation of China(Nos.12402142,11832013 and 11572134)Natural Science Foundation of Hubei Province(No.2024AFB235)+1 种基金Hubei Provincial Department of Education Science and Technology Research Project(No.Q20221714)the Opening Foundation of Hubei Key Laboratory of Digital Textile Equipment(Nos.DTL2023019 and DTL2022012).
文摘Owing to their global search capabilities and gradient-free operation,metaheuristic algorithms are widely applied to a wide range of optimization problems.However,their computational demands become prohibitive when tackling high-dimensional optimization challenges.To effectively address these challenges,this study introduces cooperative metaheuristics integrating dynamic dimension reduction(DR).Building upon particle swarm optimization(PSO)and differential evolution(DE),the proposed cooperative methods C-PSO and C-DE are developed.In the proposed methods,the modified principal components analysis(PCA)is utilized to reduce the dimension of design variables,thereby decreasing computational costs.The dynamic DR strategy implements periodic execution of modified PCA after a fixed number of iterations,resulting in the important dimensions being dynamically identified.Compared with the static one,the dynamic DR strategy can achieve precise identification of important dimensions,thereby enabling accelerated convergence toward optimal solutions.Furthermore,the influence of cumulative contribution rate thresholds on optimization problems with different dimensions is investigated.Metaheuristic algorithms(PSO,DE)and cooperative metaheuristics(C-PSO,C-DE)are examined by 15 benchmark functions and two engineering design problems(speed reducer and composite pressure vessel).Comparative results demonstrate that the cooperative methods achieve significantly superior performance compared to standard methods in both solution accuracy and computational efficiency.Compared to standard metaheuristic algorithms,cooperative metaheuristics achieve a reduction in computational cost of at least 40%.The cooperative metaheuristics can be effectively used to tackle both high-dimensional unconstrained and constrained optimization problems.
基金support from the NSFC(22071194)Natural Science Foundation of Henan Province(232300421232).
文摘The current application of porous catalytic materials for organic synthesis is always confined to comparatively simple small substrates because of the diffusion barrier.Therefore,in this study,dimensional reduction and active site addition strategies were employed for preparing unique porous{RE_(9)}-clusterbased rare-earth metal-organic frameworks(MOFs){[Me_(2)NH_(2)]4[RE_(9)(pddb)_(6)(μ_(3)-O)_(2)(μ_(3)-OH)_(12)(H_(2)O)_(1.5)(HCO_(2))_(3)]·6.5DMF·11H_(2)O}_(n)(MOF-RE,RE=Tb,Y,and Dy)with high-density multiple active sites.It was found that MOF-RE are rare{RE_(9)}-based two-dimensional(2D)networks including triangularnanoporous(1.3 nm)and triangular-microporous(0.8 nm)channels decorated by abundant Lewis acid-base sites(open RE_((III))sites and Npyridine atoms)on the inner surface.As anticipated,due to the coexistence of Lewis acid-base sites,activated samples exhibited better catalytic activity(a yield of 96%,and a TON value of 768 for styrene oxide)than most previously reported 3D MOF materials for the cycloaddition of CO_(2) and multifarious epoxides under moderate conditions.Moreover,as a heterogeneous catalyst,MOF-Tb,has excellent catalytic performance(with a TON value of 396 for benzaldehyde)for the Knoevenagel condensation reaction of malononitrile and aldehydes with high catalytic stability and recoverability.In addition,both reactions possessed high turnover numbers and frequencies.These dimensional reduction and active site addition tactics may permit the exploitation of new nanoporous MOF catalysts based on rare-earth clusters for useful and intricate organic conversions.
基金supported by the China Postdoctoral Science Foundation(No.2024M764171)the Postdoctoral Research Start-up Funds,China(No.AUGA5710027424)+1 种基金the National Natural Science Foundation of China(No.U2341237)the Development and construction funds for the School of Mechatronics Engineering of HIT,China(No.CBQQ8880103624)。
文摘Gas turbine rotors are complex dynamic systems with high-dimensional,discrete,and multi-source nonlinear coupling characteristics.Significant amounts of resources and time are spent during the process of solving dynamic characteristics.Therefore,it is necessary to design a lowdimensional model that can well reflect the dynamic characteristics of high-dimensional system.To build such a low-dimensional model,this study developed a dimensionality reduction method considering global order energy distribution by modifying the proper orthogonal decomposition theory.First,sensitivity analysis of key dimensionality reduction parameters to the energy distribution was conducted.Then a high-dimensional rotor-bearing system considering the nonlinear stiffness and oil film force was reduced,and the accuracy and the reusability of the low-dimensional model under different operating conditions were examined.Finally,the response results of a multi-disk rotor-bearing test bench were reduced using the proposed method,and spectrum results were then compared experimentally.Numerical and experimental results demonstrate that,during the dimensionality reduction process,the solution period of dynamic response results has the most significant influence on the accuracy of energy preservation.The transient signal in the transformation matrix mainly affects the high-order energy distribution of the rotor system.The larger the proportion of steady-state signals is,the closer the energy tends to accumulate towards lower orders.The low-dimensional rotor model accurately reflects the frequency response characteristics of the original high-dimensional system with an accuracy of up to 98%.The proposed dimensionality reduction method exhibits significant application potential in the dynamic analysis of highdimensional systems coupled with strong nonlinearities under variable operating conditions.
基金partly supported by Hainan Provincial Joint Project of Sanya Yazhou Bay Science and Technology City(2021JJLH0052)National Natural Science Foundation of China(42274154,42304116)+2 种基金Natural Science Foundation of Heilongjiang Province,China(LH2024D013)Heilongjiang Postdoctoral Fund(LBHZ23103)Hainan Yazhou Bay Science and Technology City Jingying Talent Project(SKJC-JYRC-2024-05)。
文摘The inversion of large sparse matrices poses a major challenge in geophysics,particularly in Bayesian seismic inversion,significantly limiting computational efficiency and practical applicability to largescale datasets.Existing dimensionality reduction methods have achieved partial success in addressing this issue.However,they remain limited in terms of the achievable degree of dimensionality reduction.An incremental deep dimensionality reduction approach is proposed herein to significantly reduce matrix size and is applied to Bayesian linearized inversion(BLI),a stochastic seismic inversion approach that heavily depends on large sparse matrices inversion.The proposed method first employs a linear transformation based on the discrete cosine transform(DCT)to extract the matrix's essential information and eliminate redundant components,forming the foundation of the dimensionality reduction framework.Subsequently,an innovative iterative DCT-based dimensionality reduction process is applied,where the reduction magnitude is carefully calibrated at each iteration to incrementally reduce dimensionality,thereby effectively eliminating matrix redundancy in depth.This process is referred to as the incremental discrete cosine transform(IDCT).Ultimately,a linear IDCT-based reduction operator is constructed and applied to the kernel matrix inversion in BLI,resulting in a more efficient BLI framework.The proposed method was evaluated through synthetic and field data tests and compared with conventional dimensionality reduction methods.The IDCT approach significantly improves the dimensionality reduction efficiency of the core inversion matrix while preserving inversion accuracy,demonstrating prominent advantages in solving Bayesian inverse problems more efficiently.
文摘The proliferation of high-dimensional data and the widespread use of complex models present central challenges in contemporary statistics and data science.Dimension reduction and model checking,as two foundational pillars supporting scientific inference and data-driven decisionmaking,have evolved through the collective wisdom of generations of statisticians.This special issue,titled"Recent Developments in Dimension Reduction and Model Checking for regressions",not only aims to showcase cutting-edge advances in the field but also carries a distinct sense of academic homage to honor the groundbreaking and enduring contributions of Professor Lixing Zhu,a leading scholar whose work has profoundly shaped both areas.
基金supported by the National Natural Science Foundation of China under Grant No.12301365supported by the National Natural Science Foundation of China under Grant No.2241200071Guangdong Basic and Applied Basic Research Foundation under Grant No.2023A1515110001。
文摘In this note,the authors revisit the envelope dimension reduction,which was first introduced for estimating a sufficient dimension reduction subspace without inverting the sample covariance.Motivated by the recent developments in envelope methods and algorithms,the authors refresh the envelope inverse regression as a flexible alternative to the existing inverse regression methods in dimension reduction.The authors discuss the versatility of the envelope approach and demonstrate the advantages of the envelope dimension reduction through simulation studies.
文摘In this paper,the authors propose a nonlinear dimension reduction technique based on Fréchet inverse regression to achieve sufficient dimension reduction for responses in metric spaces and predictors in Riemannian manifolds.The authors rigorously establish statistical properties of the estimators,providing formal proofs of their consistency and asymptotic behaviors.The effectiveness of our method is demonstrated through extensive simulations and applications to real-world datasets which highlight its practical utility for complex data with non-Euclidean structures.
文摘Classical linear discriminant analysis(LDA)(Fisher,1936)implicitly assumes the classification boundary depends on only one linear combination of the predictors.This restriction can lead to poor classification in applications where the decision boundary depends on multiple linear combinations of the predictors.To overcome this challenge,the authors first project the predictors onto an envelope central space and then perform LDA based on the sufficient predictor.The performance of the proposed method in improving classification accuracy is demonstrated in both synthetic data and real applications.
基金The National Natural Science Foundation of China(No.61231002,61273266)the Ph.D.Program Foundation of Ministry of Education of China(No.20110092130004)China Postdoctoral Science Foundation(No.2015M571637)
文摘In order to accurately identify speech emotion information, the discriminant-cascading effect in dimensionality reduction of speech emotion recognition is investigated. Based on the existing locality preserving projections and graph embedding framework, a novel discriminant-cascading dimensionality reduction method is proposed, which is named discriminant-cascading locality preserving projections (DCLPP). The proposed method specifically utilizes supervised embedding graphs and it keeps the original space for the inner products of samples to maintain enough information for speech emotion recognition. Then, the kernel DCLPP (KDCLPP) is also proposed to extend the mapping form. Validated by the experiments on the corpus of EMO-DB and eNTERFACE'05, the proposed method can clearly outperform the existing common dimensionality reduction methods, such as principal component analysis (PCA), linear discriminant analysis (LDA), locality preserving projections (LPP), local discriminant embedding (LDE), graph-based Fisher analysis (GbFA) and so on, with different categories of classifiers.
文摘Some dimensionality reduction (DR) approaches based on support vector machine (SVM) are proposed. But the acquirement of the projection matrix in these approaches only considers the between-class margin based on SVM while ignoring the within-class information in data. This paper presents a new DR approach, call- ed the dimensionality reduction based on SVM and LDA (DRSL). DRSL considers the between-class margins from SVM and LDA, and the within-class compactness from LDA to obtain the projection matrix. As a result, DRSL can realize the combination of the between-class and within-class information and fit the between-class and within-class structures in data. Hence, the obtained projection matrix increases the generalization ability of subsequent classification techniques. Experiments applied to classification techniques show the effectiveness of the proposed method.
基金supported by Changsha Municipal Natural Science Foundation(kq2014164)the Natural Science Foundation of Hunan Province(Grant 2020JJ4684)Science and Technology Innovation Project of Hunan Academy of Agricultural Sciences(2020CX45).
文摘Comprehensive Summary Developing new catalysts for highly selectivity and conversion of saturated C(sp^(3))-H bonds is of great significance.In order to obtain catalysts with high catalytic performance,six Eu-based MOFs with different structural characteristics were obtained by using europium ions and different organic acid ligands,namely Eu-1~Eu-6.Eu-1,Eu-2 and Eu-3 featured three-dimensional structures,while Eu-4 and Eu-5 featured two-dimensional structures.
文摘We present a new algorithm for manifold learning and nonlinear dimensionality reduction. Based on a set of unorganized data points sampled with noise from a parameterized manifold, the local geometry of the manifold is learned by constructing an approximation for the tangent space at each point, and those tangent spaces are then aligned to give the global coordinates of the data points with respect to the underlying manifold. We also present an error analysis of our algorithm showing that reconstruction errors can be quite small in some cases. We illustrate our algorithm using curves and surfaces both in 2D/3D Euclidean spaces and higher dimensional Euclidean spaces. We also address several theoretical and algorithmic issues for further research and improvements.
基金supported by the National Natural Science Foundation of China (No. 11502211)
文摘In aerodynamic optimization, global optimization methods such as genetic algorithms are preferred in many cases because of their advantage on reaching global optimum. However,for complex problems in which large number of design variables are needed, the computational cost becomes prohibitive, and thus original global optimization strategies are required. To address this need, data dimensionality reduction method is combined with global optimization methods, thus forming a new global optimization system, aiming to improve the efficiency of conventional global optimization. The new optimization system involves applying Proper Orthogonal Decomposition(POD) in dimensionality reduction of design space while maintaining the generality of original design space. Besides, an acceleration approach for samples calculation in surrogate modeling is applied to reduce the computational time while providing sufficient accuracy. The optimizations of a transonic airfoil RAE2822 and the transonic wing ONERA M6 are performed to demonstrate the effectiveness of the proposed new optimization system. In both cases, we manage to reduce the number of design variables from 20 to 10 and from 42 to 20 respectively. The new design optimization system converges faster and it takes 1/3 of the total time of traditional optimization to converge to a better design, thus significantly reducing the overall optimization time and improving the efficiency of conventional global design optimization method.
基金supported by the National Natural Science Foundation of China(5110505261173163)the Liaoning Provincial Natural Science Foundation of China(201102037)
文摘In the need of some real applications, such as text categorization and image classification, the multi-label learning gradually becomes a hot research point in recent years. Much attention has been paid to the research of multi-label classification algorithms. Considering the fact that the high dimensionality of the multi-label datasets may cause the curse of dimensionality and wil hamper the classification process, a dimensionality reduction algorithm, named multi-label kernel discriminant analysis (MLKDA), is proposed to reduce the dimensionality of multi-label datasets. MLKDA, with the kernel trick, processes the multi-label integrally and realizes the nonlinear dimensionality reduction with the idea similar with linear discriminant analysis (LDA). In the classification process of multi-label data, the extreme learning machine (ELM) is an efficient algorithm in the premise of good accuracy. MLKDA, combined with ELM, shows a good performance in multi-label learning experiments with several datasets. The experiments on both static data and data stream show that MLKDA outperforms multi-label dimensionality reduction via dependence maximization (MDDM) and multi-label linear discriminant analysis (MLDA) in cases of balanced datasets and stronger correlation between tags, and ELM is also a good choice for multi-label classification.
基金This work is supported by the National Natural Science Foundation of China[grant number 61801336]the China Postdoctoral Science Foundation[grant number 2019M662717 and 2017M622521]the China Postdoctoral Program for Innovative Talent[grant number BX201700182].
文摘Graph learning is an effective manner to analyze the intrinsic properties of data.It has been widely used in the fields of dimensionality reduction and classification for data.In this paper,we focus on the graph learning-based dimensionality reduction for a hyperspectral image.Firstly,we review the development of graph learning and its application in a hyperspectral image.Then,we mainly discuss several representative graph methods including two manifold learning methods,two sparse graph learning methods,and two hypergraph learning methods.For manifold learning,we analyze neighborhood preserving embedding and locality preserving projections which are two classic manifold learning methods and can be transformed into the form of a graph.For sparse graph,we introduce sparsity preserving graph embedding and sparse graph-based discriminant analysis which can adaptively reveal data structure to construct a graph.For hypergraph learning,we review binary hypergraph and discriminant hyper-Laplacian projection which can represent the high-order relationship of data.
文摘Dimensionality reduction and data visualization are useful and important processes in pattern recognition. Many techniques have been developed in the recent years. The self-organizing map (SOM) can be an efficient method for this purpose. This paper reviews recent advances in this area and related approaches such as multidimensional scaling (MDS), nonlinear PC A, principal manifolds, as well as the connections of the SOM and its recent variant, the visualization induced SOM (ViSOM), with these approaches. The SOM is shown to produce a quantized, qualitative scaling and while the ViSOM a quantitative or metric scaling and approximates principal curve/surface. The SOM can also be regarded as a generalized MDS to relate two metric spaces by forming a topological mapping between them. The relationships among various recently proposed techniques such as ViSOM, Isomap, LLE, and eigenmap are discussed and compared.
基金Project (No 2008AA01Z132) supported by the National High-Tech Research and Development Program of China
文摘Image feature optimization is an important means to deal with high-dimensional image data in image semantic understanding and its applications. We formulate image feature optimization as the establishment of a mapping between highand low-dimensional space via a five-tuple model. Nonlinear dimensionality reduction based on manifold learning provides a feasible way for solving such a problem. We propose a novel globular neighborhood based locally linear embedding (GNLLE) algorithm using neighborhood update and an incremental neighbor search scheme, which not only can handle sparse datasets but also has strong anti-noise capability and good topological stability. Given that the distance measure adopted in nonlinear dimensionality reduction is usually based on pairwise similarity calculation, we also present a globular neighborhood and path clustering based locally linear embedding (GNPCLLE) algorithm based on path-based clustering. Due to its full consideration of correlations between image data, GNPCLLE can eliminate the distortion of the overall topological structure within the dataset on the manifold. Experimental results on two image sets show the effectiveness and efficiency of the proposed algorithms.