The inversion of large sparse matrices poses a major challenge in geophysics, particularly in Bayesian seismic inversion, significantly limiting computational efficiency and practical applicability to large-scale datasets. Existing dimensionality reduction methods have achieved partial success in addressing this issue. However, they remain limited in the achievable degree of dimensionality reduction. An incremental deep dimensionality reduction approach is proposed herein to significantly reduce matrix size and is applied to Bayesian linearized inversion (BLI), a stochastic seismic inversion approach that depends heavily on the inversion of large sparse matrices. The proposed method first employs a linear transformation based on the discrete cosine transform (DCT) to extract the matrix's essential information and eliminate redundant components, forming the foundation of the dimensionality reduction framework. Subsequently, an iterative DCT-based dimensionality reduction process is applied, in which the reduction magnitude is calibrated at each iteration to incrementally reduce dimensionality, thereby eliminating matrix redundancy in depth. This process is referred to as the incremental discrete cosine transform (IDCT). Ultimately, a linear IDCT-based reduction operator is constructed and applied to the kernel matrix inversion in BLI, resulting in a more efficient BLI framework. The proposed method was evaluated on synthetic and field data and compared with conventional dimensionality reduction methods. The IDCT approach significantly improves the dimensionality reduction efficiency of the core inversion matrix while preserving inversion accuracy, demonstrating prominent advantages in solving Bayesian inverse problems more efficiently.
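The single-pass DCT truncation that forms the foundation of this framework can be sketched in a few lines. This is an illustrative sketch only: the test signal, the retained-coefficient count, and the helper names are assumptions, and the paper's iterative, calibrated IDCT scheme is not reproduced here.

```python
import numpy as np
from scipy.fft import dct, idct

def dct_reduce(x, k):
    """Keep only the k leading DCT-II coefficients of a signal (hypothetical helper)."""
    return dct(x, type=2, norm='ortho')[:k]

def dct_restore(c, n):
    """Zero-pad the truncated coefficients and invert back to length n."""
    full = np.zeros(n)
    full[:len(c)] = c
    return idct(full, type=2, norm='ortho')

t = np.linspace(0.0, 1.0, 256)
x = np.sin(2 * np.pi * 3 * t) + 0.5 * np.cos(2 * np.pi * 5 * t)  # smooth test signal
c = dct_reduce(x, 32)                 # 256 values -> 32 coefficients
x_hat = dct_restore(c, len(x))        # approximate reconstruction
err = np.linalg.norm(x - x_hat) / np.linalg.norm(x)
print(c.shape, round(err, 4))
```

For smooth signals the DCT coefficients decay rapidly, so an 8x reduction keeps the relative reconstruction error small; this decay is what makes the transform a reasonable basis for eliminating redundant matrix components.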
Gas turbine rotors are complex dynamic systems with high-dimensional, discrete, and multi-source nonlinear coupling characteristics. Significant amounts of resources and time are spent solving their dynamic characteristics. Therefore, it is necessary to design a low-dimensional model that can well reflect the dynamic characteristics of the high-dimensional system. To build such a low-dimensional model, this study developed a dimensionality reduction method considering the global order energy distribution by modifying proper orthogonal decomposition theory. First, a sensitivity analysis of the key dimensionality reduction parameters with respect to the energy distribution was conducted. Then a high-dimensional rotor-bearing system considering nonlinear stiffness and oil film force was reduced, and the accuracy and reusability of the low-dimensional model under different operating conditions were examined. Finally, the response results of a multi-disk rotor-bearing test bench were reduced using the proposed method, and the spectrum results were compared experimentally. Numerical and experimental results demonstrate that, during the dimensionality reduction process, the solution period of the dynamic response results has the most significant influence on the accuracy of energy preservation. The transient signal in the transformation matrix mainly affects the high-order energy distribution of the rotor system. The larger the proportion of steady-state signals, the closer the energy tends to accumulate towards lower orders. The low-dimensional rotor model accurately reflects the frequency response characteristics of the original high-dimensional system with an accuracy of up to 98%. The proposed dimensionality reduction method exhibits significant application potential in the dynamic analysis of high-dimensional systems coupled with strong nonlinearities under variable operating conditions.
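The energy-based truncation at the core of proper orthogonal decomposition can be sketched on synthetic snapshot data. The snapshot construction and the 99% energy threshold below are assumptions for illustration, not the paper's rotor model or its modified global-order-energy criterion.

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic snapshot matrix: 200 degrees of freedom x 50 time snapshots,
# built from 3 latent modes plus small noise.
modes_true = rng.standard_normal((200, 3))
amps = rng.standard_normal((3, 50))
snapshots = modes_true @ amps + 0.01 * rng.standard_normal((200, 50))

# POD: the left singular vectors of the mean-centered snapshot matrix are the modes.
mean = snapshots.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(snapshots - mean, full_matrices=False)

energy = np.cumsum(s**2) / np.sum(s**2)      # cumulative energy captured per order
r = int(np.searchsorted(energy, 0.99) + 1)   # smallest rank keeping 99% energy
basis = U[:, :r]                             # reduced basis
reduced = basis.T @ (snapshots - mean)       # low-dimensional coordinates
print(r, reduced.shape)
```

Because the energy spectrum concentrates in the leading orders, the reduced basis recovers the three planted modes; the paper's contribution is, in effect, a principled way to choose the signals feeding this transformation matrix.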
The electric double layer (EDL), formed by charge adsorption at the electrolyte–electrode interface, constitutes the microenvironment governing electrochemical reactions. However, due to the scale mismatch between the EDL thickness and the electrode topography, solving the two-dimensional (2D) nonhomogeneous Poisson–Nernst–Planck (N-PNP) equations remains computationally intractable. This limitation hinders understanding of fundamental phenomena such as curvature-driven instabilities in the 2D EDL. Here, we propose a dimensionality-decomposition strategy embedding a fully connected neural network (FCNN) to solve the 2D N-PNP equations, in which the FCNN is trained on key electrochemical parameters by reducing the electrostatic boundary into multiple equivalent 1D representations. Through a representative case of LiPF6 reduction on a lithium metal half-cell, nucleus size is unexpectedly found to have an important influence on dendrite morphology and tip kinetics. This work paves the way for bridging nanoscale and macroscale simulations, with expandability to 2D situations of other 1D EDL models.
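Solving the full nonhomogeneous 2D N-PNP system is exactly what the abstract describes as intractable, but the 1D electrostatic sub-problem that equivalent 1D representations reduce to is simple. As a hedged illustration (all parameter values are arbitrary, and this is the linearized Poisson–Boltzmann, i.e., Debye–Hückel, limit rather than the full N-PNP system), here is a finite-difference solve checked against the analytic exponential screening profile.

```python
import numpy as np

# Linearized 1D Poisson-Boltzmann: phi'' = kappa^2 * phi, with phi(0) = phi0 at the
# electrode and phi -> 0 in the bulk. Analytic solution: phi(x) = phi0 * exp(-kappa*x).
kappa = 2.0              # inverse Debye length (arbitrary units, assumed)
phi0 = 0.05              # surface potential (assumed)
L, n = 10.0, 1001        # domain long enough that phi(L) is effectively zero
x = np.linspace(0.0, L, n)
h = x[1] - x[0]

# Central differences on interior nodes give a tridiagonal system:
# phi[i-1] - (2 + (kappa*h)^2) * phi[i] + phi[i+1] = 0
main = -(2.0 + (kappa * h) ** 2) * np.ones(n - 2)
off = np.ones(n - 3)
A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
b = np.zeros(n - 2)
b[0] = -phi0             # Dirichlet data phi(0) = phi0 moved to the right-hand side

phi = np.zeros(n)
phi[0] = phi0
phi[1:-1] = np.linalg.solve(A, b)

err = np.max(np.abs(phi - phi0 * np.exp(-kappa * x)))
print(err)
```

The second-order scheme reproduces the exponential decay to high accuracy; each equivalent 1D boundary representation in the paper's strategy is, conceptually, a problem of this shape with the FCNN supplying the nonlinear, parameter-dependent parts.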
Owing to their global search capabilities and gradient-free operation, metaheuristic algorithms are applied to a wide range of optimization problems. However, their computational demands become prohibitive when tackling high-dimensional optimization challenges. To address these challenges effectively, this study introduces cooperative metaheuristics integrating dynamic dimension reduction (DR). Building upon particle swarm optimization (PSO) and differential evolution (DE), the cooperative methods C-PSO and C-DE are developed. In the proposed methods, a modified principal component analysis (PCA) is used to reduce the dimension of the design variables, thereby decreasing computational costs. The dynamic DR strategy periodically executes the modified PCA after a fixed number of iterations, so that the important dimensions are dynamically identified. Compared with a static strategy, the dynamic DR strategy achieves more precise identification of the important dimensions, thereby enabling accelerated convergence toward optimal solutions. Furthermore, the influence of cumulative contribution rate thresholds on optimization problems of different dimensions is investigated. The metaheuristic algorithms (PSO, DE) and cooperative metaheuristics (C-PSO, C-DE) are examined on 15 benchmark functions and two engineering design problems (speed reducer and composite pressure vessel). Comparative results demonstrate that the cooperative methods achieve significantly superior performance over the standard methods in both solution accuracy and computational efficiency, reducing computational cost by at least 40%. The cooperative metaheuristics can thus be effectively used to tackle both high-dimensional unconstrained and constrained optimization problems.
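A minimal sketch of the dynamic-DR idea (not the paper's C-PSO or C-DE): a greedy population search on a toy sphere objective, with a PCA refit every few iterations so the important directions are periodically re-identified and mutation happens in the reduced space. The mutation scale, refit period, and 95% cumulative-contribution threshold are assumed values.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)
dim, pop_size = 30, 40
f = lambda X: np.sum(X**2, axis=1)              # sphere objective (assumed toy problem)
pop = rng.uniform(-5.0, 5.0, (pop_size, dim))
f0 = f(pop).min()                               # best objective before searching

for it in range(60):
    if it % 10 == 0:                            # dynamic DR: periodic re-identification
        pca = PCA(n_components=0.95).fit(pop)   # keep 95% cumulative contribution
    Z = pca.transform(pop)                      # search in the reduced space
    Z_trial = Z + 0.3 * rng.standard_normal(Z.shape)
    trial = pca.inverse_transform(Z_trial)      # map candidates back to full space
    better = f(trial) < f(pop)
    pop[better] = trial[better]                 # greedy per-individual selection

print(f0, f(pop).min())
```

Mutating in the PCA subspace cuts the effective search dimension while `inverse_transform` keeps candidates in the original design space, which is the mechanism behind the reported cost reduction.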
In recent years, research on superconductivity in one-dimensional (1D) materials has attracted increasing attention due to its potential applications in low-dimensional nanodevices. However, the critical temperature (T_c) of 1D superconductors is low. In this work, we theoretically investigate possible high-T_c superconductivity in the (5,5) carbon nanotube (CNT). The pristine (5,5) CNT is a Dirac semimetal and can be modulated into a semiconductor by full hydrogenation. Interestingly, by further hole doping, it can be driven into a metallic state in which the sp3-hybridized σ electrons are metallized, and a giant Kohn anomaly appears in the optical phonons. The two factors together enhance the electron–phonon coupling and lead to high-T_c superconductivity. When the hole doping concentration of the hydrogenated (5,5) CNT is 2.5 holes/cell, the calculated T_c is 82.3 K, exceeding the boiling point of liquid nitrogen. Therefore, the predicted hole-doped hydrogenated (5,5) CNT provides a new platform for 1D high-T_c superconductivity and may have potential applications in 1D nanodevices.
Compared to the well-studied two-dimensional (2D) ferroelectricity, 2D antiferroelectricity is much rarer; in it, local dipoles from nonequivalent sublattices within 2D monolayers are oppositely oriented. Using the NbOCl_2 monolayer, with its competing ferroelectric (FE) and antiferroelectric (AFE) phases, as a 2D material platform, we demonstrate the emergence of intrinsic antiferroelectricity under experimentally accessible shear strain, along with new functionality associated with the electric-field-induced AFE-to-FE phase transition. Specifically, the complex configuration space accommodating the FE and AFE phases, the polarization switching kinetics, and the finite-temperature thermodynamic properties of 2D NbOCl_2 are all accurately predicted by large-scale molecular dynamics simulations based on a deep-learning interatomic potential model. Moreover, room-temperature-stable antiferroelectricity with a low polarization switching barrier and a one-dimensional collinear polarization arrangement is predicted in the shear-deformed NbOCl_2 monolayer. The transition from the AFE to the FE phase in 2D NbOCl_2 can be triggered by a low critical electric field, leading to a double polarization–electric field (P–E) loop with small hysteresis. A new type of optoelectronic device composed of AFE NbOCl_2 is proposed, enabling electric “writing” and nonlinear optical “reading” logical operations with fast operation speed and low power consumption.
In order to accurately identify speech emotion information, the discriminant-cascading effect in dimensionality reduction for speech emotion recognition is investigated. Based on the existing locality preserving projections and the graph embedding framework, a novel discriminant-cascading dimensionality reduction method is proposed, named discriminant-cascading locality preserving projections (DCLPP). The proposed method specifically utilizes supervised embedding graphs and keeps the inner products of samples in the original space to maintain enough information for speech emotion recognition. A kernel extension, kernel DCLPP (KDCLPP), is also proposed to extend the mapping form. Validated by experiments on the EMO-DB and eNTERFACE'05 corpora, the proposed method clearly outperforms existing common dimensionality reduction methods, such as principal component analysis (PCA), linear discriminant analysis (LDA), locality preserving projections (LPP), local discriminant embedding (LDE), and graph-based Fisher analysis (GbFA), with different categories of classifiers.
Several dimensionality reduction (DR) approaches based on the support vector machine (SVM) have been proposed, but the computation of the projection matrix in these approaches considers only the between-class margin from the SVM while ignoring the within-class information in the data. This paper presents a new DR approach, called dimensionality reduction based on SVM and LDA (DRSL). DRSL considers the between-class margins from SVM and LDA, and the within-class compactness from LDA, to obtain the projection matrix. As a result, DRSL combines the between-class and within-class information and fits the between-class and within-class structures in the data. Hence, the obtained projection matrix increases the generalization ability of subsequent classification techniques. Experiments with classification techniques show the effectiveness of the proposed method.
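DRSL itself is not publicly packaged, but the ingredients it combines can be illustrated with a standard pipeline: an LDA projection (which already uses both between-class and within-class scatter) followed by an SVM classifier. The dataset and hyperparameters below are arbitrary stand-ins, not the paper's setup.

```python
from sklearn.datasets import load_iris
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# Project with LDA (between- and within-class scatter), then classify with an SVM.
X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          random_state=0, stratify=y)

clf = make_pipeline(LinearDiscriminantAnalysis(n_components=2),
                    SVC(kernel='linear'))
clf.fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
print(round(acc, 3))
```

DRSL's contribution is to fold the SVM margin into the projection step itself rather than leaving the two stages independent, as they are in this baseline.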
We present a new algorithm for manifold learning and nonlinear dimensionality reduction. Given a set of unorganized data points sampled with noise from a parameterized manifold, the local geometry of the manifold is learned by constructing an approximation of the tangent space at each point, and those tangent spaces are then aligned to give the global coordinates of the data points with respect to the underlying manifold. We also present an error analysis of our algorithm showing that the reconstruction errors can be quite small in some cases. We illustrate our algorithm using curves and surfaces in 2D/3D and higher-dimensional Euclidean spaces. We also address several theoretical and algorithmic issues for further research and improvement.
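The tangent-space-alignment idea described here is available in scikit-learn as the 'ltsa' variant of LocallyLinearEmbedding; the sketch below applies it to a noisy swiss-roll surface (sample size and neighborhood size are arbitrary choices).

```python
import numpy as np
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import LocallyLinearEmbedding

# Sample a noisy 2D manifold embedded in 3D, then recover global 2D coordinates
# by fitting local tangent spaces and aligning them.
X, t = make_swiss_roll(n_samples=1000, noise=0.05, random_state=0)
ltsa = LocallyLinearEmbedding(n_neighbors=12, n_components=2,
                              method='ltsa', random_state=0)
Y = ltsa.fit_transform(X)
print(Y.shape)
```

Each local tangent space is estimated from a point's k nearest neighbors, and the global coordinates are obtained from an eigenproblem that aligns all the local frames at once, matching the two-stage description in the abstract.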
Driven by real applications such as text categorization and image classification, multi-label learning has gradually become a hot research topic in recent years, and much attention has been paid to multi-label classification algorithms. Considering that the high dimensionality of multi-label datasets may cause the curse of dimensionality and hamper the classification process, a dimensionality reduction algorithm, multi-label kernel discriminant analysis (MLKDA), is proposed to reduce the dimensionality of multi-label datasets. MLKDA, via the kernel trick, processes multi-label data integrally and realizes nonlinear dimensionality reduction with an idea similar to linear discriminant analysis (LDA). For the classification of multi-label data, the extreme learning machine (ELM) is an efficient algorithm that maintains good accuracy. MLKDA combined with ELM shows good performance in multi-label learning experiments on several datasets. Experiments on both static data and data streams show that MLKDA outperforms multi-label dimensionality reduction via dependence maximization (MDDM) and multi-label linear discriminant analysis (MLDA) on balanced datasets and under stronger correlation between tags, and that ELM is also a good choice for multi-label classification.
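MLKDA is specific to this paper, but the ELM classifier it is paired with has a simple closed form: a random hidden layer plus ridge-regularized least squares on one-hot targets. A minimal single-label sketch follows; the dataset, hidden-layer size, and regularization strength are arbitrary choices.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
X, y = load_digits(return_X_y=True)
X = X / 16.0                                             # scale pixel features
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Random, untrained hidden layer: the only learned part is the output weights.
n_hidden = 500
W = rng.standard_normal((X.shape[1], n_hidden))
b = rng.standard_normal(n_hidden)
hidden = lambda A: np.tanh(A @ W + b)

H = hidden(X_tr)
T = np.eye(10)[y_tr]                                     # one-hot targets
beta = np.linalg.solve(H.T @ H + 1e-3 * np.eye(n_hidden), H.T @ T)  # ridge solve

pred = hidden(X_te) @ beta
acc = (pred.argmax(axis=1) == y_te).mean()
print(round(acc, 3))
```

Because only `beta` is fitted, and in closed form, training is a single linear solve, which is the efficiency the abstract attributes to ELM; the multi-label case replaces the one-hot matrix with the label-indicator matrix.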
In aerodynamic optimization, global optimization methods such as genetic algorithms are preferred in many cases because of their ability to reach the global optimum. However, for complex problems that need a large number of design variables, the computational cost becomes prohibitive, so improved global optimization strategies are required. To address this need, a data dimensionality reduction method is combined with global optimization methods, forming a new global optimization system that aims to improve the efficiency of conventional global optimization. The new system applies Proper Orthogonal Decomposition (POD) to reduce the dimensionality of the design space while maintaining the generality of the original design space. In addition, an acceleration approach for sample calculation in surrogate modeling is applied to reduce computational time while providing sufficient accuracy. Optimizations of the transonic airfoil RAE2822 and the transonic wing ONERA M6 are performed to demonstrate the effectiveness of the proposed system. In the two cases, the number of design variables is reduced from 20 to 10 and from 42 to 20, respectively. The new design optimization system converges faster, taking one third of the time of traditional optimization to converge to a better design, thus significantly reducing the overall optimization time and improving the efficiency of the conventional global design optimization method.
Dimensionality reduction and data visualization are useful and important processes in pattern recognition, and many techniques have been developed in recent years. The self-organizing map (SOM) can be an efficient method for this purpose. This paper reviews recent advances in this area and related approaches such as multidimensional scaling (MDS), nonlinear PCA, and principal manifolds, as well as the connections of the SOM and its recent variant, the visualization induced SOM (ViSOM), with these approaches. The SOM is shown to produce a quantized, qualitative scaling, while the ViSOM produces a quantitative or metric scaling and approximates a principal curve/surface. The SOM can also be regarded as a generalized MDS that relates two metric spaces by forming a topological mapping between them. The relationships among various recently proposed techniques such as ViSOM, Isomap, LLE, and eigenmap are discussed and compared.
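The metric (quantitative) scaling contrasted above with the SOM's qualitative one can be demonstrated with classic metric MDS, which places points in a low-dimensional space so that pairwise distances match the originals. The dataset here is an arbitrary stand-in.

```python
import numpy as np
from scipy.spatial.distance import pdist
from sklearn.datasets import load_iris
from sklearn.manifold import MDS

# Metric MDS: minimize the stress between original and embedded pairwise distances.
X, _ = load_iris(return_X_y=True)
Y = MDS(n_components=2, random_state=0).fit_transform(X)

# Correlation between original and embedded pairwise distances measures how
# faithfully the metric structure is preserved.
r = np.corrcoef(pdist(X), pdist(Y))[0, 1]
print(Y.shape, round(r, 3))
```

A SOM, by contrast, would snap the same data onto a fixed grid of prototypes, preserving topology (neighbors stay neighbors) but not inter-point distances.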
The framework of a text classification system was presented, and the high dimensionality of the feature space for text classification was studied. Mutual information is a widely used information-theoretic measure of the stochastic dependency of discrete random variables. This measure was used as a criterion to reduce the high dimensionality of feature vectors in Web text classification. Feature selection and feature conversion were performed by maximizing mutual information, including linear and non-linear feature conversions. Entropy was used and extended to find the right features for pattern recognition systems. This establishes a favorable foundation for text classification mining.
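The mutual-information criterion described here reduces to scoring each feature's dependency with the class label and keeping the top k. A sketch on synthetic Gaussian features standing in for text features (the mean shift, feature counts, and sample size are assumptions):

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif

rng = np.random.default_rng(6)
n = 600
y = rng.integers(0, 2, n)                 # binary class labels
X = rng.standard_normal((n, 50))          # 50 candidate features
X[:, :5] += 2.0 * y[:, None]              # only the first five depend on the label

# Score each feature's mutual information with y and keep the top 5.
sel = SelectKBest(mutual_info_classif, k=5).fit(X, y)
X_reduced = sel.transform(X)
chosen = sorted(sel.get_support(indices=True))
print(X_reduced.shape, chosen)
```

The label-dependent features carry substantial mutual information while the noise features score near zero, so the selector recovers them; for word-count features the same criterion ranks terms by how informative they are about the document class.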
Arc sound is well known as a potential and available resource for monitoring and controlling the weld penetration status, which is very important to welding process quality control, so much attention has been paid to the relationships between arc sound and welding parameters. Some non-linear mapping models correlating arc sound to welding parameters have been established with the help of neural networks. However, research on utilizing arc sound to monitor and diagnose the welding process is still in its infancy. A self-made real-time sensing system is applied to study arc sound under typical penetration statuses, including partial penetration, unstable penetration, full penetration, and excessive penetration, in metal inert-gas (MIG) flat tailored welding with spray transfer. The arc sound is pretreated using wavelet de-noising and short-time windowing technologies, and its time-domain, frequency-domain, cepstrum-domain, and geometric-domain characteristics, characterizing the weld penetration status, are extracted. Subsequently, a high-dimensional eigenvector is constructed and the feature-level parameters are fused using principal component analysis (PCA). Ultimately, the 60-dimensional eigenvector is replaced by a synthesized 8-dimensional vector, which achieves compression of the feature space and provides technical support for future pattern classification of typical penetration statuses from arc sound in MIG welding.
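The feature-level fusion step, compressing a 60-dimensional acoustic eigenvector to 8 dimensions, is ordinary PCA at heart. A sketch on synthetic low-rank features (the sample count, latent rank, and noise level are assumptions, not the paper's acoustic data):

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(4)
n_samples = 300
# Synthetic 60-D feature vectors with an assumed 8-D latent structure plus noise.
latent = rng.standard_normal((n_samples, 8))
mixing = rng.standard_normal((8, 60))
features = latent @ mixing + 0.05 * rng.standard_normal((n_samples, 60))

pca = PCA(n_components=8).fit(features)
compressed = pca.transform(features)          # 60-D eigenvector -> 8-D vector
print(compressed.shape, round(pca.explained_variance_ratio_.sum(), 3))
```

When the underlying features are correlated, as acoustic descriptors from adjacent domains typically are, a handful of principal components retains nearly all of the variance, which is what justifies the 60-to-8 compression.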
Graph learning is an effective way to analyze the intrinsic properties of data and has been widely used for dimensionality reduction and classification. In this paper, we focus on graph learning-based dimensionality reduction for hyperspectral images. First, we review the development of graph learning and its application to hyperspectral images. Then, we discuss several representative graph methods, including two manifold learning methods, two sparse graph learning methods, and two hypergraph learning methods. For manifold learning, we analyze neighborhood preserving embedding and locality preserving projections, two classic manifold learning methods that can be cast in graph form. For sparse graphs, we introduce sparsity preserving graph embedding and sparse graph-based discriminant analysis, which adaptively reveal the data structure to construct a graph. For hypergraph learning, we review the binary hypergraph and discriminant hyper-Laplacian projection, which can represent high-order relationships in data.
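Of the graph methods surveyed, locality preserving projections has a particularly compact linear-algebra core: build a k-NN affinity graph, then solve a generalized eigenproblem for the projection directions with the smallest eigenvalues. A minimal sketch (the neighborhood size and the stabilizing ridge are assumed choices, and a generic dataset stands in for a hyperspectral cube):

```python
import numpy as np
from scipy.linalg import eigh
from sklearn.datasets import load_iris
from sklearn.neighbors import kneighbors_graph

X, y = load_iris(return_X_y=True)
X = X - X.mean(axis=0)                          # center the data

# Affinity graph W, degree matrix D, and graph Laplacian L = D - W.
W = kneighbors_graph(X, n_neighbors=10, mode='connectivity', include_self=False)
W = (0.5 * (W + W.T)).toarray()                 # symmetrize the adjacency
D = np.diag(W.sum(axis=1))
L = D - W

# LPP: minimize a^T X^T L X a subject to a^T X^T D X a = 1, i.e. the generalized
# eigenproblem (X^T L X) a = lambda (X^T D X) a, taking the smallest eigenvalues.
A = X.T @ L @ X
B = X.T @ D @ X + 1e-6 * np.eye(X.shape[1])     # small ridge for stability (assumed)
vals, vecs = eigh(A, B)                         # eigenvalues in ascending order
P = vecs[:, :2]                                 # 2-D projection matrix
Y_emb = X @ P
print(Y_emb.shape)
```

Neighborhood preserving embedding differs only in how the graph weights are built (local reconstruction coefficients rather than plain adjacency), which is why both methods fit the same graph-embedding template.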
This paper presents two novel algorithms for feature extraction: Subpattern Complete Two Dimensional Linear Discriminant Principal Component Analysis (SpC2DLDPCA) and Subpattern Complete Two Dimensional Locality Preserving Principal Component Analysis (SpC2DLPPCA). The modified SpC2DLDPCA and SpC2DLPPCA algorithms benefit over their non-subpattern versions and the Subpattern Complete Two Dimensional Principal Component Analysis (SpC2DPCA) method in the following four points: (1) SpC2DLDPCA and SpC2DLPPCA avoid the large time cost of computing the eigenvalues and eigenvectors of higher-dimensional matrices. (2) SpC2DLDPCA and SpC2DLPPCA can extract local information for recognition. (3) The idea of subblocks is introduced into Two Dimensional Principal Component Analysis (2DPCA) and Two Dimensional Linear Discriminant Analysis (2DLDA); SpC2DLDPCA combines discriminant analysis with a compression technique with low energy loss. (4) The idea is also introduced into 2DPCA and Two Dimensional Locality Preserving Projections (2DLPP), so SpC2DLPPCA can preserve the local neighbor graph structure and yield compact feature expressions. Finally, experiments on the CASIA(B) gait database show that SpC2DLDPCA and SpC2DLPPCA achieve higher recognition accuracies than their non-subpattern versions and SpC2DPCA.
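The base 2DPCA operation that both subpattern variants build on works directly on 2-D images: an image-covariance matrix is formed without vectorizing the images, and each image is projected onto its leading eigenvectors. A sketch on a small image dataset (the number of retained axes is arbitrary, and this is plain 2DPCA, not the subpattern-complete variants):

```python
import numpy as np
from sklearn.datasets import load_digits

imgs = load_digits().images                   # (1797, 8, 8) grayscale images
mean = imgs.mean(axis=0)

# Image covariance: average of (A - mean)^T (A - mean) over images; only 8x8,
# versus 64x64 for the vectorized covariance - this is 2DPCA's size advantage.
G = np.mean([(A - mean).T @ (A - mean) for A in imgs], axis=0)

vals, vecs = np.linalg.eigh(G)                # eigenvalues in ascending order
P = vecs[:, ::-1][:, :3]                      # top 3 projection axes
features = imgs @ P                           # each image -> an 8x3 feature matrix
print(G.shape, features.shape)
```

The subpattern idea in the paper further splits each image into subblocks and applies this machinery per block, which is what lets the methods also capture local information.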
Dimension reduction is defined as the process of projecting high-dimensional data into a much lower-dimensional space. Dimension reduction methods are variously applied in regression, classification, feature analysis, and visualization. In this paper, we review in detail the latest versions of the methods that were extensively developed in the past decade.
The high dimensionality of hyperspectral imagery imposes a burden on further processing. A new Fast Independent Component Analysis (FastICA) approach to dimensionality reduction for hyperspectral imagery is presented. The virtual dimensionality is introduced to determine the number of dimensions to be preserved. Since there is no prioritization among the independent components generated by FastICA, the mixing matrix of FastICA is initialized with endmembers, which are extracted using an unsupervised maximum-distance method. Minimum Noise Fraction (MNF) is used to preprocess the original data, which significantly reduces the computational complexity of FastICA. Finally, FastICA is performed on the selected principal components acquired by MNF to generate the expected independent components in accordance with the order of the endmembers. Experimental results demonstrate that the proposed method outperforms second-order statistics-based transforms such as principal component analysis.
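The reduce-then-unmix portion of this pipeline can be sketched on toy mixed signals. Note the hedges: PCA whitening below is only a stand-in for MNF (which additionally requires a noise-covariance estimate), and the endmember-based initialization of the mixing matrix is not reproduced.

```python
import numpy as np
from sklearn.decomposition import PCA, FastICA

rng = np.random.default_rng(5)
t = np.linspace(0, 8, 2000)
# Three statistically independent sources with distinct, non-Gaussian shapes.
S = np.column_stack([np.sin(2 * t),              # smooth source
                     np.sign(np.sin(3 * t)),     # square-wave source
                     rng.laplace(size=t.size)])  # heavy-tailed source
A_mix = rng.standard_normal((6, 3))              # mix 3 sources into 6 "bands"
X = S @ A_mix.T

X_p = PCA(n_components=3, whiten=True).fit_transform(X)   # stand-in for MNF
S_hat = FastICA(n_components=3, random_state=0).fit_transform(X_p)
print(S_hat.shape)
```

Reducing to the signal subspace first, as MNF does in the paper, is what keeps the FastICA iteration cheap: it then unmixes a 3-band problem instead of the full band count.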
Psychometric theory requires unidimensionality (i.e., scale items should represent a common latent variable). One advocated approach to testing unidimensionality within the Rasch model is to identify two item sets from a Principal Component Analysis (PCA) of residuals, estimate separate person measures based on the two item sets, compare the two estimates on a person-by-person basis using t-tests, and determine the number of cases that differ significantly at the 0.05 level; if ≤5% of tests are significant, or the lower bound of a binomial 95% confidence interval (CI) of the observed proportion overlaps 5%, then it is suggested that strict unidimensionality can be inferred; otherwise the scale is multidimensional. Given its proposed significance and potential implications, this procedure needs detailed scrutiny. This paper explores the impact of sample size and of the method of estimating the 95% binomial CI upon conclusions according to the recommended conventions. Normal approximation, “exact”, Wilson, Agresti-Coull, and Jeffreys binomial CIs were calculated for observed proportions of 0.06, 0.08, and 0.10 and sample sizes from n = 100 to n = 2500. The lower 95% CI boundaries were inspected regarding coverage of the 5% threshold. Results showed that all binomial 95% CIs both included and excluded 5% as an effect of sample size for all three investigated proportions, except for the Wilson, Agresti-Coull, and Jeffreys CIs, which did not include 5% for any sample size with a 10% observed proportion. The normal approximation CI was most sensitive to sample size. These data illustrate that the PCA/t-test protocol should be used and interpreted as any hypothesis testing procedure and is dependent on sample size as well as the binomial CI estimation procedure. The PCA/t-test protocol should not be viewed as a “definite” test of unidimensionality and does not replace an integrated quantitative/qualitative interpretation based on an explicit variable definition in view of the perspective, context, and purpose of measurement.
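The sample-size effect reported here is easy to reproduce for two of the interval types; the bounds below are computed directly from an observed proportion and n (a minimal sketch of the standard formulas; statsmodels' proportion_confint offers these and the remaining methods from counts).

```python
from math import sqrt
from scipy.stats import norm

def normal_lower(p, n, alpha=0.05):
    """Normal-approximation (Wald) lower confidence bound for a proportion."""
    z = norm.ppf(1 - alpha / 2)
    return p - z * sqrt(p * (1 - p) / n)

def wilson_lower(p, n, alpha=0.05):
    """Wilson score lower confidence bound for a proportion."""
    z = norm.ppf(1 - alpha / 2)
    centre = (p + z**2 / (2 * n)) / (1 + z**2 / n)
    half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / (1 + z**2 / n)
    return centre - half

# For an 8% observed proportion, whether the lower bound covers the 5% threshold
# flips with sample size, which is the dependence the paper demonstrates.
for n in (100, 500, 2500):
    print(n, round(normal_lower(0.08, n), 4), round(wilson_lower(0.08, n), 4))
```

At n = 100 both lower bounds fall below 5% (unidimensionality would be inferred), while at n = 500 and beyond both exceed it, so the same observed proportion yields opposite conclusions purely as a function of sample size.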
This paper establishes a non-linear finite element model (NFEM) of the L4-L5 lumbar spinal segment with accurate three-dimensional solid ligaments and intervertebral disc. For this purpose, the intervertebral disc and surrounding ligaments are modeled with four-node three-dimensional tetrahedral elements with hyper-elastic material properties. A pure moment of 10 N·m without preload is applied to the upper vertebral body under the loading conditions of lateral bending, backward extension, torsion, and forward flexion, respectively. The simulated relationship curves between generalized forces and generalized displacements of the NFEM are compared with the in vitro experimental curves to verify the model. The verification shows that: (1) the simulated range of motion is in good agreement with the in vitro experimental data; (2) the NFEM reflects the actual mechanical properties more effectively than FE models using cable and spring elements for the ligaments; (3) the NFEM can be used as a basis for further research on lumbar degenerative diseases.
Funding: partly supported by the Hainan Provincial Joint Project of Sanya Yazhou Bay Science and Technology City (2021JJLH0052); the National Natural Science Foundation of China (42274154, 42304116); the Natural Science Foundation of Heilongjiang Province, China (LH2024D013); the Heilongjiang Postdoctoral Fund (LBHZ23103); and the Hainan Yazhou Bay Science and Technology City Jingying Talent Project (SKJC-JYRC-2024-05).
Funding: supported by the China Postdoctoral Science Foundation (No. 2024M764171); the Postdoctoral Research Start-up Funds, China (No. AUGA5710027424); the National Natural Science Foundation of China (No. U2341237); and the development and construction funds for the School of Mechatronics Engineering of HIT, China (No. CBQQ8880103624).
Funding: supported by the National Natural Science Foundation of China (Grant Nos. 92472207, 52472223, and 92572301).
Abstract: The electric double layer (EDL), formed by charge adsorption at the electrolyte–electrode interface, constitutes the microenvironment governing electrochemical reactions. However, due to the scale mismatch between the EDL thickness and the electrode topography, solving the two-dimensional (2D) nonhomogeneous Poisson–Nernst–Planck (N-PNP) equations remains computationally intractable. This limitation hinders understanding of fundamental phenomena such as curvature-driven instabilities in the 2D EDL. Here, we propose a dimensionality-decomposition strategy embedding a fully connected neural network (FCNN) to solve the 2D N-PNP equations, in which the FCNN is trained on key electrochemical parameters by reducing the electrostatic boundary into multiple equivalent 1D representations. Through a representative case of LiPF₆ reduction on a lithium metal half-cell, nucleus size is unexpectedly found to have an important influence on dendrite morphology and tip kinetics. This work paves the way for bridging nanoscale and macroscale simulations, with expandability to 2D situations of other 1D EDL models.
Funding: funded by the National Natural Science Foundation of China (Nos. 12402142, 11832013, and 11572134), the Natural Science Foundation of Hubei Province (No. 2024AFB235), the Hubei Provincial Department of Education Science and Technology Research Project (No. Q20221714), and the Opening Foundation of the Hubei Key Laboratory of Digital Textile Equipment (Nos. DTL2023019 and DTL2022012).
Abstract: Owing to their global search capabilities and gradient-free operation, metaheuristic algorithms are applied to a wide range of optimization problems. However, their computational demands become prohibitive when tackling high-dimensional optimization challenges. To address these challenges effectively, this study introduces cooperative metaheuristics integrating dynamic dimension reduction (DR). Building upon particle swarm optimization (PSO) and differential evolution (DE), the cooperative methods C-PSO and C-DE are developed. In the proposed methods, a modified principal component analysis (PCA) is utilized to reduce the dimension of the design variables, thereby decreasing computational costs. The dynamic DR strategy executes the modified PCA periodically after a fixed number of iterations, so that the important dimensions are identified dynamically. Compared with a static strategy, the dynamic DR strategy achieves precise identification of the important dimensions, enabling accelerated convergence toward optimal solutions. Furthermore, the influence of cumulative contribution rate thresholds on optimization problems of different dimensions is investigated. The metaheuristic algorithms (PSO, DE) and cooperative metaheuristics (C-PSO, C-DE) are examined on 15 benchmark functions and two engineering design problems (a speed reducer and a composite pressure vessel). Comparative results demonstrate that the cooperative methods achieve significantly superior performance over the standard methods in both solution accuracy and computational efficiency, reducing computational cost by at least 40%. The cooperative metaheuristics can be effectively used to tackle both high-dimensional unconstrained and constrained optimization problems.
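The core reduction step described above, projecting design variables onto the principal components whose cumulative contribution rate reaches a threshold, can be sketched with plain PCA. This is a minimal illustration, not the paper's modified PCA; the data, threshold, and function name are assumptions for the example.

```python
import numpy as np

def pca_reduce(X, cum_threshold=0.95):
    """Project samples X (n x d) onto the leading principal components
    whose cumulative explained-variance ratio reaches cum_threshold."""
    Xc = X - X.mean(axis=0)
    cov = np.cov(Xc, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1]            # sort by descending variance
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    ratio = np.cumsum(eigvals) / eigvals.sum()
    k = int(np.searchsorted(ratio, cum_threshold) + 1)
    return Xc @ eigvecs[:, :k], k

# Hypothetical population: 200 candidate designs in 10 variables whose
# variance is concentrated in 2 latent directions.
rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 2))
mix = rng.normal(size=(2, 10))
X = latent @ mix + 0.01 * rng.normal(size=(200, 10))
Z, k = pca_reduce(X, 0.95)
print(k, Z.shape)    # only the few dominant directions are retained
```

In a cooperative scheme this projection would be recomputed periodically on the current population, so the "important dimensions" track the search as it converges.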
Funding: supported by the National Natural Science Foundation of China (Grant Nos. 12074213 and 11574108), the Major Basic Program of the Natural Science Foundation of Shandong Province (Grant No. ZR2021ZD01), and the Natural Science Foundation of Shandong Province (Grant No. ZR2023MA082).
Abstract: In recent years, research on superconductivity in one-dimensional (1D) materials has attracted increasing attention due to its potential applications in low-dimensional nanodevices. However, the critical temperature (T_c) of 1D superconductors is low. In this work, we theoretically investigate possible high-T_c superconductivity of the (5,5) carbon nanotube (CNT). The pristine (5,5) CNT is a Dirac semimetal and can be modulated into a semiconductor by full hydrogenation. Interestingly, by further hole doping, it can be driven into a metallic state in which the sp³-hybridized σ electrons are metallized, and a giant Kohn anomaly appears in the optical phonons. These two factors together enhance the electron–phonon coupling and lead to high-T_c superconductivity. When the hole doping concentration of the hydrogenated (5,5) CNT is 2.5 holes/cell, the calculated T_c is 82.3 K, exceeding the boiling point of liquid nitrogen. Therefore, the predicted hole-doped hydrogenated (5,5) CNT provides a new platform for 1D high-T_c superconductivity and may have potential applications in 1D nanodevices.
Funding: supported by the National Natural Science Foundation of China (Grant No. 11574244, for G.Y.G.), the XJTU Research Fund for AI Science (Grant No. 2025YXYC011, for G.Y.G.), and the Hong Kong Global STEM Professorship Scheme (for X.C.Z.).
Abstract: Compared to the well-studied two-dimensional (2D) ferroelectricity, the appearance of 2D antiferroelectricity, in which local dipoles from nonequivalent sublattices within a 2D monolayer are oppositely oriented, is much rarer. Using the NbOCl₂ monolayer with competing ferroelectric (FE) and antiferroelectric (AFE) phases as a 2D material platform, we demonstrate the emergence of intrinsic antiferroelectricity in the NbOCl₂ monolayer under experimentally accessible shear strain, along with new functionality associated with the electric field-induced AFE-to-FE phase transition. Specifically, the complex configuration space accommodating the FE and AFE phases, the polarization switching kinetics, and the finite-temperature thermodynamic properties of 2D NbOCl₂ are all accurately predicted by large-scale molecular dynamics simulations based on a deep learning interatomic potential model. Moreover, room-temperature-stable antiferroelectricity with a low polarization switching barrier and a one-dimensional collinear polarization arrangement is predicted in the shear-deformed NbOCl₂ monolayer. The transition from the AFE to the FE phase in 2D NbOCl₂ can be triggered by a low critical electric field, leading to a double polarization–electric field (P–E) loop with small hysteresis. A new type of optoelectronic device composed of AFE NbOCl₂ is proposed, enabling electric "writing" and nonlinear optical "reading" logical operations with fast operation speed and low power consumption.
Funding: The National Natural Science Foundation of China (Nos. 61231002 and 61273266), the Ph.D. Program Foundation of the Ministry of Education of China (No. 20110092130004), and the China Postdoctoral Science Foundation (No. 2015M571637).
Abstract: In order to accurately identify speech emotion information, the discriminant-cascading effect in dimensionality reduction for speech emotion recognition is investigated. Based on the existing locality preserving projections and the graph embedding framework, a novel discriminant-cascading dimensionality reduction method is proposed, named discriminant-cascading locality preserving projections (DCLPP). The proposed method specifically utilizes supervised embedding graphs and keeps the original space for the inner products of samples to retain enough information for speech emotion recognition. Then, kernel DCLPP (KDCLPP) is also proposed to extend the mapping form. Validated by experiments on the EMO-DB and eNTERFACE'05 corpora, the proposed method clearly outperforms existing common dimensionality reduction methods, such as principal component analysis (PCA), linear discriminant analysis (LDA), locality preserving projections (LPP), local discriminant embedding (LDE), and graph-based Fisher analysis (GbFA), with different categories of classifiers.
Abstract: Some dimensionality reduction (DR) approaches based on the support vector machine (SVM) have been proposed. However, the acquisition of the projection matrix in these approaches considers only the between-class margin based on the SVM while ignoring the within-class information in the data. This paper presents a new DR approach, called dimensionality reduction based on SVM and LDA (DRSL). DRSL considers the between-class margins from SVM and LDA, and the within-class compactness from LDA, to obtain the projection matrix. As a result, DRSL can combine the between-class and within-class information and fit the between-class and within-class structures in the data. Hence, the obtained projection matrix increases the generalization ability of subsequent classification techniques. Experiments applied to classification techniques show the effectiveness of the proposed method.
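The within-class information that LDA contributes can be seen in the classic two-class Fisher criterion: the discriminant direction weights the between-class mean difference by the inverse within-class scatter. A minimal numpy sketch of that criterion (not the DRSL method itself; the toy data are assumptions) is:

```python
import numpy as np

def fisher_lda_direction(X0, X1):
    """Fisher discriminant direction for two classes: maximizes
    between-class separation relative to within-class scatter."""
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    # Within-class scatter Sw = sum of (unnormalized) class covariances.
    Sw = np.cov(X0, rowvar=False) * (len(X0) - 1) \
       + np.cov(X1, rowvar=False) * (len(X1) - 1)
    w = np.linalg.solve(Sw, m1 - m0)          # w ∝ Sw^{-1} (m1 - m0)
    return w / np.linalg.norm(w)

# Two hypothetical Gaussian classes in 2-D.
rng = np.random.default_rng(42)
X0 = rng.normal(loc=[0.0, 0.0], scale=1.0, size=(100, 2))
X1 = rng.normal(loc=[3.0, 3.0], scale=1.0, size=(100, 2))
w = fisher_lda_direction(X0, X1)

# Projected class means should be far apart relative to the
# within-class spread of the projections.
sep = (X1 @ w).mean() - (X0 @ w).mean()
within = np.std(np.concatenate([X0 @ w - (X0 @ w).mean(),
                                X1 @ w - (X1 @ w).mean()]))
print(sep / within)
```

An SVM-based projection would replace the mean-difference term with the margin direction; combining both, as DRSL does, is what brings the within-class compactness into play.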
Abstract: We present a new algorithm for manifold learning and nonlinear dimensionality reduction. Based on a set of unorganized data points sampled with noise from a parameterized manifold, the local geometry of the manifold is learned by constructing an approximation of the tangent space at each point, and those tangent spaces are then aligned to give the global coordinates of the data points with respect to the underlying manifold. We also present an error analysis of our algorithm, showing that the reconstruction errors can be quite small in some cases. We illustrate our algorithm using curves and surfaces in both 2D/3D and higher-dimensional Euclidean spaces. We also address several theoretical and algorithmic issues for further research and improvements.
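The first step described, approximating the tangent space at each point, is commonly done by an SVD of the centred neighbourhood of the point. A rough sketch under assumed toy data (a helix, i.e. a 1-D curve in 3-D; the alignment step is omitted):

```python
import numpy as np

def local_tangent_basis(X, i, k):
    """Estimate an orthonormal basis for the tangent space at point i
    from its k nearest neighbours, via SVD of the centred neighbourhood."""
    d2 = np.sum((X - X[i]) ** 2, axis=1)
    nbrs = np.argsort(d2)[:k]                  # includes the point itself
    N = X[nbrs] - X[nbrs].mean(axis=0)
    # Left singular vectors of N.T span the dominant local directions;
    # for a d-dimensional manifold the first d singular values dominate.
    U, s, _ = np.linalg.svd(N.T, full_matrices=False)
    return U, s

# Points sampled from a 1-D curve (helix) embedded in 3-D.
t = np.linspace(0, 4 * np.pi, 400)
X = np.column_stack([np.cos(t), np.sin(t), 0.2 * t])
U, s = local_tangent_basis(X, 200, k=8)
print(s[0] / s[1])   # large ratio: the neighbourhood is nearly 1-D
```

The full algorithm then aligns these local bases into a single global coordinate system; the gap in the local singular-value spectrum is also how the intrinsic dimension reveals itself.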
Funding: supported by the National Natural Science Foundation of China (Nos. 51105052 and 61173163) and the Liaoning Provincial Natural Science Foundation of China (No. 201102037).
Abstract: Driven by the needs of real applications such as text categorization and image classification, multi-label learning has gradually become a hot research topic in recent years. Much attention has been paid to research on multi-label classification algorithms. Considering that the high dimensionality of multi-label datasets may cause the curse of dimensionality and will hamper the classification process, a dimensionality reduction algorithm, named multi-label kernel discriminant analysis (MLKDA), is proposed to reduce the dimensionality of multi-label datasets. MLKDA, with the kernel trick, processes the multi-label data integrally and realizes nonlinear dimensionality reduction with an idea similar to linear discriminant analysis (LDA). In the classification of multi-label data, the extreme learning machine (ELM) is an efficient algorithm with good accuracy. MLKDA, combined with ELM, shows good performance in multi-label learning experiments on several datasets. The experiments on both static data and data streams show that MLKDA outperforms multi-label dimensionality reduction via dependence maximization (MDDM) and multi-label linear discriminant analysis (MLDA) in cases of balanced datasets and stronger correlation between tags, and that ELM is also a good choice for multi-label classification.
Funding: supported by the National Natural Science Foundation of China (No. 11502211).
Abstract: In aerodynamic optimization, global optimization methods such as genetic algorithms are preferred in many cases because of their advantage in reaching the global optimum. However, for complex problems requiring a large number of design variables, the computational cost becomes prohibitive, and thus new global optimization strategies are required. To address this need, a data dimensionality reduction method is combined with global optimization methods, forming a new global optimization system that aims to improve the efficiency of conventional global optimization. The new optimization system applies Proper Orthogonal Decomposition (POD) to reduce the dimensionality of the design space while maintaining the generality of the original design space. In addition, an acceleration approach for sample calculation in surrogate modeling is applied to reduce the computational time while providing sufficient accuracy. Optimizations of the transonic airfoil RAE2822 and the transonic wing ONERA M6 are performed to demonstrate the effectiveness of the proposed optimization system. In the two cases, the number of design variables is reduced from 20 to 10 and from 42 to 20, respectively. The new design optimization system converges faster, taking 1/3 of the total time of traditional optimization to converge to a better design, thus significantly reducing the overall optimization time and improving the efficiency of the conventional global design optimization method.
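POD of a design space reduces to an SVD of a snapshot matrix: the leading left singular vectors become the new, fewer design variables. A minimal sketch with synthetic snapshots (the shape-parameter counts and energy threshold are assumptions for illustration, not the paper's settings):

```python
import numpy as np

# Hypothetical snapshot matrix: each column is one sampled geometry,
# described by 20 shape parameters, over 50 samples; the true
# variability lives in 4 dominant deformation modes plus small noise.
rng = np.random.default_rng(1)
modes_true = rng.normal(size=(20, 4))
coeffs = rng.normal(size=(4, 50))
snapshots = modes_true @ coeffs + 0.01 * rng.normal(size=(20, 50))

# POD = SVD of the mean-centred snapshot matrix; the leading left
# singular vectors are the POD modes used as reduced design variables.
S = snapshots - snapshots.mean(axis=1, keepdims=True)
U, sigma, _ = np.linalg.svd(S, full_matrices=False)
energy = np.cumsum(sigma ** 2) / np.sum(sigma ** 2)
r = int(np.searchsorted(energy, 0.99) + 1)   # modes retaining 99% energy
design_basis = U[:, :r]                      # 20-D space -> r-D space
print(r, design_basis.shape)
```

The optimizer then searches over the r POD coefficients instead of the original 20 parameters, which is the mechanism behind the 20-to-10 and 42-to-20 reductions reported above.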
Abstract: Dimensionality reduction and data visualization are useful and important processes in pattern recognition. Many techniques have been developed in recent years. The self-organizing map (SOM) can be an efficient method for this purpose. This paper reviews recent advances in this area and related approaches such as multidimensional scaling (MDS), nonlinear PCA, and principal manifolds, as well as the connections of the SOM and its recent variant, the visualization-induced SOM (ViSOM), with these approaches. The SOM is shown to produce a quantized, qualitative scaling, while the ViSOM produces a quantitative or metric scaling and approximates a principal curve/surface. The SOM can also be regarded as a generalized MDS that relates two metric spaces by forming a topological mapping between them. The relationships among various recently proposed techniques such as ViSOM, Isomap, LLE, and eigenmap are discussed and compared.
Abstract: The framework of a text classification system is presented, and the high dimensionality of the feature space for text classification is studied. Mutual information is a widely used information-theoretic measure of the stochastic dependency of discrete random variables. This measure is used as a criterion to reduce the high dimensionality of feature vectors in text classification on the Web. Feature selection or conversion is performed using maximum mutual information, including linear and nonlinear feature conversions. Entropy is used and extended to find the right features in pattern recognition systems, establishing a favorable foundation for text classification mining.
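The mutual-information criterion for feature selection is straightforward to compute from co-occurrence counts. A small self-contained sketch (the word/label toy data are assumptions for illustration):

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """I(X;Y) in bits for two paired sequences of discrete values."""
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    mi = 0.0
    for (x, y), c in pxy.items():
        p_xy = c / n
        # p(x,y) / (p(x) p(y)) written with raw counts: c*n / (px*py)
        mi += p_xy * math.log2(p_xy * n * n / (px[x] * py[y]))
    return mi

# Toy text-classification setting: feature = "word present in document",
# label = document class. A perfectly predictive word has maximal MI;
# a class-independent word has MI of zero.
labels = [0, 0, 0, 0, 1, 1, 1, 1]
word_a = [1, 1, 1, 1, 0, 0, 0, 0]   # always co-occurs with class 0
word_b = [1, 0, 1, 0, 1, 0, 1, 0]   # independent of the class
print(mutual_information(word_a, labels))  # 1.0 bit
print(mutual_information(word_b, labels))  # 0.0 bits
```

Ranking vocabulary terms by this score and keeping the top-k is the simplest form of the dimensionality reduction the abstract describes.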
Funding: supported by the Harbin Academic Pacesetter Foundation of China (Grant No. RC2012XK006002), the Zhejiang Provincial Natural Science Foundation of China (Grant No. Y1110262), the Ningbo Municipal Natural Science Foundation of China (Grant No. 2011A610148), the Ningbo Municipal Major Industrial Support Project of China (Grant No. 2011B1007), and the Heilongjiang Provincial Natural Science Foundation of China (Grant No. E2007-01).
Abstract: Arc sound is well known as a potential and available resource for monitoring and controlling the weld penetration status, which is very important to welding process quality control, so much attention has been paid to the relationships between arc sound and welding parameters. Some nonlinear mapping models correlating arc sound to welding parameters have been established with the help of neural networks. However, research on utilizing arc sound to monitor and diagnose the welding process is still in its infancy. A self-made real-time sensing system is applied to study arc sound under typical penetration statuses, including partial penetration, unstable penetration, full penetration, and excessive penetration, in metal inert-gas (MIG) flat tailored welding with spray transfer. The arc sound is pretreated using wavelet de-noising and short-time windowing technologies, and its time-domain, frequency-domain, cepstrum-domain, and geometric-domain characteristics, which characterize the weld penetration status, are extracted. Subsequently, a high-dimensional eigenvector is constructed and feature-level parameters are successfully fused using principal component analysis (PCA). Ultimately, the 60-dimensional eigenvector is replaced by a synthesized 8-dimensional vector, which compresses the feature space and provides technical support for future pattern classification of typical penetration statuses with the help of arc sound in MIG welding.
Funding: This work is supported by the National Natural Science Foundation of China [grant number 61801336], the China Postdoctoral Science Foundation [grant numbers 2019M662717 and 2017M622521], and the China Postdoctoral Program for Innovative Talent [grant number BX201700182].
Abstract: Graph learning is an effective manner of analyzing the intrinsic properties of data and has been widely used in the fields of dimensionality reduction and classification. In this paper, we focus on graph learning-based dimensionality reduction for hyperspectral images. Firstly, we review the development of graph learning and its application to hyperspectral images. Then, we discuss several representative graph methods, including two manifold learning methods, two sparse graph learning methods, and two hypergraph learning methods. For manifold learning, we analyze neighborhood preserving embedding and locality preserving projections, two classic manifold learning methods that can be transformed into the form of a graph. For sparse graphs, we introduce sparsity preserving graph embedding and sparse graph-based discriminant analysis, which can adaptively reveal the data structure to construct a graph. For hypergraph learning, we review the binary hypergraph and discriminant hyper-Laplacian projection, which can represent the high-order relationships of data.
Funding: Sponsored by the National Science Foundation of China (Grant Nos. 61201370 and 61100103) and the Independent Innovation Foundation of Shandong University (Grant No. 2012DX07).
Abstract: This paper presents two novel algorithms for feature extraction: Subpattern Complete Two-Dimensional Linear Discriminant Principal Component Analysis (SpC2DLDPCA) and Subpattern Complete Two-Dimensional Locality Preserving Principal Component Analysis (SpC2DLPPCA). The SpC2DLDPCA and SpC2DLPPCA algorithms benefit over their non-subpattern versions and the Subpattern Complete Two-Dimensional Principal Component Analysis (SpC2DPCA) method in four respects: (1) SpC2DLDPCA and SpC2DLPPCA avoid the problem that a larger-dimension matrix requires more computing time for its eigenvalues and eigenvectors. (2) SpC2DLDPCA and SpC2DLPPCA can extract local information for recognition. (3) The idea of subblocks is introduced into Two-Dimensional Principal Component Analysis (2DPCA) and Two-Dimensional Linear Discriminant Analysis (2DLDA); SpC2DLDPCA combines a discriminant analysis and a compression technique with low energy loss. (4) The same idea is introduced into 2DPCA and Two-Dimensional Locality Preserving Projections (2DLPP), so SpC2DLPPCA can preserve the local neighbor graph structure and compact feature expressions. Finally, experiments on the CASIA(B) gait database show that SpC2DLDPCA and SpC2DLPPCA achieve higher recognition accuracies than their non-subpattern versions and SpC2DPCA.
Abstract: Dimension reduction is defined as the process of projecting high-dimensional data to a much lower-dimensional space. Dimension reduction methods are variously applied in regression, classification, feature analysis, and visualization. In this paper, we review in detail the latest methods, which have been extensively developed in the past decade.
Funding: Supported by the National Natural Science Foundation of China (No. 60572135).
Abstract: The high dimensionality of hyperspectral imagery burdens further processing. A new Fast Independent Component Analysis (FastICA) approach to dimensionality reduction for hyperspectral imagery is presented. The virtual dimensionality is introduced to determine the number of dimensions to be preserved. Since there is no prioritization among the independent components generated by FastICA, the mixing matrix of FastICA is initialized with endmembers, which are extracted using an unsupervised maximum distance method. Minimum Noise Fraction (MNF) is used for preprocessing of the original data, which significantly reduces the computational complexity of FastICA. Finally, FastICA is performed on the selected principal components acquired by MNF to generate the expected independent components in the order of the endmembers. Experimental results demonstrate that the proposed method outperforms second-order statistics-based transforms such as principal component analysis.
Abstract: Psychometric theory requires unidimensionality (i.e., scale items should represent a common latent variable). One advocated approach to testing unidimensionality within the Rasch model is to identify two item sets from a Principal Component Analysis (PCA) of residuals, estimate separate person measures based on the two item sets, compare the two estimates on a person-by-person basis using t-tests, and determine the number of cases that differ significantly at the 0.05 level; if ≤5% of tests are significant, or the lower bound of a binomial 95% confidence interval (CI) of the observed proportion overlaps 5%, then strict unidimensionality can be inferred; otherwise the scale is multidimensional. Given its proposed significance and potential implications, this procedure needs detailed scrutiny. This paper explores the impact of sample size and of the method of estimating the 95% binomial CI upon conclusions according to the recommended conventions. Normal approximation, "exact", Wilson, Agresti-Coull, and Jeffreys binomial CIs were calculated for observed proportions of 0.06, 0.08, and 0.10 and sample sizes from n = 100 to n = 2500. The lower 95% CI boundaries were inspected regarding coverage of the 5% threshold. Results showed that all binomial 95% CIs both included and excluded 5% as an effect of sample size for all three investigated proportions, except for the Wilson, Agresti-Coull, and Jeffreys CIs, which did not include 5% for any sample size at a 10% observed proportion. The normal approximation CI was most sensitive to sample size. These data illustrate that the PCA/t-test protocol should be used and interpreted as any hypothesis testing procedure and is dependent on sample size as well as on the binomial CI estimation procedure.
The PCA/t-test protocol should not be viewed as a "definitive" test of unidimensionality and does not replace an integrated quantitative/qualitative interpretation based on an explicit variable definition in view of the perspective, context, and purpose of measurement.
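The sample-size effect described above is easy to reproduce: for a fixed observed proportion of 6%, the lower CI bound moves above the 5% threshold as n grows. A sketch comparing the Wilson score interval with the normal (Wald) approximation, using the standard closed-form expressions:

```python
import math

def wilson_ci(p_hat, n, z=1.959964):
    """Wilson score 95% confidence interval for a binomial proportion."""
    denom = 1 + z * z / n
    centre = (p_hat + z * z / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p_hat * (1 - p_hat) / n
                                   + z * z / (4 * n * n))
    return centre - half, centre + half

def normal_ci(p_hat, n, z=1.959964):
    """Normal-approximation (Wald) 95% interval, for comparison."""
    half = z * math.sqrt(p_hat * (1 - p_hat) / n)
    return p_hat - half, p_hat + half

# With 6% significant t-tests: does the lower bound cover the 5% threshold?
for n in (100, 2500):
    lo_w, _ = wilson_ci(0.06, n)
    lo_n, _ = normal_ci(0.06, n)
    print(n, "Wilson covers 5%:", lo_w < 0.05,
          "| Normal covers 5%:", lo_n < 0.05)
```

At n = 100 both lower bounds fall below 5% (unidimensionality would be inferred), while at n = 2500 neither does, so the same 6% proportion yields the opposite conclusion, which is the paper's central caution.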
Funding: supported by the National Natural Science Foundation of China (Nos. 10832012, 10872078, and 10972090) and the Scientific Advancing Front and Interdiscipline Innovation Project of Jilin University (No. 200903169).
Abstract: This paper establishes a nonlinear finite element model (NFEM) of the L4-L5 lumbar spinal segment with accurate three-dimensional solid ligaments and intervertebral disc. For this purpose, the intervertebral disc and surrounding ligaments are modeled with four-node three-dimensional tetrahedral elements with hyper-elastic material properties. A pure moment of 10 N·m without preload is applied to the upper vertebral body under the loading conditions of lateral bending, backward extension, torsion, and forward flexion, respectively. The simulated relationship curves between generalized forces and generalized displacements of the NFEM are compared with the in vitro experimental curves to verify the model. The verification shows that: (1) the simulated range of motion is in good agreement with the in vitro experimental data; (2) the NFEM reflects the actual mechanical properties more effectively than an FE model using cable and spring element ligaments; (3) the NFEM can be used as the basis for further research on lumbar degenerative diseases.