Journal Articles
3,661 articles found.
Birkhoff Orbits for Twist Homeomorphisms on the High-Dimensional Cylinder
1
Author: ZHOU Tong. Wuhan University Journal of Natural Sciences, 2025, No. 1, pp. 43-48 (6 pages).
It is known that monotone recurrence relations can induce a class of twist homeomorphisms on the high-dimensional cylinder, which is an extension of the class of monotone twist maps on the annulus or two-dimensional cylinder. By constructing a bounded solution of the monotone recurrence relation, the main conclusion of this paper is obtained: the induced homeomorphism has Birkhoff orbits provided there is a compact forward-invariant set. This generalizes Angenent's results in the low-dimensional case.
Keywords: monotone recurrence relation; twist homeomorphism; high-dimensional cylinder; bounded action; Birkhoff orbit
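For readers unfamiliar with the objects named in this abstract, the classical two-neighbor (low-dimensional) setting associated with Angenent's work is sketched below; this exact form is an illustrative assumption, not the paper's high-dimensional formulation, which couples more neighbors.

```latex
% Illustrative low-dimensional monotone recurrence relation (assumed form)
\[
  \Delta(x_{n-1},\,x_n,\,x_{n+1}) = 0 , \qquad n \in \mathbb{Z},
\]
\[
  \frac{\partial \Delta}{\partial x_{n-1}} > 0 , \qquad
  \frac{\partial \Delta}{\partial x_{n+1}} > 0 , \qquad
  \Delta(x_{n-1}+1,\,x_n+1,\,x_{n+1}+1) = \Delta(x_{n-1},\,x_n,\,x_{n+1}) .
\]
% A Birkhoff orbit is a solution $(x_n)_{n\in\mathbb{Z}}$ whose integer translates
% $(x_{n+k}+l)_{n\in\mathbb{Z}}$, $k,l\in\mathbb{Z}$, form a totally ordered family.
```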
Generalized Functional Linear Models:Efficient Modeling for High-dimensional Correlated Mixture Exposures
2
Authors: Bingsong Zhang, Haibin Yu, Xin Peng, Haiyi Yan, Siran Li, Shutong Luo, Renhuizi Wei, Zhujiang Zhou, Yalin Kuang, Yihuan Zheng, Chulan Ou, Linhua Liu, Yuehua Hu, Jindong Ni. Biomedical and Environmental Sciences, 2025, No. 8, pp. 961-976 (16 pages).
Objective: Humans are exposed to complex mixtures of environmental chemicals and other factors that can affect their health. Analysis of these mixture exposures presents several key challenges for environmental epidemiology and risk assessment, including high dimensionality, correlated exposures, and subtle individual effects. Methods: We proposed a novel statistical approach, the generalized functional linear model (GFLM), to analyze the health effects of exposure mixtures. GFLM treats the effect of mixture exposures as a smooth function by reordering exposures based on specific mechanisms and capturing internal correlations to provide a meaningful estimation and interpretation. Robustness and efficiency were evaluated under various scenarios through extensive simulation studies. Results: We applied the GFLM to two datasets from the National Health and Nutrition Examination Survey (NHANES). In the first application, we examined the effects of 37 nutrients on BMI (2011–2016 cycles). The GFLM identified a significant mixture effect, with fiber and fat emerging as the nutrients with the greatest negative and positive effects on BMI, respectively. In the second application, we investigated the association between four per- and polyfluoroalkyl substances (PFAS) and gout risk (2007–2018 cycles). Unlike traditional methods, the GFLM indicated no significant association, demonstrating its robustness to multicollinearity. Conclusion: The GFLM framework is a powerful tool for mixture exposure analysis, offering improved handling of correlated exposures and interpretable results. It demonstrates robust performance across various scenarios and real-world applications, advancing our understanding of complex environmental exposures and their health impacts in environmental epidemiology and toxicology.
Keywords: mixture exposure modeling; functional data analysis; high-dimensional data; correlated exposures; environmental epidemiology
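A minimal numerical sketch of the functional-linear-model idea described in the abstract: after the exposures are reordered, the smooth coefficient function is expanded over a small basis and fit by least squares. The function name, the polynomial basis, and all parameter values are illustrative assumptions, not the paper's GFLM implementation.

```python
# Minimal functional-linear-model sketch for reordered mixture exposures (illustrative only)
import numpy as np

def functional_linear_fit(X, y, n_basis=5):
    """X: (n, p) exposures already reordered by some mechanism; y: (n,) continuous outcome."""
    n, p = X.shape
    t = np.linspace(0.0, 1.0, p)                    # "index" of the reordered exposures
    B = np.vander(t, N=n_basis, increasing=True)    # (p, n_basis) polynomial basis
    Z = X @ B / p                                   # Riemann-sum approximation of ∫ X(t) B_k(t) dt
    Z = np.column_stack([np.ones(n), Z])            # intercept column
    coef, *_ = np.linalg.lstsq(Z, y, rcond=None)
    beta_t = B @ coef[1:]                           # reconstructed smooth effect curve beta(t)
    return coef[0], beta_t

# toy usage with synthetic data
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 37))
y = X.mean(axis=1) + rng.normal(scale=0.1, size=200)
intercept, beta_curve = functional_linear_fit(X, y)
```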
Decoherence of high-dimensional orbital angular momentum entanglement in anisotropic turbulence
3
Authors: Xiang Yan, Peng-Fei Zhang, Cheng-Yu Fan, Heng Zhao, Jing-Hui Zhang, Bo-Yun Wang, Jun-Yan Wang. Communications in Theoretical Physics, 2025, No. 4, pp. 39-44 (6 pages).
The decoherence of high-dimensional orbital angular momentum (OAM) entanglement in the weak scintillation regime has been investigated. In this study, we simulate atmospheric turbulence by utilizing a multiple-phase screen imprinted with anisotropic non-Kolmogorov turbulence. The entanglement negativity and fidelity are introduced to quantify the entanglement of a high-dimensional OAM state. The numerical evaluation results indicate that entanglement negativity and fidelity last longer for a high-dimensional OAM state when the azimuthal mode has a lower value. Additionally, the evolution of higher-dimensional OAM entanglement is significantly influenced by OAM beam parameters and turbulence parameters. Compared to isotropic atmospheric turbulence, anisotropic turbulence has a lesser influence on high-dimensional OAM entanglement.
Keywords: orbital angular momentum; high-dimensional entangled state; anisotropic turbulence
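For reference, the two quantities the abstract uses to track decoherence are the standard entanglement negativity and state fidelity; the definitions below are the usual ones, and the paper's normalization conventions may differ slightly.

```latex
% Standard definitions of the measures named in the abstract (conventions assumed)
\[
  \mathcal{N}(\rho) = \frac{\lVert \rho^{T_B} \rVert_1 - 1}{2},
  \qquad
  F = \langle \psi_{\mathrm{in}} | \, \rho_{\mathrm{out}} \, | \psi_{\mathrm{in}} \rangle ,
\]
% where $\rho^{T_B}$ is the partial transpose with respect to subsystem $B$,
% $\lVert \cdot \rVert_1$ is the trace norm, and $|\psi_{\mathrm{in}}\rangle$ is the
% initial high-dimensional OAM entangled state.
```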
An Efficient Reliability-Based Optimization Method Utilizing High-Dimensional Model Representation and Weight-Point Estimation Method (Cited by 1)
4
Authors: Xiaoyi Wang, Xinyue Chang, Wenxuan Wang, Zijie Qiao, Feng Zhang. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, No. 5, pp. 1775-1796 (22 pages).
The objective of reliability-based design optimization (RBDO) is to minimize the optimization objective while satisfying the corresponding reliability requirements. However, the nested-loop characteristic reduces the efficiency of RBDO algorithms, which hinders their application to high-dimensional engineering problems. To address these issues, this paper proposes an efficient decoupled RBDO method combining high-dimensional model representation (HDMR) and the weight-point estimation method (WPEM). First, we decouple the RBDO model using HDMR and WPEM. Second, Lagrange interpolation is used to approximate a univariate function. Finally, based on the results of the first two steps, the original nested-loop reliability optimization model is completely transformed into a deterministic design optimization model that can be solved by a series of mature constrained optimization methods without any additional calculations. Two numerical examples, a planar 10-bar structure and an aviation hydraulic piping system with 28 design variables, are analyzed to illustrate the performance and practicability of the proposed method.
Keywords: reliability-based design optimization; high-dimensional model decomposition; point estimation method; Lagrange interpolation; aviation hydraulic piping system
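A compact sketch of the two generic ingredients named in the abstract, a first-order cut-HDMR decomposition with Lagrange-interpolated univariate terms; the reliability analysis via WPEM and the decoupled RBDO formulation are not reproduced, and all function names here are illustrative.

```python
# First-order cut-HDMR surrogate with Lagrange-interpolated univariate terms (illustrative)
import numpy as np

def lagrange_1d(nodes, values):
    """Return a callable Lagrange interpolant through (nodes, values)."""
    def p(x):
        total = 0.0
        for j, (xj, yj) in enumerate(zip(nodes, values)):
            lj = np.prod([(x - xk) / (xj - xk) for k, xk in enumerate(nodes) if k != j])
            total += yj * lj
        return total
    return p

def hdmr_first_order(g, center, nodes_per_dim):
    """Build g(x) ≈ g(c) + Σ_i [g_i(x_i) - g(c)] from univariate cuts through the center."""
    g0 = g(center)
    interps = []
    for i, nodes in enumerate(nodes_per_dim):
        vals = []
        for xi in nodes:
            x = center.copy()
            x[i] = xi
            vals.append(g(x))
        interps.append(lagrange_1d(nodes, vals))
    return lambda x: g0 + sum(interps[i](x[i]) - g0 for i in range(len(center)))

# toy usage
g = lambda x: x[0] ** 2 + np.sin(x[1])
surrogate = hdmr_first_order(g, np.array([0.0, 0.0]), [np.linspace(-1, 1, 5)] * 2)
```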
Efficient anti-aliasing and anti-leakage Fourier transform for high-dimensional seismic data regularization using cube removal and GPU
5
Authors: Lu Liu, Sindi Ghada, Fu-Hao Qin, Youngseo Kim, Vladimir Aleksic, Hong-Wei Liu. Petroleum Science (SCIE, EI, CAS, CSCD), 2024, No. 5, pp. 3079-3089 (11 pages).
Seismic data is commonly acquired sparsely and irregularly, which necessitates the regularization of seismic data with anti-aliasing and anti-leakage methods during seismic data processing. We propose a novel method of 4D anti-aliasing and anti-leakage Fourier transform using a cube-removal strategy to address the combination of irregular sampling and aliasing in high-dimensional seismic data. We compute a weighting function by stacking the spectrum along the radial lines, apply this function to suppress the aliasing energy, and then iteratively pick the dominant amplitude cube to construct the Fourier spectrum. The proposed method is very efficient due to the cube-removal strategy, which accelerates the convergence of the Fourier reconstruction, and a well-designed parallel architecture using CPU/GPU collaborative computing. To better fill the acquisition holes in 5D seismic data while respecting the GPU memory limitation, we developed the anti-aliasing and anti-leakage Fourier transform method in 4D, with the remaining spatial dimension looped. The entire workflow is composed of three steps: data splitting, 4D regularization, and data merging. Numerical tests on both synthetic and field data examples demonstrate the high efficiency and effectiveness of our approach.
Keywords: high-dimensional regularization; GPU; anti-aliasing; anti-leakage
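A 1D sketch of the anti-leakage Fourier transform idea underlying the abstract: iteratively estimate Fourier coefficients on the irregularly sampled residual, keep the dominant component (the "cube" in the paper's 4D setting), and subtract its contribution. The anti-aliasing radial weighting, the 4D cube removal, and the GPU architecture are not reproduced; names and parameters are illustrative.

```python
# 1D anti-leakage Fourier transform sketch for irregularly sampled data (illustrative)
import numpy as np

def alft_1d(x, d, freqs, n_iter=50):
    """x: irregular sample positions, d: data values, freqs: candidate frequencies."""
    residual = d.astype(complex).copy()
    spectrum = np.zeros(len(freqs), dtype=complex)
    for _ in range(n_iter):
        # estimate all coefficients on the current residual
        coeffs = np.array([np.sum(residual * np.exp(-2j * np.pi * f * x)) / len(x)
                           for f in freqs])
        k = np.argmax(np.abs(coeffs))                               # dominant component
        spectrum[k] += coeffs[k]
        residual -= coeffs[k] * np.exp(2j * np.pi * freqs[k] * x)   # remove its leakage
    return spectrum

def reconstruct(x_new, freqs, spectrum):
    """Evaluate the reconstructed signal on a regular grid x_new."""
    return np.real(sum(c * np.exp(2j * np.pi * f * x_new) for f, c in zip(freqs, spectrum)))
```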
Target Controllability of Multi-Layer Networks With High-Dimensional Nodes
6
Authors: Lifu Wang, Zhaofei Li, Ge Guo, Zhi Kong. IEEE/CAA Journal of Automatica Sinica (SCIE, EI, CSCD), 2024, No. 9, pp. 1999-2010 (12 pages).
This paper studies the target controllability of multi-layer complex networked systems, in which the nodes are high-dimensional linear time-invariant (LTI) dynamical systems and the network topology is directed and weighted. The influence of inter-layer couplings on the target controllability of multi-layer networks is discussed. It is found that even if there exists a layer which is not target controllable, the entire multi-layer network can still be target controllable due to the inter-layer couplings. For multi-layer networks with a general structure, a necessary and sufficient condition for target controllability is given by establishing the relationship between the uncontrollable subspace and the output matrix. From the derived condition, it can be seen that the system may be target controllable even if it is not state controllable. On this basis, two corollaries are derived, which clarify the relationship between target controllability, state controllability, and output controllability. For multi-layer networks where the inter-layer couplings are directed chains and directed stars, sufficient conditions for target controllability of networked systems are given, respectively. These conditions are easier to verify than the classic criterion.
Keywords: high-dimensional nodes; inter-layer couplings; multi-layer networks; target controllability
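For context, the classic rank test the abstract alludes to (hard to verify for large networks, which is why the paper derives easier conditions) can be sketched as follows; this is the baseline output/target controllability check, not the paper's criterion.

```python
# Classic rank test: rank(C [B, AB, ..., A^{n-1}B]) == rank(C)  (baseline, illustrative)
import numpy as np

def target_controllable(A, B, C):
    """A: (n, n) dynamics, B: (n, m) inputs, C: (p, n) selects the target nodes."""
    n = A.shape[0]
    blocks, AkB = [], B.copy()
    for _ in range(n):
        blocks.append(C @ AkB)
        AkB = A @ AkB
    M = np.hstack(blocks)
    return np.linalg.matrix_rank(M) == np.linalg.matrix_rank(C)

# toy usage: a 3-node directed chain driven at node 1, target node 3
A = np.array([[0., 0, 0], [1, 0, 0], [0, 1, 0]])
B = np.array([[1.], [0], [0]])
C = np.array([[0., 0, 1]])
print(target_controllable(A, B, C))   # True: node 3 can be steered from node 1
```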
Multi-Objective Equilibrium Optimizer for Feature Selection in High-Dimensional English Speech Emotion Recognition
7
Authors: Liya Yue, Pei Hu, Shu-Chuan Chu, Jeng-Shyang Pan. Computers, Materials & Continua (SCIE, EI), 2024, No. 2, pp. 1957-1975 (19 pages).
Speech emotion recognition (SER) uses acoustic analysis to find features for emotion recognition and examines variations in voice that are caused by emotions. The number of features acquired with acoustic analysis is extremely high, so we introduce a hybrid filter-wrapper feature selection algorithm based on an improved equilibrium optimizer for constructing an emotion recognition system. The proposed algorithm implements multi-objective emotion recognition with the minimum number of selected features and maximum accuracy. First, we use the information gain and Fisher Score to sort the features extracted from signals. Then, we employ a multi-objective ranking method to evaluate these features and assign different importance to them. Features with high rankings have a large probability of being selected. Finally, we propose a repair strategy to address the problem of duplicate solutions in multi-objective feature selection, which can improve the diversity of solutions and avoid falling into local traps. Using random forest and K-nearest neighbor classifiers, four English speech emotion datasets are employed to test the proposed algorithm (MBEO) as well as other multi-objective emotion identification techniques. The results illustrate that it performs well in inverted generational distance, hypervolume, Pareto solutions, and execution time, and that MBEO is appropriate for high-dimensional English SER.
Keywords: speech emotion recognition; filter-wrapper; high-dimensional; feature selection; equilibrium optimizer; multi-objective
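A sketch of the filter stage described in the abstract, ranking features by Fisher Score and an information-gain-style measure before the wrapper search; `mutual_info_classif` is used here as a stand-in for information gain, and the rank-combination rule is an illustrative assumption. The multi-objective equilibrium optimizer itself is not reproduced.

```python
# Filter-stage feature ranking by Fisher Score + mutual information (illustrative)
import numpy as np
from sklearn.feature_selection import mutual_info_classif

def fisher_score(X, y):
    classes = np.unique(y)
    overall_mean = X.mean(axis=0)
    num = np.zeros(X.shape[1])
    den = np.zeros(X.shape[1])
    for c in classes:
        Xc = X[y == c]
        num += len(Xc) * (Xc.mean(axis=0) - overall_mean) ** 2
        den += len(Xc) * Xc.var(axis=0)
    return num / np.maximum(den, 1e-12)

def combined_ranking(X, y):
    fs = fisher_score(X, y)
    ig = mutual_info_classif(X, y, random_state=0)
    # sum of the two rank positions; higher combined rank => larger selection probability
    rank = np.argsort(np.argsort(fs)) + np.argsort(np.argsort(ig))
    return np.argsort(-rank)          # feature indices, most important first
```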
Censored Composite Conditional Quantile Screening for High-Dimensional Survival Data
8
Authors: LIU Wei, LI Yingqiu. 应用概率统计 (CSCD, PKU Core), 2024, No. 5, pp. 783-799 (17 pages).
In this paper, we introduce the censored composite conditional quantile coefficient (cCCQC) to rank the relative importance of each predictor in high-dimensional censored regression. The cCCQC takes advantage of all useful information across quantiles and can effectively detect nonlinear effects, including interactions and heterogeneity. Furthermore, the proposed screening method based on the cCCQC is robust to the existence of outliers and enjoys the sure screening property. Simulation results demonstrate that the proposed method performs competitively on survival datasets with high-dimensional predictors, particularly when the variables are highly correlated.
Keywords: high-dimensional survival data; censored composite conditional quantile coefficient; sure screening property; rank consistency property
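To convey the "aggregate information across several quantile levels, then rank predictors" idea in the uncensored case only, here is a marginal composite-quantile screening sketch; the paper's cCCQC additionally handles censoring through a different coefficient, so the utility below, the quantile levels, and the function names are all illustrative assumptions.

```python
# Marginal composite-quantile screening sketch (uncensored case, illustrative)
import numpy as np
import statsmodels.api as sm
from statsmodels.regression.quantile_regression import QuantReg

def check_loss(u, tau):
    return np.mean(u * (tau - (u < 0)))

def composite_quantile_utility(x, y, taus=(0.25, 0.5, 0.75)):
    """Average relative reduction in check loss from regressing y on a single predictor x."""
    X = sm.add_constant(x)
    utility = 0.0
    for tau in taus:
        null_loss = check_loss(y - np.quantile(y, tau), tau)
        fit = QuantReg(y, X).fit(q=tau)
        utility += 1.0 - check_loss(y - fit.predict(X), tau) / null_loss
    return utility / len(taus)

def screen(X, y, keep):
    """Rank predictors by the composite utility and keep the top `keep` of them."""
    scores = np.array([composite_quantile_utility(X[:, j], y) for j in range(X.shape[1])])
    return np.argsort(-scores)[:keep]
```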
A State-Migration Particle Swarm Optimizer for Adaptive Latent Factor Analysis of High-Dimensional and Incomplete Data
9
Authors: Jiufang Chen, Kechen Liu, Xin Luo, Ye Yuan, Khaled Sedraoui, Yusuf Al-Turki, MengChu Zhou. IEEE/CAA Journal of Automatica Sinica (SCIE, EI, CSCD), 2024, No. 11, pp. 2220-2235 (16 pages).
High-dimensional and incomplete (HDI) matrices are primarily generated in all kinds of big-data-related practical applications. A latent factor analysis (LFA) model is capable of conducting efficient representation learning on an HDI matrix, and its hyper-parameter adaptation can be implemented through a particle swarm optimizer (PSO) to meet scalability requirements. However, conventional PSO is limited by premature convergence, which leads to accuracy loss in the resultant LFA model. To address this thorny issue, this study merges the information of each particle's state migration into its evolution process, following the principle of a generalized momentum method, to improve its search ability, thereby building a state-migration particle swarm optimizer (SPSO) whose theoretical convergence is rigorously proved in this study. It is then incorporated into an LFA model for implementing efficient hyper-parameter adaptation without accuracy loss. Experiments on six HDI matrices indicate that an SPSO-incorporated LFA model outperforms state-of-the-art LFA models in terms of prediction accuracy for the missing data of an HDI matrix, with competitive computational efficiency. Hence, SPSO ensures efficient and reliable hyper-parameter adaptation in an LFA model, thus ensuring practicality and accurate representation learning for HDI matrices.
Keywords: data science; generalized momentum; high-dimensional and incomplete (HDI) data; hyper-parameter adaptation; latent factor analysis (LFA); particle swarm optimization (PSO)
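A minimal sketch of a PSO velocity update augmented with a generalized-momentum term built from the particle's own state migration, which is the flavor of modification the abstract describes; the coefficient names, the extra term's exact form, and the LFA coupling are illustrative assumptions, not the SPSO update from the paper.

```python
# PSO step with an added state-migration (generalized-momentum) term (illustrative)
import numpy as np

def spso_step(x, v, pbest, gbest, x_prev, w=0.7, c1=1.5, c2=1.5, lam=0.3, rng=np.random):
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v_new = (w * v
             + c1 * r1 * (pbest - x)
             + c2 * r2 * (gbest - x)
             + lam * (x - x_prev))        # momentum from the particle's own state migration
    x_new = x + v_new
    return x_new, v_new

# In an LFA context, x would encode hyper-parameters (e.g., learning rate, regularization),
# and each particle's fitness would be the validation error of the resulting latent factor model.
```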
Efficient Hierarchical Kriging Modeling Method for High-dimension Multi-fidelity Problems
10
Authors: Youwei He, Jinliang Luo. Chinese Journal of Mechanical Engineering (CSCD), 2024, No. 6, pp. 286-302 (17 pages).
The multi-fidelity Kriging model is a promising technique in surrogate-based design, balancing model accuracy and the cost of sample generation by combining low- and high-fidelity data. However, the cost of building a multi-fidelity Kriging model increases significantly as problem complexity grows. To address this issue, we propose an efficient Hierarchical Kriging modeling method. In building the low-fidelity model, distance correlation is used to determine the relative value of the hyperparameter. This transforms the maximum likelihood estimation problem into a one-dimensional optimization task, which can be solved efficiently, significantly improving modeling efficiency. The high-fidelity model is built similarly, with the low-fidelity model's hyperparameter used as the relative value for the high-fidelity model's hyperparameter. The proposed method's effectiveness is evaluated through analytical problems and a real-world engineering problem involving modeling the isentropic efficiency of a compressor rotor. Experimental results show that the proposed method reduces modeling time significantly without compromising accuracy. For the compressor rotor isentropic efficiency model, the proposed method yields over 99% cost savings compared to conventional approaches, while also achieving higher accuracy.
Keywords: surrogate; multi-fidelity model; Hierarchical Kriging; high-dimension modeling
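A sketch of the distance-correlation computation that the abstract uses to fix the relative values of the Kriging hyperparameters, so that only one global scale needs to be optimized; the weighting rule in `relative_lengthscales` is an illustrative assumption, while the distance-correlation formula itself is the standard biased sample estimator.

```python
# Distance correlation between each design variable and the response (illustrative weighting)
import numpy as np

def distance_correlation(x, y):
    x, y = np.asarray(x, float).ravel(), np.asarray(y, float).ravel()
    a = np.abs(x[:, None] - x[None, :])
    b = np.abs(y[:, None] - y[None, :])
    A = a - a.mean(0) - a.mean(1)[:, None] + a.mean()     # double centering
    B = b - b.mean(0) - b.mean(1)[:, None] + b.mean()
    dcov2 = (A * B).mean()
    dvar_x, dvar_y = (A * A).mean(), (B * B).mean()
    return np.sqrt(dcov2 / np.sqrt(dvar_x * dvar_y)) if dvar_x * dvar_y > 0 else 0.0

def relative_lengthscales(X, y):
    """One weight per design variable; a single global scale then remains to be optimized."""
    w = np.array([distance_correlation(X[:, j], y) for j in range(X.shape[1])])
    return w / w.max()
```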
Boosted Spider Wasp Optimizer for High-dimensional Feature Selection
11
Authors: Elfadil A. Mohamed, Malik Sh. Braik, Mohammed Azmi Al-Betar, Mohammed A. Awadallah. Journal of Bionic Engineering (CSCD), 2024, No. 5, pp. 2424-2459 (36 pages).
With the increasing dimensionality of data, High-dimensional Feature Selection (HFS) becomes an increasingly difficult task. It is not simple to find the best subset of features due to the breadth of the search space and the intricacy of the interactions between features. Many of the Feature Selection (FS) approaches now in use for these problems perform significantly less well when faced with such intricate situations involving high-dimensional search spaces. It has been demonstrated that meta-heuristic algorithms can provide sub-optimal results in an acceptable amount of time. This paper presents a new binary Boosted version of the Spider Wasp Optimizer (SWO), called Binary Boosted SWO (BBSWO), which combines a number of successful and promising strategies in order to deal with HFS. The shortcomings of the original binary SWO (BSWO), including early convergence, settling into local optima, limited exploration and exploitation, and lack of population diversity, are addressed by this new variant. The concept of chaos optimization is introduced into BSWO, where initialization is consistently produced by utilizing the properties of the sine chaos mapping. A new convergence parameter is then incorporated into BSWO to achieve a promising balance between exploration and exploitation. Multiple exploration mechanisms are then applied in conjunction with several exploitation strategies to effectively enrich the search process within the search space. Finally, quantum-based optimization is added to enhance the diversity of the search agents. The proposed BBSWO not only offers the most suitable subset of features located, but also lessens the data's redundancy structure. BBSWO was evaluated using the k-Nearest Neighbor (k-NN) classifier on 23 HFS problems from the biomedical domain taken from the UCI repository. The results were compared with those of the traditional BSWO and other well-known meta-heuristic-based FS methods. The findings indicate that, in comparison to other competing techniques, the proposed BBSWO can, on average, identify the smallest subsets of features while maintaining efficient classification accuracy with the k-NN classifier.
Keywords: high-dimensional features; SWO algorithm; feature selection; optimization; machine learning
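One of the ingredients the abstract names, sine-chaos-map population initialization followed by binarization into candidate feature subsets, can be sketched as below; the map parameter, threshold, and seeding are illustrative choices rather than the exact BBSWO settings.

```python
# Sine-chaos-map initialization for a binary feature-selection population (illustrative)
import numpy as np

def sine_chaos_population(n_agents, n_features, mu=1.0, seed=0):
    rng = np.random.default_rng(seed)
    pop = np.empty((n_agents, n_features))
    x = rng.uniform(0.05, 0.95, size=n_features)   # avoid the fixed point at 0
    for i in range(n_agents):
        x = mu * np.sin(np.pi * x)                  # sine chaotic map iteration
        pop[i] = x
    return pop

def binarize(pop, threshold=0.5):
    """1 = feature selected, 0 = dropped; each row is one candidate feature subset."""
    return (pop > threshold).astype(int)

subsets = binarize(sine_chaos_population(30, 100))
```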
Optimal Estimation of High-Dimensional Covariance Matrices with Missing and Noisy Data
12
Authors: Meiyin Wang, Wanzhou Ye. Advances in Pure Mathematics, 2024, No. 4, pp. 214-227 (14 pages).
The estimation of covariance matrices is very important in many fields, such as statistics. In real applications, data are frequently influenced by high dimensions and noise. However, most relevant studies are based on complete data. This paper studies the optimal estimation of high-dimensional covariance matrices based on missing and noisy samples under the norm. First, a model with sub-Gaussian additive noise is presented. The generalized sample covariance is then modified to define a hard thresholding estimator, and the minimax upper bound is derived. After that, the minimax lower bound is derived, and it is concluded that the estimator presented in this article is rate-optimal. Finally, numerical simulation analysis is performed. The results show that for missing samples with sub-Gaussian noise, if the true covariance matrix is sparse, the hard thresholding estimator outperforms the traditional estimation method.
Keywords: high-dimensional covariance matrix; missing data; sub-Gaussian noise; optimal estimation
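A minimal sketch of a hard-thresholding covariance estimator built on a pairwise-complete generalized sample covariance, the two ingredients the abstract names; the noise correction and the theoretically optimal threshold level from the paper are not reproduced, and the function names are illustrative.

```python
# Hard-thresholding covariance estimation with missing entries (illustrative)
import numpy as np

def generalized_sample_cov(X):
    """X: (n, p) data matrix with np.nan marking missing entries."""
    obs = ~np.isnan(X)
    Xc = np.where(obs, X - np.nanmean(X, axis=0), 0.0)
    counts = obs.T.astype(float) @ obs.astype(float)       # pairwise sample sizes n_jk
    return (Xc.T @ Xc) / np.maximum(counts, 1.0)

def hard_threshold(S, lam):
    """Zero out small off-diagonal entries; keep the diagonal untouched."""
    T = np.where(np.abs(S) >= lam, S, 0.0)
    np.fill_diagonal(T, np.diag(S))
    return T
```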
Chip-Based High-Dimensional Optical Neural Network (Cited by 8)
13
Authors: Xinyu Wang, Peng Xie, Bohan Chen, Xingcai Zhang. Nano-Micro Letters (SCIE, EI, CAS, CSCD), 2022, No. 12, pp. 570-578 (9 pages).
Parallel multi-thread processing in advanced intelligent processors is the core to realizing high-speed and high-capacity signal processing systems. Optical neural networks (ONNs) have the native advantages of high parallelization, large bandwidth, and low power consumption to meet the demands of big data. Here, we demonstrate a dual-layer ONN with a Mach-Zehnder interferometer (MZI) network and a nonlinear layer, in which the nonlinear activation function is achieved by optical-electronic signal conversion. Two frequency components from the microcomb source carrying digit datasets are simultaneously imposed and intelligently recognized through the ONN. We successfully achieve the digit classification of different frequency components by demultiplexing the output signal and testing the power distribution. Efficient parallelization feasibility with wavelength division multiplexing is demonstrated in our high-dimensional ONN. This work provides a high-performance architecture for future parallel high-capacity optical analog computing.
Keywords: integrated optics; optical neural network; high-dimension; Mach-Zehnder interferometer; nonlinear activation function; parallel high-capacity analog computing
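For readers unfamiliar with the MZI building block mentioned in the abstract, the sketch below composes a single 2x2 MZI transfer matrix from two 50:50 couplers and two phase shifters; this is one common convention, and the decomposition used on the paper's chip may differ. An MZI mesh composes many such blocks into a larger programmable unitary.

```python
# 2x2 Mach-Zehnder interferometer transfer matrix from couplers and phase shifters (one convention)
import numpy as np

BS = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)             # 50:50 directional coupler

def mzi(theta, phi):
    internal = np.diag([np.exp(1j * theta), 1.0])           # phase shifter between the couplers
    external = np.diag([np.exp(1j * phi), 1.0])              # phase shifter on one input arm
    return BS @ internal @ BS @ external

U = mzi(0.3, 1.1)
print(np.allclose(U.conj().T @ U, np.eye(2)))                # unitary: True
```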
Guaranteed Cost Consensus for High-dimensional Multi-agent Systems With Time-varying Delays (Cited by 8)
14
Authors: Zhong Wang, Ming He, Tang Zheng, Zhiliang Fan, Guangbin Liu. IEEE/CAA Journal of Automatica Sinica (SCIE, EI, CSCD), 2018, No. 1, pp. 181-189 (9 pages).
Guaranteed cost consensus analysis and design problems for high-dimensional multi-agent systems with time-varying delays are investigated. The idea of guaranteed cost control is introduced into consensus problems for high-dimensional multi-agent systems with time-varying delays, where a cost function is defined based on state errors among neighboring agents and the control inputs of all the agents. By the state space decomposition approach and the linear matrix inequality (LMI) technique, sufficient conditions for guaranteed cost consensus and consensualization are given. Moreover, a guaranteed cost upper bound of the cost function is determined. It should be mentioned that these LMI criteria are dependent on the change rate of the time delays and the maximum time delay, while the guaranteed cost upper bound is only dependent on the maximum time delay and independent of the Laplacian matrix. Finally, numerical simulations are given to demonstrate the theoretical results.
Keywords: guaranteed cost consensus; high-dimensional multi-agent system; time-varying delay
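A typical structure for the cost function described in the abstract, penalizing state errors between neighboring agents together with the control effort of all agents, is sketched below; the weighting matrices and index sets are illustrative, and the paper's definition may differ in detail.

```latex
% Illustrative guaranteed-cost function (assumed form)
\[
  J = \int_{0}^{\infty}\Biggl[\sum_{i=1}^{N}\sum_{j\in\mathcal{N}_i} w_{ij}\,
      \bigl(x_i(t)-x_j(t)\bigr)^{\top} Q \,\bigl(x_i(t)-x_j(t)\bigr)
      + \sum_{i=1}^{N} u_i(t)^{\top} R\, u_i(t)\Biggr]\,dt ,
  \qquad Q \succeq 0,\; R \succ 0 .
\]
% Guaranteed cost consensus then means consensus is reached while J stays below a finite bound J*.
```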
Similarity measurement method of high-dimensional data based on normalized net lattice subspace (Cited by 4)
15
Authors: 李文法 (Li Wenfa), Wang Gongming, Li Ke, Huang Su. High Technology Letters (EI, CAS), 2017, No. 2, pp. 179-184 (6 pages).
The performance of conventional similarity measurement methods is affected seriously by the curse of dimensionality of high-dimensional data. The reason is that the data difference between sparse and noisy dimensions occupies a large proportion of the similarity, leading to dissimilarity between any two results. A similarity measurement method for high-dimensional data based on a normalized net lattice subspace is proposed. The data range of each dimension is divided into several intervals, and the components in different dimensions are mapped onto the corresponding intervals. Only the components in the same or adjacent intervals are used to calculate the similarity. To validate this method, three data types are used, and seven common similarity measurement methods are compared. The experimental results indicate that the relative difference of the method increases with the dimensionality and is approximately two or three orders of magnitude higher than that of the conventional methods. In addition, the similarity range of this method in different dimensions is [0, 1], which is fit for similarity analysis after dimensionality reduction.
Keywords: high-dimensional data; curse of dimensionality; similarity; normalization; subspace; NPsim
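A minimal sketch of the interval-based idea in the abstract: normalize each dimension, bin it into intervals, and let only dimensions whose two components fall into the same or adjacent intervals contribute to the similarity. The interval count and the per-dimension similarity formula are illustrative choices, not necessarily those of the NPsim method.

```python
# Normalized net-lattice-subspace style similarity (illustrative)
import numpy as np

def lattice_similarity(a, b, lo, hi, n_bins=10):
    a_n = (a - lo) / (hi - lo)                          # normalize each dimension to [0, 1]
    b_n = (b - lo) / (hi - lo)
    ia = np.minimum((a_n * n_bins).astype(int), n_bins - 1)
    ib = np.minimum((b_n * n_bins).astype(int), n_bins - 1)
    usable = np.abs(ia - ib) <= 1                       # same or adjacent interval only
    if not usable.any():
        return 0.0
    per_dim = 1.0 - np.abs(a_n[usable] - b_n[usable])   # closer components -> higher similarity
    return float(per_dim.mean())                        # result stays in [0, 1]
```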
A Sequence Image Matching Method Based on Improved High-Dimensional Combined Features (Cited by 2)
16
Authors: Leng Xuefei, Gong Zhe, Fu Runzhe, Liu Yang. Transactions of Nanjing University of Aeronautics and Astronautics (EI, CSCD), 2018, No. 5, pp. 820-828 (9 pages).
Image matching technology is theoretically significant and practically promising in the field of autonomous navigation. Addressing the shortcomings of existing image matching navigation technologies, the concept of the high-dimensional combined feature is presented based on sequence image matching navigation. To balance the distribution of high-dimensional combined features against the shortcomings of using geometric relations alone, we propose a method based on Delaunay triangulation to improve the feature, adding the regional characteristics of the features to their geometric characteristics. Finally, the k-nearest neighbor (KNN) algorithm is adopted to optimize the searching process. Simulation results show that matching can be realized at rotation angles from -8° to 8° and scale factors from 0.9 to 1.1, and when the image size is 160 pixel x 160 pixel, the matching time is less than 0.5 s. Therefore, the proposed algorithm can substantially reduce computational complexity, improve the matching speed, and exhibit robustness to rotation and scale changes.
Keywords: sequence image matching; navigation; Delaunay triangulation; high-dimensional combined feature; k-nearest neighbor
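A sketch of the two generic building blocks the abstract names, a Delaunay triangulation over feature points (to attach neighborhood structure to each feature) and a k-nearest-neighbor search to speed up candidate matching; the descriptor construction and the full sequence-matching pipeline of the paper are not reproduced, and the function names are illustrative.

```python
# Delaunay neighborhoods + KNN candidate search for feature matching (illustrative)
import numpy as np
from scipy.spatial import Delaunay, cKDTree

def delaunay_neighbors(points):
    """points: (m, 2) feature coordinates -> dict {vertex index: set of adjacent vertices}."""
    tri = Delaunay(points)
    neighbors = {i: set() for i in range(len(points))}
    for simplex in tri.simplices:
        for u in simplex:
            neighbors[u].update(v for v in simplex if v != u)
    return neighbors

def knn_match(desc_a, desc_b, k=1):
    """For each descriptor in A, return the indices of its k nearest descriptors in B."""
    tree = cKDTree(desc_b)
    _, idx = tree.query(desc_a, k=k)
    return idx

pts = np.random.rand(50, 2) * 160          # toy feature locations in a 160 x 160 image
adj = delaunay_neighbors(pts)
```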
Randomized Latent Factor Model for High-dimensional and Sparse Matrices from Industrial Applications (Cited by 14)
17
Authors: Mingsheng Shang, Xin Luo, Zhigang Liu, Jia Chen, Ye Yuan, MengChu Zhou. IEEE/CAA Journal of Automatica Sinica (EI, CSCD), 2019, No. 1, pp. 131-141 (11 pages).
Latent factor (LF) models are highly effective in extracting useful knowledge from high-dimensional and sparse (HiDS) matrices, which are commonly seen in various industrial applications. An LF model usually adopts iterative optimizers, which may consume many iterations to reach a local optimum, resulting in considerable time cost. Hence, determining how to accelerate the training process for LF models has become a significant issue. To address this, this work proposes a randomized latent factor (RLF) model. It incorporates the principle of randomized learning techniques from neural networks into the LF analysis of HiDS matrices, thereby greatly alleviating the computational burden. It also extends a standard learning process for randomized neural networks to the context of LF analysis so that the resulting model represents an HiDS matrix correctly. Experimental results on three HiDS matrices from industrial applications demonstrate that, compared with state-of-the-art LF models, RLF is able to achieve significantly higher computational efficiency and comparable prediction accuracy for missing data. It provides an important alternative approach to LF analysis of HiDS matrices, which is especially desirable for industrial applications demanding highly efficient models.
Keywords: big data; high-dimensional and sparse matrix; latent factor analysis; latent factor model; randomized learning
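A minimal sketch of a randomized-learning-style latent factor model for an HiDS matrix: one factor matrix is generated randomly and kept fixed, and the other is solved analytically row by row over the observed entries only. This conveys the "randomized learning instead of full iterative optimization" idea; the paper's RLF model differs in detail, and all names and parameter values are illustrative.

```python
# Randomized latent factor sketch: random item factors, closed-form user factors (illustrative)
import numpy as np

def randomized_lf(R_obs, mask, rank=20, lam=0.1, seed=0):
    """R_obs: (m, n) ratings with zeros where mask == 0; mask: (m, n) 0/1 observation matrix."""
    rng = np.random.default_rng(seed)
    m, n = R_obs.shape
    Q = rng.normal(scale=1.0 / np.sqrt(rank), size=(n, rank))   # random, fixed item factors
    P = np.zeros((m, rank))
    for i in range(m):                                           # ridge solve per user row
        idx = np.flatnonzero(mask[i])
        if idx.size == 0:
            continue
        Qi = Q[idx]
        P[i] = np.linalg.solve(Qi.T @ Qi + lam * np.eye(rank), Qi.T @ R_obs[i, idx])
    return P, Q            # prediction for a missing entry (i, j) is P[i] @ Q[j]
```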
Robust Latent Factor Analysis for Precise Representation of High-Dimensional and Sparse Data (Cited by 5)
18
Authors: Di Wu, Xin Luo. IEEE/CAA Journal of Automatica Sinica (SCIE, EI, CSCD), 2021, No. 4, pp. 796-805 (10 pages).
High-dimensional and sparse (HiDS) matrices commonly arise in various industrial applications, e.g., recommender systems (RSs), social networks, and wireless sensor networks. Since they contain rich information, how to accurately represent them is of great significance. A latent factor (LF) model is one of the most popular and successful ways to address this issue. Current LF models mostly adopt an L2-norm-oriented loss to represent an HiDS matrix, i.e., they sum the errors between observed data and predicted ones with the L2-norm. Yet the L2-norm is sensitive to outlier data, and unfortunately, outlier data usually exist in such matrices. For example, an HiDS matrix from RSs commonly contains many outlier ratings due to heedless or malicious users. To address this issue, this work proposes a smooth L1-norm-oriented latent factor (SL-LF) model. Its main idea is to adopt the smooth L1-norm rather than the L2-norm to form its loss, giving it both strong robustness and high accuracy in predicting the missing data of an HiDS matrix. Experimental results on eight HiDS matrices generated by industrial applications verify that the proposed SL-LF model is not only robust to outlier data but also achieves significantly higher prediction accuracy than state-of-the-art models when predicting the missing data of HiDS matrices.
Keywords: high-dimensional and sparse matrix; L1-norm; L2-norm; latent factor model; recommender system; smooth L1-norm
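A sketch of a smooth L1 (Huber-style) loss on the prediction error of observed entries, the ingredient that makes such a model robust to outliers; the transition width `delta` and this exact smooth-L1 form are a common choice and may differ from the paper's definition.

```python
# Smooth L1 loss: quadratic near zero (smooth, accurate), linear in the tails (outlier-robust)
import numpy as np

def smooth_l1(err, delta=1.0):
    abs_e = np.abs(err)
    return np.where(abs_e < delta, 0.5 * err ** 2 / delta, abs_e - 0.5 * delta)

# compare with plain L2 on an outlier-contaminated error vector
err = np.array([0.1, -0.2, 0.05, 8.0])        # last entry is an outlier rating error
print(smooth_l1(err).sum(), 0.5 * (err ** 2).sum())
```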
Computation of the Rational Representation for Solutions of High-dimensional Systems (Cited by 3)
19
Authors: TAN Chang, ZHANG Shu-Gong. Communications in Mathematical Research (CSCD), 2010, No. 2, pp. 119-130 (12 pages).
This paper deals with the representation of the solutions of a polynomial system and concentrates on the high-dimensional case. Based on the rational univariate representation of zero-dimensional polynomial systems, we give a new description, called the rational representation, for the solutions of a high-dimensional polynomial system and propose an algorithm for computing it. In this way, all the solutions of any high-dimensional polynomial system can be represented by a set of so-called rational-representation sets.
Keywords: rational univariate representation; high-dimensional ideal; maximally independent set; rational representation; irreducible component
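For orientation, the zero-dimensional rational univariate representation that the abstract builds on parametrizes all solutions by a single variable, as sketched below; the notation follows the usual RUR form, and the paper's high-dimensional rational representation extends this per irreducible component after specializing a maximally independent set of variables.

```latex
% Zero-dimensional rational univariate representation (standard form, shown for orientation)
\[
  V(I) \;=\;
  \Bigl\{\,\Bigl(\tfrac{g_1(t)}{g_0(t)},\,\dots,\,\tfrac{g_n(t)}{g_0(t)}\Bigr)
   \;:\; \chi(t) = 0 \Bigr\},
\]
% where $\chi, g_0, g_1, \dots, g_n$ are univariate polynomials in a separating variable $t$.
```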