Funding: National Key Science & Technology Special Projects (Grant No. 2008ZX05000-004); CNPC Projects (Grant No. 2008E-0610-10).
Abstract: Extracting optimal composite attributes from a variety of conventional seismic attributes to detect reservoir features is a key problem in reservoir prediction, and it is usually solved by dimensionality reduction. Principal component analysis (PCA) is currently the most widely used linear dimensionality-reduction method. However, the relationships between seismic attributes and reservoir features are non-linear, so attribute dimensionality reduction based on linear transforms cannot handle these non-linear problems well, which reduces reservoir prediction precision. As a new non-linear learning approach, manifold learning offers a fresh method for seismic attribute analysis: it can discover the intrinsic features and rules hidden in the data by computing low-dimensional, neighborhood-preserving embeddings of high-dimensional inputs. In this paper, we extract seismic attributes using locally linear embedding (LLE), first realizing inter-horizon attribute dimensionality reduction of 3D seismic data, and discuss the optimization of its key parameters. Combining model analysis and case studies, we compare the dimensionality-reduction and clustering effects of LLE and PCA; both comparisons indicate that LLE can retain the intrinsic structure of the inputs. The composite attributes and clustering results based on LLE better characterize the distribution of sedimentary facies, reservoirs, and even reservoir fluids.
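The abstract above contrasts LLE with PCA. As a minimal illustration of the LLE procedure it refers to (a generic sketch in the style of Roweis and Saul's algorithm, not the authors' seismic-attribute pipeline; all names and parameter values are illustrative), the following numpy code computes a neighborhood-preserving embedding in two steps: local reconstruction weights, then a global eigenvector problem.

```python
import numpy as np

def lle(X, n_neighbors=8, n_components=2, reg=1e-3):
    """Minimal locally linear embedding: preserve each point's local
    linear reconstruction from its neighbours in a low-dim embedding."""
    n = X.shape[0]
    # k nearest neighbours of each point (excluding the point itself)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    nbrs = np.argsort(d2, axis=1)[:, 1:n_neighbors + 1]

    # Step 1: weights that best reconstruct each point from its neighbours
    W = np.zeros((n, n))
    for i in range(n):
        Z = X[nbrs[i]] - X[i]                          # centred neighbours
        G = Z @ Z.T                                    # local Gram matrix
        G += reg * np.trace(G) * np.eye(n_neighbors)   # regularise for stability
        w = np.linalg.solve(G, np.ones(n_neighbors))
        W[i, nbrs[i]] = w / w.sum()                    # weights sum to one

    # Step 2: embedding = bottom eigenvectors of M = (I - W)^T (I - W)
    I = np.eye(n)
    M = (I - W).T @ (I - W)
    _, vecs = np.linalg.eigh(M)                        # ascending eigenvalues
    return vecs[:, 1:n_components + 1]                 # skip the constant eigenvector
```

In an attribute-analysis setting, each row of `X` would be a sample's vector of conventional seismic attributes, and the returned columns would serve as the low-dimensional composite attributes.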
Funding: Supported by the National Natural Science Foundation of China (61071131, 61271388); the Natural Science Foundation of Beijing (4122040); a Research Project of Tsinghua University (2012Z01011); and the Doctoral Fund of the Ministry of Education of China (20120002110036).
Abstract: Unsupervised feature selection is fundamental in statistical pattern recognition and has drawn persistent attention over the past several decades. Recently, much work has shown that feature selection can be formulated as nonlinear dimensionality reduction with discrete constraints. This line of research emphasizes manifold learning techniques, where feature selection and learning can be studied under the manifold assumption on the data distribution. Many existing feature selection methods, such as the Laplacian score, SPEC (spectral decomposition of the graph Laplacian), the TR (trace ratio) criterion, MSFS (multi-cluster feature selection), and EVSC (eigenvalue sensitive criterion), apply basic properties of the graph Laplacian and select the feature subsets that best preserve the manifold structure defined on it. In this paper, we propose a new feature selection perspective based on locally linear embedding (LLE), another popular manifold learning method. The main difficulty in using LLE for feature selection is that its optimization involves quadratic programming and eigenvalue decomposition, both of which are continuous procedures and thus differ from discrete feature selection. We prove that the LLE objective can be decomposed with respect to data dimensionalities in the subset selection problem, which also facilitates constructing better coordinates from the data using principal component analysis (PCA). Based on these results, we propose a novel unsupervised feature selection algorithm, called locally linear selection (LLS), to select a feature subset representing the underlying data manifold. The local relationships among samples are computed from the LLE formulation and then used to estimate the contribution of each individual feature to the underlying manifold structure. These contributions, represented as LLS scores, are ranked, and the top-ranked features are selected as the candidate solution. We further develop a locally linear rotation-selection (LLRS) algorithm, which extends LLS to identify the optimal coordinate subset in a new space. Experimental results on real-world datasets show that our method can be more effective than Laplacian-eigenmap-based feature selection methods.
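The abstract does not state the exact LLS scoring formula, so the following numpy sketch is only one plausible reading of the idea it describes: compute LLE reconstruction weights from all features, then score each feature by how well those shared local weights reconstruct that feature's column. The function name and the residual-based score are assumptions for illustration, not the paper's definition.

```python
import numpy as np

def lls_scores(X, n_neighbors=6, reg=1e-3):
    """Illustrative LLS-style score: per-feature residual under shared
    LLE reconstruction weights (hypothetical reading of the abstract)."""
    n = X.shape[0]
    # neighbours and LLE weights computed from the full feature space
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    nbrs = np.argsort(d2, axis=1)[:, 1:n_neighbors + 1]

    W = np.zeros((n, n))
    for i in range(n):
        Z = X[nbrs[i]] - X[i]
        G = Z @ Z.T + reg * np.eye(n_neighbors)
        w = np.linalg.solve(G, np.ones(n_neighbors))
        W[i, nbrs[i]] = w / w.sum()

    R = X - W @ X            # (I - W) X: residual of each feature column
    return (R ** 2).sum(0)   # lower score = feature better fits the manifold
```

Ranking features by ascending score and keeping the smallest-scoring ones then yields a discrete feature subset without running quadratic programming or eigendecomposition per candidate subset, which matches the motivation given in the abstract.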
Funding: Supported by the Fundamental Research Funds for the Central Universities (No. 2016083).
Abstract: A fault detection method based on incremental locally linear embedding (LLE) is presented to improve fault detection accuracy for satellites with telemetry data. Since the conventional LLE algorithm cannot handle incremental learning, an incremental LLE method is proposed to acquire the low-dimensional features embedded in the high-dimensional space. Telemetry data from the TX-I satellite are then analyzed, and fault detection is performed on the features extracted from the telemetry data using the statistical indices Hotelling's T² and the squared prediction error (SPE). Simulation results verify the fault detection scheme.
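The T² and SPE indices mentioned above are standard multivariate monitoring statistics. As a minimal sketch of how they are computed from a linear (PCA) model fitted on nominal data — in the paper the incremental-LLE features would play the role of the retained scores, and all names and the component count here are illustrative:

```python
import numpy as np

def t2_spe(X_train, X_new, n_components=3):
    """Hotelling's T^2 and squared prediction error from a PCA model."""
    mu = X_train.mean(axis=0)
    U, S, Vt = np.linalg.svd(X_train - mu, full_matrices=False)
    P = Vt[:n_components].T                           # loading matrix
    lam = S[:n_components] ** 2 / (len(X_train) - 1)  # score variances

    Z = X_new - mu
    T = Z @ P                                         # retained scores
    t2 = (T ** 2 / lam).sum(axis=1)                   # variation inside the model
    spe = ((Z - T @ P.T) ** 2).sum(axis=1)            # variation outside the model
    return t2, spe
```

A sample is flagged as faulty when either index exceeds a control limit estimated from nominal telemetry: T² reacts to unusual variation within the modelled subspace, SPE to variation the model cannot explain.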
Funding: Supported by the National Key Research and Development Program of China (2016YFB0303401); the International (Regional) Cooperation and Exchange Project (61720106008); the National Science Fund for Distinguished Young Scholars (61725301); and the Shanghai AI Lab.
Abstract: Inspired by the tremendous achievements of meta-learning in various fields, this paper proposes the local quadratic embedding learning (LQEL) algorithm for regression problems, based on metric learning and neural networks (NNs). First, Mahalanobis metric learning is improved by optimizing the global consistency of the metrics between instances in the input and output spaces. We then prove that the improved metric learning problem is equivalent to a convex programming problem by relaxing the constraints. Based on the hypothesis of local quadratic interpolation, the algorithm introduces two lightweight NNs: one learns the coefficient matrix in the local quadratic model, and the other assigns weights to the predictions obtained from different local neighbors. Finally, the two sub-models are embedded in a unified regression framework, and the parameters are learned by stochastic gradient descent (SGD). The proposed algorithm can make full use of the information implied in the target labels to find more reliable reference instances. Moreover, it prevents the model degradation caused by sensor drift and unmeasurable variables by modeling variable differences with the LQEL algorithm. Simulation results on multiple benchmark datasets and two practical industrial applications show that the proposed method outperforms several popular regression methods.
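As a rough illustration of the local-quadratic-prediction idea in the abstract — with the paper's two learned NNs replaced by a single closed-form weighted least-squares fit and the learned Mahalanobis metric by a user-supplied matrix; every name, basis choice, and weighting scheme here is a placeholder, not the authors' implementation:

```python
import numpy as np

def local_quadratic_predict(X, y, x_query, k=10, M=None):
    """Predict y at x_query from a distance-weighted local quadratic fit."""
    M = np.eye(X.shape[1]) if M is None else M       # stand-in for the learned metric
    diff = X - x_query
    d = np.einsum('ij,jk,ik->i', diff, M, diff)      # squared Mahalanobis distances
    idx = np.argsort(d)[:k]                          # k nearest neighbours

    Z = X[idx] - x_query                             # centre the design on the query
    Phi = np.hstack([np.ones((k, 1)), Z, Z ** 2])    # quadratic basis (no cross terms)
    w = np.sqrt(1.0 / (d[idx] + 1e-8))               # closer neighbours weigh more
    beta, *_ = np.linalg.lstsq(Phi * w[:, None], y[idx] * w, rcond=None)
    return beta[0]                                   # fitted model value at z = 0
```

In LQEL the coefficient matrix of the quadratic model and the per-neighbor weights are produced by the two lightweight NNs and trained end-to-end with SGD; this sketch only shows why a local quadratic fit around the query recovers the prediction as the model's value at the centred origin.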