Abstract
Many feature transformation methods have been proposed in machine learning research. They generally project the available data from the original feature space into a new feature space so that the data become more representative, or more discriminative when specific labels are to be assigned. The main techniques involve eigenvector or spectral methods, optimization theory (linear or convex), graph theory, and so on. The general procedure is: (1) construct a structure over the original data and their correlations; (2) define an objective function that evaluates the purpose of the projection or the characteristics of the new space; (3) apply optimization theory to optimize the objective function and obtain the solution to the problem. This paper presents two classical locality-preserving transformation methods. By analyzing their key points together with their deficiencies, we obtain a general view of the currently most critical problems in feature transformation research.
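The three-step recipe above can be illustrated with Locality Preserving Projections (LPP), a classical locality-preserving transformation of the kind the paper surveys. This is a minimal NumPy sketch, not the paper's own implementation; the function name `lpp` and the parameters `k` (neighborhood size) and `t` (heat-kernel width) are assumptions made for illustration.

```python
import numpy as np

def lpp(X, n_components=2, k=5, t=1.0):
    """Sketch of LPP following the three-step recipe.

    X: (n_samples, n_features) data matrix.
    Returns the data projected into the new feature space.
    """
    n = X.shape[0]
    # Step 1: construct a structure over the data and their correlations --
    # a k-nearest-neighbour graph with heat-kernel edge weights.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise squared distances
    W = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(d2[i])[1:k + 1]                # nearest neighbours, skipping self
        W[i, nbrs] = np.exp(-d2[i, nbrs] / t)
    W = np.maximum(W, W.T)                               # symmetrise the graph

    # Step 2: the objective is min_a a^T X^T L X a subject to a^T X^T D X a = 1,
    # where D is the degree matrix and L = D - W the graph Laplacian:
    # nearby points in the original space stay nearby after projection.
    D = np.diag(W.sum(axis=1))
    L = D - W

    # Step 3: optimize via the generalized eigenproblem
    # (X^T L X) a = lambda (X^T D X) a, keeping the smallest eigenvalues.
    A = X.T @ L @ X
    B = X.T @ D @ X + 1e-9 * np.eye(X.shape[1])          # tiny ridge for numerical stability
    eigvals, eigvecs = np.linalg.eig(np.linalg.solve(B, A))
    order = np.argsort(eigvals.real)
    P = eigvecs[:, order[:n_components]].real            # projection matrix
    return X @ P
```

The same skeleton covers the other spectral methods in this family: only the graph construction in step 1 and the constraint matrix in step 2 change from method to method.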
Source
Computer Engineering & Science (《计算机工程与科学》)
CSCD
Peking University Core Journal (北大核心)
2010, Issue 1, pp. 80-82 and 91 (4 pages)