
A New Sparsification Algorithm for the Least Squares Support Vector Machine (cited by: 6)

New sparse least squares support vector machine algorithm
Abstract: When the standard sparsification algorithm for the Least Squares Support Vector Machine (LS-SVM) is applied to some common pattern recognition problems, the recognition rate drops rapidly as training samples are pruned, so the goal of sparsification is often not achieved. To overcome this shortcoming, a new LS-SVM sparsification algorithm is proposed, making the family of LS-SVM sparsification algorithms more complete. The new algorithm was applied to the recognition of radar range profiles, and the experimental results demonstrate its effectiveness.
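The standard LS-SVM sparsification the abstract criticizes trains on all samples, then repeatedly discards the samples with the smallest support values |α| and retrains. A minimal sketch of that baseline (illustrative only, not the paper's new algorithm; the RBF kernel, `gamma`, `sigma`, and `keep_frac` are assumptions):

```python
import numpy as np

def rbf_kernel(X1, X2, sigma=1.0):
    # Gaussian RBF kernel matrix between two sample sets
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def lssvm_train(X, y, gamma=10.0, sigma=1.0):
    # Solve the LS-SVM linear system
    #   [ 0   1^T           ] [b]   [0]
    #   [ 1   K + I/gamma   ] [a] = [y]
    # where every training sample gets a support value alpha_i.
    n = len(y)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = rbf_kernel(X, X, sigma) + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[0], sol[1:]          # bias b, support values alpha

def lssvm_predict(X_train, alpha, b, X_test, sigma=1.0):
    # Decision value f(x) = sum_i alpha_i k(x, x_i) + b
    return rbf_kernel(X_test, X_train, sigma) @ alpha + b

def prune_smallest(X, y, keep_frac=0.8, gamma=10.0, sigma=1.0):
    # Classical sparsification step: drop the samples whose |alpha|
    # is smallest, keeping keep_frac of the set; the caller retrains
    # on the survivors and may iterate.
    _, alpha = lssvm_train(X, y, gamma, sigma)
    keep = np.argsort(-np.abs(alpha))[: int(len(y) * keep_frac)]
    return X[keep], y[keep]
```

Because LS-SVM support values are generally all nonzero, this pruning is the usual route to sparsity; the abstract's point is that on some problems the recognition rate collapses as samples are removed this way.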
Authors: 吴宗亮 (WU Zongliang), 窦衡 (DOU Heng)
Source: Journal of Computer Applications (《计算机应用》), CSCD, Peking University core journal, 2009, No. 6, pp. 1559-1562 and 1581 (5 pages)
Keywords: Least Squares Support Vector Machine (LS-SVM); sparsification; radar range profile
