Journal article

Decomposition forward SVM dimension-reduction algorithm based on independent component analysis (Cited by: 2)
Abstract: A Decomposition Forward Support Vector Machine (DFSVM) algorithm for large-sample learning and a new dimension-reduction learning model based on Independent Component Analysis (ICA) are proposed; their computational complexity is lower than that of the traditional chunking algorithm and of the standard SVM. The idea of incomplete ICA is used to compress the data and thereby reduce its dimensionality. Because the input dimension is lowered and the data structure simplified, the computational cost of SVM recognition decreases. Experiments show that when the vector dimension is reduced from 110 to 5, the average recognition rate reaches 93%, exceeding that of a traditional neural network. Considering both computation time and recognition efficiency, the ICA dimension-reduction model is therefore well suited to practical application.
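To make the pipeline described in the abstract concrete, the sketch below illustrates ICA-based dimension reduction (110 input dimensions down to 5 components) followed by SVM classification. It is only an illustration under stated assumptions: scikit-learn's FastICA and SVC are used as stand-ins for the paper's incomplete ICA and DFSVM, and the data arrays are random placeholders rather than the protein-sequence features used in the experiments.

```python
# Minimal sketch (assumption: scikit-learn's FastICA and SVC stand in for the
# paper's incomplete ICA and DFSVM; the feature matrix below is a random
# placeholder, not the protein-sequence data used in the paper).
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# Hypothetical data: 600 samples of 110-dimensional feature vectors, two classes.
rng = np.random.default_rng(0)
X = rng.normal(size=(600, 110))
y = rng.integers(0, 2, size=600)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

# "Incomplete" ICA here means keeping far fewer independent components (5)
# than input dimensions (110), which is what compresses the data before the SVM.
model = make_pipeline(
    FastICA(n_components=5, random_state=0),
    SVC(kernel="rbf"),
)
model.fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))
```

The decomposition-forward training strategy for large sample sets is not reproduced here; the sketch only shows how cutting the input dimension shrinks the problem handed to the SVM classifier.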
Source: Journal of Computer Applications (《计算机应用》), CSCD, Peking University Core Journal, 2007, No. 9: 2249-2252 (4 pages)
Fund: Science and Technology Research Project of Chongqing Municipal Education Commission (KJ0707022)
Keywords: Independent Component Analysis (ICA); Decomposition Forward Support Vector Machine (DFSVM); protein sequence recognition

References (11)

  • 1 RISTANIEMI T, JOUTSENSALO J. On the performance of blind source separation in CDMA downlink [C]// Proceedings of the First Workshop on Independent Component Analysis and Signal Separation (ICA'99). Aussois: [s.n.], 1999: 437-441.
  • 2 SAGI B, NEMAT-NASSER S C, KERR R, et al. A biologically motivated solution to the cocktail party problem [J]. Neural Computation, 2001, 13(7): 1575-1602.
  • 3 YANG H H, AMARI S I. Adaptive online learning algorithms for blind separation: maximum entropy and minimum mutual information [J]. Neural Computation, 1997, 9(7): 1457-1482.
  • 4 MARKUS S, FRITHJOF K, HABIB B. ICA of fMRI group study data [J]. NeuroImage, 2002, 16(3): 551-563.
  • 5 BOSER B E, GUYON I M, VAPNIK V N. A training algorithm for optimal margin classifiers [C]// HAUSSLER D, ed. Proceedings of the 5th Annual ACM Workshop on Computational Learning Theory. San Diego: ACM Press, 1992: 144-152.
  • 6 萧嵘, 王继成, 孙正兴, 张福炎. An incremental SVM learning algorithm: α-ISVM [J]. Journal of Software, 2001, 12(12): 1818-1824. (Cited by: 84)
  • 7 RUPING S. Incremental learning with support vector machines [C]// Proceedings of the First IEEE International Conference on Data Mining (ICDM). [S.l.]: IEEE Press, 2001: 641-642.
  • 8 奉国和, 黄榕波, 罗泽举, 朱思铭. A weighted decomposition-cooperation algorithm based on support vector machines and its application [J]. Computer Science, 2005, 32(4): 91-93. (Cited by: 4)
  • 9 HYVARINEN A. Fast and robust fixed-point algorithms for independent component analysis [J]. IEEE Transactions on Neural Networks, 1999, 10(3): 626-634.
  • 10 MURZIN A G, BRENNER S E, HUBBARD T, et al. SCOP: a structural classification of proteins database for the investigation of sequences and structures [J]. Journal of Molecular Biology, 1995, 247: 536-540.

Secondary references (6)

  • 1 Vapnik V N. Statistical learning theory [M]. New York: Wiley, 1998.
  • 2 Vapnik V N. The nature of statistical learning theory [M]. New York: Springer, 1999.
  • 3 Cristianini N, Shawe-Taylor J. An introduction to support vector machines and other kernel-based learning methods [M]. New York: Cambridge University Press, 2000.
  • 4 Tay F E H, Cao L J. Modified support vector machines in financial time series forecasting [J]. Neurocomputing, 2002, 48: 847-861.
  • 5 Tay F E H, Cao L J. Application of support vector machines in financial time series forecasting [J]. Omega, 2001, 29: 309-317.
  • 6 Burges C J C. A tutorial on support vector machines for pattern recognition [J]. Data Mining and Knowledge Discovery, 1998, 2(2): 121-167.

Co-citing documents (85)

Co-cited documents (21)

  • 1 刘隽, 周涛, 周佩玲. GA-optimized support vector machine for chaotic time series prediction [J]. Journal of University of Science and Technology of China, 2005, 35(2): 258-263. (Cited by: 21)
  • 2 张建明, 林亚平, 吴宏斌, 杨格兰. Research progress of independent component analysis [J]. Journal of System Simulation, 2006, 18(4): 992-997. (Cited by: 31)
  • 3 何中市, 刘里. A feature description method for text classification based on contextual relations [J]. Computer Science, 2007, 34(5): 183-186. (Cited by: 6)
  • 4 QU Shouning, WANG Sujuan, ZOU Yan. Improvement of text feature selection method based on TFIDF [C]// Proceedings of FITME'08. USA: IEEE Computer Society, 2008: 79-81.
  • 5 GABRILOVICH E, MARKOVITCH S. Text categorization with many redundant features: using aggressive feature selection to make SVMs competitive with C4.5 [C]// Proceedings of ICML'04. Banff, Alberta, Canada, 2004: 321-328.
  • 6 YU Lei, LIU Huan. Efficiently handling feature redundancy in high-dimensional data [C]// Proceedings of ACM SIGKDD'03. Washington, DC, 2003: 685-690.
  • 7 TAN Feng, FU Xuezheng, ZHANG Yanqing, et al. A genetic algorithm-based method for feature subset selection [J]. Soft Computing, 2007(12): 111-120.
  • 8 BAI Rujiang, WANG Xiaoyue, LIAO Junhua. Combination of rough sets and genetic algorithms for text classification [C]// Proceedings of the 2nd International Conference on Autonomous Intelligent Systems: Agents and Data Mining, 2007: 256-268.
  • 9 LU Yi-hong, HUANG Yan. Document categorization with entropy based TF/IDF classifier [C]// Proceedings of GCIS'09. USA: IEEE Computer Society, 2009: 269-273.
  • 10 吴新杰, 许超. Processing two-phase flow signals based on independent component analysis and wavelet transform [J]. Journal of Liaoning Technical University (Natural Science Edition), 2007, 26(5): 795-797. (Cited by: 1)

Citing documents (2)

Secondary citing documents (4)
