
Multi-label Learning Model Based on Multi-label Radial Basis Function Neural Network and Regularized Extreme Learning Machine

Cited by: 13
Abstract: Compared with the radial basis function neural network (RBFNN), the extreme learning machine (ELM) trains faster and generalizes better, and the affinity propagation (AP) clustering algorithm can determine the number of clusters automatically, without prior knowledge. Therefore, a multi-label learning model named ML-AP-RBF-RELM is proposed, integrating AP clustering, the multi-label radial basis function network (ML-RBF) and the regularized ELM (RELM). First, ML-RBF performs the mapping in the input layer, and the number of hidden nodes is determined automatically as the sum, over all labels, of the cluster counts found by AP. Then, with the per-label cluster count fixed by AP, the centers of the hidden-layer RBF functions are computed by K-means clustering. Finally, the weights from the hidden layer to the output layer are solved rapidly in closed form by RELM. Experiments show that ML-AP-RBF-RELM performs well.
Authors: 单东, 许新征
Source: Pattern Recognition and Artificial Intelligence (《模式识别与人工智能》), EI / CSCD / Peking University core journal, 2017, No. 9, pp. 833-840 (8 pages)
Funding: National Natural Science Foundation of China (No. 61672522, No. 61379101); Fundamental Research Funds for the Central Universities (No. 2015XKMS088)
Keywords: Multi-label Learning; Regularized Extreme Learning Machine (RELM); Radial Basis Function Neural Network (RBFNN); Feedforward Neural Network
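The pipeline summarized in the abstract can be sketched roughly as follows. This is a minimal illustration only: the function names, the hyperparameters `C` and `sigma_scale`, and the RBF-width heuristic are assumptions of this sketch, not details taken from the paper. It uses scikit-learn's `AffinityPropagation` and `KMeans` for the clustering stages and the standard ridge-regularized least-squares solution for the RELM output weights.

```python
import numpy as np
from sklearn.cluster import AffinityPropagation, KMeans

def ml_ap_rbf_relm_train(X, Y, C=1.0, sigma_scale=1.0, seed=0):
    """Sketch of ML-AP-RBF-RELM training.

    X: (n_samples, n_features) inputs; Y: (n_samples, n_labels) binary label matrix.
    C and sigma_scale are illustrative hyperparameters, not values from the paper.
    """
    centers = []
    for l in range(Y.shape[1]):
        X_l = X[Y[:, l] == 1]
        # AP decides how many clusters this label's positive instances form ...
        ap = AffinityPropagation(damping=0.9, random_state=seed).fit(X_l)
        k = max(len(ap.cluster_centers_), 1)  # guard: at least 1 if AP fails to converge
        # ... and K-means with that k places the actual RBF centers.
        km = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(X_l)
        centers.append(km.cluster_centers_)
    centers = np.vstack(centers)  # one hidden RBF node per center, summed over labels
    # Width heuristic (an assumption): scaled mean pairwise distance between centers.
    d = np.linalg.norm(centers[:, None] - centers[None, :], axis=-1)
    sigma = sigma_scale * d.mean() if d.mean() > 0 else sigma_scale
    # Hidden activations H, then RELM closed-form output weights:
    # beta = (H^T H + I / C)^{-1} H^T Y  (ridge-regularized least squares).
    H = np.exp(-np.linalg.norm(X[:, None] - centers[None, :], axis=-1) ** 2
               / (2.0 * sigma ** 2))
    beta = np.linalg.solve(H.T @ H + np.eye(H.shape[1]) / C, H.T @ Y)
    return centers, sigma, beta

def ml_ap_rbf_relm_predict(X, centers, sigma, beta):
    H = np.exp(-np.linalg.norm(X[:, None] - centers[None, :], axis=-1) ** 2
               / (2.0 * sigma ** 2))
    return H @ beta  # real-valued scores; threshold (e.g. at 0.5) for label decisions
```

The closed-form `solve` step is what makes RELM fast relative to the iterative training of a conventional RBFNN: only the output weights are learned, and they come from a single regularized linear system.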

