
ROBUST RVM BASED ON SPIKE-SLAB PRIOR (Cited by: 2)

Abstract: Although the Relevance Vector Machine (RVM) is one of the most popular algorithms in machine learning and computer vision, outliers in the training data make its estimates unreliable. In this paper, a robust RVM model under a non-parametric Bayesian framework is proposed. We decompose the noise term of the RVM model into two components, a Gaussian noise term and a spiky noise term, so the observed data are modeled as y = Dw + s + e, where Dw is the relevance vector component (D is the kernel function matrix and w is the weight vector), s is the spiky term, and e is the Gaussian noise term. A spike-slab sparse prior is imposed on the weight vector w, which gives a more intuitive constraint on sparsity than the Student's t-distribution used in the traditional RVM. For the spiky component s, a spike-slab sparse prior is also introduced, which recognizes outliers in the training data effectively. Several experiments demonstrate better performance than standard RVM regression.
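The observation model in the abstract can be illustrated with a short forward-simulation sketch. This is not the paper's inference algorithm; it only shows how a spike-slab prior generates sparse w and s, and how the observed y = Dw + s + e is assembled. The RBF kernel, the inclusion probabilities `pi_w` and `pi_s`, and all other hyperparameters below are illustrative assumptions, not the paper's settings.

```python
# Sketch of the observation model y = D w + s + e with spike-slab
# sparsity on both the weight vector w and the spiky outlier term s.
import numpy as np

rng = np.random.default_rng(0)
n = 50  # number of training points

# Kernel (design) matrix D built from an RBF kernel -- an assumption;
# the abstract only says D is "the kernel function matrix".
x = np.linspace(0.0, 1.0, n)
D = np.exp(-(x[:, None] - x[None, :]) ** 2 / (2 * 0.1 ** 2))

# Spike-slab draw for the weights: with probability pi_w a weight is
# drawn from the Gaussian "slab", otherwise it sits exactly at zero
# (the "spike"), giving an explicit sparsity constraint.
pi_w = 0.1  # prior inclusion probability (assumed)
z_w = rng.random(n) < pi_w
w = np.where(z_w, rng.normal(0.0, 1.0, n), 0.0)

# Spike-slab draw for the spiky term s: most entries are zero, a few
# are large-amplitude outliers.
pi_s = 0.05  # assumed outlier rate
z_s = rng.random(n) < pi_s
s = np.where(z_s, rng.normal(0.0, 5.0, n), 0.0)

# Dense Gaussian noise term e.
e = rng.normal(0.0, 0.05, n)

# Observed data, exactly as in the abstract: y = D w + s + e.
y = D @ w + s + e

print("nonzero weights:", int(z_w.sum()), "outliers:", int(z_s.sum()))
```

In the paper, posterior inference recovers w and s jointly from y; separating the sparse outlier vector s from the dense Gaussian noise e is what makes the regression robust to contaminated training points.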
Source: Journal of Electronics (China), 2012, Issue 6, pp. 593-597 (5 pages).
Funding: Supported by the National Natural Science Foundation of China (Nos. 30900328, 61172179), the Fundamental Research Funds for the Central Universities (No. 201112-1051), and the Natural Science Foundation of Fujian Province of China (No. 2012J05160).
Keywords: Relevance Vector Machine (RVM); Bayesian nonparametrics; Outliers; Spike-slab sparse prior


