
A New ε-Insensitive Loss Function Support Vector Inductive Regression Algorithm and After-sales Service Data Model Forecast System
Abstract: Noisy data sequences are first denoised according to prediction confidence, and the training set, test set, and prediction data are combined into a single training vector set, yielding a new support vector inductive regression algorithm. Using this algorithm, we carry out a time-series analysis of the real-time after-sales "faults per thousand vehicles" (千车故障数) data and build a new small-sample forecast system based on the ε-insensitive loss function. In the results, 98.1% of the predicted values have errors below 5.3%, the prediction confidence reaches 0.983, and the MAPE is only 2.3%, outperforming the quadratic and Huber loss functions. Computer simulation of single-batch prediction shows that as the time parameter t → +∞, the "faults per thousand vehicles" count converges to the fixed value 74.0601, which agrees well with observed practice and demonstrates the validity of the forecast model. Finally, a comparison with a traditional neural network model shows that the new SVM handles small samples better than the neural network.
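The abstract's core idea — fitting a small, noisy time series with support vector regression under an ε-insensitive loss, then extrapolating toward a limiting value — can be sketched as follows. This is a minimal illustration using scikit-learn's `SVR`, not the authors' actual algorithm or data: the synthetic series, window size, and hyperparameters (`C`, `epsilon`) are all assumptions chosen for demonstration.

```python
# Hedged sketch: epsilon-insensitive SVR on a small noisy series that
# decays toward a fixed limit (the paper reports convergence of the
# "faults per thousand vehicles" count to ~74.06 as t -> +inf).
# Synthetic data and parameters are illustrative assumptions only.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)

t = np.arange(60)
series = 74.06 + 120.0 * np.exp(-0.08 * t) + rng.normal(0.0, 1.0, t.size)

def make_windows(y, w):
    """Turn a 1-D series into (lagged-window, next-value) training pairs."""
    X = np.array([y[i:i + w] for i in range(len(y) - w)])
    return X, y[w:]

w = 5
X, y = make_windows(series, w)

# epsilon is the half-width of the insensitive tube: residuals smaller
# than epsilon incur zero loss, which is what makes this estimator more
# robust to noise on small samples than a quadratic loss.
model = SVR(kernel="rbf", C=100.0, epsilon=0.5).fit(X, y)

pred = model.predict(X)
mape = float(np.mean(np.abs((y - pred) / y)) * 100.0)
print(f"in-sample MAPE: {mape:.2f}%")
```

The ε-tube is the practical difference from the quadratic and Huber losses compared in the paper: small residuals are ignored entirely rather than merely down-weighted, so the fit is not dragged around by sample noise.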
Published in: Computer Science (《计算机科学》, CSCD, Peking University Core Journal), 2005, No. 8, pp. 138-141, 154 (5 pages).
Funding: National Natural Science Foundation of China (No. 10371135).
Keywords: Inductive regression algorithm, After-sales service, Forecast system, Loss function, Data model, Support vector machine, Time series analysis


