
Research on a Matrix Completion Algorithm under the Non-convex Stochastic Optimization Framework
Abstract: The matrix completion problem can be recast as a non-convex optimization problem, but in high-dimensional or massive-data settings, traditional optimization methods are constrained by the curse of dimensionality and become difficult to apply effectively. To improve solution efficiency, this paper proposes MC_SVR, a non-convex stochastic optimization algorithm that incorporates a variance reduction technique. By designing a minibatch acceleration strategy, the algorithm significantly improves computational efficiency while maintaining accuracy. Experiments on multiple datasets show that, compared with traditional methods, MC_SVR offers clear advantages in key metrics such as convergence speed and completion accuracy; in particular, on large-scale matrix completion problems its mean relative error and iteration count are markedly reduced. This study provides a new approach to high-dimensional matrix completion and is of practical value for applications such as recommender systems and image inpainting.
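The full paper is not reproduced in this record, but the idea the abstract describes (SVRG-style variance reduction with minibatch sampling, applied to matrix completion through a low-rank factorization) can be sketched as follows. This is an illustrative reconstruction under stated assumptions, not the authors' MC_SVR implementation: the function name `mc_svr`, the squared-loss factorization objective, and all hyperparameters are assumptions made for the example.

```python
import numpy as np

def mc_svr(M, mask, rank=5, n_outer=10, n_inner=50, batch=32, lr=0.01, seed=0):
    """Minibatch SVRG-style sketch for matrix completion (illustrative only).

    M    : matrix with observed entries (unobserved entries may be arbitrary)
    mask : boolean array, True where M is observed
    Fits M ~ U @ V.T on the observed entries with a squared loss.
    """
    rng = np.random.default_rng(seed)
    m, n = M.shape
    U = 0.1 * rng.standard_normal((m, rank))
    V = 0.1 * rng.standard_normal((n, rank))
    obs = np.argwhere(mask)            # list of observed (i, j) index pairs
    N = len(obs)

    def grads(U_, V_, idx):
        """Averaged gradients of 0.5*(U_i . V_j - M_ij)^2 over entries idx."""
        gU = np.zeros_like(U_)
        gV = np.zeros_like(V_)
        for i, j in obs[idx]:
            r = U_[i] @ V_[j] - M[i, j]    # residual of this observed entry
            gU[i] += r * V_[j]
            gV[j] += r * U_[i]
        return gU / len(idx), gV / len(idx)

    for _ in range(n_outer):
        # snapshot and full gradient: the variance-reduction anchor of SVRG
        Us, Vs = U.copy(), V.copy()
        fU, fV = grads(Us, Vs, np.arange(N))
        for _ in range(n_inner):
            idx = rng.choice(N, size=min(batch, N), replace=False)
            gU, gV = grads(U, V, idx)      # minibatch grad at current point
            hU, hV = grads(Us, Vs, idx)    # same minibatch at the snapshot
            # variance-reduced update: g - g_snapshot + full_gradient
            U -= lr * (gU - hU + fU)
            V -= lr * (gV - hV + fV)
    return U, V
```

The key point the abstract emphasizes is visible in the inner loop: each minibatch gradient is corrected by the same minibatch evaluated at the snapshot plus the full gradient, so the update stays cheap per iteration while its variance shrinks as the iterates approach the snapshot.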
Author: WANG Xuewei (School of Mathematics, Yunnan Normal University, Kunming 650500, China; Yunnan Key Laboratory of Modern Analytical Mathematics and Applications, Kunming 650500, China)
Source: Modern Information Technology (《现代信息科技》), 2025, No. 4, pp. 103-106, 111 (5 pages)
Funding: Yunnan Key Laboratory of Modern Analytical Mathematics and Applications (202302AN360007); National Natural Science Foundation of China (62266055)
Keywords: matrix completion; non-convex problem; stochastic optimization; variance reduction
