Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 61572099, 61320106008, 91230103) and the National Science and Technology Major Project (Grant Nos. 2013ZX04005021, 2014ZX04001011).
Abstract: The tensor robust principal component analysis (TRPCA) problem aims to separate a low-rank tensor and a sparse tensor from their sum. This problem has recently attracted considerable research attention due to its wide range of potential applications in computer vision and pattern recognition. In this paper, we propose a new model for the TRPCA problem, solved by an alternating minimization algorithm together with two adaptive rank-adjusting strategies. For the underlying low-rank tensor, we simultaneously perform low-rank matrix factorizations on all of its mode matricizations; for the underlying sparse tensor, a soft-threshold shrinkage scheme is applied. Our method can separate either an exactly or an approximately low-rank tensor from a sparse one. We establish subsequence convergence of the algorithm in the sense that any limit point of the iterates satisfies the KKT conditions. When the iteration stops, the output is refined by a higher-order SVD to produce an exactly low-rank final result, since the accurate rank has by then been determined. Numerical experiments demonstrate that our method achieves better results than the compared methods.
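The alternating scheme sketched in the abstract, alternating between low-rank approximations of every mode matricization and entrywise soft-threshold shrinkage of the residual, can be illustrated with a minimal NumPy sketch. This is not the paper's exact model or its adaptive rank-adjusting strategies: the function names (`trpca_sketch`, `unfold`, `fold`), the use of a truncated SVD for each mode's low-rank factorization, the averaging of the mode-wise approximations, and the parameters `ranks` and `tau` are all illustrative assumptions.

```python
import numpy as np

def unfold(tensor, mode):
    """Mode-n matricization: bring `mode` to the front, flatten the rest."""
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

def fold(matrix, mode, shape):
    """Inverse of `unfold`: restore the original tensor shape."""
    moved = [shape[mode]] + [s for i, s in enumerate(shape) if i != mode]
    return np.moveaxis(matrix.reshape(moved), 0, mode)

def soft_threshold(x, tau):
    """Entrywise soft-threshold shrinkage: sign(x) * max(|x| - tau, 0)."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def trpca_sketch(M, ranks, tau, n_iters=50):
    """Alternate between (a) rank-r_n approximation of every mode
    matricization of M - S (low-rank update) and (b) soft-thresholding
    the residual M - L (sparse update)."""
    L = M.copy()
    S = np.zeros_like(M)
    for _ in range(n_iters):
        # (a) low-rank update: average the rank-truncated unfoldings.
        target = M - S
        approx = np.zeros_like(M)
        for mode, r in enumerate(ranks):
            U, s, Vt = np.linalg.svd(unfold(target, mode), full_matrices=False)
            approx += fold(U[:, :r] * s[:r] @ Vt[:r], mode, M.shape)
        L = approx / len(ranks)
        # (b) sparse update: shrink the residual entrywise.
        S = soft_threshold(M - L, tau)
    return L, S
```

With a truncated SVD standing in for the factorization, each mode update is the best rank-r approximation of that unfolding; the averaging step is one simple way to reconcile the mode-wise estimates into a single tensor before the shrinkage step.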