Partial eigenvalue decomposition (PEVD) and partial singular value decomposition (PSVD) of large sparse matrices are of fundamental importance in a wide range of applications, including latent semantic indexing, spectral clustering, and kernel methods for machine learning. The more challenging problems arise when a large number of eigenpairs or singular triplets need to be computed. We develop practical and efficient algorithms for these challenging problems. Our algorithms are based on a filter-accelerated block Davidson method. Two types of filters are utilized: one is Chebyshev polynomial filtering; the other is rational-function filtering by solving linear equations. The former exploits the fastest growth of the Chebyshev polynomial among polynomials of the same degree; the latter employs the traditional idea of shift-invert, for which we address the important issue of automatically choosing shifts and propose a practical method for solving the shifted linear equations inside the block Davidson method. Both filters can efficiently generate high-quality basis vectors to augment the projection subspace at each Davidson iteration step, which allows a restart scheme using an active projection subspace of small dimension. This makes our algorithms memory-economical and thus practical for large PEVD/PSVD calculations. We compare our algorithms with representative methods, including ARPACK, PROPACK, the randomized SVD method, and the limited-memory SVD method. Extensive numerical tests on representative datasets demonstrate that, in general, our methods have similar or faster convergence in terms of CPU time while requiring much lower memory than the other methods. The much lower memory requirement makes our methods more practical for large-scale PEVD/PSVD computations.
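The two filters described above can be illustrated in miniature. The sketch below, which assumes small dense matrices for clarity (the paper targets large sparse matrices with sparse factorizations), shows a Chebyshev polynomial filter built from the standard three-term recurrence and a one-shot shift-invert rational filter; both amplify the components of a block of vectors on the wanted part of the spectrum. The function names and interval conventions are illustrative, not the paper's actual implementation.

```python
import numpy as np

def chebyshev_filter(A, X, degree, lam_low, lam_high):
    """Apply the degree-m Chebyshev filter p_m(A) to the block X via the
    three-term recurrence. The spectrum inside [lam_low, lam_high] is
    mapped into [-1, 1] and damped to magnitude <= 1, while eigenvalues
    below lam_low are amplified rapidly as the degree grows."""
    c = (lam_high + lam_low) / 2.0   # center of the interval to damp
    e = (lam_high - lam_low) / 2.0   # half-width of that interval
    Y = (A @ X - c * X) / e          # T_1 of the mapped operator times X
    for _ in range(2, degree + 1):
        # T_{k+1}(t) = 2 t T_k(t) - T_{k-1}(t), applied to the mapped A
        X, Y = Y, 2.0 * (A @ Y - c * Y) / e - X
    return Y

def shift_invert_filter(A, X, shift):
    """Rational filter: (A - shift*I)^{-1} X amplifies eigenvalues near the
    shift. A dense solve is used here; in practice one would factor the
    shifted sparse matrix once and reuse the factorization."""
    n = A.shape[0]
    return np.linalg.solve(A - shift * np.eye(n), X)
```

In a block Davidson iteration, the filtered block would be orthonormalized against the current basis and appended to the projection subspace before the Rayleigh-Ritz step.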
In blind estimation of the pseudo-noise (PN) sequence of a short-code direct-sequence spread-spectrum signal, the eigenvalue decomposition (EVD) algorithm, the singular value decomposition (SVD) algorithm, and the projection approximation subspace tracking with deflation (PASTd) algorithm are commonly used to estimate the PN sequence. However, when the asynchronous time delay is unknown, the largest and second-largest eigenvalues may be close to each other. In this case the estimated principal eigenvector is in fact an arbitrary nonzero linear combination of the eigenvectors corresponding to the two largest eigenvalues; that is, the estimated principal eigenvector suffers from a unitary ambiguity, which can severely degrade algorithms that recover the PN sequence from the principal eigenvector. To address this problem, an algorithm is proposed that estimates the PN sequence by exploiting properties of the covariance matrix. Simulation results show that the proposed algorithm not only resolves the potential failure of PN-sequence estimation when the asynchronous time delay is unknown, but also achieves good estimation performance at low signal-to-noise ratios.
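The baseline eigenvector approach that the abstract builds on can be sketched as follows. This minimal example assumes the easy, synchronized case (zero delay): the received signal is cut into period-length segments, the sample covariance is formed, and the sign pattern of the principal eigenvector recovers the PN chips up to a global sign. It deliberately does not implement the paper's contribution, which handles the unknown asynchronous delay where this naive estimator breaks down; the function name and interface are illustrative.

```python
import numpy as np

def estimate_pn_sequence(received, period):
    """Estimate a binary (+/-1) PN sequence from a synchronized short-code
    DSSS signal: stack period-length windows, form the sample covariance,
    and read the chip signs off the principal eigenvector."""
    n = len(received) // period
    segments = received[: n * period].reshape(n, period)
    R = segments.T @ segments / n      # sample covariance matrix (period x period)
    w, V = np.linalg.eigh(R)           # eigenvalues in ascending order
    v = V[:, -1]                       # principal eigenvector ~ PN sequence
    return np.sign(v)                  # chips recovered up to a global sign
```

With an unknown delay, each window straddles two consecutive PN periods, the two largest eigenvalues of R can nearly coincide, and the principal eigenvector becomes an arbitrary mixture of the two dominant eigenvectors, which is exactly the ambiguity the proposed covariance-based algorithm is designed to avoid.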
Funding: supported by the National Science Foundation of USA (Grant Nos. DMS-1228271 and DMS-1522587), the National Natural Science Foundation of China for Creative Research Groups (Grant No. 11321061), the National Basic Research Program of China (Grant No. 2011CB309703), and the National Center for Mathematics and Interdisciplinary Sciences, Chinese Academy of Sciences.