
An algorithm for the principal component analysis of large data sets

Recently popularized randomized methods for principal component analysis (PCA) efficiently and reliably produce nearly optimal accuracy, even on parallel processors, unlike the classical (deterministic) alternatives. We adapt one of these randomized methods for use with data sets that are too large to be stored in random-access memory (RAM). (The traditional terminology is that our procedure works efficiently out-of-core.) We illustrate the performance of the algorithm via several numerical examples. For example, we report on the PCA of a data set stored on disk that is so large that less than a hundredth of it can fit in our computer's RAM.

Key words: algorithm, principal component analysis, PCA, SVD, singular value decomposition, low rank

AMS subject classifications: 65F15, 65C60, 68W20
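For orientation, below is a minimal sketch of the generic randomized PCA scheme the paper builds on (Gaussian range sampling, a few power iterations, then an exact SVD of a small projected matrix). It is not the authors' exact out-of-core procedure: here the data is an in-memory NumPy array, processed in row blocks only to mimic the streaming access pattern, and the function name, block size, and oversampling defaults are illustrative assumptions.

```python
import numpy as np

def randomized_pca(A, k, n_oversamples=10, n_iter=2, block_rows=10_000, seed=None):
    """Approximate the top-k principal components of A via a randomized
    range finder. Assumes A has already been mean-centered, as PCA requires.
    A is touched in row blocks to mimic an out-of-core access pattern,
    though this sketch keeps A in memory."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    l = k + n_oversamples  # slight oversampling improves accuracy

    # Sample the range of A with a Gaussian test matrix: Y = A @ G,
    # accumulated one block of rows at a time.
    G = rng.standard_normal((n, l))
    Y = np.vstack([A[i:i + block_rows] @ G for i in range(0, m, block_rows)])

    # A few power iterations sharpen the captured subspace; this helps
    # when the singular values decay slowly.
    for _ in range(n_iter):
        Q, _ = np.linalg.qr(Y)
        Y = A @ (A.T @ Q)
    Q, _ = np.linalg.qr(Y)  # orthonormal basis for the approximate range

    # Project A onto the small subspace and take an exact SVD there.
    B = Q.T @ A                                   # l x n, small
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    U = Q @ Ub
    return U[:, :k], s[:k], Vt[:k]

# Example usage on synthetic data:
A = np.random.default_rng(0).standard_normal((2000, 500))
A -= A.mean(axis=0)                 # center the columns before PCA
U, s, Vt = randomized_pca(A, k=10)
```

The paper's contribution is to carry out the matrix-matrix products above while reading the data from disk in blocks, so that the full matrix never needs to reside in RAM.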
Type: Journal
Year: 2010
Where: CoRR
Authors: Nathan Halko, Per-Gunnar Martinsson, Yoel Shkolnisky, Mark Tygert