September 18, 2019, 2:00 PM - 2:40 PM
Busch Campus Student Center
604 Bartholomew Rd
Ming Gu, University of California, Berkeley
Low-rank matrix approximation has become a technique of central importance in large-scale data science. In this talk we discuss a set of novel low-rank matrix approximation algorithms that are tailored to all levels of accuracy requirements for maximum computational efficiency. These algorithms include spectrum-revealing matrix factorizations that are optimal up to dimension-dependent constants, and an efficient truncated SVD (singular value decomposition) that is accurate up to a given tolerance. We provide theoretical error bounds for both singular values and singular subspaces, present numerical evidence that demonstrates the superiority of our algorithms over existing ones, and show their usefulness in a number of data science applications.
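As a point of reference for the talk, the following is a minimal sketch of a tolerance-based truncated SVD using a standard dense SVD from NumPy; it is not the speaker's algorithm, only an illustration of what "accurate up to a given tolerance" means: singular values below a relative tolerance are discarded, yielding a low-rank factorization.

```python
import numpy as np

def truncated_svd(A, tol=1e-6):
    """Return a rank-k SVD of A, keeping singular values above tol * sigma_max.

    This is a baseline using a full dense SVD; efficient algorithms of the kind
    discussed in the talk avoid computing the full decomposition.
    """
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    k = int(np.sum(s > tol * s[0]))  # number of singular values to retain
    return U[:, :k], s[:k], Vt[:k, :]

# Example: a matrix of exact rank 2 is recovered at rank 2,
# with reconstruction error at the level of machine precision.
rng = np.random.default_rng(0)
A = rng.standard_normal((100, 2)) @ rng.standard_normal((2, 50))
U, s, Vt = truncated_svd(A, tol=1e-8)
err = np.linalg.norm(A - U @ np.diag(s) @ Vt)
```

The tolerance trades accuracy for rank: a looser `tol` keeps fewer singular triplets and gives a cheaper, coarser approximation.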