Abstract
The covariance matrix of a p-dimensional random variable is a fundamental quantity in data analysis. Given n i.i.d. observations, it is typically estimated by the sample covariance matrix, at a computational cost of O(np²) operations. When n and p are large, this computation may be prohibitively slow. Moreover, in several contemporary applications, the population matrix is approximately sparse, and only its few large entries are of interest. This raises the following question: assuming approximate sparsity of the covariance matrix, can its large entries be detected much faster, say in sub-quadratic time, without explicitly computing all its p² entries? In this paper, we present and theoretically analyze two randomized algorithms that detect the large entries of an approximately sparse sample covariance matrix using only O(np poly log p) operations. Furthermore, assuming sparsity of the population matrix, we derive sufficient conditions on the underlying random variable and on the number of samples n for the sample covariance matrix to satisfy our approximate sparsity requirements. Finally, we illustrate the performance of our algorithms via several simulations.
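The paper's algorithms are built on multi-scale group testing (see the keywords below). As a rough illustration of the flavour of such methods, and not of the algorithm analyzed in the paper, the following Python sketch contrasts the O(np²) baseline with a simplified single-scale group-testing detector: coordinates are hashed into roughly √p random groups with random ±1 signs, covariances of the aggregated group signals are computed cheaply, and only flagged group pairs are scanned entry by entry. The function names, the √p group count, the threshold, and the repetition scheme are illustrative assumptions.

```python
import numpy as np

def sample_covariance(X):
    """Baseline estimator: the full p x p sample covariance, O(n p^2) time."""
    n = X.shape[0]
    Xc = X - X.mean(axis=0, keepdims=True)
    return Xc.T @ Xc / (n - 1)

def detect_large_entries(X, threshold, n_groups=None, n_repeats=5, seed=None):
    """Single-scale group-testing sketch (illustrative, not the paper's
    multi-scale algorithm): locate large off-diagonal covariance entries
    without forming the full p x p matrix."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    if n_groups is None:
        n_groups = max(2, int(np.sqrt(p)))  # heuristic choice of group count
    Xc = X - X.mean(axis=0, keepdims=True)

    candidates = set()
    for _ in range(n_repeats):
        group = rng.integers(0, n_groups, size=p)   # random partition of the p coordinates
        signs = rng.choice([-1.0, 1.0], size=p)     # random signs so small entries tend to cancel
        # Aggregate the columns of each group: O(n p) work, giving an n x n_groups matrix.
        A = np.zeros((n, n_groups))
        for j in range(p):
            A[:, group[j]] += signs[j] * Xc[:, j]
        # Covariance of the aggregated signals: only O(n * n_groups^2) work.
        G = A.T @ A / (n - 1)
        # Scan exhaustively only the flagged pairs of distinct groups; a large entry whose
        # two coordinates collide in the same group may be missed in this repetition,
        # which is why the grouping is redrawn n_repeats times.
        flagged = np.argwhere(np.abs(np.triu(G, k=1)) >= threshold)
        for a, b in flagged:
            idx_a = np.flatnonzero(group == a)
            idx_b = np.flatnonzero(group == b)
            block = Xc[:, idx_a].T @ Xc[:, idx_b] / (n - 1)
            for ia, ib in np.argwhere(np.abs(block) >= threshold):
                i, j = int(idx_a[ia]), int(idx_b[ib])
                candidates.add((min(i, j), max(i, j)))
    return sorted(candidates)
```

As a usage check, one can draw synthetic data whose population covariance has a handful of large off-diagonal entries and compare the pairs returned by `detect_large_entries` with the entries of `sample_covariance(X)` exceeding the same threshold; the sketch trades a small probability of missed collisions for avoiding the full p² scan.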
| Original language | English |
| --- | --- |
| Pages (from-to) | 304-330 |
| Number of pages | 27 |
| Journal | Information and Inference: A Journal of the IMA |
| Volume | 5 |
| Issue number | 3 |
| Early online date | 24 Mar 2016 |
| DOIs | |
| State | Published - Sep 2016 |
Keywords
- Multi-scale group testing
- Sparse covariance matrix
- Sub-quadratic time complexity
All Science Journal Classification (ASJC) codes
- Computational Theory and Mathematics
- Analysis
- Applied Mathematics
- Statistics and Probability
- Numerical Analysis