TY - GEN
T1 - Non-redundant Spectral Dimensionality Reduction
AU - Blau, Yochai
AU - Michaeli, Tomer
N1 - Publisher Copyright: © 2017, Springer International Publishing AG.
PY - 2017
Y1 - 2017
N2 - Spectral dimensionality reduction algorithms are widely used in numerous domains, including for recognition, segmentation, tracking and visualization. However, despite their popularity, these algorithms suffer from a major limitation known as the “repeated eigen-directions” phenomenon. That is, many of the embedding coordinates they produce typically capture the same direction along the data manifold. This leads to redundant and inefficient representations that do not reveal the true intrinsic dimensionality of the data. In this paper, we propose a general method for avoiding redundancy in spectral algorithms. Our approach relies on replacing the orthogonality constraints underlying those methods by unpredictability constraints. Specifically, we require that each embedding coordinate be unpredictable (in the statistical sense) from all previous ones. We prove that these constraints necessarily prevent redundancy, and provide a simple technique to incorporate them into existing methods. As we illustrate on challenging high-dimensional scenarios, our approach produces significantly more informative and compact representations, which improve visualization and classification tasks.
UR - http://www.scopus.com/inward/record.url?scp=85040226868&partnerID=8YFLogxK
DO - 10.1007/978-3-319-71249-9_16
M3 - Conference contribution
SN - 9783319712482
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 256
EP - 271
BT - Machine Learning and Knowledge Discovery in Databases - European Conference, ECML PKDD 2017, Proceedings
A2 - Ceci, Michelangelo
A2 - Dzeroski, Saso
A2 - Vens, Celine
A2 - Todorovski, Ljupco
A2 - Hollmen, Jaakko
T2 - European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, ECML PKDD 2017
Y2 - 18 September 2017 through 22 September 2017
ER -