Multi-view diffusion maps

Ofir Lindenbaum, Arie Yeredor, Moshe Salhov, Amir Averbuch

Research output: Contribution to journal › Article › peer-review

Abstract

In this paper, we address the challenging task of multi-view dimensionality reduction. The goal is to effectively exploit the availability of multiple views to extract a coherent low-dimensional representation of the data. The proposed method uses the intrinsic relations within each view, as well as the mutual relations between views. Multi-view dimensionality reduction is achieved by defining a cross-view model in which an implied random walk process is constrained to hop between objects in the different views. The method is robust to scaling and insensitive to small structural changes in the data. We define new diffusion distances and analyze the spectrum of the proposed kernel. We show that the proposed framework is useful for various machine learning applications such as clustering, classification, and manifold learning. Finally, by fusing multi-sensor seismic data, we present a method for the automatic identification of seismic events.
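As a rough illustration of the kind of construction the abstract describes, the sketch below builds a generic two-view diffusion embedding: per-view Gaussian kernels, a block kernel whose zeroed within-view blocks force the implied random walk to hop between views, row normalization into a Markov matrix, and a spectral embedding. The function names, bandwidth parameters (eps1, eps2), and the particular cross-view product kernel are assumptions for illustration only, not the paper's exact kernel or diffusion distance.

```python
import numpy as np

def gaussian_kernel(X, eps):
    # Pairwise squared Euclidean distances -> Gaussian affinity matrix.
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-np.maximum(d2, 0.0) / eps)

def two_view_diffusion_map(X1, X2, eps1=1.0, eps2=1.0, n_components=2, t=1):
    """Hypothetical two-view diffusion embedding (illustrative sketch).

    Builds per-view Gaussian kernels K1, K2 and a block kernel whose
    off-diagonal blocks (K1 @ K2 and its transpose) only allow the implied
    random walk to hop between views, then embeds the data with the leading
    non-trivial eigenvectors of the row-normalized operator.
    """
    K1 = gaussian_kernel(X1, eps1)
    K2 = gaussian_kernel(X2, eps2)
    n = K1.shape[0]

    # Cross-view block kernel: zero within-view blocks force view hopping.
    K = np.block([[np.zeros((n, n)), K1 @ K2],
                  [K2 @ K1, np.zeros((n, n))]])

    # Row-normalize to obtain a Markov (random-walk) matrix.
    P = K / K.sum(axis=1, keepdims=True)

    # Spectral decomposition; sort eigenpairs by eigenvalue magnitude.
    vals, vecs = np.linalg.eig(P)
    order = np.argsort(-np.abs(vals))
    vals, vecs = np.real(vals[order]), np.real(vecs[:, order])

    # Skip the trivial eigenvector; scale coordinates by eigenvalues^t.
    return vecs[:, 1:n_components + 1] * (vals[1:n_components + 1] ** t)

# Usage: two aligned views of the same 300 objects (synthetic data).
rng = np.random.default_rng(0)
X1 = rng.normal(size=(300, 5))
X2 = X1 @ rng.normal(size=(5, 4)) + 0.1 * rng.normal(size=(300, 4))
emb = two_view_diffusion_map(X1, X2, eps1=2.0, eps2=2.0, n_components=2)
print(emb.shape)  # (600, 2): one embedding point per object per view
```

Because the bottom-left block is the transpose of the top-right block, the block kernel is symmetric and its row-normalized version has a real spectrum, which is what makes a diffusion-style eigen-embedding well defined in this sketch.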

Original language: English
Pages (from-to): 127-149
Number of pages: 23
Journal: Information Fusion
Volume: 55
DOIs
State: Published - Mar 2020

Keywords

  • Diffusion maps
  • Dimensionality reduction
  • Manifold learning
  • Multi-view

All Science Journal Classification (ASJC) codes

  • Software
  • Signal Processing
  • Information Systems
  • Hardware and Architecture
