Supervised Domain Adaptation Based on Marginal and Conditional Distributions Alignment

Ori Katz, Ronen Talmon, Uri Shaham

Research output: Contribution to journal › Article › peer-review

Abstract

Supervised domain adaptation (SDA) is an area of machine learning in which the goal is to achieve good generalization performance on data from a target domain, given a small corpus of labeled training data from the target domain and a large corpus of labeled data from a related source domain. In this work, based on a generalization of a well-known theoretical result of Ben-David et al. (2010), we propose an SDA approach in which the adaptation is performed by aligning the marginal and conditional components of the input-label joint distributions. In addition to being theoretically grounded, we demonstrate that the proposed approach has two advantages over existing SDA approaches. First, it applies to a broad collection of learning tasks, such as regression, classification, multi-label classification, and few-shot learning. Second, it takes into account the geometric structure of the input and label spaces. Experimentally, despite its generality, our approach achieves results on par with or superior to recent state-of-the-art task-specific methods. Our code is available here.
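As a rough illustration of the idea the abstract describes (and not the paper's actual method or released code), the following minimal PyTorch sketch combines a supervised loss on both domains with two alignment penalties: an MMD term matching the marginal feature distributions of source and target, and a per-class MMD term matching the conditional (label-wise) feature distributions. The network architecture, the choice of RBF-kernel MMD as the discrepancy measure, and all hyperparameters (`sigma`, `lambda_m`, `lambda_c`) are assumptions chosen for brevity.

```python
# Illustrative sketch of joint marginal + conditional alignment for SDA.
# NOT the authors' implementation; model, discrepancy, and hyperparameters
# are placeholder assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

def rbf_mmd2(x, y, sigma=1.0):
    """Biased squared MMD between sample sets x and y with an RBF kernel."""
    def k(a, b):
        d2 = torch.cdist(a, b).pow(2)
        return torch.exp(-d2 / (2 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

class SDAModel(nn.Module):
    def __init__(self, in_dim, feat_dim, n_classes):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())
        self.head = nn.Linear(feat_dim, n_classes)

    def forward(self, x):
        z = self.encoder(x)
        return z, self.head(z)

def sda_loss(model, xs, ys, xt, yt, lambda_m=1.0, lambda_c=1.0):
    zs, logits_s = model(xs)
    zt, logits_t = model(xt)

    # Supervised loss on labeled source and the few labeled target samples.
    task = F.cross_entropy(logits_s, ys) + F.cross_entropy(logits_t, yt)

    # Marginal alignment: match overall source/target feature distributions.
    marginal = rbf_mmd2(zs, zt)

    # Conditional alignment: match features class by class, over classes
    # for which both domains contribute samples.
    conditional, n_shared = zs.new_zeros(()), 0
    for c in ys.unique():
        ms, mt = ys == c, yt == c
        if ms.any() and mt.any():
            conditional = conditional + rbf_mmd2(zs[ms], zt[mt])
            n_shared += 1
    if n_shared > 0:
        conditional = conditional / n_shared

    return task + lambda_m * marginal + lambda_c * conditional

# Usage on synthetic data: a large labeled source batch, a small labeled
# target batch, one gradient step on the combined objective.
model = SDAModel(in_dim=20, feat_dim=16, n_classes=3)
xs, ys = torch.randn(64, 20), torch.randint(0, 3, (64,))
xt, yt = torch.randn(8, 20), torch.randint(0, 3, (8,))
loss = sda_loss(model, xs, ys, xt, yt)
loss.backward()
```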

Original language: American English
Journal: Transactions on Machine Learning Research
Volume: 2024
State: Published - 1 Jan 2024

All Science Journal Classification (ASJC) codes

  • Artificial Intelligence
  • Computer Vision and Pattern Recognition
