TY - GEN
T1 - Joint autoencoders
T2 - European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, ECML-PKDD 2018
AU - Epstein, Baruch
AU - Meir, Ron
AU - Michaeli, Tomer
N1 - Publisher Copyright: © 2019, Springer Nature Switzerland AG.
PY - 2019
Y1 - 2019
N2 - We develop a framework for learning multiple tasks simultaneously, based on sharing features common to all tasks. This is achieved through a modular deep feedforward neural network consisting of shared branches, which handle the features common to all tasks, and private branches, which learn the aspects unique to each task. Once an appropriate weight-sharing architecture has been established, learning proceeds through standard algorithms for feedforward networks, e.g., stochastic gradient descent and its variants. The method addresses meta-learning problems (such as domain adaptation, transfer learning, and multi-task learning) in a unified fashion and can handle data arising from different modalities. Numerical experiments demonstrate the effectiveness of the approach in domain adaptation and transfer learning setups, and provide evidence of the flexible, task-oriented representations arising in the network. In particular, we handle transfer learning between multiple tasks in a straightforward manner, in contrast to many competing state-of-the-art methods, which are unable to handle more than two tasks. We also illustrate the network’s ability to distill task-specific and shared features.
AB - We develop a framework for learning multiple tasks simultaneously, based on sharing features common to all tasks. This is achieved through a modular deep feedforward neural network consisting of shared branches, which handle the features common to all tasks, and private branches, which learn the aspects unique to each task. Once an appropriate weight-sharing architecture has been established, learning proceeds through standard algorithms for feedforward networks, e.g., stochastic gradient descent and its variants. The method addresses meta-learning problems (such as domain adaptation, transfer learning, and multi-task learning) in a unified fashion and can handle data arising from different modalities. Numerical experiments demonstrate the effectiveness of the approach in domain adaptation and transfer learning setups, and provide evidence of the flexible, task-oriented representations arising in the network. In particular, we handle transfer learning between multiple tasks in a straightforward manner, in contrast to many competing state-of-the-art methods, which are unable to handle more than two tasks. We also illustrate the network’s ability to distill task-specific and shared features.
KW - Autoencoders
KW - Meta-learning
KW - Weakly-supervised learning
UR - http://www.scopus.com/inward/record.url?scp=85061154396&partnerID=8YFLogxK
U2 - 10.1007/978-3-030-10925-7_30
DO - 10.1007/978-3-030-10925-7_30
M3 - Conference contribution
SN - 9783030109240
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 494
EP - 509
BT - Machine Learning and Knowledge Discovery in Databases - European Conference, ECML PKDD 2018, Proceedings
A2 - Bonchi, Francesco
A2 - Gärtner, Thomas
A2 - Hurley, Neil
A2 - Ifrim, Georgiana
A2 - Berlingerio, Michele
Y2 - 10 September 2018 through 14 September 2018
ER -