Lifelong Learning by Adjusting Priors

Ron Amit, Ron Meir

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution (peer-reviewed)

Abstract

In representational lifelong learning, an agent aims to learn to solve novel tasks while updating its representation in light of previous tasks. Under the assumption that future tasks are 'related' to previous tasks, representations should be learned in such a way that they capture the common structure across learned tasks, while allowing the learner sufficient flexibility to adapt to novel aspects of a new task. We develop a framework for lifelong learning in deep neural networks that is based on generalization bounds, developed within the PAC-Bayes framework. Learning takes place through the construction of a distribution over networks based on the tasks seen so far, and its utilization for learning a new task. Thus, prior knowledge is incorporated by setting a history-dependent prior for novel tasks. We develop a gradient-based algorithm implementing these ideas, based on minimizing an objective function motivated by generalization bounds, and demonstrate its effectiveness through numerical examples.
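The PAC-Bayes objective the abstract alludes to can be illustrated with a minimal sketch. This is not the paper's exact algorithm: the McAllester-style bound form, the diagonal-Gaussian parameterization of the distribution over network weights, and all function names below are illustrative assumptions. The idea is that the objective for a new task combines the empirical loss of the task posterior with a KL complexity term measured against the history-dependent prior.

```python
import numpy as np

def gaussian_kl(mu_q, logvar_q, mu_p, logvar_p):
    """KL( N(mu_q, var_q) || N(mu_p, var_p) ) for diagonal Gaussians,
    summed over weight dimensions. The prior (mu_p, logvar_p) would be
    built from previously seen tasks; the posterior is task-specific."""
    var_q, var_p = np.exp(logvar_q), np.exp(logvar_p)
    return 0.5 * np.sum(
        logvar_p - logvar_q + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0
    )

def pac_bayes_objective(emp_loss, kl, m, delta=0.05):
    """A McAllester-style bound used as a training objective (assumed form):
    empirical loss + sqrt( (KL + ln(2*sqrt(m)/delta)) / (2m) ),
    where m is the number of samples for the current task."""
    return emp_loss + np.sqrt((kl + np.log(2 * np.sqrt(m) / delta)) / (2 * m))

# Illustrative usage: a history-dependent prior and a task posterior
# over a 4-dimensional weight vector.
mu_prior, logvar_prior = np.zeros(4), np.zeros(4)
mu_post, logvar_post = np.full(4, 0.5), np.full(4, -1.0)
kl = gaussian_kl(mu_post, logvar_post, mu_prior, logvar_prior)
bound = pac_bayes_objective(emp_loss=0.2, kl=kl, m=500)
```

In a gradient-based implementation, `mu_post` and `logvar_post` would be optimized to minimize `pac_bayes_objective`, trading empirical fit against deviation from the prior accumulated over past tasks.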
Original language: English
Title of host publication: ICLR 2018 Conference
State: Published - 2018
Event: 6th International Conference on Learning Representations, ICLR 2018 - Vancouver, Canada
Duration: 30 Apr 2018 – 3 May 2018

Conference

Conference: 6th International Conference on Learning Representations, ICLR 2018
Country/Territory: Canada
City: Vancouver
Period: 30/04/18 – 3/05/18
