Neural Processes

Marta Garnelo, Jonathan Schwarz, Dan Rosenbaum, Fabio Viola, Danilo J. Rezende, S. M. Ali Eslami, Yee Whye Teh

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

A neural network (NN) is a parameterised function that can be tuned via gradient descent to approximate a labelled collection of data with high precision. A Gaussian process (GP), on the other hand, is a probabilistic model that defines a distribution over possible functions, and is updated in light of data via the rules of probabilistic inference. GPs are probabilistic, data-efficient and flexible; however, they are also computationally intensive and thus limited in their applicability. We introduce a class of neural latent variable models which we call Neural Processes (NPs), combining the best of both worlds. Like GPs, NPs define distributions over functions, are capable of rapid adaptation to new observations, and can estimate the uncertainty in their predictions. Like NNs, NPs are computationally efficient during training and evaluation, but also learn to adapt their priors to data. We demonstrate the performance of NPs on a range of learning tasks, including regression and optimisation, and compare and contrast with related models in the literature.
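The abstract describes the model only at a high level; the sketch below is a minimal illustration of the idea, assuming PyTorch, with hypothetical module and variable names (encoder, to_z, decoder are illustrative choices, not the authors' exact implementation). Each context pair (x_i, y_i) is encoded, the encodings are mean-aggregated into a global latent variable z, and a decoder conditioned on z yields a predictive mean and uncertainty at the target inputs.

```python
# Minimal Neural Process sketch (hypothetical names), assuming PyTorch.
import torch
import torch.nn as nn
import torch.nn.functional as F

class NeuralProcess(nn.Module):
    def __init__(self, x_dim=1, y_dim=1, r_dim=64, z_dim=64, h_dim=64):
        super().__init__()
        # Encoder: maps each context pair (x_i, y_i) to a representation r_i.
        self.encoder = nn.Sequential(
            nn.Linear(x_dim + y_dim, h_dim), nn.ReLU(),
            nn.Linear(h_dim, r_dim))
        # Maps the aggregated representation r to the parameters of q(z | context).
        self.to_z = nn.Linear(r_dim, 2 * z_dim)
        # Decoder: maps (x_target, z) to the parameters of the predictive distribution.
        self.decoder = nn.Sequential(
            nn.Linear(x_dim + z_dim, h_dim), nn.ReLU(),
            nn.Linear(h_dim, 2 * y_dim))

    def forward(self, x_context, y_context, x_target):
        # Encode each context point, then aggregate with a permutation-invariant mean.
        r_i = self.encoder(torch.cat([x_context, y_context], dim=-1))
        r = r_i.mean(dim=0)
        mu_z, log_sigma_z = self.to_z(r).chunk(2, dim=-1)
        sigma_z = 0.1 + 0.9 * torch.sigmoid(log_sigma_z)
        # Reparameterised sample of the global latent variable z.
        z = mu_z + sigma_z * torch.randn_like(sigma_z)
        # Condition every target input on the same z to get a coherent function sample.
        z_rep = z.expand(x_target.size(0), -1)
        mu_y, log_sigma_y = self.decoder(
            torch.cat([x_target, z_rep], dim=-1)).chunk(2, dim=-1)
        return mu_y, 0.1 + 0.9 * F.softplus(log_sigma_y)

# Usage: predictive mean and uncertainty at 100 target inputs from 10 context points.
x_c, y_c = torch.randn(10, 1), torch.randn(10, 1)
x_t = torch.linspace(-2, 2, 100).unsqueeze(-1)
mu, sigma = NeuralProcess()(x_c, y_c, x_t)
```

Because the context is summarised by a mean over per-point encodings, predictions are invariant to the order of the observed points and the cost of conditioning grows only linearly in the number of context points, in contrast to the cubic scaling of exact GP inference.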
Original language: English
Title of host publication: Theoretical Foundations and Applications of Deep Generative Models Workshop, International Conference on Machine Learning (ICML)
State: Published - 4 Jul 2018
Externally published: Yes

Keywords

  • cs.LG
  • stat.ML
