Workshop on Invertible Neural Nets and Normalizing Flows


Overview

Invertible neural networks have recently seen a significant resurgence of interest in the ICML community. Invertible transformations offer two key benefits (both are illustrated in the sketch below):
  • They allow exact reconstruction of their inputs, and hence obviate the need to store hidden activations in memory for backpropagation
  • They can be designed to track the change in probability density that the transformation induces on its inputs (in which case they are known as normalizing flows)
Like autoregressive models, normalizing flows can be powerful generative models that allow exact likelihood computations. With the right architecture, they can also generate data much faster than autoregressive models. As such, normalizing flows have been particularly successful in density estimation and variational inference.
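
Concretely, a normalizing flow f with base density p_Z computes an exact log-likelihood through the change-of-variables formula: log p_X(x) = log p_Z(f(x)) + log |det J_f(x)|. The sketch below illustrates both benefits with a minimal elementwise affine transform; the names (AffineFlow, log_likelihood) are invented for illustration and do not refer to any particular library or to work presented at the workshop.

```python
import numpy as np

class AffineFlow:
    """A minimal invertible layer: z = x * exp(s) + t (elementwise)."""

    def __init__(self, dim, rng):
        # Parameters would normally be learned; randomly initialized for the demo.
        self.s = 0.1 * rng.normal(size=dim)  # per-dimension log-scale
        self.t = 0.1 * rng.normal(size=dim)  # per-dimension shift

    def forward(self, x):
        # The Jacobian of f is diagonal with entries exp(s),
        # so log |det J_f(x)| is simply sum(s).
        return x * np.exp(self.s) + self.t, np.sum(self.s)

    def inverse(self, z):
        # Exact inverse: inputs are reconstructed exactly, so hidden
        # activations need not be stored for backpropagation (benefit 1).
        return (z - self.t) * np.exp(-self.s)

def log_likelihood(flow, x):
    # Change of variables with a standard normal base density (benefit 2):
    # log p_X(x) = log p_Z(f(x)) + log |det J_f(x)|.
    z, log_det = flow.forward(x)
    log_pz = -0.5 * np.sum(z ** 2) - 0.5 * z.size * np.log(2.0 * np.pi)
    return log_pz + log_det

rng = np.random.default_rng(0)
flow = AffineFlow(dim=3, rng=rng)
x = rng.normal(size=3)

z, _ = flow.forward(x)
assert np.allclose(flow.inverse(z), x)  # exact reconstruction
print(log_likelihood(flow, x))          # exact log-likelihood of x
```

Deeper flows are built by composing such layers: the log-determinants add, and sampling is a single inverse pass, which is what makes generation fast when the inverse is cheap.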

The main goals of this workshop are:
  • To provide an accessible introduction to normalizing flows for the wider community
  • To create connections among researchers in the field, and to encourage newcomers to enter
  • To track and summarize recent work on invertible neural networks
  • To identify existing applications and explore new ones
We welcome questions from the public for our panel discussion! Please submit your question here.

See workshop videos here!

Key Dates

  • Paper submission deadline: extended to May 1 2019, 23h59 AoE (anywhere on earth)
  • Acceptance notification: May 27 2019
  • Final paper submission deadline: June 5 2019, 23h59 AoE
  • Workshop date: June 15 2019, Room 103


Questions? Contact us at invertibleworkshop@gmail.com.