Overview
Invertible neural networks have recently seen a significant resurgence of interest in the ICML community.
Invertible transformations offer two key benefits:
- They allow exact reconstruction of inputs and hence obviate the need to store hidden activations in memory for backpropagation
- They can be designed to track the changes in probability density that the transformation induces, in which case they are known as normalizing flows (see the change-of-variables formula below)
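
Concretely, the density tracking in the second point follows from the change-of-variables formula: if $z = f(x)$ for an invertible, differentiable transformation $f$ and the output has density $p_Z$, then the density of the input is

$$
p_X(x) = p_Z\big(f(x)\big)\,\left|\det \frac{\partial f(x)}{\partial x}\right|,
$$

so any flow whose Jacobian determinant is tractable yields an exact density for $x$.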
Like autoregressive models, normalizing flows can be powerful generative models that allow exact likelihood computation. With the right architecture, they can also generate data much faster than autoregressive models. As such, normalizing flows have been particularly successful in density estimation and variational inference.
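
As an illustration (a minimal sketch, not the method of any particular paper), the additive coupling layer below is invertible by construction: the input can be reconstructed exactly from the output, and because its Jacobian determinant is 1, the exact log-likelihood under a standard Gaussian base density follows directly from the change-of-variables formula above. The toy network `mlp` and the input shapes are assumptions for illustration.

```python
import numpy as np

def mlp(x):
    # Toy "network" (assumed for illustration): any function works here,
    # since the coupling layer never needs to invert it.
    return np.tanh(x)

def coupling_forward(x):
    # Additive (NICE-style) coupling: split the input, shift one half by a
    # function of the other. The Jacobian is triangular with unit diagonal,
    # so log|det J| = 0.
    x1, x2 = np.split(x, 2, axis=-1)
    return np.concatenate([x1, x2 + mlp(x1)], axis=-1)

def coupling_inverse(z):
    # Exact inverse: the input is recovered from the output, so hidden
    # activations need not be stored in memory for backpropagation.
    z1, z2 = np.split(z, 2, axis=-1)
    return np.concatenate([z1, z2 - mlp(z1)], axis=-1)

def log_likelihood(x):
    # Change of variables with log|det J| = 0 and a standard Gaussian base:
    # log p(x) = log N(f(x); 0, I).
    z = coupling_forward(x)
    d = z.shape[-1]
    return -0.5 * (np.sum(z**2, axis=-1) + d * np.log(2 * np.pi))

x = np.random.randn(4, 6)                    # batch of 4 six-dimensional inputs
z = coupling_forward(x)
assert np.allclose(coupling_inverse(z), x)   # exact reconstruction
print(log_likelihood(x))                     # exact log p(x), one value per input
```

Sampling runs the same layer in reverse on draws from the base density, which is why, with the right architecture, generation can be much faster than the sequential decoding of autoregressive models.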
The main goals of this workshop are:
- To provide an accessible introduction to normalizing flows for the wider community
- To build connections among researchers in the field, and to encourage new researchers to enter it
- To track and summarize recent work on invertible neural networks
- To identify existing applications and explore new ones