Normalizing flows are explicit likelihood models that use invertible neural networks to construct
flexible probability distributions over high-dimensional data. Compared to other generative models, the
main advantage of normalizing flows is that they offer exact and efficient likelihood computation and
data generation. Since their introduction, flow-based models have seen a significant surge
of interest in the machine learning community. As a result, powerful flow-based models have been developed,
with successes in density estimation, variational inference, and generative modeling of
images, audio, video, and the fundamental sciences.
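The exact likelihood computation mentioned above follows from the change-of-variables formula: if an invertible network f maps data x to a latent z with known base density, then log p(x) is the base log-density at f(x) plus the log absolute Jacobian determinant of f. A minimal sketch with a one-dimensional affine flow (the function names and parameter values here are illustrative, not taken from any particular library):

```python
import math

def standard_normal_logpdf(z):
    # Log-density of the standard normal base distribution p_Z.
    return -0.5 * (z * z + math.log(2 * math.pi))

def affine_flow_log_prob(x, mu=0.0, sigma=2.0):
    # Invertible affine flow z = f(x) = (x - mu) / sigma.
    # Change of variables: log p_X(x) = log p_Z(f(x)) + log|df/dx|,
    # and here df/dx = 1/sigma, so the log-Jacobian term is -log(sigma).
    z = (x - mu) / sigma
    return standard_normal_logpdf(z) - math.log(sigma)

# Sanity check: this matches the analytic N(mu, sigma^2) log-density exactly,
# illustrating that the likelihood is exact rather than a lower bound.
x, mu, sigma = 1.5, 0.0, 2.0
analytic = (-0.5 * ((x - mu) / sigma) ** 2
            - math.log(sigma) - 0.5 * math.log(2 * math.pi))
assert abs(affine_flow_log_prob(x, mu, sigma) - analytic) < 1e-12
```

In practice f is a deep stack of invertible layers with tractable Jacobians, but the same two-term log-likelihood applies layer by layer.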
This workshop is the 3rd iteration of the ICML workshop on Invertible Neural Networks and
Normalizing Flows.
As the field moves forward, the main goal of the workshop is to consolidate recent progress and connect ideas from related fields.
Over the past few years, we've seen that normalizing flows are deeply connected to
latent variable models, autoregressive models, and more recently, diffusion-based models.
This year, we would like to further push the frontier of these explicit likelihood models through the lens of invertible reparameterization.
We encourage researchers to use these models in conjunction, combining their complementary benefits,
and to work together to resolve common issues of likelihood-based methods.
The main goals of this workshop are:
- To increase cross-pollination between research on different kinds of explicit likelihood models
- To highlight new directions and track ongoing developments in likelihood-based modeling
- To identify existing applications and explore new ones.
Diversity and Inclusion
We're committed to creating an inclusive and welcoming workshop.
Participants are encouraged to report any violations of the ICML code of conduct
to the ICML Diversity and Inclusion chairs
and the INNF workshop organizers.
We've strived to create a diverse program and reviewer pool for our workshop, and to share our call for papers widely.
That said, we are also grateful for suggestions of individual researchers or research groups whom we might have missed.