Graphical Normalizing Flows


Disclaimer: The code links provided for this paper are external links. Science Nest takes no responsibility for the accuracy, legality, or content of these links. By downloading the code, you agree to comply with the terms of use set out by its author(s).

Please contact us if you encounter a broken link.

Authors: Antoine Wehenkel, Gilles Louppe
Journal/Conference Name: arXiv preprint
Paper Abstract: Normalizing flows model complex probability distributions by combining a base distribution with a series of bijective neural networks. State-of-the-art architectures rely on coupling and autoregressive transformations to lift up invertible functions from scalars to vectors. In this work, we revisit these transformations as probabilistic graphical models, showing that a flow reduces to a Bayesian network with a pre-defined topology and a learnable density at each node. From this new perspective, we propose the graphical normalizing flow, a new invertible transformation with either a prescribed or a learnable graphical structure. This model provides a promising way to inject domain knowledge into normalizing flows while preserving both the interpretability of Bayesian networks and the representation capacity of normalizing flows. We demonstrate experimentally that normalizing flows built on top of graphical conditioners are competitive density estimators. Finally, we illustrate how inductive bias can be embedded into normalizing flows by parameterizing graphical conditioners with convolutional networks.
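To make the abstract's idea concrete, here is a minimal, illustrative sketch (not the authors' implementation) of an affine flow step with a "graphical conditioner": each dimension's scale and shift depend only on its parents in a prescribed DAG, so the flow coincides with a Bayesian network of that topology. The linear conditioner weights `w_s` and `w_t` are hypothetical stand-ins for the learned neural networks in the paper.

```python
import math

def affine_params(x, parents, w_s, w_t):
    """Scale (s) and shift (t) for one dimension, computed only from its
    parent values in the DAG. A fixed linear map stands in for the
    learned conditioner network of the paper."""
    s = sum(w_s[j] * x[j] for j in parents)
    t = sum(w_t[j] * x[j] for j in parents)
    return s, t

def forward(x, adjacency, w_s, w_t):
    """Map x -> z with z_i = x_i * exp(s_i) + t_i, where (s_i, t_i)
    depend on x's parents per adjacency[i][j] = 1 (x_j is a parent of
    x_i). Returns z and log|det J| = sum_i s_i (the Jacobian is
    triangular under a topological ordering)."""
    z, logdet = [], 0.0
    for i, xi in enumerate(x):
        parents = [j for j, a in enumerate(adjacency[i]) if a]
        s, t = affine_params(x, parents, w_s, w_t)
        z.append(xi * math.exp(s) + t)
        logdet += s
    return z, logdet

def inverse(z, adjacency, w_s, w_t):
    """Invert dimension by dimension; assumes dimensions are indexed in
    topological order, so each parent is recovered before its child."""
    x = [0.0] * len(z)
    for i, zi in enumerate(z):
        parents = [j for j, a in enumerate(adjacency[i]) if a]
        s, t = affine_params(x, parents, w_s, w_t)
        x[i] = (zi - t) * math.exp(-s)
    return x
```

With a lower-triangular adjacency matrix the transformation reduces to the familiar autoregressive flow; a sparser DAG is where domain knowledge (or a learned structure) would enter.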
Date of publication: 2020
Code Programming Language: Python

Copyright Researcher 2022