Universality of invertible neural networks
Invertible neural networks (INNs) are neural network architectures that are invertible by design. Thanks to their invertibility and the tractability of their Jacobians, INNs have found various machine learning applications such as probabilistic modeling and feature extraction. However, these attractive properties come at the cost of restricted layer designs, which raises a question about their representation power: can these models approximate a sufficiently diverse class of functions?
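As background (standard material on normalizing flows, not specific to this project), the tractable Jacobian is what makes INNs directly usable for probabilistic modeling: if $f$ is the INN and $p_Z$ the base density on the latent space, the change-of-variables formula gives an exact log-likelihood,

$$\log p_X(x) = \log p_Z(f(x)) + \log \lvert \det J_f(x) \rvert,$$

so evaluating the density only requires $f$ and its Jacobian determinant, while sampling only requires $f^{-1}$.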
Coupling-layer-based INNs
In this research, we developed a general theoretical framework for investigating the representation power of INNs, building on a structure theorem from differential geometry. Specifically, the framework allows us to establish universal approximation properties of INNs for a large class of diffeomorphisms. We applied the framework to coupling-flow-based INNs (CF-INNs), one of the first-choice INN architectures, and elucidated their high representation power.
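To make the coupling-flow construction concrete, here is a minimal NumPy sketch of an affine coupling layer (a standard formulation, not code from our papers); the conditioner functions `s` and `t` below are arbitrary placeholders, since invertibility holds no matter what they compute:

```python
import numpy as np

def s(x):  # log-scale conditioner (placeholder for a small neural network)
    return np.tanh(x)

def t(x):  # shift conditioner (placeholder for a small neural network)
    return x ** 2

def coupling_forward(x):
    """Split x into halves; transform x2 conditioned on x1."""
    x1, x2 = np.split(x, 2)
    y2 = x2 * np.exp(s(x1)) + t(x1)   # elementwise affine map applied to x2
    log_det = np.sum(s(x1))           # Jacobian is triangular: log-det is cheap
    return np.concatenate([x1, y2]), log_det

def coupling_inverse(y):
    """Exact inverse: undo the affine map, using the untouched half y1."""
    y1, y2 = np.split(y, 2)
    x2 = (y2 - t(y1)) * np.exp(-s(y1))
    return np.concatenate([y1, x2])

x = np.random.randn(4)
y, log_det = coupling_forward(x)
assert np.allclose(coupling_inverse(y), x)  # invertible by construction
```

Note that each layer leaves half of the variables untouched; CF-INNs therefore compose many such layers (interleaved with permutations), and the question our framework answers is whether this seemingly restrictive family can nonetheless approximate a large class of diffeomorphisms.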
Download slides
Neural-ODE-based INNs
We extended the framework to analyze Neural Ordinary Differential Equations (NODEs), another popular building block of INNs, and showed that they, too, have the universal approximation property for a certain large class of diffeomorphisms.
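For contrast with coupling layers, here is a minimal NumPy sketch of why a NODE defines an invertible map (a standard construction, not code from our papers); the velocity field `f` is a placeholder for a neural network, and the inverse is obtained by integrating the same field backward in time:

```python
import numpy as np

def f(x, t):
    """Velocity field; placeholder for a small neural network."""
    return np.tanh(x) * np.cos(t)

def integrate(x, t0, t1, steps=1000):
    """Fixed-step RK4 integration of dx/dt = f(x, t) from t0 to t1."""
    h = (t1 - t0) / steps
    t = t0
    for _ in range(steps):
        k1 = f(x, t)
        k2 = f(x + 0.5 * h * k1, t + 0.5 * h)
        k3 = f(x + 0.5 * h * k2, t + 0.5 * h)
        k4 = f(x + h * k3, t + h)
        x = x + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return x

x0 = np.random.randn(3)
xT = integrate(x0, 0.0, 1.0)        # forward flow map x(0) -> x(1)
x0_rec = integrate(xT, 1.0, 0.0)    # backward flow map is the exact inverse
assert np.allclose(x0_rec, x0, atol=1e-6)  # invertible up to integration error
```

The map realized by a NODE is the time-1 flow of the learned vector field, which is why the representable functions form a class of diffeomorphisms in the first place.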
Download slides
Approximation of derivatives and more general targets
For results covering the approximation of derivatives and more general target classes, see our latest paper, “Universal approximation property of invertible neural networks.”