Robustly Train Normalizing Flows via KL Divergence Regularization

Document Type

Conference Proceeding

Publication Title

Proceedings of the AAAI Conference on Artificial Intelligence

Abstract

In this paper, we find that the training of Normalizing Flows (NFs) is easily affected by outliers and by a small number (or high dimensionality) of training samples. To address this problem, we propose a Kullback–Leibler (KL) divergence regularization on the Jacobian matrix of NFs. We prove that this regularization is equivalent to adding to the training set a set of samples whose covariance matrix is the identity matrix, which simultaneously reduces the negative influence of outliers and of the small sample size on the estimation of the covariance matrix. Therefore, our regularization makes the training of NFs robust. Finally, we evaluate the performance of NFs on out-of-distribution (OoD) detection tasks, and the results demonstrate the effectiveness of the proposed regularization term. For example, with the proposed regularization, the OoD detection score increases by up to 30% compared with training without it.
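To make the idea of a KL divergence penalty on the Jacobian concrete, below is a minimal sketch in PyTorch. The toy affine flow, the function names, and the exact closed-form penalty (the KL divergence between a zero-mean Gaussian with covariance J J^T and a standard normal) are illustrative assumptions based on the abstract, not the authors' implementation.

```python
# Hypothetical sketch: maximum-likelihood training of a toy flow with a
# KL-divergence regularizer that pulls the Jacobian's covariance toward
# the identity matrix. Not the paper's code.
import torch
import torch.nn as nn

class AffineFlow(nn.Module):
    """Single invertible affine layer z = W x + b (toy flow for illustration)."""
    def __init__(self, dim):
        super().__init__()
        self.W = nn.Parameter(torch.eye(dim) + 0.01 * torch.randn(dim, dim))
        self.b = nn.Parameter(torch.zeros(dim))

    def forward(self, x):
        z = x @ self.W.T + self.b
        log_det = torch.slogdet(self.W).logabsdet  # log |det J|; constant in x here
        return z, log_det

def kl_jacobian_penalty(W):
    """KL( N(0, J J^T) || N(0, I) ) = 0.5 * (tr(J J^T) - d - log det(J J^T))."""
    d = W.shape[0]
    sigma = W @ W.T
    return 0.5 * (torch.trace(sigma) - d - torch.logdet(sigma))

# Usage: negative log-likelihood under a standard-normal base, plus the penalty.
dim, lam = 8, 0.1
flow = AffineFlow(dim)
opt = torch.optim.Adam(flow.parameters(), lr=1e-3)
x = torch.randn(256, dim)  # stand-in training batch
for _ in range(100):
    z, log_det = flow(x)
    nll = 0.5 * (z ** 2).sum(dim=1).mean() - log_det  # up to an additive constant
    loss = nll + lam * kl_jacobian_penalty(flow.W)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The penalty vanishes exactly when J J^T equals the identity, which matches the abstract's interpretation of the regularizer as injecting samples whose covariance matrix is the identity.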

First Page

15047

Last Page

15055

DOI

10.1609/aaai.v38i13.29426

Publication Date

3-25-2024
