Document Type

Conference Proceeding

Publication Title

ICLR 2022 - 10th International Conference on Learning Representations

Abstract

Our goal is to recover time-delayed latent causal variables and identify their relations from measured temporal data. Estimating causally-related latent variables from observations is particularly challenging, as the latent variables are not uniquely recoverable in the most general case. In this work, we consider both a nonparametric, nonstationary setting and a parametric setting for the latent processes and propose two provable conditions under which temporally causal latent processes can be identified from their nonlinear mixtures. We propose LEAP, a theoretically grounded framework that extends Variational AutoEncoders (VAEs) by enforcing our conditions through proper constraints in the causal process prior. Experimental results on various datasets demonstrate that temporally causal latent processes are reliably identified from observed variables under different dependency structures and that our approach considerably outperforms baselines that do not properly leverage history or nonstationarity information. These results demonstrate that using temporal information to learn latent processes from their invertible nonlinear mixtures in an unsupervised manner, for which we believe our work is among the first, is promising even without sparsity or minimality assumptions.
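
The abstract describes extending a VAE with constraints in the causal process prior so that latent variables depend on their own history. The sketch below is a minimal, hypothetical illustration of that general idea (a sequential VAE whose prior conditions each latent state on the previous one); it is not the authors' LEAP implementation, and all module names and dimensions (TemporalVAE, prior_net, obs_dim, latent_dim) are illustrative assumptions.

```python
# Hypothetical sketch: sequential VAE with a temporally conditioned latent prior.
# Not the authors' LEAP code; sizes and module names are assumptions.
import torch
import torch.nn as nn


class TemporalVAE(nn.Module):
    def __init__(self, obs_dim=10, latent_dim=4, hidden=64):
        super().__init__()
        # Encoder q(z_t | x_t): amortized inference per time step.
        self.encoder = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(), nn.Linear(hidden, 2 * latent_dim)
        )
        # Decoder p(x_t | z_t): learned nonlinear mixing function.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, hidden), nn.ReLU(), nn.Linear(hidden, obs_dim)
        )
        # Transition prior p(z_t | z_{t-1}): encodes time-delayed dependence.
        self.prior_net = nn.Sequential(
            nn.Linear(latent_dim, hidden), nn.ReLU(), nn.Linear(hidden, 2 * latent_dim)
        )

    def forward(self, x):
        # x: (batch, time, obs_dim)
        mu_q, logvar_q = self.encoder(x).chunk(2, dim=-1)
        z = mu_q + torch.randn_like(mu_q) * (0.5 * logvar_q).exp()
        x_hat = self.decoder(z)
        # Prior on z_t conditioned on z_{t-1}; standard normal at t = 0.
        mu_p, logvar_p = self.prior_net(z[:, :-1]).chunk(2, dim=-1)
        mu_p = torch.cat([torch.zeros_like(mu_p[:, :1]), mu_p], dim=1)
        logvar_p = torch.cat([torch.zeros_like(logvar_p[:, :1]), logvar_p], dim=1)
        recon = ((x - x_hat) ** 2).sum(dim=-1).mean()
        # KL divergence between diagonal Gaussians q(z_t | x_t) and p(z_t | z_{t-1}).
        kl = 0.5 * (
            logvar_p - logvar_q
            + (logvar_q.exp() + (mu_q - mu_p) ** 2) / logvar_p.exp()
            - 1
        ).sum(dim=-1).mean()
        return recon + kl


model = TemporalVAE()
loss = model(torch.randn(8, 20, 10))  # 8 sequences, 20 time steps, 10 observed dims
loss.backward()
```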

Publication Date

1-29-2022

Keywords

Autoencoders, Conditions, Dependency structures, Latent variables, Nonlinear mixtures, Nonparametric, Nonstationary, Proper constraints, Temporal data, Time-delayed

Comments

Access available at OpenReview.net
