Continual Causal Representation Learning

Causal representation learning is widely regarded as a crucial factor in attaining Artificial General Intelligence (AGI). Notably, nonlinear Independent Component Analysis (ICA) offers an attractive framework for causal representation learning, with the aim of recovering the underlying independent variables from their nonlinear mixtures. Nevertheless, the identifiability of nonlinear ICA has been a persistent challenge, as the independence assumption alone is insufficient. Recent breakthroughs in this field have relied on side information, assuming that sources are conditionally independent given auxiliary variables such as domain labels. However, obtaining sufficient side information may not always be possible in practical settings. Furthermore, learning concurrently from multiple domains runs counter to the sequential nature of human learning, which is a fundamental aspect of intelligence. To address this issue, we combine continual learning techniques with nonlinear ICA by demonstrating that this problem can be reformulated as that of averting catastrophic forgetting in the network while ensuring identifiability. We show that, under mild conditions, identifiability can still be guaranteed even when newly arriving domains do not bring sufficient changes. Extensive experimental results show that our method achieves performance comparable to that of nonlinear ICA trained on multiple domains simultaneously.
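To make the auxiliary-variable setting described in the abstract concrete, the following is a minimal, hypothetical sketch (not the thesis's actual model) of the data-generating process that nonlinear ICA with side information aims to invert: sources are independent conditionally on a domain label, and the observations are a nonlinear mixture of those sources. All variable names and the choice of a leaky-ReLU mixing network are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

n_sources, n_domains, n_per_domain = 2, 3, 500

# Domain-dependent source variances: the sources are independent
# *conditionally* on the domain label u (the auxiliary variable).
scales = rng.uniform(0.5, 2.0, size=(n_domains, n_sources))

# Sample domain labels u and conditionally independent sources s.
u = np.repeat(np.arange(n_domains), n_per_domain)
s = rng.normal(size=(u.size, n_sources)) * scales[u]

# An illustrative nonlinear mixing function f (a small random MLP);
# the observed data x = f(s) are the nonlinear mixtures to be unmixed.
def leaky_relu(z, slope=0.2):
    return np.where(z > 0, z, slope * z)

W1 = rng.normal(size=(n_sources, n_sources))
W2 = rng.normal(size=(n_sources, n_sources))
x = leaky_relu(s @ W1) @ W2  # observed nonlinear mixtures

print(x.shape)  # (1500, 2)
```

Identifiability results in this line of work show that, with enough variability in the source distributions across domains, the sources can be recovered from x up to simple indeterminacies; the continual setting considered here receives such domains sequentially rather than all at once.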

Thesis submitted to the Deanship of Graduate and Postdoctoral Studies

In partial fulfillment of the requirements for the M.Sc. degree in Machine Learning

Advisors: Dr. Kun Zhang, Dr. Martin Takac

Online access available for MBZUAI patrons