Exploring Transformer Backbones for Heterogeneous Treatment Effect Estimation
Document Type
Article
Publication Title
arXiv
Abstract
Neural networks (NNs) are often leveraged to represent structural similarities between the potential outcomes (POs) of different treatment groups and thereby obtain better finite-sample estimates of treatment effects. However, despite their wide use, existing works handcraft treatment-specific (sub)network architectures to represent various POs, which limits their applicability and generalizability. To remedy these issues, we develop a framework called Transformers as Treatment Effect Estimators (TransTEE), in which attention layers govern interactions among treatments and covariates to exploit structural similarities of POs for confounding control. Through extensive experiments with this framework, we show that TransTEE (1) serves as a general-purpose treatment effect estimator that significantly outperforms competitive baselines on a variety of challenging TEE problems (e.g., discrete, continuous, structured, or dosage-associated treatments) and is applicable both when covariates are tabular and when they consist of structured data (e.g., texts, graphs); and (2) yields multiple advantages: compatibility with propensity score modeling, parameter efficiency, robustness to distribution shifts in continuous treatment values, interpretability in covariate adjustment, and real-world utility in debugging pre-trained language models.
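To make the attention-based design described in the abstract concrete, the following is a minimal PyTorch sketch in the spirit of TransTEE: each covariate is embedded as a token, a treatment (with an optional dosage) forms a query, and cross-attention performs covariate adjustment with parameters shared across treatment groups rather than handcrafted per-treatment subnetworks. All module names, hyperparameters, and the overall decomposition here are illustrative assumptions, not the authors' released implementation; see arXiv:2202.01336 for the actual architecture.

```python
import torch
import torch.nn as nn

class TransTEESketch(nn.Module):
    """Illustrative transformer-style treatment effect estimator (not the
    authors' code). Covariates become tokens; the treatment attends over
    them via cross-attention, so one shared backbone handles all groups."""

    def __init__(self, n_covariates: int, n_treatments: int, d_model: int = 64):
        super().__init__()
        # Embed each scalar covariate as a token; learned positions
        # distinguish covariate identities.
        self.cov_embed = nn.Linear(1, d_model)
        self.cov_pos = nn.Parameter(torch.randn(n_covariates, d_model))
        self.treat_embed = nn.Embedding(n_treatments, d_model)
        self.dose_embed = nn.Linear(1, d_model)  # optional dosage input
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True),
            num_layers=2,
        )
        self.cross_attn = nn.MultiheadAttention(d_model, num_heads=4,
                                                batch_first=True)
        self.outcome_head = nn.Sequential(
            nn.Linear(d_model, d_model), nn.ReLU(), nn.Linear(d_model, 1)
        )

    def forward(self, x, t, dose):
        # x: (B, n_covariates) float, t: (B,) long, dose: (B, 1) float
        tokens = self.cov_embed(x.unsqueeze(-1)) + self.cov_pos  # (B, n, d)
        tokens = self.encoder(tokens)
        query = (self.treat_embed(t) + self.dose_embed(dose)).unsqueeze(1)
        adjusted, attn = self.cross_attn(query, tokens, tokens)
        # attn weights over covariates offer an interpretable view of
        # which covariates drive the adjustment for this treatment.
        return self.outcome_head(adjusted.squeeze(1)), attn

# Usage: contrasting predicted potential outcomes estimates the effect.
model = TransTEESketch(n_covariates=10, n_treatments=2)
x, dose = torch.randn(32, 10), torch.rand(32, 1)
y1, _ = model(x, torch.ones(32, dtype=torch.long), dose)
y0, _ = model(x, torch.zeros(32, dtype=torch.long), dose)
ate_estimate = (y1 - y0).mean()
```

Because treatments enter only through the shared query embedding, the same network accommodates discrete, continuous, or dosage-associated treatments without architectural changes, which is the structural point the abstract emphasizes.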
DOI
10.48550/arXiv.2202.01336
Publication Date
February 2, 2022
Keywords
Machine learning, Modeling languages, Program debugging, Sampling, Covariates, Different treatments, Discrete/continuous, Finite samples, Neural-networks, Potential outcomes, Structural similarity, Subnetworks, Treatment effects, Treatment group
Recommended Citation
Y. F. Zhang, H. Zhang, Z. C. Lipton, L. E. Li, and E. P. Xing, "Exploring Transformer Backbones for Heterogeneous Treatment Effect Estimation," arXiv preprint arXiv:2202.01336, 2022.
Comments
IR deposit conditions: not described
Preprint available on arXiv