Efficient Out-of-Domain Detection for Sequence to Sequence Models
Document Type
Conference Proceeding
Publication Title
Proceedings of the Annual Meeting of the Association for Computational Linguistics
Abstract
Sequence-to-sequence (seq2seq) models based on the Transformer architecture have become a ubiquitous tool applicable not only to classical text generation tasks such as machine translation and summarization but also to any other task where an answer can be represented in the form of a finite text fragment (e.g., question answering). However, when deploying a model in practice, we need not only high performance but also the ability to determine the cases where the model is not applicable. Uncertainty estimation (UE) techniques provide a tool for identifying out-of-domain (OOD) inputs where the model is susceptible to errors. State-of-the-art UE methods for seq2seq models rely on computationally heavyweight and impractical deep ensembles. In this work, we perform an empirical investigation of various novel UE methods for the large pre-trained seq2seq models T5 and BART on three tasks: machine translation, text summarization, and question answering. We apply computationally lightweight density-based UE methods to seq2seq models and show that they often outperform heavyweight deep ensembles on the task of OOD detection.
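The abstract contrasts deep ensembles with lightweight density-based UE. A minimal sketch of one common density-based score, the Mahalanobis distance fitted on in-domain features, is shown below; the pooled-encoder-feature setup, the regularization constant, and the function names are illustrative assumptions, not the paper's exact method:

```python
# Minimal sketch of a density-based OOD score (Mahalanobis distance),
# one family of the lightweight UE methods the abstract refers to.
# In practice, features would be, e.g., mean-pooled T5/BART encoder
# hidden states; random arrays stand in for them here.
import numpy as np

def fit_gaussian(train_feats: np.ndarray):
    """Fit a single Gaussian to in-domain features of shape (n, d)."""
    mu = train_feats.mean(axis=0)
    centered = train_feats - mu
    cov = centered.T @ centered / len(train_feats)
    # Regularize before inverting so the precision matrix is well-conditioned
    # (1e-6 is an assumed constant, not from the paper).
    prec = np.linalg.inv(cov + 1e-6 * np.eye(cov.shape[0]))
    return mu, prec

def mahalanobis_score(feats: np.ndarray, mu, prec) -> np.ndarray:
    """Squared Mahalanobis distance to the in-domain Gaussian;
    larger values indicate more likely OOD inputs."""
    diff = feats - mu
    return np.einsum("nd,dk,nk->n", diff, prec, diff)

# Usage: fit on in-domain encoder features, flag high-distance inputs as OOD.
rng = np.random.default_rng(0)
in_domain = rng.normal(size=(1000, 64))    # stand-in for in-domain features
shifted = rng.normal(loc=3.0, size=(10, 64))  # shifted distribution, "OOD"
mu, prec = fit_gaussian(in_domain)
print(mahalanobis_score(shifted, mu, prec))
```

Such a score needs only one forward pass per input plus a cached mean and precision matrix, which is why density-based methods are far cheaper than running an ensemble of full seq2seq models.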
First Page
1430
Last Page
1454
Publication Date
1-1-2023
Recommended Citation
A. Vazhentsev et al., "Efficient Out-of-Domain Detection for Sequence to Sequence Models," Proceedings of the Annual Meeting of the Association for Computational Linguistics, pp. 1430–1454, Jan. 2023.