Compression, Transduction, and Creation: A Unified Framework for Evaluating Natural Language Generation
Document Type
Article
Publication Title
arXiv
Abstract
Natural language generation (NLG) spans a broad range of tasks, each of which serves specific objectives and demands different properties of the generated text. This complexity makes automatic evaluation of NLG particularly challenging. Previous work has typically focused on a single task and developed individual evaluation metrics based on task-specific intuitions. In this paper, we propose a unifying perspective that facilitates the design of metrics for a wide range of language generation tasks and quality aspects. Based on the nature of the information change from input to output, we classify NLG tasks into compression (e.g., summarization), transduction (e.g., text rewriting), and creation (e.g., dialog). The information alignment, or overlap, between input, context, and output text plays a common central role in characterizing the generation. Using the uniform concept of information alignment, we develop a family of interpretable metrics for various NLG tasks and aspects, often without the need for gold reference data. To operationalize the metrics, we train self-supervised models to approximate information alignment as a prediction task. Experiments show that the uniformly designed metrics achieve stronger or comparable correlations with human judgments than state-of-the-art metrics on each of the diverse tasks, including text summarization, style transfer, and knowledge-grounded dialog. With information alignment as the intermediate representation, we deliver a composable library for easy NLG evaluation and future metric design.
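To illustrate the information-alignment idea described in the abstract, the following is a minimal sketch, not the authors' CTC implementation: it approximates how well each token of one text is supported by another text via embedding similarity with greedy best matching. The model name, tokenization by whitespace, and mean aggregation are all assumptions chosen for illustration.

    # Illustrative sketch of embedding-based information alignment.
    # NOT the authors' implementation; model choice and aggregation are assumptions.
    from sentence_transformers import SentenceTransformer

    model = SentenceTransformer("all-MiniLM-L6-v2")  # hypothetical model choice

    def alignment_score(a: str, b: str) -> float:
        """Approximate how well each token of `a` is grounded in `b`:
        embed tokens, take each token's best cosine match in `b`, average."""
        tokens_a, tokens_b = a.split(), b.split()
        emb_a = model.encode(tokens_a, normalize_embeddings=True)
        emb_b = model.encode(tokens_b, normalize_embeddings=True)
        sim = emb_a @ emb_b.T            # cosine similarities (embeddings are normalized)
        return float(sim.max(axis=1).mean())  # mean best-match score per token of `a`

    # Example: a consistency-style metric for a compression task (summarization),
    # i.e., how well the output summary aligns with the input document.
    doc = "The committee approved the budget on Friday after a long debate."
    summary = "The budget was approved on Friday."
    print(alignment_score(summary, doc))

In the paper's framework, metrics for compression, transduction, and creation tasks are composed from such alignment scores in different directions (e.g., output-to-input for consistency); the sketch above shows only one such direction.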
Publication Date
2021
Keywords
Knowledge management; Natural language processing systems; Automatic alignment; Automatic evaluation; Evaluation metrics; Human judgments; Information alignment; Natural language generation; Prediction modelling; Property; Reference data; Unified framework; Computation and Language (cs.CL); Machine Learning (cs.LG)
Recommended Citation
M. Deng, B. Tan, Z. Liu, E. P. Xing, and Z. Hu, "Compression, transduction, and creation: A unified framework for evaluating natural language generation," 2021, arXiv:2109.06379
Comments
Preprint: arXiv