Assessing Step-by-Step Reasoning against Lexical Negation: A Case Study on Syllogism
Document Type
Conference Proceeding
Publication Title
EMNLP 2023 - 2023 Conference on Empirical Methods in Natural Language Processing, Proceedings
Abstract
Large language models (LLMs) take advantage of step-by-step reasoning instructions, e.g., chain-of-thought (CoT) prompting. Building on this, whether they can perform CoT-style reasoning robustly is of interest from a probing perspective. In this study, we inspect the step-by-step reasoning ability of LLMs with a focus on negation, a core linguistic phenomenon that is difficult to process. In particular, we introduce several controlled settings (e.g., reasoning over fictional entities) to evaluate the logical reasoning abilities of the models. We observed that dozens of modern LLMs were not robust against lexical negation (e.g., plausible → implausible) when performing CoT-style reasoning, and the results highlight unique limitations in each LLM family.
First Page
14753
Last Page
14773
Publication Date
1-1-2023
Recommended Citation
M. Ye et al., "Assessing Step-by-Step Reasoning against Lexical Negation: A Case Study on Syllogism," EMNLP 2023 - 2023 Conference on Empirical Methods in Natural Language Processing, Proceedings, pp. 14753-14773, Jan 2023.