Fact-Checking Complex Claims with Program-Guided Reasoning
Document Type
Conference Proceeding
Publication Title
Proceedings of the Annual Meeting of the Association for Computational Linguistics
Abstract
Fact-checking real-world claims often requires collecting multiple pieces of evidence and applying complex multi-step reasoning. In this paper, we present Program-Guided Fact-Checking (PROGRAMFC), a novel fact-checking model that decomposes complex claims into simpler sub-tasks that can be solved using a shared library of specialized functions. We first leverage the in-context learning ability of large language models to generate reasoning programs to guide the verification process. Afterward, we execute the program by delegating each sub-task to the corresponding sub-task handler. This process makes our model both explanatory and data-efficient, providing clear explanations of its reasoning process and requiring minimal training data. We evaluate PROGRAMFC on two challenging fact-checking datasets and show that it outperforms seven fact-checking baselines across different settings of evidence availability, with explicit output programs that benefit human debugging.
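The abstract describes a two-stage pipeline: an LLM generates a reasoning program via in-context learning, and an executor delegates each program step to a handler from a shared function library. The sketch below illustrates that control flow only; the sub-task names (Verify, Question, Predict) mirror the functions described in the paper, but the canned program, stub handlers, and eval-based executor are illustrative assumptions rather than the authors' implementation.

```python
# Minimal sketch of the program-guided fact-checking control flow.
# Hypothetical stubs; not the authors' code.
from typing import Callable, Dict, List


def generate_program(claim: str) -> List[str]:
    """Stand-in for the in-context LLM call that writes a reasoning program.
    A real system would prompt a large language model with few-shot examples;
    here we return a canned program for one hypothetical claim."""
    return [
        'fact_1 = Verify("James Cameron directed Titanic.")',
        'fact_2 = Verify("Titanic won 11 Academy Awards.")',
        'label = Predict(fact_1 and fact_2)',
    ]


# Sub-task handlers: stubs standing in for retrieval / QA / verification models.
def Verify(statement: str) -> bool:
    """Would retrieve evidence and run a fact-verification model."""
    return True  # placeholder verdict


def Question(question: str) -> str:
    """Would run a question-answering model over retrieved evidence."""
    return "placeholder answer"


def Predict(aggregate: bool) -> str:
    """Maps the aggregated boolean onto a final fact-checking label."""
    return "SUPPORTED" if aggregate else "REFUTED"


HANDLERS: Dict[str, Callable] = {
    "Verify": Verify,
    "Question": Question,
    "Predict": Predict,
}


def execute_program(program: List[str]) -> str:
    """Execute each program line in order, binding intermediate variables
    so that later steps can reference earlier sub-task results."""
    env: Dict[str, object] = {}
    for line in program:
        var, expr = (part.strip() for part in line.split("=", 1))
        env[var] = eval(expr, {"__builtins__": {}}, {**HANDLERS, **env})
    return str(env["label"])


if __name__ == "__main__":
    program = generate_program("James Cameron's Titanic won 11 Academy Awards.")
    print(execute_program(program))  # -> SUPPORTED
```

Because intermediate results are bound to named variables, the executed program doubles as a step-by-step trace, which is what makes the output programs inspectable and debuggable by humans.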
First Page
6981
Last Page
7004
Publication Date
1-1-2023
Recommended Citation
L. Pan et al., "Fact-Checking Complex Claims with Program-Guided Reasoning," Proceedings of the Annual Meeting of the Association for Computational Linguistics, vol. 1, pp. 6981-7004, Jan 2023.