OffensEval 2023: Offensive language identification in the age of Large Language Models

Marcos Zampieri, George Mason University
Sara Rosenthal, IBM Research
Preslav Nakov, Mohamed Bin Zayed University of Artificial Intelligence
Alphaeus Dmonte, George Mason University
Tharindu Ranasinghe, Aston University


Abstract

The OffensEval shared tasks organized as part of SemEval-2019 and SemEval-2020 were very popular, attracting over 1300 participating teams. The two editions of the shared task helped advance the state of the art in offensive language identification by providing the community with benchmark datasets in Arabic, Danish, English, Greek, and Turkish. The datasets were annotated using the OLID hierarchical taxonomy, which has since become the de facto standard in general offensive language identification research and has been widely used beyond OffensEval. We present a survey of OffensEval and related competitions, and we discuss the main lessons learned. We further evaluate the performance of Large Language Models (LLMs), which have recently revolutionized the field of Natural Language Processing. We use zero-shot prompting with six popular LLMs and zero-shot learning with two task-specific fine-tuned BERT models, and we compare the results against those of the top-performing teams at the OffensEval competitions. Our results show that while some LLMs, such as Flan-T5, achieve competitive performance, LLMs generally lag behind the best OffensEval systems.
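
To illustrate the zero-shot prompting setup mentioned above, the following is a minimal Python sketch using the publicly available Flan-T5 checkpoint from the Hugging Face Hub. The prompt wording, the label mapping to OLID level-A tags (OFF/NOT), and the model size are illustrative assumptions for exposition, not the exact configuration used in the experiments.

    # Minimal sketch of zero-shot prompting for offensive language
    # identification with Flan-T5. Prompt and label mapping are
    # illustrative assumptions, not the paper's exact setup.
    from transformers import pipeline

    # "google/flan-t5-large" is a public instruction-tuned checkpoint.
    generator = pipeline("text2text-generation", model="google/flan-t5-large")

    def classify_offensive(text: str) -> str:
        """Return 'OFF' or 'NOT', following the OLID level-A labels."""
        prompt = (
            "Does the following text contain offensive language? "
            "Answer yes or no.\n"
            f"Text: {text}"
        )
        answer = generator(prompt, max_new_tokens=5)[0]["generated_text"]
        return "OFF" if answer.strip().lower().startswith("yes") else "NOT"

    print(classify_offensive("Have a wonderful day!"))  # expected: NOT

In contrast to the fine-tuned BERT baselines, this approach requires no task-specific training data: the model's instruction-following ability alone maps the input to a label.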