bgGLUE: A Bulgarian General Language Understanding Evaluation Benchmark
Document Type
Conference Proceeding
Publication Title
Proceedings of the Annual Meeting of the Association for Computational Linguistics
Abstract
We present bgGLUE (Bulgarian General Language Understanding Evaluation), a benchmark for evaluating language models on Natural Language Understanding (NLU) tasks in Bulgarian. Our benchmark includes NLU tasks targeting a variety of NLP problems (e.g., natural language inference, fact-checking, named entity recognition, sentiment analysis, and question answering) and machine learning tasks (sequence labeling, document-level classification, and regression). We run the first systematic evaluation of pre-trained language models for Bulgarian, comparing and contrasting results across the nine tasks in the benchmark. The evaluation results show strong performance on sequence labeling tasks, but considerable room for improvement on tasks that require more complex reasoning. We make bgGLUE publicly available together with the fine-tuning and evaluation code, as well as a public leaderboard at https://bgglue.github.io, and we hope that it will enable further advancements in developing NLU models for Bulgarian.
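Illustrative note: the fine-tuning and evaluation code is released at https://bgglue.github.io. As a minimal sketch only, fine-tuning a multilingual pre-trained model on one of the benchmark's document-level classification tasks could look roughly as follows using the Hugging Face Transformers library; the dataset identifier "bgglue/sentiment", the "text" column, and the label count are hypothetical placeholders, not the benchmark's actual API.

    # Minimal fine-tuning sketch for a bgGLUE-style classification task.
    # The dataset identifier and column names below are hypothetical;
    # see https://bgglue.github.io for the actual data and loaders.
    from datasets import load_dataset
    from transformers import (AutoModelForSequenceClassification,
                              AutoTokenizer, Trainer, TrainingArguments)

    model_name = "xlm-roberta-base"  # any multilingual or Bulgarian pre-trained model
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForSequenceClassification.from_pretrained(
        model_name, num_labels=3)  # label count depends on the chosen task

    dataset = load_dataset("bgglue/sentiment")  # hypothetical identifier

    def tokenize(batch):
        # Tokenize the input text; padding is handled dynamically by the Trainer.
        return tokenizer(batch["text"], truncation=True, max_length=256)

    dataset = dataset.map(tokenize, batched=True)

    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir="out",
                               num_train_epochs=3,
                               per_device_train_batch_size=16),
        train_dataset=dataset["train"],
        eval_dataset=dataset["validation"],
        tokenizer=tokenizer,
    )
    trainer.train()
    print(trainer.evaluate())

For the benchmark's sequence labeling tasks (e.g., named entity recognition), the same pattern would apply with AutoModelForTokenClassification and a token-level data collator instead.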
First Page
8733
Last Page
8759
Publication Date
1-1-2023
Recommended Citation
M. Hardalov et al., "bgGLUE: A Bulgarian General Language Understanding Evaluation Benchmark," Proceedings of the Annual Meeting of the Association for Computational Linguistics, vol. 1, pp. 8733–8759, Jan. 2023.