Document Type

Article

Publication Title

arXiv

Abstract

Recently, large pre-trained multilingual speech models have shown potential in scaling Automatic Speech Recognition (ASR) to many low-resource languages. Some of these models employ language adapters in their formulation, which improves monolingual performance and avoids some of the drawbacks of multilingual modeling on resource-rich languages. However, this formulation restricts the usability of these models on code-switched speech, where two languages are mixed together in the same utterance. In this work, we propose ways to effectively fine-tune such models on code-switched speech by assimilating information from both language adapters at each language adaptation point in the network. We also model code-switching as a latent sequence of binary decisions that guides the flow of information from each language adapter at the frame level. The proposed approaches are evaluated on three code-switched datasets covering Arabic, Mandarin, and Hindi, each paired with English, showing consistent improvements in code-switching performance with at least a 10% absolute reduction in CER across all test sets. © 2023, CC BY-NC-SA.
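
The adapter-mixing idea described in the abstract can be pictured with a short sketch. The code below is an illustrative PyTorch approximation, not the paper's implementation: the class names (Adapter, DualAdapterMixer), the bottleneck size, and the sigmoid gate used as a soft stand-in for the latent binary switching sequence are all assumptions made for illustration.

# Illustrative sketch (assumed, not the authors' exact formulation): at each
# language adaptation point, the outputs of the two language adapters are
# mixed per frame by a learned gate, approximating a latent binary switch
# between the two languages of a code-switched utterance.
import torch
import torch.nn as nn


class Adapter(nn.Module):
    """Standard bottleneck adapter with a residual connection."""

    def __init__(self, dim: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.up = nn.Linear(bottleneck, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.up(torch.relu(self.down(x)))


class DualAdapterMixer(nn.Module):
    """Mixes two language adapters with a frame-level gate in [0, 1]."""

    def __init__(self, dim: int):
        super().__init__()
        self.adapter_a = Adapter(dim)   # e.g. English adapter (assumed)
        self.adapter_b = Adapter(dim)   # e.g. Hindi adapter (assumed)
        self.gate = nn.Linear(dim, 1)   # predicts a per-frame language weight

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, frames, dim) hidden states at an adaptation point
        g = torch.sigmoid(self.gate(x))            # (batch, frames, 1)
        return g * self.adapter_a(x) + (1 - g) * self.adapter_b(x)


if __name__ == "__main__":
    mixer = DualAdapterMixer(dim=256)
    hidden = torch.randn(2, 100, 256)              # dummy encoder frames
    out = mixer(hidden)
    print(out.shape)                               # torch.Size([2, 100, 256])

In this sketch the sigmoid gate is a continuous relaxation of the frame-level binary language indicator; thresholding it at 0.5 would recover a hard per-frame language assignment.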

DOI

10.48550/arXiv.2310.07423

Publication Date

11 October 2023

Keywords

Adapter, Automatic speech recognition, Code-switching, Low-resource languages, Multilingual, Multilingual automatic speech recognition, Performance, Resource-rich, Scaling, Speech models

Comments

Preprint: arXiv

Archived with thanks to arXiv

Preprint License: CC BY-NC-SA 4.0

Uploaded 30 November 2023
