Training with noisily labelled data is known to be detrimental to model performance, especially for high-capacity neural network models in low-resource domains. Our experiments suggest that standard regularisation strategies, such as weight decay and dropout, are ineffective in the face of noisy labels. We propose a simple noisy-label detection method that prevents error propagation from the input layer. The approach is based on the observation that the projection of noisy labels is learned through memorisation at late stages of training, and that the Pearson correlation is sensitive to outliers. Extensive experiments on real-world human-disagreement annotations, as well as randomly corrupted and data-augmented labels, across a range of tasks and domains demonstrate that our method is effective: it regularises noisy labels and improves generalisation performance.
Y. Wang, T. Baldwin, and K. Verspoor, "Noisy Label Regularisation for Textual Regression," in Proceedings of the 29th International Conference on Computational Linguistics (COLING 2022), pp. 4228–4240, Oct. 2022. https://aclanthology.org/2022.coling-1.371
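The abstract gives no implementation details, but the intuition that the Pearson correlation is sensitive to label outliers can be illustrated with a minimal leave-one-out sketch. The function names and the selection criterion below are hypothetical illustrations, not the authors' method: a sample whose label is corrupted will, once removed, noticeably raise the correlation between model predictions and gold labels.

```python
import math


def pearson(xs, ys):
    """Sample Pearson correlation coefficient between two sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)


def most_suspicious_label(preds, labels):
    """Return the index whose removal most increases the Pearson
    correlation between predictions and labels -- an illustrative
    leave-one-out outlier test, not the paper's actual detector."""
    base = pearson(preds, labels)
    gains = []
    for i in range(len(preds)):
        p = preds[:i] + preds[i + 1:]
        l = labels[:i] + labels[i + 1:]
        gains.append(pearson(p, l) - base)
    return max(range(len(gains)), key=gains.__getitem__)


# Predictions track the gold labels except for one corrupted target.
preds = [0.1, 0.2, 0.3, 0.4, 0.5]
labels = [0.1, 0.2, 0.3, 0.4, 5.0]  # last label is noise
print(most_suspicious_label(preds, labels))  # → 4
```

Because a single outlier inflates the label variance while contributing little consistent covariance, removing it sharply increases the correlation, which is the sensitivity the abstract alludes to.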