Penalizing small errors using an adaptive logarithmic loss
Document Type
Conference Proceeding
Publication Title
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Abstract
Loss functions are error metrics that quantify the difference between a prediction and its corresponding ground truth. Fundamentally, they define a functional landscape for traversal by gradient descent. Although numerous loss functions have been proposed to handle various machine learning problems, little attention has been given to enhancing these functions to better traverse the loss landscape. In this paper, we simultaneously and significantly mitigate two prominent problems in medical image segmentation, namely (i) class imbalance between foreground and background pixels and (ii) poor loss function convergence. To this end, we propose an Adaptive Logarithmic Loss (ALL) function. We compare this loss function with the existing state of the art on the ISIC 2018 dataset, the nuclei segmentation dataset, and the DRIVE retinal vessel segmentation dataset. We evaluate our method on benchmark metrics and demonstrate state-of-the-art performance. More generally, we show that our system can be used as a framework for better training of deep neural networks.
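The record does not reproduce the ALL formula, so the following is only a minimal NumPy sketch of the general idea conveyed by the title and abstract: log-scaling a soft Dice loss below a threshold so that small residual errors still yield a strong gradient signal. The function names, the piecewise form, and all constants here are illustrative placeholders, not the authors' published definition.

```python
import numpy as np

def soft_dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss between probability maps in [0, 1]."""
    inter = np.sum(pred * target)
    denom = np.sum(pred) + np.sum(target)
    return 1.0 - (2.0 * inter + eps) / (denom + eps)

def adaptive_log_loss(pred, target, threshold=0.1):
    """Hypothetical adaptive logarithmic wrapper around Dice loss.

    Below `threshold`, the loss is log-scaled so that small errors
    are penalized with a slope greater than 1 (about 1/ln 2 near zero),
    keeping gradients useful close to convergence; above it, the raw
    Dice loss is used. `threshold` is an assumed placeholder constant.
    """
    dl = soft_dice_loss(pred, target)
    if dl < threshold:
        # Log-scaled branch, normalised so both branches agree at
        # dl == threshold (continuity of the piecewise loss).
        return threshold * np.log1p(dl / threshold) / np.log(2.0)
    return dl

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pred = rng.random((64, 64))               # predicted foreground probabilities
    target = (rng.random((64, 64)) > 0.5).astype(float)  # binary mask
    print(adaptive_log_loss(pred, target))
```

Because the wrapper only transforms the scalar loss value, the same idea could sit on top of cross-entropy or any other base segmentation loss; the Dice base is chosen here since class imbalance is the problem the abstract highlights.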
First Page
368
Last Page
375
DOI
10.1007/978-3-030-68763-2_28
Publication Date
2-21-2021
Keywords
Class imbalance, FocusNet, Loss functions, Semantic segmentation, U-Net
Recommended Citation
C. Kaul, N. Pears, H. Dai, R. Murray-Smith and S. Manandhar, "Penalizing small errors using an adaptive logarithmic loss", in Pattern Recognition. ICPR International Workshops and Challenges, ICPR 2021, (Lecture Notes in Computer Science, v. 12661), pp. 368-375, 2021. Available: https://doi.org/10.1007/978-3-030-68763-2_28