Frequency Domain Adversarial Training for Robust Volumetric Medical Segmentation
Document Type
Conference Proceeding
Publication Title
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Abstract
It is imperative to ensure the robustness of deep learning models in critical applications such as healthcare. While recent advances in deep learning have improved the performance of volumetric medical image segmentation models, these models cannot be immediately deployed in real-world applications due to their vulnerability to adversarial attacks. We present a 3D frequency domain adversarial attack for volumetric medical image segmentation models and demonstrate its advantages over conventional input (voxel) domain attacks. Using our proposed attack, we introduce a novel frequency domain adversarial training approach for optimizing a model that is robust against both voxel and frequency domain attacks. Moreover, we propose a frequency consistency loss to regulate our frequency domain adversarial training, which achieves a better trade-off between the model's performance on clean and adversarial samples. Code is available at https://github.com/asif-hanif/vafa.
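The sketch below is a minimal illustration of the core idea described in the abstract: crafting an adversarial example by perturbing the frequency coefficients of a 3D volume rather than its voxels, then maximizing the segmentation loss. It is not the authors' implementation (see the linked repository for that); it assumes a PyTorch segmentation model, uses `torch.fft` in place of the paper's transform, and the function name, step counts, and bound are illustrative choices.

```python
# Hedged sketch of a frequency-domain adversarial attack on a 3D segmentation model.
# Assumptions (not from the paper): torch.fft as the 3D transform, a PGD-style update,
# and a simple magnitude-based bound on the frequency perturbation.
import torch


def frequency_domain_attack(model, volume, mask, loss_fn,
                            steps=10, step_size=0.01, eps=0.05):
    """volume: clean scan of shape (B, C, D, H, W); mask: ground-truth segmentation."""
    freq = torch.fft.fftn(volume, dim=(-3, -2, -1))        # 3D spectrum of the clean volume
    delta = torch.zeros_like(volume, requires_grad=True)   # real-valued perturbation on coefficients
    bound = eps * freq.abs().mean().item()                 # crude budget in the frequency domain

    for _ in range(steps):
        # Map the perturbed spectrum back to voxel space and keep a valid intensity range.
        adv = torch.fft.ifftn(freq + delta, dim=(-3, -2, -1)).real
        adv = adv.clamp(volume.min().item(), volume.max().item())

        # Ascend the segmentation loss with respect to the frequency perturbation only.
        loss = loss_fn(model(adv), mask)
        grad = torch.autograd.grad(loss, delta)[0]
        with torch.no_grad():
            delta += step_size * grad.sign()
            delta.clamp_(-bound, bound)

    with torch.no_grad():
        return torch.fft.ifftn(freq + delta, dim=(-3, -2, -1)).real
```

In a frequency domain adversarial training loop, volumes produced this way would be fed back to the model alongside clean ones, with an additional consistency term (as the abstract's frequency consistency loss) discouraging the model's outputs on clean and adversarial inputs from drifting apart.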
First Page
457
Last Page
467
DOI
10.1007/978-3-031-43895-0_43
Publication Date
October 8, 2023
Keywords
Adversarial attack, Adversarial training, Frequency domain attack, Volumetric medical segmentation, Computer aided instruction, Deep learning, Frequency domain analysis, Image segmentation, Learning systems, Medical imaging
Recommended Citation
A. Hanif et al., "Frequency Domain Adversarial Training for Robust Volumetric Medical Segmentation," Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), vol. 14221 LNCS, pp. 457-467, Oct. 2023.
The definitive version is available at https://doi.org/10.1007/978-3-031-43895-0_43