Frequency Domain Adversarial Training for Robust Volumetric Medical Segmentation

Document Type

Conference Proceeding

Publication Title

Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)


It is imperative to ensure the robustness of deep learning models in critical applications such as healthcare. While recent advances in deep learning have improved the performance of volumetric medical image segmentation models, these models cannot be immediately deployed in real-world applications due to their vulnerability to adversarial attacks. We present a 3D frequency domain adversarial attack for volumetric medical image segmentation models and demonstrate its advantages over conventional input or voxel domain attacks. Using the proposed attack, we introduce a novel frequency domain adversarial training approach that optimizes a robust model against both voxel and frequency domain attacks. Moreover, we propose a frequency consistency loss to regulate the frequency domain adversarial training, achieving a better trade-off between the model's performance on clean and adversarial samples. Code is available at
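To illustrate the core idea of a frequency domain attack described in the abstract, the sketch below perturbs the low-frequency band of a volume's 3D DFT and inverts the transform. This is a minimal, hypothetical sketch assuming a NumPy-based pipeline; the function name, band fraction, and noise model are illustrative assumptions, not the paper's actual attack.

```python
import numpy as np

def frequency_domain_perturbation(volume, epsilon=0.05, low_freq_frac=0.25, seed=0):
    """Illustrative 3D frequency-domain perturbation (an assumption-laden sketch,
    not the paper's exact method): add scaled complex noise to the centered
    low-frequency band of the volume's 3D DFT, then invert and clip."""
    rng = np.random.default_rng(seed)

    # Move to the frequency domain; shift so low frequencies are centered.
    spectrum = np.fft.fftshift(np.fft.fftn(volume))

    # Boolean mask selecting a centered cube covering low_freq_frac of each axis.
    mask = np.zeros(volume.shape, dtype=bool)
    band = tuple(
        slice(int(n / 2 - n * low_freq_frac / 2), int(n / 2 + n * low_freq_frac / 2))
        for n in volume.shape
    )
    mask[band] = True

    # Complex Gaussian noise, scaled relative to the mean spectral magnitude,
    # applied only inside the low-frequency band.
    noise = rng.standard_normal(volume.shape) + 1j * rng.standard_normal(volume.shape)
    spectrum = spectrum + epsilon * np.abs(spectrum).mean() * noise * mask

    # Back to the voxel domain; keep intensities within the original range.
    perturbed = np.real(np.fft.ifftn(np.fft.ifftshift(spectrum)))
    return np.clip(perturbed, volume.min(), volume.max())

vol = np.random.default_rng(1).random((16, 16, 16))
adv = frequency_domain_perturbation(vol)
```

Operating in the frequency domain lets the perturbation budget target specific spectral bands, which is the property the abstract contrasts with conventional voxel-domain attacks.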

First Page


Last Page




Publication Date



Adversarial attack, Adversarial training, Frequency domain attack, Volumetric medical segmentation, Computer aided instruction, Deep learning, Frequency domain analysis, Image segmentation, Learning systems, Medical imaging

