Federated Learning of Plug-and-Play Adapter for Segment Anything Model

Degree Name

Master of Science in Computer Vision


Department

Computer Vision

First Advisor

Prof. Mohammad Yaqub

Second Advisor

Prof. Karthik Nandakumar


Foundation models trained on natural images exhibit strong generalization, requiring only minimal fine-tuning across a variety of downstream tasks. However, adapting these models for medical image analysis is challenging because of the extreme distribution shift relative to the pre-training data. The challenge is further exacerbated by privacy constraints that prevent pooling siloed, task-specific medical data at a central location for fine-tuning. This work addresses the problem by combining the strengths of Parameter-Efficient Fine-Tuning (PEFT) and Federated Learning (FL). Specifically, we learn plug-and-play Low-Rank Adapters (LoRA) in a federated manner to adapt the Segment Anything Model (SAM) for 3D medical image segmentation without modifying any parameters of the original SAM. Our experiments show that keeping the original parameters frozen during adaptation is beneficial, because fine-tuning them tends to distort the inherent capabilities of the underlying foundation model. Furthermore, PEFT complements FL by reducing communication cost (∼49× ↓) compared to full fine-tuning (FullFT), while also substantially outperforming FullFT (∼6% ↑ Dice score) on 3D segmentation tasks on the Fed-KiTS19 dataset.
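The core idea in the abstract, freezing the pre-trained weights and training (then federating) only low-rank adapters, can be illustrated with a minimal NumPy sketch. All names here (`LoRALinear`, `fedavg_lora`) are hypothetical and for illustration only; the thesis applies LoRA to SAM's transformer layers inside a real FL setup, not this toy example.

```python
import numpy as np

class LoRALinear:
    """A frozen linear layer wrapped with a trainable low-rank adapter.

    The pre-trained weight W stays fixed; only the small matrices A and B
    are trained, so the effective weight is W + scale * (B @ A).
    """
    def __init__(self, weight, rank=4, alpha=4.0, seed=0):
        self.W = weight                      # frozen weight, shape (out, in)
        rng = np.random.default_rng(seed)
        out_dim, in_dim = weight.shape
        self.A = rng.normal(0.0, 0.01, size=(rank, in_dim))  # trainable
        self.B = np.zeros((out_dim, rank))   # trainable, zero-init so the
                                             # adapter starts as a no-op
        self.scale = alpha / rank

    def forward(self, x):
        # Base path plus low-rank update; with B = 0 this equals x @ W.T.
        return x @ self.W.T + self.scale * (x @ self.A.T @ self.B.T)

def fedavg_lora(client_adapters, weights=None):
    """FedAvg restricted to the LoRA parameters.

    Only A and B are communicated and averaged, which is why the
    communication cost is a small fraction of full fine-tuning.
    """
    n = len(client_adapters)
    weights = weights or [1.0 / n] * n
    avg_A = sum(w * c.A for w, c in zip(weights, client_adapters))
    avg_B = sum(w * c.B for w, c in zip(weights, client_adapters))
    return avg_A, avg_B
```

Because only `A` (rank × in) and `B` (out × rank) travel between clients and server, the per-round payload scales with the rank rather than with the full weight matrices, which is the source of the ∼49× communication reduction reported in the abstract.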


Thesis submitted to the Deanship of Graduate and Postdoctoral Studies

In partial fulfilment of the requirements for the M.Sc. degree in Computer Vision

Advisors: Mohammad Yaqub, Karthik Nandakumar

With a 2-year embargo period

This document is currently not available here.