Defense against Gradient Inversion Attacks using Kurtosis Regularization

Document Type

Thesis
Gradient inversion is a well-known attack in federated learning in which an attacker uses a shared gradient to reconstruct the model inputs. Many defenses against this form of privacy leakage have been proposed, each trading off model accuracy and computation time. We first discuss and compare existing gradient inversion attacks and defenses. We then propose a new defense against gradient inversion attacks that builds on an existing quantization method, kurtosis regularization (KURE). Using KURE, we regularize the weight and gradient distributions to make them more uniform. We systematically compare this defense with gradient perturbation and input batch mixing. We demonstrate that kurtosis regularization degrades gradient reconstruction even in the strongest attack scenarios, and that its cost in model accuracy and computational complexity is lower than that of comparable existing defenses. Several variations of kurtosis regularization are proposed, discussed, and tested: on gradients, on weights, and in combination with other existing defenses. We also apply KURE to DPSGD and demonstrate that KURE + DPSGD achieves better defense performance than plain DPSGD at the same noise level. KURE is also shown to add no model accuracy penalty on top of DPSGD, making it possible to achieve higher model accuracy at the same defense performance as regular DPSGD.
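To make the idea concrete, the following is a minimal PyTorch sketch of a kurtosis-regularization penalty in the spirit of KURE: since a uniform distribution has kurtosis 1.8, penalizing each tensor's squared deviation of kurtosis from 1.8 pushes its distribution toward uniformity. The function names, the `1.8` target, and the way the penalty is combined with the task loss are illustrative assumptions, not the thesis's actual implementation.

```python
import torch

def kurtosis(t: torch.Tensor) -> torch.Tensor:
    """Sample kurtosis E[(t - mu)^4] / sigma^4 of a flattened tensor."""
    t = t.flatten()
    mu = t.mean()
    sigma2 = t.var(unbiased=False)
    # Small epsilon guards against division by zero for constant tensors.
    return ((t - mu) ** 4).mean() / (sigma2 ** 2 + 1e-12)

def kure_penalty(params, target: float = 1.8) -> torch.Tensor:
    """Sum of squared deviations of each weight tensor's kurtosis from the
    target (1.8 = kurtosis of a uniform distribution). Illustrative only."""
    return sum((kurtosis(p) - target) ** 2 for p in params if p.dim() > 1)
```

In training, the penalty would be added to the task loss before backpropagation, e.g. `loss = task_loss + lam * kure_penalty(model.parameters())`, so that gradients of the penalty flatten the weight distribution; applying the same statistic to gradients (as one of the variations discussed above) follows the same pattern.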


Thesis submitted to the Deanship of Graduate and Postdoctoral Studies

in partial fulfillment of the requirements for the M.Sc. degree in Machine Learning

Advisors: Dr. Karthik Nandakumar, Dr. Huan Xiong

2-year embargo period

This document is currently not available here.