AsySQN: Faster Vertical Federated Learning Algorithms with Better Computation Resource Utilization
Document Type
Conference Proceeding
Publication Title
Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining
Abstract
Vertical federated learning (VFL) is an effective paradigm for privacy-preserving collaborative learning across organizations (e.g., different corporations, companies, and institutions). Stochastic gradient descent (SGD) methods are a popular choice for training VFL models because of their low per-iteration computation cost. However, existing SGD-based VFL algorithms are communication-expensive due to the large number of communication rounds they require. Meanwhile, most existing VFL algorithms use synchronous computation, which seriously hampers computation resource utilization in real-world applications. To address the challenges of communication cost and computation resource utilization, we propose an asynchronous stochastic quasi-Newton (AsySQN) framework for VFL, under which three algorithms, i.e., AsySQN-SGD, -SVRG, and -SAGA, are proposed. By scaling descent steps with approximate Hessian information (without computing the inverse Hessian matrix explicitly), the proposed AsySQN-type algorithms converge much faster than SGD-based methods in practice and thus can dramatically reduce the number of communication rounds. Moreover, the adopted asynchronous computation makes better use of computation resources. We theoretically prove the convergence rates of our proposed algorithms for strongly convex problems. Extensive numerical experiments on real-world datasets demonstrate the lower communication costs and better computation resource utilization of our algorithms compared with state-of-the-art VFL algorithms.
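The abstract's key idea is scaling a stochastic gradient by approximate inverse-Hessian information without ever forming the inverse Hessian explicitly. Below is a minimal single-machine sketch of that idea using the standard L-BFGS two-loop recursion on a toy strongly convex least-squares problem; it is not the paper's AsySQN protocol (no federation, no asynchrony), and the objective, memory size, and step size are hypothetical choices for illustration only.

```python
# Sketch: a stochastic quasi-Newton step d ~= H^{-1} g via the L-BFGS
# two-loop recursion, applied to a mini-batch gradient. Assumptions
# (not from the paper): toy least-squares objective, memory m=5, lr=0.5.
import numpy as np

def two_loop_recursion(grad, s_list, y_list):
    """Approximate H^{-1} @ grad from stored curvature pairs (s_i, y_i)."""
    q = grad.copy()
    rhos = [1.0 / (y @ s) for s, y in zip(s_list, y_list)]
    alphas = []
    for s, y, rho in reversed(list(zip(s_list, y_list, rhos))):
        alpha = rho * (s @ q)
        alphas.append(alpha)
        q -= alpha * y
    if s_list:  # initial scaling H0 = gamma * I with gamma = s'y / y'y
        s, y = s_list[-1], y_list[-1]
        q *= (s @ y) / (y @ y)
    for (s, y, rho), alpha in zip(zip(s_list, y_list, rhos), reversed(alphas)):
        beta = rho * (y @ q)
        q += (alpha - beta) * s
    return q

# Toy strongly convex problem: mini-batch least squares on random data.
rng = np.random.default_rng(0)
A, b = rng.normal(size=(200, 10)), rng.normal(size=200)
w = np.zeros(10)
s_list, y_list, m, lr = [], [], 5, 0.5

for t in range(100):
    idx = rng.choice(200, size=32, replace=False)       # sample a mini-batch
    grad = A[idx].T @ (A[idx] @ w - b[idx]) / len(idx)  # stochastic gradient
    d = two_loop_recursion(grad, s_list, y_list)        # scaled descent step
    w_new = w - lr * d
    grad_new = A[idx].T @ (A[idx] @ w_new - b[idx]) / len(idx)
    s, y = w_new - w, grad_new - grad
    if s @ y > 1e-10:                                   # curvature condition
        s_list.append(s); y_list.append(y)
        if len(s_list) > m:                             # keep last m pairs
            s_list.pop(0); y_list.pop(0)
    w = w_new
```

The same update can be combined with variance-reduced gradients (SVRG- or SAGA-style estimators in place of the plain mini-batch gradient), which is the distinction among the three proposed algorithm variants.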
First Page
3917
Last Page
3927
DOI
10.1145/3447548.3467169
Publication Date
8-14-2021
Keywords
asynchronous parallel, federated learning, quasi-Newton methods
Recommended Citation
Q. Zhang et al., “AsySQN: Faster Vertical Federated Learning Algorithms with Better Computation Resource Utilization,” Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 3917–3927, Aug. 2021, doi: 10.1145/3447548.3467169.