Document Type

Article

Publication Title

arXiv

Abstract

Modern machine learning algorithms usually involve tuning multiple hyperparameters (from one to thousands), which play a pivotal role in model generalization. Black-box optimization and gradient-based algorithms are the two dominant approaches to hyperparameter optimization, and they have distinct advantages. How to design a hyperparameter optimization technique that inherits the benefits of both approaches remains an open problem. To address this challenge, in this paper we propose a new hyperparameter optimization method with zeroth-order hyper-gradients (HOZOG). Specifically, we first formulate hyperparameter optimization exactly as an A-based constrained optimization problem, where A is a black-box optimization algorithm (such as a deep neural network training algorithm). Then, we use averaged zeroth-order hyper-gradients to update the hyperparameters. We provide a feasibility analysis of using HOZOG for hyperparameter optimization. Finally, experimental results on three representative hyperparameter optimization tasks (with 1 to 1,250 hyperparameters) demonstrate the benefits of HOZOG in terms of simplicity, scalability, flexibility, effectiveness and efficiency compared with state-of-the-art hyperparameter optimization methods. © 2021, CC0.
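
For intuition, the following is a minimal Python sketch of the averaged zeroth-order hyper-gradient update the abstract describes. It is not the authors' implementation: the function names, the two-point estimator with dimension scaling, and the constants (`mu`, `num_dirs`, `lr`) are illustrative assumptions, and `f` stands for a black-box call that runs the training algorithm A to completion and returns a validation loss.

```python
import numpy as np

def zeroth_order_hypergrad(f, lam, mu=1e-2, num_dirs=10, rng=None):
    """Averaged zeroth-order estimate of the hyper-gradient of a
    black-box validation loss f at hyperparameter vector lam.

    f        : callable mapping hyperparameters -> validation loss;
               internally it runs the black-box training algorithm A.
    lam      : current hyperparameter vector (1-D np.ndarray).
    mu       : smoothing radius for the finite-difference probe.
    num_dirs : number of random directions averaged together.
    """
    rng = np.random.default_rng() if rng is None else rng
    d = lam.shape[0]
    f0 = f(lam)                              # one baseline evaluation
    grad = np.zeros_like(lam)
    for _ in range(num_dirs):
        u = rng.standard_normal(d)
        u /= np.linalg.norm(u)               # unit-length direction
        grad += (f(lam + mu * u) - f0) / mu * u
    return d * grad / num_dirs               # dimension-scaled average

def hozog_style_search(f, lam0, lr=0.1, steps=50, **kw):
    """Plain gradient descent on the hyperparameters using the
    zeroth-order hyper-gradient estimate above (illustrative only)."""
    lam = lam0.copy()
    for _ in range(steps):
        lam -= lr * zeroth_order_hypergrad(f, lam, **kw)
    return lam
```

Averaging over several random directions trades extra evaluations of f for a lower-variance hyper-gradient estimate; this is the role the averaged zeroth-order hyper-gradient plays in the method the abstract outlines.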

DOI

10.48550/arXiv.2102.09026

Publication Date

2-17-2021

Keywords

Bi-level optimization; Black-box optimization; Hyperparameter optimization; Zeroth-order optimization

Comments

Preprint: arXiv

  • Archived with thanks to arXiv
  • Preprint License: CC0 1.0
  • Uploaded 24 March 2022
