RoMA: Robust Model Adaptation for Offline Model-Based Optimization
Document Type
Article
Publication Title
arXiv
Abstract
We consider the problem of searching for an input that maximizes a black-box objective function, given only a static dataset of input-output queries. A popular approach to this problem is to maintain a proxy model, e.g., a deep neural network (DNN), that approximates the true objective function. Here, the main challenge is avoiding adversarially optimized inputs during the search, i.e., inputs where the DNN severely overestimates the true objective function. To handle this issue, we propose a new framework, coined robust model adaptation (RoMA), based on gradient-based optimization of inputs over the DNN. Specifically, it consists of two steps: (a) a pre-training strategy to robustly train the proxy model and (b) a novel adaptation procedure of the proxy model to produce robust estimates for a specific set of candidate solutions. At a high level, our scheme utilizes a local smoothness prior to overcome the brittleness of the DNN. Experiments across various tasks show the effectiveness of RoMA compared with previous methods, obtaining state-of-the-art results: RoMA outperforms all baselines on 4 of the 6 tasks and achieves runner-up results on the remaining tasks. Copyright © 2021, The Authors. All rights reserved.
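The gradient-based inner loop the abstract refers to can be sketched minimally as follows. This is a toy illustration, not the paper's method: the quadratic `proxy` function and its analytic gradient are hypothetical stand-ins for a trained DNN proxy, and RoMA's robust pre-training and adaptation steps are omitted.

```python
import numpy as np

# Hypothetical stand-in for a trained proxy model f_theta(x):
# a smooth function with a known maximum at x = 3 in every
# coordinate. RoMA itself uses a deep neural network here.
def proxy(x):
    return -np.sum((x - 3.0) ** 2)

def proxy_grad(x):
    # Analytic gradient of the toy proxy; with a DNN this would
    # come from automatic differentiation.
    return -2.0 * (x - 3.0)

def gradient_ascent(x0, lr=0.1, steps=200):
    """Search for an input maximizing the proxy by gradient ascent,
    the basic optimization loop offline model-based methods build on."""
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        x = x + lr * proxy_grad(x)
    return x

x_star = gradient_ascent(np.zeros(2))  # converges toward [3.0, 3.0]
```

The failure mode motivating RoMA is visible in this loop: with a brittle DNN in place of the toy proxy, the ascent can drift into regions where the proxy's value is high but the true objective is not.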
DOI
10.48550/arXiv.2110.14188
Publication Date
10-27-2021
Recommended Citation
S. Yu, S. Ahn, L. Song, and J. Shin, "RoMA: Robust model adaptation for offline model-based optimization," 2021, arXiv:2110.14188
Comments
Preprint: arXiv