Judging Adam: Studying the Performance of Optimization Methods on ML4SE Tasks
Proceedings - International Conference on Software Engineering
Solving a problem with a deep learning model requires researchers to optimize the loss function with some optimization method. The research community has developed more than a hundred different optimizers, yet data on their performance across tasks is scarce. In particular, none of the existing benchmarks evaluate optimizers on source code-related problems, even though existing benchmark data indicates that certain optimizers may be more efficient in particular domains. In this work, we test the performance of various optimizers on deep learning models for source code and find that the choice of optimizer can have a significant impact on model quality, with up to two-fold score differences between some of the relatively well-performing optimizers. We also find that the RAdam optimizer (and its modification with the Lookahead envelope) almost always performs well on the tasks we consider, making it the best choice overall. Our findings show the need for a more extensive study of optimizers on code-related tasks and indicate that the ML4SE community should consider using RAdam instead of Adam as the default optimizer for code-related deep learning tasks.
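The Lookahead envelope mentioned above wraps an inner ("fast") optimizer and periodically pulls a slowly moving copy of the weights toward the fast weights, then restarts the fast optimizer from that copy. A minimal pure-Python sketch of the mechanism, using plain SGD on a one-dimensional quadratic as the inner optimizer (the function name and hyperparameters here are illustrative, not taken from the paper):

```python
def lookahead_sgd(grad, w0, lr=0.1, alpha=0.5, k=5, steps=20):
    """Illustrative Lookahead wrapper: run k fast SGD steps, then
    move the slow weights toward the fast weights by a factor alpha
    and reset the fast weights to the slow ones."""
    slow = fast = w0
    for step in range(1, steps + 1):
        fast -= lr * grad(fast)          # inner (fast) optimizer step
        if step % k == 0:                # every k steps: slow update
            slow += alpha * (fast - slow)
            fast = slow                  # fast weights restart from slow
    return slow

# Example: minimize f(w) = (w - 3)^2, whose gradient is 2 * (w - 3);
# the iterate moves toward the minimizer w = 3.
w = lookahead_sgd(lambda v: 2.0 * (v - 3.0), w0=0.0)
```

In the paper's setting the inner optimizer is RAdam rather than SGD; in a PyTorch workflow one would typically combine `torch.optim.RAdam` with an off-the-shelf Lookahead wrapper instead of hand-rolling the loop above.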
Keywords: Codes (symbols), Deep learning, Learning systems
D. Pasechnyuk, A. Prazdnichnykh, M. Evtikhiev and T. Bryksin, "Judging Adam: Studying the Performance of Optimization Methods on ML4SE Tasks," in 2023 IEEE/ACM 45th International Conference on Software Engineering: New Ideas and Emerging Results (ICSE-NIER), Melbourne, Australia, July 2023, pp. 117-122. doi: 10.1109/ICSE-NIER58687.2023.00027.