UTalk: Bridging the Gap between Humans and AI

Document Type

Conference Proceeding

Publication Title

Proceedings of the International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications

Abstract

In the digital age of ever-increasing data sources, accessibility, and collection, the demand for generalizable machine learning models that can capitalize on limited training datasets is unprecedented, given the labor intensity and expense of data collection. A deployed model must efficiently exploit patterns and regularities in the data to achieve desirable predictive performance on new, unseen datasets. Because data can be gathered from many sources across domains such as Machine Learning, Natural Language Processing, and Computer Vision, selection bias inevitably creeps into the collected data, resulting in distribution (domain) shifts. In practice, deep neural networks trained by simply minimizing empirical risk (ERM) on highly complex, non-convex loss functions often pursue sharp local minima and consequently yield sub-optimal generalization performance. Hence, this paper tackles the generalization error by first introducing the notion of a local minimum's sharpness, an attribute that induces a model's non-generalizability and can serve as a simple guiding heuristic for theoretically distinguishing satisfactory (flat) local minima from poor (sharp) ones. Secondly, motivated by the introduced concept of the variance-stability ∼ exploration-exploitation tradeoff, we propose a novel gradient-based adaptive optimization algorithm, a variant of SGD named Bouncing Gradient Descent (BGD). BGD's primary goal is to remedy SGD's deficiency of getting trapped in suboptimal minima by utilizing relatively large step sizes and "unorthodox" weight-update schemes, achieving better model generalization by being attracted to flatter local minima.
We empirically validate the proposed approach on several benchmark classification datasets, showing that it yields significant and consistent improvements in model generalization performance and produces state-of-the-art results compared to the baseline approaches.
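The "bouncing" intuition behind large step sizes can be illustrated on a toy 1-D loss. The sketch below is not the paper's BGD algorithm (whose update rules are not given in this abstract); it is a hypothetical example showing why plain gradient descent with a small step size settles into whichever basin it starts in, while a large step size is unstable inside a sharp (high-curvature) basin and is ejected toward a flatter one:

```python
import math

# Toy 1-D loss: a sharp minimum near x = 0 and a flat minimum near x = 3.
# (Hypothetical illustration only -- not the paper's actual BGD update.)
def loss(x):
    return -math.exp(-x**2 / 0.02) - math.exp(-(x - 3) ** 2 / 8)

def grad(x):
    # Analytic derivative of `loss`.
    return (x / 0.01) * math.exp(-x**2 / 0.02) + ((x - 3) / 4) * math.exp(-(x - 3) ** 2 / 8)

def gradient_descent(x0, lr, steps=2000):
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)
    return x

# A small step size converges into the nearby sharp basin (curvature ~100,
# stable because lr < 2 / curvature) ...
x_small = gradient_descent(0.2, lr=0.005)

# ... while a large step size makes updates inside the sharp basin diverge
# (lr > 2 / curvature), so the iterate "bounces" out and eventually settles
# in the flat basin near x = 3, where the same step size is stable.
x_large = gradient_descent(0.2, lr=0.5)
```

With these parameters, the small-step-size run ends near the sharp minimum at 0, whereas the large-step-size run ends near the flat minimum at 3, mirroring the flat-versus-sharp distinction the abstract draws.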

First Page

239

Last Page

249

DOI

10.5220/0011771700003417

Publication Date

1-1-2023

Keywords

Basin Flatness and Sharpness, Bouncing Gradient Descent, Deep Neural Network, Generalization, Heuristic Algorithm, Large Step Sizes, Local Minima, Optimization Method
