Negating Bias in Hybrid Transformers for Class Incremental Learning
Document Type
Dissertation
Abstract
The world around us is ever-changing, with new products and technologies emerging every day. To keep up with these rapid changes, we need adaptable artificial intelligence (AI) algorithms. Over the last few decades, neural networks, or more specifically deep learning (DL), have dominated AI research, to the point of reaching near human-level performance on some tasks. However, for existing DL models to learn a new concept or identify novel objects, they must relearn all previously acquired knowledge; failing to do so causes the model to misclassify, or even forget, previously learned objects. To solve this problem and avoid the costly retraining of models from scratch, more attention has recently been directed toward creating models that learn continually, as humans do. Humans have an extraordinary ability to learn new things while retaining most of their previously acquired knowledge. Attempting to mimic this behavior, the area of Continual Learning (CL) emerged. Most existing CL approaches focus on Convolutional Neural Networks (CNNs), while the capabilities of vision transformers (ViTs) remain understudied. In this thesis, we study how a hybrid vision transformer fares in continual learning compared to conventional CNNs. Furthermore, we propose a new viewpoint on the bias problem in continual learning with exemplar replay: we show that it can be recast as a long-tail distribution problem, which becomes increasingly skewed as more classes are learned. Through experiments on small- and large-scale datasets, we demonstrate the benefit of addressing the bias from this perspective, showing that a very simple addition to the model's cross-entropy loss, inspired by the long-tail distribution literature, yields a significant performance improvement with no added parameters.
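The abstract does not spell out the exact form of the loss modification. As one illustration of the general idea from the long-tail literature, the sketch below implements a logit-adjusted cross-entropy, where each class logit is shifted by a term proportional to the log of that class's observed frequency; the function name, the temperature `tau`, and the use of raw class counts are assumptions for this example, not the thesis's actual formulation.

```python
import numpy as np

def logit_adjusted_ce(logits, label, class_counts, tau=1.0):
    """Cross-entropy with a per-class logit offset from class frequencies.

    Hypothetical sketch: shifting each logit by tau * log(prior) forces
    frequent (head) classes to win by a larger margin, counteracting the
    bias toward well-represented classes in a skewed replay buffer.
    """
    priors = class_counts / class_counts.sum()
    adjusted = logits + tau * np.log(priors)      # additive per-class offset
    m = adjusted.max()                            # stabilize log-sum-exp
    log_probs = adjusted - (np.log(np.exp(adjusted - m).sum()) + m)
    return -log_probs[label]
```

Because the offset is a fixed additive term computed from class counts, it introduces no trainable parameters, which matches the abstract's claim of improvement "with no added parameters". With uniform class counts the offset is the same for every class and the loss reduces to standard cross-entropy.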
First Page
i
Last Page
45
Publication Date
1-12-2022
Recommended Citation
A.M.H. Mohamed, "Negating Bias in Hybrid Transformers for Class Incremental Learning", M.S. Thesis, Computer Vision, MBZUAI, Abu Dhabi, UAE, 2022.
Comments
Thesis submitted to the Deanship of Graduate and Postdoctoral Studies
In partial fulfillment of the requirements for the M.Sc. degree in Computer Vision
Advisors: Dr. Salman Khan, Dr. Fahad Khan
Online access for MBZUAI patrons