Contrastive Self-Supervised Learning: A Survey on Different Architectures

Document Type

Conference Proceeding

Publication Title

2022 2nd International Conference on Artificial Intelligence (ICAI)

Abstract

Self-Supervised Learning (SSL) has enhanced the learning of semantic representations from images. SSL reduces the need for annotating or labelling data by relying less on class labels during the training phase. SSL techniques based on Contrastive Learning (CL) are gaining prevalence because of their low dependency on training-data labels. Several CL methods produce state-of-the-art results on datasets that serve as benchmarks for supervised learning. In this survey, we review CL-based methods including SimCLR, MoCo, BYOL, SwAV, SimTriplet and SimSiam. We compare these pipelines in terms of their accuracy on the ImageNet and VOC07 benchmarks. BYOL proposes a simple yet powerful architecture that achieves a 74.30% accuracy score on the image classification task. Using a clustering approach, SwAV outperforms the other architectures by achieving 75.30% top-1 ImageNet classification accuracy. In addition, we shed light on the importance of CL approaches, which can maximise the use of the huge amounts of data available today. Finally, we report the limitations of current CL methodologies and emphasize the need for computationally efficient CL pipelines. © 2022 IEEE.
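For readers unfamiliar with how contrastive objectives avoid class labels, the sketch below illustrates the general idea with an NT-Xent loss of the kind used by SimCLR-style pipelines: two augmented views of the same image are pulled together while all other images in the batch act as negatives. This is a minimal illustrative example, not code from the surveyed paper; the function name, tensor shapes, and temperature value are assumptions.

```python
# Minimal sketch of an NT-Xent contrastive loss (SimCLR-style).
# Illustrative only: names, shapes, and temperature are assumptions,
# not taken from the surveyed paper.
import torch
import torch.nn.functional as F

def nt_xent_loss(z1: torch.Tensor, z2: torch.Tensor,
                 temperature: float = 0.5) -> torch.Tensor:
    """z1, z2: (N, D) projections of two augmentations of the same N images."""
    n = z1.size(0)
    # Stack both views and L2-normalise so dot products are cosine similarities.
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)      # (2N, D)
    sim = z @ z.t() / temperature                           # (2N, 2N) logits
    # Mask self-similarity so an embedding is never its own negative.
    sim.fill_diagonal_(float("-inf"))
    # The positive for row i is row i + N (and vice versa); no class labels used.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)

if __name__ == "__main__":
    # Random projections standing in for encoder outputs.
    z1, z2 = torch.randn(8, 128), torch.randn(8, 128)
    print(nt_xent_loss(z1, z2).item())
```

Note that only the pairing of augmented views supplies the supervisory signal here, which is why such pipelines can exploit large unlabelled datasets.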

First Page

1

Last Page

6

DOI

10.1109/ICAI55435.2022.9773725

Publication Date

5-17-2022

Keywords

Architecture, Image classification, Image enhancement, Semantics, Supervised learning, Surveys, Class labels, Contrastive learning, Data annotation, Image augmentation, Labelings, Learning process, Self-supervised learning, Semantic representation, Training data, Training phases

Comments

IR Deposit conditions: not described
