A Facial Landmark Detection Method Based on Deep Knowledge Transfer

Document Type

Article

Publication Title

IEEE Transactions on Neural Networks and Learning Systems

Abstract

Facial landmark detection is a crucial preprocessing step in many applications that process facial images. Deep-learning-based methods have become mainstream and achieve outstanding performance in facial landmark detection. However, accurate models typically have large numbers of parameters, resulting in high computational complexity and long execution times, so a simple but effective model that balances accuracy and speed is needed. To this end, this article proposes a lightweight, efficient, and effective model called the efficient face alignment network (EfficientFAN). EfficientFAN adopts an encoder-decoder structure, with the simple EfficientNet-B0 backbone as the encoder and three upsampling layers and convolutional layers as the decoder. Moreover, deep dark knowledge is extracted from the teacher network through feature-aligned distillation and patch similarity distillation; this knowledge captures pixel distribution information in the feature space and multiscale structural information in the affinity space of feature maps. Absorbing this dark knowledge further improves the accuracy of EfficientFAN. Extensive experiments on public datasets, including 300 Faces in the Wild (300W), Wider Facial Landmarks in the Wild (WFLW), and Caltech Occluded Faces in the Wild (COFW), demonstrate the superiority of EfficientFAN over state-of-the-art methods.
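The patch similarity distillation mentioned in the abstract can be sketched as follows: compare pairwise affinity (similarity) matrices computed over patch features of the teacher and student, and penalize their difference. This is a minimal illustrative sketch, not the paper's implementation; the function names, the use of cosine similarity, and the mean-squared penalty are assumptions, and the paper's exact loss formulation may differ.

```python
import numpy as np

def affinity_matrix(feats):
    # feats: (num_patches, dim) array of patch feature vectors.
    # Returns the (num_patches, num_patches) pairwise cosine-similarity
    # matrix, i.e. the "affinity space" of the feature map patches.
    norms = np.linalg.norm(feats, axis=1, keepdims=True) + 1e-8
    normed = feats / norms
    return normed @ normed.T

def patch_similarity_loss(student_feats, teacher_feats):
    # Mean squared difference between the student's and teacher's
    # affinity matrices: the student is pushed to reproduce the
    # teacher's inter-patch structural relations, not raw features.
    a_s = affinity_matrix(student_feats)
    a_t = affinity_matrix(teacher_feats)
    return float(np.mean((a_s - a_t) ** 2))

# Toy example: a student whose patch features are a noisy copy of the
# teacher's incurs a small but nonzero structural loss.
rng = np.random.default_rng(0)
teacher = rng.standard_normal((16, 64))           # 16 patches, 64-dim features
student = teacher + 0.1 * rng.standard_normal((16, 64))
print(patch_similarity_loss(student, teacher))
```

Because the loss is computed on affinity matrices rather than raw features, the student and teacher may use different feature dimensions as long as they produce the same number of patches; feature-aligned distillation, by contrast, matches features directly and typically needs a projection to align channel dimensions.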

First Page

1342

Last Page

1353

DOI

10.1109/TNNLS.2021.3105247

Publication Date

3-1-2023

Keywords

Convolutional neural network (CNN), deep learning, facial landmark detection, knowledge distillation (KD)

Comments

IR Deposit conditions:

OA version (pathway a) Accepted version

No embargo

When accepted for publication, set statement to accompany deposit (see policy)

Must link to publisher version with DOI

Publisher copyright and source must be acknowledged
