EVOKE: Emotion Enabled Virtual Avatar Mapping Using Optimized Knowledge Distillation

Document Type

Conference Proceeding

Publication Title

Digest of Technical Papers - IEEE International Conference on Consumer Electronics

Abstract

As virtual environments continue to advance, the demand for immersive and emotionally engaging experiences has grown. Addressing this demand, we introduce Emotion enabled Virtual avatar mapping using Optimized KnowledgE distillation (EVOKE), a lightweight emotion recognition framework designed for the seamless integration of emotion recognition into 3D avatars within virtual environments. Our approach leverages knowledge distillation for multi-label classification on the publicly available DEAP dataset, covering valence, arousal, and dominance as the primary emotional classes. Remarkably, our distilled model, a CNN with only two convolutional layers and 18 times fewer parameters than the teacher model, achieves a competitive accuracy of 87% while requiring far fewer computational resources. This balance between performance and deployability positions our framework as an ideal choice for virtual environment systems. Furthermore, the multi-label classification outcomes are used to map emotions onto custom-designed 3D avatars.
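
The abstract's core recipe is a compact student CNN trained by knowledge distillation from a larger teacher for multi-label emotion classification. The sketch below is not the authors' code: the two-convolutional-layer student, the EEG input shape, the teacher interface, and all hyperparameters (`alpha`, temperature `T`, layer widths) are illustrative assumptions consistent with the abstract's description.

```python
# Minimal sketch of the approach described in the abstract, under stated assumptions:
# a two-conv-layer student CNN for EEG input and a distillation loss that blends
# hard multi-label targets (valence, arousal, dominance) with teacher soft targets.
import torch
import torch.nn as nn
import torch.nn.functional as F

class StudentCNN(nn.Module):
    """Lightweight student: two 1-D convolutional layers over EEG channels (assumed layout)."""
    def __init__(self, in_channels=32, num_labels=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(in_channels, 16, kernel_size=7, stride=2, padding=3),
            nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=5, stride=2, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(32, num_labels)  # one logit per emotion dimension

    def forward(self, x):                  # x: (batch, channels, time)
        z = self.features(x).squeeze(-1)   # (batch, 32)
        return self.classifier(z)          # (batch, num_labels) logits

def distillation_loss(student_logits, teacher_logits, targets, alpha=0.5, T=2.0):
    """Combine hard-label BCE with soft-target matching against the (frozen) teacher."""
    hard = F.binary_cross_entropy_with_logits(student_logits, targets)
    soft = F.binary_cross_entropy_with_logits(student_logits / T,
                                              torch.sigmoid(teacher_logits / T))
    return alpha * hard + (1.0 - alpha) * soft

# Usage sketch: x is a batch of EEG windows, y the multi-label targets in {0, 1};
# the teacher is any pre-trained, larger network producing per-label logits.
# student = StudentCNN()
# loss = distillation_loss(student(x), teacher(x).detach(), y)
```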

DOI

10.1109/ICCE59016.2024.10444200

Publication Date

1-1-2024

Keywords

3D avatars, EEG signals, emotion recognition, knowledge distillation, wellbeing

