Fusion and Orthogonal Projection for Improved Face-Voice Association

Document Type

Conference Proceeding

Publication Title

IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings

Abstract

We study the problem of learning the association between faces and voices. Prior works adopt pairwise or triplet loss formulations to learn an embedding space amenable to the associated matching and verification tasks. Albeit showing some progress, such loss formulations are restrictive due to their dependency on a distance-dependent margin parameter, poor run-time training complexity, and reliance on carefully crafted negative mining procedures. In this work, we hypothesize that an enriched feature representation coupled with effective yet efficient supervision is necessary to realize a discriminative joint embedding space for improved face-voice association. To this end, we propose a lightweight, plug-and-play mechanism that exploits the complementary cues in both modalities to form enriched fused embeddings and clusters them based on their identity labels via orthogonality constraints. We coin our proposed mechanism fusion and orthogonal projection (FOP) and instantiate it in a two-stream pipeline. The resulting framework is evaluated on the large-scale VoxCeleb dataset across a multitude of tasks, including cross-modal verification and matching. Our method performs favourably against current state-of-the-art methods, and our proposed supervision formulation is more effective and efficient than those employed by contemporary methods. © 2022 IEEE
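The abstract outlines the core mechanism: complementary face and voice cues are fused into a single embedding, and identities are clustered through an orthogonality constraint rather than a margin-based contrastive loss. The following PyTorch sketch illustrates one plausible reading of that idea; the gated fusion design, the layer dimensions, and the batch-wise formulation of the orthogonality loss are illustrative assumptions, not the authors' exact FOP implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class FOP(nn.Module):
    """Fuse face and voice embeddings into one joint embedding (sketch)."""
    def __init__(self, face_dim=512, voice_dim=512, embed_dim=128):
        super().__init__()
        self.face_proj = nn.Linear(face_dim, embed_dim)
        self.voice_proj = nn.Linear(voice_dim, embed_dim)
        # Gated fusion (an assumption): learn per-dimension weights that
        # decide how much each modality contributes to the fused embedding.
        self.gate = nn.Linear(2 * embed_dim, embed_dim)

    def forward(self, face, voice):
        f = F.relu(self.face_proj(face))
        v = F.relu(self.voice_proj(voice))
        g = torch.sigmoid(self.gate(torch.cat([f, v], dim=-1)))
        return g * f + (1.0 - g) * v  # fused embedding

def orthogonal_projection_loss(embeddings, labels):
    """Batch-wise orthogonality supervision (illustrative): pull cosine
    similarity of same-identity pairs toward 1 and push different-identity
    pairs toward 0, i.e. toward orthogonality."""
    z = F.normalize(embeddings, dim=-1)
    sim = z @ z.t()                                   # pairwise cosine similarities
    same = labels.unsqueeze(0) == labels.unsqueeze(1) # same-identity mask
    eye = torch.eye(len(labels), dtype=torch.bool, device=z.device)
    pos = sim[same & ~eye]                            # same identity, self excluded
    neg = sim[~same]                                  # different identities
    loss_pos = (1.0 - pos).mean() if pos.numel() else z.new_zeros(())
    loss_neg = neg.abs().mean() if neg.numel() else z.new_zeros(())
    return loss_pos + loss_neg

# Usage (hypothetical shapes): fused = FOP()(face_batch, voice_batch)
#                              loss = orthogonal_projection_loss(fused, id_labels)

Because this style of loss drives cross-identity similarities toward zero instead of enforcing a fixed distance margin, it needs no margin hyperparameter and no negative-mining procedure, which is the efficiency argument the abstract makes against pairwise and triplet formulations.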

DOI

10.1109/ICASSP43922.2022.9747704

Publication Date

4-27-2022

Keywords

Cross-modal verification, Face-voice association, Matching, Multimodal, Computer vision, Face recognition, Large dataset, Cross-modal, Embeddings, Orthogonal projection, Runtimes, Verification task

Comments

IR deposit conditions: not described
