Dynamically Decoding Source Domain Knowledge for Domain Generalization

Document Type

Article

Publication Title

arXiv

Abstract

Optimizing the performance of classifiers on samples from unseen domains remains a challenging problem. While most existing studies on domain generalization focus on learning domain-invariant feature representations, multi-expert frameworks have been proposed as an alternative and have demonstrated promising performance. However, current multi-expert learning frameworks fail to fully exploit source domain knowledge during inference, resulting in suboptimal performance. In this work, we propose to adapt Transformers to dynamically decode source domain knowledge for domain generalization. Specifically, we build one domain-specific local expert per source domain and one domain-agnostic feature branch as the query. A Transformer encoder encodes all domain-specific features as source domain knowledge in memory. In the Transformer decoder, the domain-agnostic query interacts with the memory in the cross-attention module, and source domains that are similar to the input contribute more to the attention output. Source domain knowledge is thus dynamically decoded for inference on the current input from an unseen domain, enabling the proposed method to generalize well to unseen domains. Evaluated on three domain generalization benchmarks, the proposed method achieves the best performance compared to state-of-the-art methods. Copyright © 2021, The Authors. All rights reserved.
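The abstract's core mechanism, a domain-agnostic query cross-attending over per-source-domain features so that similar domains receive larger weights, can be sketched in simplified form. This is a minimal single-head NumPy illustration without learned projections; the function name and shapes are illustrative, not the authors' implementation:

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over the last axis."""
    e = np.exp(x - x.max())
    return e / e.sum()

def decode_source_knowledge(query, memory):
    """Simplified cross-attention: the domain-agnostic query attends
    over per-source-domain feature vectors (the 'memory').  Domains
    whose features are more similar to the query receive larger
    attention weights, so their knowledge dominates the output."""
    d = query.shape[-1]
    scores = memory @ query / np.sqrt(d)   # (num_domains,) similarity scores
    weights = softmax(scores)              # attention weights, sum to 1
    output = weights @ memory              # weighted mix of domain features
    return output, weights

# Toy example: 3 source-domain feature vectors (orthogonal for clarity);
# the query resembles domain 1, so domain 1 should dominate.
memory = np.eye(3, 8)
query = 2.0 * memory[1]
output, weights = decode_source_knowledge(query, memory)
```

Here `weights` sums to 1 and peaks at the source domain most similar to the input, which is the "dynamic decoding" behavior the abstract describes; the full method additionally uses learned query/key/value projections and multi-head attention inside a standard Transformer decoder.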

DOI

https://doi.org/10.48550/arXiv.2110.03027

Publication Date

October 6, 2021

Keywords

Benchmarking; Decoding; Machine learning; Domain agnostics; Domain knowledge; Domain specific; Feature representation; Generalisation; Invariant features; Multi-expert; Performance; Performance of classifier; Domain Knowledge; Computer Vision and Pattern Recognition (cs.CV)

Comments

Preprint: arXiv
