SepTr: Separable Transformer for Audio Spectrogram Processing
Document Type
Article
Publication Title
arXiv
Abstract
Following the successful application of vision transformers in multiple computer vision tasks, these models have drawn the attention of the signal processing community. This is because signals are often represented as spectrograms (e.g., via the Discrete Fourier Transform), which can be directly provided as input to vision transformers. However, naively applying transformers to spectrograms is suboptimal. Since the axes represent distinct dimensions, i.e., frequency and time, we argue that a better approach is to separate the attention dedicated to each axis. To this end, we propose the Separable Transformer (SepTr), an architecture that employs two transformer blocks in a sequential manner, the first attending to tokens within the same frequency bin, and the second attending to tokens within the same time interval. We conduct experiments on three benchmark data sets, showing that our separable architecture outperforms conventional vision transformers and other state-of-the-art methods. Unlike standard transformers, SepTr scales the number of trainable parameters linearly with the input size, and thus has a lower memory footprint. Our code is available as open source at https://github.com/ristea/septr. Copyright © 2022, The Authors. All rights reserved.
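The separable attention described in the abstract can be illustrated with a minimal NumPy sketch: single-head scaled dot-product attention with identity query/key/value projections, applied first along the time axis (tokens sharing a frequency bin) and then along the frequency axis (tokens sharing a time interval). Function names and shapes here are our own illustrative assumptions, not the released SepTr implementation, which also includes learned projections, multi-head attention, normalization, and MLP layers.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(tokens):
    """Scaled dot-product self-attention over the second-to-last axis.

    tokens: array of shape (..., seq_len, dim). For simplicity, the
    query/key/value projections are the identity (no learned weights).
    """
    d = tokens.shape[-1]
    scores = tokens @ tokens.swapaxes(-1, -2) / np.sqrt(d)  # (..., seq, seq)
    return softmax(scores, axis=-1) @ tokens                # (..., seq, dim)

def septr_block(spec_tokens):
    """One separable block over spectrogram tokens of shape (F, T, D).

    First block: attention among tokens within the same frequency bin,
    i.e., sequences run along the time axis.
    Second block: attention among tokens within the same time interval,
    i.e., sequences run along the frequency axis.
    """
    x = self_attention(spec_tokens)       # attend over time, per frequency bin
    x = self_attention(x.swapaxes(0, 1))  # attend over frequency, per time step
    return x.swapaxes(0, 1)               # restore (F, T, D) layout
```

Because each attention pass only mixes tokens along one axis, the attention matrices are F-by-F and T-by-T rather than (F·T)-by-(F·T), which is the source of the reduced memory footprint the abstract mentions.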
DOI
10.48550/arXiv.2203.09581
Publication Date
March 17, 2022
Keywords
Audio acoustics, Open systems, Spectrographs, Audio spectrogram processing, Frequency bins, Multi-head attention, Separable transformer, Signal processing, Sound recognition, Spectrograms, Time intervals, Discrete Fourier transforms, Computer Vision and Pattern Recognition (cs.CV), Machine Learning (cs.LG)
Recommended Citation
N.C. Ristea, R.T. Ionescu and F.S. Khan, "SepTr: Separable Transformer for Audio Spectrogram Processing", arXiv, Mar 2022, doi: 10.48550/arXiv.2203.09581
Comments
IR Deposit conditions: not described
Preprint available on arXiv