Sliced Recursive Transformer
Document Type
Conference Proceeding
Publication Title
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Abstract
We present a neat yet effective recursive operation on vision transformers that improves parameter utilization without introducing additional parameters. This is achieved by sharing weights across the depth of transformer networks. The proposed method obtains a substantial gain (∼2%) simply by using a naïve recursive operation, requires no special or sophisticated knowledge of network design principles, and introduces minimal computational overhead to the training procedure. To reduce the additional computation caused by the recursive operation while maintaining superior accuracy, we propose an approximation method using multiple sliced group self-attentions across recursive layers, which reduces the computational cost by 10–30% without sacrificing performance. We call our model the Sliced Recursive Transformer (SReT), a novel and parameter-efficient vision transformer design that is compatible with a broad range of other designs for efficient ViT architectures. Our best model achieves a significant improvement on ImageNet-1K over state-of-the-art methods while containing fewer parameters. The proposed weight-sharing mechanism via the sliced recursion structure allows us to easily build a transformer with more than 100 or even 1000 shared layers while keeping the model compact (13–15M parameters), avoiding the optimization difficulties that arise when a model grows too large. This flexible scalability shows great potential for scaling up models and constructing extremely deep vision transformers. Code is available at https://github.com/szq0214/SReT. © 2022, The Author(s), under exclusive license to Springer Nature Switzerland AG.
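The sketch below illustrates the two ideas named in the abstract: weight sharing across depth via a recursive loop over one transformer block, and a group self-attention that restricts attention to token slices to cut the quadratic cost. It is a minimal PyTorch sketch under assumed shapes and names (SharedBlock, recursive_forward, sliced_group_attention are all illustrative), not the authors' implementation; see the linked repository for the actual SReT code.

```python
# Minimal sketch of recursive weight sharing and sliced group attention,
# assuming standard pre-norm ViT blocks. Not the authors' implementation.
import torch
import torch.nn as nn

class SharedBlock(nn.Module):
    """One transformer encoder block whose weights are reused across depth."""
    def __init__(self, dim: int, num_heads: int):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(),
                                 nn.Linear(4 * dim, dim))

    def forward(self, x):
        h = self.norm1(x)
        x = x + self.attn(h, h, h, need_weights=False)[0]
        x = x + self.mlp(self.norm2(x))
        return x

def recursive_forward(block: SharedBlock, x: torch.Tensor, loops: int):
    # Applying the same block `loops` times shares its parameters across
    # depth: effective depth grows while the parameter count stays fixed.
    for _ in range(loops):
        x = block(x)
    return x

def sliced_group_attention(attn: nn.MultiheadAttention, x: torch.Tensor,
                           groups: int):
    # Approximate global self-attention by attending within token slices;
    # the quadratic attention cost drops roughly by a factor of `groups`.
    b, n, d = x.shape
    xs = x.reshape(b * groups, n // groups, d)
    out = attn(xs, xs, xs, need_weights=False)[0]
    return out.reshape(b, n, d)

x = torch.randn(2, 196, 384)                 # (batch, tokens, dim)
block = SharedBlock(dim=384, num_heads=6)
y = recursive_forward(block, x, loops=4)     # 4x depth, 1x parameters
z = sliced_group_attention(block.attn, x, groups=4)
```

In this sketch, setting loops=4 yields the depth of a 4-layer stack from a single block's parameters, which is the sense in which recursion improves parameter utilization; the group attention then offsets the extra FLOPs the repeated layers introduce.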
First Page
727
Last Page
744
DOI
10.1007/978-3-031-20053-3_42
Publication Date
November 6, 2022
Keywords
Best model, Computational overhead, Design principles, Performance, Recursion, Recursive operations, Sharing mechanism, State-of-the-art methods, Training procedures, Transformer design
Recommended Citation
Z. Shen, Z. Liu, and E. Xing, "Sliced Recursive Transformer," in Computer Vision – ECCV 2022, Lecture Notes in Computer Science, vol. 13684, pp. 727–744, November 2022, doi: 10.1007/978-3-031-20053-3_42.
Comments
IR conditions: non-described