Efficient Attention-Sharing Information Distillation Transformer for Lightweight Single Image Super-Resolution
- Author(s)
- Park, Karam; Soh, Jae Woong; Cho, Nam Ik
- Type
- Conference Paper
- Citation
39th Annual AAAI Conference on Artificial Intelligence, AAAI 2025, vol. 39, no. 6, pp. 6416-6424
- Issued Date
- 2025
- Abstract
- Transformer-based Super-Resolution (SR) methods have demonstrated superior performance compared to convolutional neural network (CNN)-based SR approaches due to their capability to capture long-range dependencies. However, their high computational complexity necessitates the development of lightweight approaches for practical use. To address this challenge, we propose the Attention-Sharing Information Distillation (ASID) network, a lightweight SR network that integrates attention-sharing and an information distillation structure specifically designed for Transformer-based SR methods. We modify the information distillation scheme, originally designed for efficient CNN operations, to reduce the computational load of stacked self-attention layers, effectively addressing the efficiency bottleneck. Additionally, we introduce attention-sharing across blocks to further minimize the computational cost of self-attention operations. By combining these strategies, ASID achieves competitive performance with existing SR methods while requiring only around 300K parameters, significantly fewer than existing CNN-based and Transformer-based SR models. Furthermore, ASID outperforms state-of-the-art SR methods when the number of parameters is matched, demonstrating its efficiency and effectiveness. Copyright © 2025, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
- Publisher
- Association for the Advancement of Artificial Intelligence
- Conference Place
- Philadelphia
- URI
- https://scholar.gist.ac.kr/handle/local/31498
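Illustrative note: the abstract describes two efficiency ideas at a high level, an information-distillation split adapted to stacked self-attention layers and attention maps shared across blocks. The PyTorch sketch below is not the authors' ASID implementation; the module names, channel-split ratio, and single-head attention are assumptions made only to illustrate how these two ideas can be combined.

```python
# Conceptual sketch only (assumed design, not the paper's ASID code):
# (1) information distillation: part of the channels is kept ("distilled")
#     at each block while the rest is refined further;
# (2) attention-sharing: the attention map computed in the first block is
#     reused by later blocks instead of being recomputed.
import torch
import torch.nn as nn


class SharedSelfAttention(nn.Module):
    """Single-head self-attention that can reuse a precomputed attention map."""

    def __init__(self, dim: int):
        super().__init__()
        self.qk = nn.Linear(dim, 2 * dim, bias=False)
        self.v = nn.Linear(dim, dim, bias=False)
        self.scale = dim ** -0.5

    def forward(self, x, shared_attn=None):
        # x: (batch, tokens, dim)
        if shared_attn is None:
            q, k = self.qk(x).chunk(2, dim=-1)
            shared_attn = torch.softmax(q @ k.transpose(-2, -1) * self.scale, dim=-1)
        out = shared_attn @ self.v(x)
        return out, shared_attn  # return the map so later blocks can reuse it


class DistillationBlock(nn.Module):
    """Splits channels: one part is distilled (kept), the other is refined."""

    def __init__(self, dim: int, distill_ratio: float = 0.5):
        super().__init__()
        self.d = int(dim * distill_ratio)              # distilled channels (assumed ratio)
        self.attn = SharedSelfAttention(dim - self.d)  # attention only on the remaining channels
        self.fuse = nn.Linear(dim, dim)

    def forward(self, x, shared_attn=None):
        distilled, refined = x[..., :self.d], x[..., self.d:]
        refined, shared_attn = self.attn(refined, shared_attn)
        out = self.fuse(torch.cat([distilled, refined], dim=-1))
        return out, shared_attn


class TinyASIDLikeNet(nn.Module):
    """Stacks blocks; the first computes an attention map, the rest share it."""

    def __init__(self, dim: int = 64, depth: int = 4):
        super().__init__()
        self.blocks = nn.ModuleList(DistillationBlock(dim) for _ in range(depth))

    def forward(self, x):
        attn = None
        for blk in self.blocks:
            x, attn = blk(x, attn)  # attention map computed once, then shared
        return x


if __name__ == "__main__":
    tokens = torch.randn(1, 64, 64)         # (batch, tokens, channels)
    print(TinyASIDLikeNet()(tokens).shape)  # torch.Size([1, 64, 64])
```

In this sketch the first block computes the token-to-token attention map and every later block reuses it, which is one plausible reading of "attention-sharing across blocks"; likewise, attending only over the non-distilled channels is one plausible reading of adapting information distillation to self-attention. The actual block layout, split ratios, and sharing scheme are specified in the paper.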