BTAN: Lightweight Super-Resolution Network with Target Transform and Attention

Authors

  • Pan Wang School of Computer Science, China West Normal University, Nanchong, 637009, China
  • Zedong Wu School of Computer Science, China West Normal University, Nanchong, 637009, China
  • Zicheng Ding School of Computer Science, China West Normal University, Nanchong, 637009, China
  • Bochuan Zheng School of Computer Science, China West Normal University, Nanchong, 637009, China

DOI:

https://doi.org/10.31577/cai_2024_2_414

Keywords:

Image super-resolution, lightweight network, target transform, attention mechanism, deep learning

Abstract

In the realm of single-image super-resolution (SISR), generating high-resolution (HR) images from a low-resolution (LR) input remains a challenging task. While deep neural networks have shown promising results, they often require significant computational resources. To address this issue, we introduce a lightweight convolutional neural network, named BTAN, that leverages the connection between LR and HR images to enhance performance without increasing the number of parameters. Our approach includes a target transform module that adjusts output features to match the target distribution and improve reconstruction quality, as well as a spatial and channel-wise attention module that modulates feature maps based on visual attention at multiple layers. We demonstrate the effectiveness of our approach on four benchmark datasets, showcasing superior accuracy, efficiency, and visual quality when compared to state-of-the-art methods.
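The abstract describes spatial and channel-wise attention that modulates feature maps at multiple layers. The paper's exact module design is not given on this page; purely as an illustrative sketch, a channel-then-spatial gating of the general kind described can be written in NumPy as below (the function names, the squeeze-excitation weights, and the average+max spatial mixing are all assumptions for illustration, not BTAN's actual architecture):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat, w1, w2):
    """Squeeze-and-excitation style channel gating (illustrative).
    feat: (C, H, W) feature map; w1: (C//r, C), w2: (C, C//r) weights."""
    squeeze = feat.mean(axis=(1, 2))                      # global average pool -> (C,)
    excite = sigmoid(w2 @ np.maximum(w1 @ squeeze, 0.0))  # bottleneck MLP -> (C,) gates
    return feat * excite[:, None, None]                   # rescale each channel

def spatial_attention(feat):
    """Spatial gating from pooled channel statistics (illustrative).
    Combines per-pixel mean and max into an (H, W) attention mask."""
    avg = feat.mean(axis=0)                               # (H, W)
    mx = feat.max(axis=0)                                 # (H, W)
    mask = sigmoid(avg + mx)                              # simple mixing, no learned conv
    return feat * mask[None, :, :]

# Toy forward pass through both gates
rng = np.random.default_rng(0)
C, H, W, r = 8, 4, 4, 2
feat = rng.standard_normal((C, H, W))
w1 = rng.standard_normal((C // r, C)) * 0.1
w2 = rng.standard_normal((C, C // r)) * 0.1
out = spatial_attention(channel_attention(feat, w1, w2))
print(out.shape)  # (8, 4, 4)
```

Because both gates multiply the features by sigmoid-bounded masks in (0, 1), the output preserves the feature map's shape while attenuating less informative channels and positions, which is the general mechanism such attention modules rely on.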


Published

2024-05-30

How to Cite

Wang, P., Wu, Z., Ding, Z., & Zheng, B. (2024). BTAN: Lightweight Super-Resolution Network with Target Transform and Attention. Computing and Informatics, 43(2), 414–437. https://doi.org/10.31577/cai_2024_2_414

Section

Special Section Articles