Wavelet Attention Embedding Networks for Video Super-Resolution
- Authors
- Choi, Young-Ju; Lee, Young-Woon; Kim, Byung-Gyu
- Issue Date
- Jan-2021
- Publisher
- IEEE COMPUTER SOC
- Citation
- 2020 25TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR), pp. 7314-7320
- Journal Title
- 2020 25TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR)
- Start Page
- 7314
- End Page
- 7320
- URI
- https://scholarworks.sookmyung.ac.kr/handle/2020.sw.sookmyung/146190
- DOI
- 10.1109/ICPR48806.2021.9412623
- ISSN
- 1051-4651
- Abstract
- Recently, video super-resolution (VSR) has become more crucial as display resolutions have grown. The majority of deep learning-based VSR methods combine convolutional neural networks (CNNs) with a motion compensation or alignment module to estimate a high-resolution (HR) frame from low-resolution (LR) frames. However, most previous methods treat all spatial features equally, and pixel-based motion compensation and alignment modules may produce misaligned temporal features, which can degrade the accuracy of the estimated HR frame. In this paper, we propose a wavelet attention embedding network (WAEN), comprising a wavelet embedding network (WENet) and an attention embedding network (AENet), to fully exploit spatio-temporal informative features. WENet operates as a spatial feature extractor, separating low- and high-frequency information based on the 2-D Haar discrete wavelet transform. AENet extracts meaningful temporal features by utilizing a weighted attention map between frames. Experimental results verify that the proposed method achieves superior performance compared with state-of-the-art methods.
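- The two sub-networks described in the abstract lend themselves to short illustrations. First, a minimal PyTorch sketch of the single-level 2-D Haar discrete wavelet transform that WENet builds on; the function name and tensor layout here are our own, and sign conventions for the detail subbands vary across implementations:

```python
import torch

def haar_dwt_2d(x: torch.Tensor):
    """Single-level 2-D Haar DWT of a (B, C, H, W) tensor (H, W even).

    Returns four half-resolution subbands: LL (coarse approximation),
    LH (vertical detail), HL (horizontal detail), HH (diagonal detail).
    """
    a = x[:, :, 0::2, 0::2]  # top-left pixel of each 2x2 block
    b = x[:, :, 0::2, 1::2]  # top-right
    c = x[:, :, 1::2, 0::2]  # bottom-left
    d = x[:, :, 1::2, 1::2]  # bottom-right
    ll = (a + b + c + d) / 2  # low-pass in both directions
    lh = (a + b - c - d) / 2  # low-pass horizontally, high-pass vertically
    hl = (a - b + c - d) / 2  # high-pass horizontally, low-pass vertically
    hh = (a - b - c + d) / 2  # high-pass in both directions
    return ll, lh, hl, hh
```

- Second, a hedged sketch of attention weighting between frames in the spirit of AENet. It follows the common VSR pattern of correlating embedded reference and neighboring-frame features to form a per-pixel weight map; the module name, layer choices, and sigmoid gating are assumptions for illustration, not the paper's exact AENet design:

```python
import torch
import torch.nn as nn

class FrameAttention(nn.Module):
    """Illustrative per-pixel attention between a reference and a neighbor frame.

    A common-pattern sketch, not the exact AENet architecture from the paper.
    """
    def __init__(self, channels: int):
        super().__init__()
        self.embed_ref = nn.Conv2d(channels, channels, 3, padding=1)
        self.embed_nbr = nn.Conv2d(channels, channels, 3, padding=1)

    def forward(self, ref_feat: torch.Tensor, nbr_feat: torch.Tensor):
        # Correlate the embedded features channel-wise, then squash to (0, 1)
        # so the map acts as a per-pixel temporal attention weight.
        corr = (self.embed_ref(ref_feat) * self.embed_nbr(nbr_feat)).sum(dim=1, keepdim=True)
        attn = torch.sigmoid(corr)
        return nbr_feat * attn  # neighbor features re-weighted by relevance
```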
- Appears in Collections
- Division of ICT Convergence Engineering > Major in IT Engineering > 1. Journal Articles
