Title: Self-Learned Video Super-Resolution with Augmented Spatial and Temporal Context
Authors: Fan, Zejia; Liu, Jiaying; Yang, Wenhan; Xiang, Wei; Guo, Zongming
Affiliations: Wangxuan Institute of Computer Technology, Peking University, Beijing, China; Bigo, Beijing, China
Issue Date: 2022
Published in: 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
Abstract: Video super-resolution methods typically rely on paired training data in which the low-resolution frames are synthetically generated under predetermined degradation conditions (e.g., bicubic downsampling). In real applications, however, obtaining such training data is labor-intensive and expensive, which limits the practical performance of these methods. To address this issue and dispense with synthetic paired data, we explore exploiting the internal self-similarity redundancy within a video to build a Self-Learned Video Super-Resolution (SLVSR) method, which is trained only on the input test video itself. We employ a series of data augmentation strategies to make full use of the spatial and temporal context of the target video clips. The idea is applied to two mainstream branches of SR methods: frame-fusion and frame-recurrence methods. Since the former exploits short-term temporal consistency and the latter long-term consistency, our method can accommodate different practical situations. Experimental results show the superiority of the proposed method, especially for video super-resolution in real applications.
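The core mechanism the abstract describes, training only on the test video itself by synthesizing supervision from its own frames under spatial and temporal augmentation, can be illustrated with a short sketch. The sketch below is a minimal, hypothetical Python/PyTorch rendering of that idea in the ZSSR style (further downscale the LR frames to form input/target pairs); the TinySR network, the bicubic pair-synthesis degradation, the specific augmentations, and all names are illustrative assumptions, not the paper's actual architecture or training recipe.

    # Minimal self-learned VSR sketch (assumed design, not the paper's model).
    import random
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class TinySR(nn.Module):
        """Placeholder frame-fusion SR network; the paper's frame-fusion or
        frame-recurrence models would take its place."""
        def __init__(self, scale=2, in_frames=3, ch=64):
            super().__init__()
            self.scale = scale
            self.body = nn.Sequential(
                nn.Conv2d(3 * in_frames, ch, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(ch, 3 * scale * scale, 3, padding=1),
            )

        def forward(self, frames):               # frames: (B, T, 3, H, W)
            b, t, c, h, w = frames.shape
            x = frames.reshape(b, t * c, h, w)   # fuse frames along channels
            return F.pixel_shuffle(self.body(x), self.scale)

    def augment(clip):
        """Spatial/temporal augmentation of a clip (T, 3, H, W): random
        flips, a 90-degree rotation, and temporal order reversal."""
        if random.random() < 0.5:
            clip = torch.flip(clip, dims=[-1])   # horizontal flip
        if random.random() < 0.5:
            clip = torch.flip(clip, dims=[-2])   # vertical flip
        if random.random() < 0.5:
            clip = torch.rot90(clip, 1, dims=[-2, -1])
        if random.random() < 0.5:
            clip = torch.flip(clip, dims=[0])    # reverse temporal order
        return clip

    def self_learned_train(lr_video, scale=2, steps=1000, lr=1e-4):
        """lr_video: (N, 3, H, W) LR frames of the test video itself.
        Pairs come from the video: each augmented clip is bicubically
        downscaled to serve as input, its center frame as the target."""
        net = TinySR(scale=scale)
        opt = torch.optim.Adam(net.parameters(), lr=lr)
        n = lr_video.shape[0]
        for _ in range(steps):
            t0 = random.randrange(0, n - 2)
            clip = augment(lr_video[t0:t0 + 3])  # (3, 3, H', W')
            target = clip[1:2]                   # center frame as GT
            inp = F.interpolate(clip, scale_factor=1 / scale,
                                mode='bicubic', align_corners=False)
            pred = net(inp.unsqueeze(0))
            h, w = pred.shape[-2:]               # guard odd frame sizes
            loss = F.l1_loss(pred, target[..., :h, :w])
            opt.zero_grad(); loss.backward(); opt.step()
        return net

After this self-learned training converges, the network would be applied to the original LR frames at their native resolution to produce the super-resolved output, relying on the cross-scale generalization that internal-learning methods assume; a frame-recurrence variant would instead propagate a hidden state across frames to exploit long-term consistency.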
URI: http://hdl.handle.net/20.500.11897/667564
ISBN: 978-1-6654-0540-9
ISSN: 1520-6149
DOI: 10.1109/ICASSP43922.2022.9746371
Indexed: EI; CPCI-S (ISTP)
Appears in Collections: To be claimed (待认领)

Files in This Work
There are no files associated with this item.

License: See PKU IR operational policies.