Title Low-Resolution Visual Recognition via Deep Feature Distillation
Authors Zhu, Mingjian
Han, Kai
Zhang, Chao
Lin, Jinlong
Wang, Yunhe
Affiliation Peking University, School of EECS, Key Laboratory of Machine Perception (MOE), Beijing, China
Huawei Noah's Ark Lab, Hong Kong, China
Peking University, School of Software and Microelectronics, Beijing, China
Keywords Low-Resolution Recognition
Deep Convolutional Networks
Teacher-Student Paradigm
Issue Date 2019
Publisher 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
Abstract We study the low-resolution visual recognition problem. Conventional methods are usually trained on images with large regions of interest (ROIs), whereas in real-world applications the regions of interest inside images are often small and blurry. As a result, deep neural networks learned on high-resolution images cannot be directly used to recognize low-resolution objects. To overcome this challenging problem, we propose to use the teacher-student learning paradigm to distill useful feature information from a deep model pre-trained on high-resolution visual data. In practice, a distillation loss is used to seek perceptual consistency between low-resolution and high-resolution images. By simultaneously optimizing the recognition loss and the distillation loss, we formulate a novel low-resolution recognition approach. Experiments conducted on benchmarks demonstrate that the proposed method is capable of learning well-performing models for recognizing low-resolution objects and is superior to state-of-the-art methods.
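
The following is a minimal PyTorch sketch of the training objective described in the abstract: a recognition (cross-entropy) loss on low-resolution inputs combined with a feature-level distillation loss against a teacher pre-trained on high-resolution images. The model interfaces (networks returning a feature/logits pair), the L2 form of the perceptual-consistency term, and the weight lambda_distill are illustrative assumptions, not details taken from the paper.

    import torch
    import torch.nn.functional as F

    def train_step(student, teacher, optimizer, hr_images, lr_images, labels,
                   lambda_distill=1.0):
        """One training step of the student network on low-resolution inputs.

        Assumes both networks return (features, logits); the teacher is frozen
        and was pre-trained on high-resolution data.
        """
        teacher.eval()
        with torch.no_grad():
            # Teacher features extracted from the high-resolution images.
            t_feat, _ = teacher(hr_images)

        # Student features and class predictions from the low-resolution counterparts.
        s_feat, s_logits = student(lr_images)

        # Recognition loss on the ground-truth labels.
        loss_cls = F.cross_entropy(s_logits, labels)

        # Distillation loss: pull student features toward teacher features
        # (an L2 penalty here, standing in for the paper's perceptual-consistency term).
        loss_distill = F.mse_loss(s_feat, t_feat)

        loss = loss_cls + lambda_distill * loss_distill
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()

In this setup only the student's parameters are updated; the teacher serves purely as a fixed source of high-resolution feature targets.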
URI http://hdl.handle.net/20.500.11897/544349
ISSN 1520-6149
Indexed CPCI-S(ISTP)
Appears in Collections: School of Electronics Engineering and Computer Science
Key Laboratory of Machine Perception (Ministry of Education)
School of Software and Microelectronics

Files in This Work
There are no files associated with this item.

License: See PKU IR operational policies.