Title DSP: Discriminative Spatial Part modeling for Fine-Grained Visual Categorization
Authors Yao, Hantao
Zhang, Dongming
Li, Jintao
Zhou, Jianshe
Zhang, Shiliang
Zhang, Yongdong
Affiliation Chinese Acad Sci, Inst Comp Technol, Key Lab Intelligent Informat Proc, Beijing 100190, Peoples R China.
Univ Chinese Acad Sci, Beijing 100049, Peoples R China.
Coordinat Ctr China, Natl Comp Network Emergency Response Tech Team, Beijing 100029, Peoples R China.
Capital Normal Univ, Beijing Adv Innovat Ctr Imaging Technol, Beijing 100048, Peoples R China.
Peking Univ, Elect Engn & Comp Sci, Beijing 100871, Peoples R China.
Zhang, YD (reprint author), Univ Chinese Acad Sci, Beijing 100049, Peoples R China.
Keywords Orientational Spatial Part model
Discriminative Spatial Part modeling
Fine-Grained Visual Categorization
CNN
Localization
Issue Date 2017
Publisher Image and Vision Computing
Citation Image and Vision Computing, 2017, 63, 24-37.
Abstract Different from basic-level classification, Fine-Grained Visual Categorization (FGVC) aims to classify subcategories of the same basic-level category, e.g., different bird species, and is therefore more challenging. Recently, significant advances have been achieved in FGVC. However, most existing methods require bounding boxes or part annotations for training and testing, which limits their usability and flexibility. To overcome these limitations, we aim to detect bounding boxes and parts automatically for FGVC. Bounding boxes are acquired by transferring bounding boxes from training images to testing images. Based on the generated bounding boxes, we employ a multiple-layer Orientational Spatial Part (OSP) model to learn local parts of the object. To achieve more discriminative part modeling, the Discriminative Spatial Part (DSP) model is proposed to select the discriminative parts from the OSP. Finally, we employ a Convolutional Neural Network (CNN) as the feature extractor and train a linear SVM as the classifier. Extensive experiments on public benchmark datasets demonstrate the effectiveness of our method: it achieves a classification accuracy of 79.8% on CUB-200-2011 and 85.7% on Aircraft, higher than many existing methods that use manual annotations. (C) 2017 Elsevier B.V. All rights reserved.
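For illustration only, the following minimal Python sketch (not the authors' code) shows the final classification stage the abstract describes: L2-normalized CNN-style descriptors classified with a linear SVM. The random vectors stand in for the paper's actual CNN features of the transferred bounding box and DSP-selected parts; the feature dimension, sample counts, and class count are assumptions chosen for the demo, not values from the paper.

    import numpy as np
    from sklearn.svm import LinearSVC
    from sklearn.preprocessing import normalize

    rng = np.random.default_rng(0)

    # Stand-ins for per-image descriptors: in the paper these would be CNN
    # features of the transferred bounding box plus the DSP-selected parts,
    # L2-normalized and concatenated. Here: random vectors for illustration.
    n_train, n_test, dim, n_classes = 200, 50, 4096, 10
    X_train = normalize(rng.standard_normal((n_train, dim)))  # L2 row norm
    y_train = rng.integers(0, n_classes, n_train)
    X_test = normalize(rng.standard_normal((n_test, dim)))

    # Final classifier as in the abstract: a linear SVM over the descriptors.
    clf = LinearSVC(C=1.0)
    clf.fit(X_train, y_train)
    pred = clf.predict(X_test)
    print(pred[:10])

A linear SVM over fixed CNN features keeps training cheap and is a common choice when the feature extractor is pretrained rather than fine-tuned end to end.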
URI http://hdl.handle.net/20.500.11897/472493
ISSN 0262-8856
DOI 10.1016/j.imavis.2017.05.003
Indexed SCI(E)
Appears in Collections: School of Electronics Engineering and Computer Science (信息科学技术学院)

Files in This Work: There are no files associated with this item.

License: See PKU IR operational policies.