
Block change learning for knowledge distillation

Abstract
Deep neural networks perform well but require high-performance hardware to be deployed in real-world environments. Knowledge distillation is a simple method for improving the performance of a small network by using the knowledge of a large, complex network; the small and large networks are referred to as the student and teacher models, respectively. Previous knowledge distillation approaches perform well with relatively small teacher networks (20–30 layers) but poorly with large teacher networks (50 layers). Here, we propose an approach called block change learning that performs local and global knowledge distillation by changing blocks composed of layers. The method focuses on transferring knowledge without losing information from a large teacher model, as it considers intra-relationships between layers through local knowledge distillation and inter-relationships between corresponding blocks through global knowledge distillation. The results demonstrate that this approach is superior to state-of-the-art methods on feature extraction datasets (Market1501 and DukeMTMC-reID) and object classification datasets (CIFAR-100 and Caltech256). Furthermore, we show that the performance of the proposed approach is superior to that of a fine-tuning approach using pretrained models.
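The abstract describes a combination of global (output-level) and local (block-wise feature-level) knowledge distillation between a teacher and a student network. The following is a minimal sketch of such a combined loss, assuming PyTorch; the block pairing, loss weights, and temperature are illustrative assumptions, and the paper's exact block change learning schedule is not specified in the abstract.

import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      student_block_feats, teacher_block_feats,
                      temperature=4.0, alpha=0.5, beta=0.1):
    """Combine hard-label loss, soft-label (global) distillation, and a
    block-wise (local) feature-matching term. alpha/beta are assumed weights."""
    # Standard cross-entropy on the ground-truth labels.
    hard_loss = F.cross_entropy(student_logits, labels)

    # Global distillation: match the softened teacher and student outputs.
    soft_loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=1),
        F.softmax(teacher_logits / temperature, dim=1),
        reduction="batchmean",
    ) * (temperature ** 2)

    # Local distillation: match features of corresponding blocks
    # (assumes each student/teacher pair already has matching shape,
    # e.g. after a 1x1 adaptation layer).
    local_loss = sum(
        F.mse_loss(s, t.detach())
        for s, t in zip(student_block_feats, teacher_block_feats)
    ) / len(student_block_feats)

    return hard_loss + alpha * soft_loss + beta * local_loss

In practice, the student is trained by minimizing this combined objective while the teacher's parameters are frozen (hence the detach on the teacher features).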
Author(s)
Choi, Hyunguk; Lee, Younkwan; Yow, Kin Choong; Jeon, Moongu
Issued Date
2020-03
Type
Article
DOI
10.1016/j.ins.2019.10.074
URI
https://scholar.gist.ac.kr/handle/local/12311
Publisher
Elsevier BV
Citation
Information Sciences, v.513, pp.360 - 371
ISSN
0020-0255
Appears in Collections:
Department of Electrical Engineering and Computer Science > 1. Journal Articles
Access and License
  • Access type: Open
File List
  • No associated files are available.
