Block change learning for knowledge distillation
- Abstract
- Deep neural networks perform well but require high-performance hardware for deployment in real-world environments. Knowledge distillation is a simple method for improving the performance of a small network by using the knowledge of a large, complex network; the small and large networks are referred to as the student and teacher models, respectively. Previous knowledge distillation approaches perform well with relatively small teacher networks (20–30 layers) but poorly with large teacher networks (50 layers). Here, we propose an approach called block change learning that performs local and global knowledge distillation by changing blocks composed of layers. The method focuses on transferring knowledge from a large teacher model without losing information, as it considers intra-relationships between layers through local knowledge distillation and inter-relationships between corresponding blocks through global knowledge distillation. The results demonstrate that this approach is superior to state-of-the-art methods on feature extraction datasets (Market1501 and DukeMTMC-reID) and object classification datasets (CIFAR-100 and Caltech256). Furthermore, we show that the performance of the proposed approach is superior to that of a fine-tuning approach using pretrained models.
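The following is a minimal PyTorch sketch of the general idea described in the abstract: a "local" loss that aligns intermediate features between corresponding teacher and student blocks, and a "global" loss that aligns their overall outputs. The block groupings, module names, loss forms, and weights (`alpha`, `beta`, `tau`) are illustrative assumptions, not the paper's exact block change learning formulation.

```python
# Sketch of block-wise knowledge distillation: local (per-block feature) and
# global (softened-logit) terms. All specifics are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F

def make_block(in_ch, out_ch):
    # Toy convolutional block standing in for a group of layers (e.g., a ResNet stage).
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(),
        nn.Conv2d(out_ch, out_ch, 3, stride=2, padding=1), nn.ReLU(),
    )

class BlockNet(nn.Module):
    """Small network that exposes per-block features for distillation."""
    def __init__(self, widths, num_classes=100):
        super().__init__()
        chans = [3] + list(widths)
        self.blocks = nn.ModuleList(make_block(c_in, c_out)
                                    for c_in, c_out in zip(chans[:-1], chans[1:]))
        self.head = nn.Linear(widths[-1], num_classes)

    def forward(self, x):
        feats = []
        for block in self.blocks:
            x = block(x)
            feats.append(x)
        logits = self.head(F.adaptive_avg_pool2d(x, 1).flatten(1))
        return feats, logits

def distillation_loss(s_feats, s_logits, t_feats, t_logits, labels,
                      alpha=0.5, beta=0.5, tau=4.0):
    # Local term: match features of corresponding blocks. In practice teacher and
    # student channel widths may differ and would need 1x1 adapters; here the
    # widths are chosen to match for simplicity.
    local = sum(F.mse_loss(s, t.detach()) for s, t in zip(s_feats, t_feats))
    # Global term: match temperature-softened output distributions.
    global_kd = F.kl_div(F.log_softmax(s_logits / tau, dim=1),
                         F.softmax(t_logits.detach() / tau, dim=1),
                         reduction="batchmean") * tau * tau
    return F.cross_entropy(s_logits, labels) + alpha * local + beta * global_kd

# Usage: the teacher is assumed pretrained and frozen; only the student is updated.
teacher = BlockNet(widths=(32, 64, 128)).eval()
student = BlockNet(widths=(32, 64, 128))
x, y = torch.randn(8, 3, 32, 32), torch.randint(0, 100, (8,))
with torch.no_grad():
    t_feats, t_logits = teacher(x)
s_feats, s_logits = student(x)
loss = distillation_loss(s_feats, s_logits, t_feats, t_logits, y)
loss.backward()
```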
- Author(s)
- Choi, Hyunguk; Lee, Younkwan; Yow, Kin Choong; Jeon, Moongu
- Issued Date
- 2020-03
- Type
- Article
- DOI
- 10.1016/j.ins.2019.10.074
- URI
- https://scholar.gist.ac.kr/handle/local/12311
- Access & License
- File List
Items in the Repository are protected by copyright, with all rights reserved, unless otherwise indicated.