High-to-Low Resolution Feature Knowledge Distillation for Low-Resolution Vision Tasks

Author(s)
Sungho Shin
Type
Thesis
Degree
Doctor
Department
Graduate School, Interdisciplinary Division of Convergence Technology (Intelligent Robot Program)
Advisor
Lee, Kyoobin
Abstract
This study addresses the challenges of low-resolution (LR) vision tasks, such as face recognition and object detection, which are essential in real-world applications like surveillance systems, mobile devices, and autonomous vehicles. To tackle these challenges, the research proposes knowledge distillation (KD) techniques that transfer knowledge from high-resolution (HR) networks to LR networks. For face recognition, the study introduces attention similarity KD (A-SKD) and cross-resolution feature similarity KD (F-SKD). These methods guide the LR network to focus on detailed facial parts by transferring well-constructed attention maps and feature maps from the HR network. For object detection, the study proposes cross-resolution token KD (CR-TKD) to enhance the performance of Detection Transformers (DETRs) in LR scenarios. CR-TKD allows a student network learning from LR images to be guided by a teacher network operating on HR images, improving both recognition and localization capabilities. The proposed A-SKD, F-SKD, and CR-TKD methods demonstrate promising results on various benchmarks, showing their effectiveness at enhancing LR vision tasks. This research highlights the potential of KD techniques to address the challenges posed by LR scenarios in real-world applications.
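The abstract describes distilling attention maps from an HR teacher into an LR student. The following is a minimal sketch of the general idea behind an attention-similarity distillation loss (one minus the cosine similarity between flattened teacher and student attention maps); the function name and exact loss form are illustrative assumptions, not the thesis's actual A-SKD implementation.

```python
import math

def attention_similarity_loss(teacher_attn, student_attn):
    """Hypothetical attention-similarity KD loss sketch:
    1 - cosine similarity between a flattened HR-teacher attention
    map and the corresponding LR-student attention map.
    Returns 0.0 when the maps are identical (up to scale),
    and larger values as they diverge."""
    dot = sum(t * s for t, s in zip(teacher_attn, student_attn))
    norm_t = math.sqrt(sum(t * t for t in teacher_attn))
    norm_s = math.sqrt(sum(s * s for s in student_attn))
    return 1.0 - dot / (norm_t * norm_s)

# Identical attention maps incur no distillation penalty.
loss_same = attention_similarity_loss([0.1, 0.7, 0.2], [0.1, 0.7, 0.2])
# Orthogonal maps incur the maximum cosine-based penalty of 1.0.
loss_diff = attention_similarity_loss([1.0, 0.0], [0.0, 1.0])
```

In training, such a term would be added to the student's task loss so that the LR student's attention is pulled toward the HR teacher's.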
URI
https://scholar.gist.ac.kr/handle/local/19346
Fulltext
http://gist.dcollection.net/common/orgView/200000878518
Alternative Author(s)
신성호
Appears in Collections:
Department of AI Convergence > 4. Theses(Ph.D)
Access and License
  • Access type: Open
File List
  • No associated files.

Items in Repository are protected by copyright, with all rights reserved, unless otherwise indicated.