Learning to Learn for Unconstrained Visual Recognition
- Author(s)
- Younkwan Lee
- Type
- Thesis
- Degree
- Doctor
- Department
- Graduate School, School of Electrical Engineering and Computer Science
- Advisor
- Jeon, Moongu
- Abstract
- How robust are current state-of-the-art recognition and detection algorithms in non-ideal visual environments? While visual recognition research has made tremendous progress in recent years, most models are trained, applied, and evaluated on conventional high-quality visual data. However, in many emerging applications such as robotics and autonomous driving, visual sensing and analytics are severely hampered by low-quality visual data acquired in unconstrained environments, which suffer from various types of degradation such as low resolution, noise, occlusion, motion blur, poor contrast and brightness, loss of sharpness, and defocus.
In this dissertation, I present my efforts in designing deep learning frameworks for unconstrained visual recognition.
Compared to existing learning frameworks, the proposed learning mechanisms generalize better and more effectively handle viewpoint variation, occlusion, and truncation in visual recognition.
I introduce five new learning frameworks: low-quality image recognition, denoising and rectification for license plate recognition, license plate detection, a disentangled representation framework for image deraining, and the connection of low- and high-level vision tasks.
These frameworks are built to handle complex and challenging factors and tasks in visual recognition.
Based on these frameworks, I propose two new applications, 1) multi-task traffic scene recognition and 2) universal image deraining, and conduct experiments on benchmark and newly proposed datasets to verify the advantages of the proposed methods.
Lastly, I conclude the dissertation and discuss future steps for visual recognition.
- URI
- https://scholar.gist.ac.kr/handle/local/19452
- Fulltext
- http://gist.dcollection.net/common/orgView/200000884820