Driver Drowsiness Detection Using Condition-Adaptive Representation Learning Framework
- Abstract
- We propose a condition-adaptive representation learning framework for driver drowsiness detection based on a 3D deep convolutional neural network. The framework consists of four models: spatio-temporal representation learning, scene condition understanding, feature fusion, and drowsiness detection. The spatio-temporal representation learning model extracts features that describe motion and appearance in video simultaneously. The scene condition understanding model classifies conditions related to the driver and the driving situation, such as whether the driver wears glasses, the illumination of the driving environment, and the motion of facial elements such as the head, eyes, and mouth. The feature fusion model generates a condition-adaptive representation from the features produced by the two preceding models, and the drowsiness detection model recognizes the driver's drowsiness status from this representation. Because the condition-adaptive representation focuses on each scene condition, it is more discriminative than a general representation, allowing more accurate detection across diverse driving situations. The proposed framework is evaluated on the NTHU drowsy driver detection video dataset, and the experimental results show that it outperforms existing drowsiness detection methods based on visual analysis.
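- The four-module pipeline described in the abstract can be illustrated with a compact sketch. The PyTorch code below is an assumption-laden illustration, not the authors' architecture: all module names, layer sizes, the number of scene conditions, and the concatenation-based fusion are placeholders chosen to show how a 3D-CNN encoder, a condition classifier, a fusion step, and a drowsiness classifier could fit together.

```python
# Illustrative sketch of a condition-adaptive drowsiness detector.
# All layer sizes, names, and the concatenation-based fusion are assumptions,
# not the exact architecture from the paper.
import torch
import torch.nn as nn


class SpatioTemporalEncoder(nn.Module):
    """3D-CNN that maps a video clip (B, C, T, H, W) to a feature vector."""
    def __init__(self, feat_dim=256):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv3d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.fc = nn.Linear(64, feat_dim)

    def forward(self, clip):
        x = self.conv(clip).flatten(1)
        return self.fc(x)


class ConditionHead(nn.Module):
    """Predicts scene-condition logits (e.g. glasses, illumination, facial motion)."""
    def __init__(self, feat_dim=256, n_conditions=5):
        super().__init__()
        self.fc = nn.Linear(feat_dim, n_conditions)

    def forward(self, feat):
        return self.fc(feat)


class ConditionAdaptiveDetector(nn.Module):
    """Fuses spatio-temporal features with condition predictions, then classifies drowsiness."""
    def __init__(self, feat_dim=256, n_conditions=5):
        super().__init__()
        self.encoder = SpatioTemporalEncoder(feat_dim)
        self.condition_head = ConditionHead(feat_dim, n_conditions)
        self.fusion = nn.Linear(feat_dim + n_conditions, feat_dim)
        self.classifier = nn.Linear(feat_dim, 2)  # drowsy vs. alert

    def forward(self, clip):
        feat = self.encoder(clip)
        cond = self.condition_head(feat)
        fused = torch.relu(
            self.fusion(torch.cat([feat, cond.softmax(dim=1)], dim=1))
        )
        return self.classifier(fused), cond


# Usage with a batch of 16-frame RGB clips at 112x112 (arbitrary shapes).
model = ConditionAdaptiveDetector()
drowsiness_logits, condition_logits = model(torch.randn(2, 3, 16, 112, 112))
```

- In this sketch the condition predictions modulate the final representation through simple concatenation; the paper's feature fusion model may realize the condition adaptation differently.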
- Author(s)
- Yu, Jongmin; Park, Sangwoo; Lee, Sang Wook; Jeon, Moongu
- Issued Date
- 2019-11
- Type
- Article
- DOI
- 10.1109/TITS.2018.2883823
- URI
- https://scholar.gist.ac.kr/handle/local/12484