A Cascaded Multimodal Natural User Interface to Reduce Driver Distraction
- Author(s)
- Myeongseop Kim
- Type
- Thesis
- Degree
- Master
- Department
- Graduate School, Interdisciplinary School of Convergence Technology (Intelligent Robotics Program)
- Advisor
- Kim, SeungJun
- Abstract
- Many studies have attempted to reduce driver distraction using multimodal Natural User Interfaces (NUIs) that compensate for the shortcomings of single modalities. These NUIs, however, are not based on comparative examinations of driver distraction types (e.g., visual, cognitive, manual), and thus no consensus on the best NUI has emerged. To address this gap, in Experiment 1 we compared five single modalities commonly used in NUIs (touch, mid-air gesture, speech, gaze, and button) to provide a holistic view of driver distraction. In this experiment, each single-modality interface was designed around the steering wheel, with a head-up display (HUD) presenting information, so that the modalities could be compared under the same conditions. Our findings suggest that the best approach is a cascaded multimodal interface that combines modalities according to their individual characteristics. In Experiment 2, we compared several cascaded multimodal combinations, matching the characteristics of each modality to the sequential phases of the command input process. We conducted consecutive empirical studies using recorded videos, task assessments, physical data, questionnaires, and interviews. Our results showed that the combinations speech + button, speech + touch, and gaze + button represent the best cascaded multimodal interfaces for an in-vehicle information system (IVIS). In addition, using a two-day learning-effect study design, we showed the potential of mid-air gestures to be incorporated into cascaded multimodal interfaces. We believe that these interfaces will reduce IVIS-related driver distraction.
- URI
- https://scholar.gist.ac.kr/handle/local/32807
- Fulltext
- http://gist.dcollection.net/common/orgView/200000908628
- Access and License
-
- File List
-
Items in Repository are protected by copyright, with all rights reserved, unless otherwise indicated.