
Predictive Modeling Through Data Fusion for Autonomous Driving Systems

Metadata
Author(s)
Muhammad Ishfaq Hussain
Type
Thesis
Degree
Doctor
Department
Graduate School, School of Electrical Engineering and Computer Science
Advisor
Jeon, Moongu
Abstract
Predictive modeling through data fusion is a technique used in autonomous driving systems (ADS) that combines data from multiple sensors to build a comprehensive understanding of the vehicle's environment. It allows an ADS to make accurate predictions about the movement of other vehicles, pedestrians, and obstacles, and to take appropriate actions to avoid collisions and other safety hazards. Sensors are a critical component of advanced driver-assistance systems (ADAS) and are used to gather information about the surrounding environment. A crucial task in autonomous driving is building a perception of the surroundings from optical sensors, a long-standing challenge that prompts us to explore a variety of sensing modalities. Radar sensors emit radio waves to detect objects in the environment; they provide accurate long-range measurements and remain effective in poor weather, but they may struggle to provide detailed information about the scene. Cameras are widely used in ADAS because they capture detailed visual information and support tasks such as lane detection, traffic sign recognition, and pedestrian detection; however, they may struggle in low-light conditions or when visibility is poor. LiDAR sensors use laser pulses to produce detailed 3D measurements of the environment, but they are expensive and may struggle to detect certain types of objects, such as transparent or reflective surfaces. Ultrasonic sensors emit high-frequency sound waves to detect nearby objects and are commonly used in parking-assistance systems. Overall, each type of sensor has its own strengths and weaknesses, and the choice of sensors in an ADAS depends on the specific requirements of the system; in general, a combination of sensors is used to provide a more robust and accurate picture of the environment. In this dissertation, I present my research on the importance of independent and multi-sensor stacks for more reliable and secure perception in autonomous driving systems. Radar is a more mature and less expensive sensor than alternatives such as LiDAR for long-range coverage, and it is competitively reliable and robust in adverse weather conditions. In the first part, we explore a dynamic Gaussian process for occupancy mapping and for predicting a drivable path for a self-driving vehicle within the field of view (FOV) of a radar sensor. In the second part, we fuse radar and a monocular vision sensor for depth estimation. In the last part, electromyography (EMG) signals are examined to predict the driver's intention alongside the perception stack of an ADS, toward building an ADAS.
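The first part of the abstract refers to Gaussian-process occupancy mapping over radar returns. The following is a minimal, illustrative sketch of static GP occupancy mapping using scikit-learn, not the dissertation's dynamic formulation; the synthetic radar points, kernel choice, and grid resolution are assumptions made only for this example.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF

# Synthetic radar returns in the sensor frame (metres): hit points are treated
# as occupied, while points sampled along the rays before a hit are free space.
rng = np.random.default_rng(0)
hits = np.column_stack([rng.uniform(5, 30, 40), rng.uniform(-10, 10, 40)])
free = np.column_stack([rng.uniform(0, 25, 120), rng.uniform(-12, 12, 120)])

X = np.vstack([hits, free])
y = np.concatenate([np.ones(len(hits)), np.zeros(len(free))])  # 1 = occupied

# GP classifier with an RBF kernel: the length scale controls how far each
# observation influences the occupancy estimate of neighbouring space.
gp = GaussianProcessClassifier(kernel=1.0 * RBF(length_scale=2.0))
gp.fit(X, y)

# Query a coarse grid covering the radar field of view and read back the
# probability of occupancy; low-probability regions indicate drivable space.
gx, gy = np.meshgrid(np.linspace(0, 35, 70), np.linspace(-15, 15, 60))
grid = np.column_stack([gx.ravel(), gy.ravel()])
p_occ = gp.predict_proba(grid)[:, 1].reshape(gx.shape)

print("mean occupancy probability over the grid:", p_occ.mean().round(3))
```

In the dissertation's setting the map would be updated as the vehicle moves and the scene changes, which is where the dynamic GP formulation comes in; this sketch covers only a single static scan.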
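The second part fuses radar with a monocular camera for depth estimation. Below is a minimal sketch of one common fusion baseline, assuming a pinhole camera model: sparse metric radar ranges are projected into the image and used to rescale a relative (scale-ambiguous) monocular depth map. The intrinsics, extrinsics, and depth values are placeholders, and rescaling by the median ratio is an illustrative choice, not necessarily the method used in the thesis.

```python
import numpy as np

def project_radar_to_image(points_radar, K, T_cam_radar):
    """Project 3-D radar points (N, 3) into pixel coordinates and depths."""
    pts_h = np.hstack([points_radar, np.ones((len(points_radar), 1))])
    pts_cam = (T_cam_radar @ pts_h.T).T[:, :3]          # radar frame -> camera frame
    z = pts_cam[:, 2]
    uv = (K @ pts_cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]                          # perspective division
    return uv, z

def fuse_scale(mono_depth, uv, z):
    """Rescale a relative monocular depth map using metric radar depths."""
    h, w = mono_depth.shape
    u, v = uv[:, 0].round().astype(int), uv[:, 1].round().astype(int)
    inside = (u >= 0) & (u < w) & (v >= 0) & (v < h) & (z > 0)
    if not inside.any():
        return mono_depth                                # nothing to anchor the scale on
    ratios = z[inside] / np.maximum(mono_depth[v[inside], u[inside]], 1e-6)
    return mono_depth * np.median(ratios)                # robust global scale correction

# Placeholder inputs: camera intrinsics, radar-to-camera extrinsics, a flat
# relative depth map, and a handful of radar detections in the radar frame.
K = np.array([[700.0, 0.0, 320.0], [0.0, 700.0, 240.0], [0.0, 0.0, 1.0]])
T_cam_radar = np.eye(4)
mono_depth = np.full((480, 640), 0.5)                    # relative depth, unknown scale
radar_pts = np.array([[1.0, 0.2, 12.0], [-2.0, 0.1, 25.0], [0.5, 0.0, 8.0]])

uv, z = project_radar_to_image(radar_pts, K, T_cam_radar)
metric_depth = fuse_scale(mono_depth, uv, z)
print("estimated metric depth at image centre:", metric_depth[240, 320])
```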
URI
https://scholar.gist.ac.kr/handle/local/19606
Fulltext
http://gist.dcollection.net/common/orgView/200000883812
Access and License
  • Access type: Open
File list
  • No related files exist.

Items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.