Autonomous Parking Robot based on Deep Reinforcement Learning
- Author(s)
- Sang-Hyun Lee
- Type
- Thesis
- Degree
- Master
- Department
- Graduate School, School of Electrical Engineering and Computer Science
- Advisor
- Jeon, Moongu
- Abstract
- Global automotive companies are developing ADAS (advanced driver assistance systems) with full autonomous driving as the ultimate goal, and autonomous driving is expected to become a reality in the near future. Many experts predict that its realization will mitigate environmental and traffic problems. Autonomous driving is achieved through the synergy of technologies such as communication, control, and pattern recognition. Because these technologies also apply to mobile and indoor robots, research on autonomous driving is broadly valuable.
In this paper, an autonomous parking robot based on reinforcement learning is proposed. Reinforcement learning is typically studied in game environments, where simulation makes agent exploration easy. In this research, however, reinforcement learning is applied to a physical model subject to measurement errors and uncertainties, demonstrating that it can be extended to vehicle control applications. Furthermore, it is confirmed that reinforcement learning can find optimal paths in a variety of decision-making problems.
Experiments show that it is possible to train a network that performs proper parking using a state with only 13 elements: eight distances gathered from a LiDAR sensor, two relative positions and one angle computed by the SLAM (simultaneous localization and mapping) algorithm, and two vehicle control commands. Using this state for reinforcement learning, the trained network can park even when the agent starts from different initial points. Vehicle control is based on ROS (robot operating system), and the Gazebo simulation environment is used to model an Ackermann-steering RC car. During training, it was observed that when the network attempts to park along a wrong path, the agent can correct itself by reversing and then parking again; after sufficient training, parking completes without backing up. In addition, the parking network trained in simulation was transferred to the real RC car to verify its applicability in a physical environment. Experimental results show that suitable parking paths are generated in both the simulation and real environments.
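As a rough illustration of the state representation described above, the sketch below assembles a 13-element observation vector from the three sensor sources the abstract names. All function and parameter names here are assumptions for illustration; the thesis does not specify this interface or the element ordering.

```python
import numpy as np

def build_state(lidar_distances, rel_position, rel_angle, prev_controls):
    """Concatenate sensor readings into the 13-element RL state
    described in the abstract (names and ordering are assumed).

    lidar_distances: 8 range readings from the LiDAR
    rel_position:    (dx, dy) to the parking spot, from SLAM
    rel_angle:       heading error to the spot, from SLAM
    prev_controls:   (steering, throttle) commands last sent
    """
    state = np.concatenate([
        np.asarray(lidar_distances, dtype=np.float32),  # 8 elements
        np.asarray(rel_position, dtype=np.float32),     # 2 elements
        np.asarray([rel_angle], dtype=np.float32),      # 1 element
        np.asarray(prev_controls, dtype=np.float32),    # 2 elements
    ])
    assert state.shape == (13,), "state must have exactly 13 elements"
    return state
```

Keeping the state this compact (distances and relative pose rather than raw images) is what makes the policy cheap enough to run on an RC-car-class platform.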
- URI
- https://scholar.gist.ac.kr/handle/local/32502
- Fulltext
- http://gist.dcollection.net/common/orgView/200000910558
- Open Access & License
-
- File List
-
Items in Repository are protected by copyright, with all rights reserved, unless otherwise indicated.