An Encoder-Sequencer-Decoder Network for Lane Detection to Facilitate Autonomous Driving
- Abstract
- Lane detection in all weather conditions is a pressing necessity for autonomous driving. Accurate lane detection ensures the safe operation of autonomous vehicles, enabling advanced driver assistance systems to track and keep the vehicle within its lane. Traditional lane detection techniques rely heavily on a single image frame captured by the camera, which limits their robustness. Moreover, these conventional methods demand a constant stream of pristine images for uninterrupted lane detection, so their performance degrades under challenges such as low brightness, shadows, occlusions, and deteriorating environmental conditions. Recognizing that lanes appear as continuous sequential patterns on the road, our approach leverages a sequential model that processes multiple images for lane detection. In this study, we propose a deep neural network model to extract crucial lane information from a sequence of images. Our model adopts a convolutional neural network in an encoder/decoder architecture and incorporates a long short-term memory (LSTM) model for sequential feature extraction. We evaluate the performance of the proposed model on the TuSimple and CULane datasets, showcasing its superiority across various lane detection scenarios. Comparative analysis with state-of-the-art lane detection methods further substantiates our model's effectiveness. © 2023 ICROS.
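The abstract's encoder-sequencer-decoder pipeline can be illustrated with a minimal shape-level sketch. This is not the paper's implementation: the mean-pool encoder, nearest-neighbour decoder, and plain tanh recurrence are simplified stand-ins for the CNN encoder, CNN decoder, and LSTM sequencer, and all names and sizes below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def encoder(frame):
    # Stand-in for the CNN encoder: 4x spatial downsampling by mean pooling.
    h, w = frame.shape
    return frame.reshape(h // 4, 4, w // 4, 4).mean(axis=(1, 3))

def decoder(feat):
    # Stand-in for the CNN decoder: 4x nearest-neighbour upsampling back
    # to the input resolution, yielding a per-pixel lane score map.
    return feat.repeat(4, axis=0).repeat(4, axis=1)

# Hypothetical scalar recurrent weights for the sequencer; the paper's
# model would use LSTM gating here instead of a plain tanh recurrence.
W = rng.normal(scale=0.1)
U = rng.normal(scale=0.1)

def sequencer(feats):
    # Fuse encoded features across the frame sequence with a simple
    # recurrent hidden-state update.
    h = np.zeros_like(feats[0])
    for x in feats:
        h = np.tanh(W * x + U * h)
    return h

# A toy sequence of 5 grayscale frames, 32x64 pixels each.
frames = [rng.random((32, 64)) for _ in range(5)]
encoded = [encoder(f) for f in frames]   # each (8, 16)
fused = sequencer(encoded)               # (8, 16), aggregates the sequence
lane_map = decoder(fused)                # (32, 64) lane score map
print(lane_map.shape)                    # → (32, 64)
```

The key idea the sketch captures is that the sequencer sits between the encoder and decoder, so lane evidence from earlier frames can compensate for a degraded current frame (e.g. shadows or occlusion).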
- Author(s)
- Hussain, Muhammad Ishfaq; Rafique, Muhammad Aasim; Ko, Yeongmin; Khan, Zafran; Olimov, Farrukh; Naz, Zubia; Kim, Jeongbae; Jeon, Moongu
- Issued Date
- 2023-10-17
- Type
- Conference Paper
- DOI
- 10.23919/ICCAS59377.2023.10316884
- URI
- https://scholar.gist.ac.kr/handle/local/21043
- Access and License
-
- File List
-
Items in Repository are protected by copyright, with all rights reserved, unless otherwise indicated.