SimVODIS: Simultaneous Visual Odometry, Object Detection, and Instance Segmentation
- Abstract
- Intelligent agents need to understand the surrounding environment to provide meaningful services to, or interact intelligently with, humans. The agents should perceive geometric features as well as semantic entities inherent in the environment. Contemporary methods generally provide only one type of information about the environment at a time, which makes it difficult to conduct high-level tasks. Moreover, running two separate methods and associating the two resulting streams of information requires substantial computation and complicates the software architecture. To overcome these limitations, we propose a neural architecture that simultaneously performs both geometric and semantic tasks in a single thread: simultaneous visual odometry, object detection, and instance segmentation (SimVODIS). SimVODIS is built on top of Mask R-CNN, which is trained in a supervised manner. Training the pose and depth branches of SimVODIS requires only unlabeled video sequences, since the photometric consistency between input image frames generates self-supervision signals (a sketch of such a loss appears after the record below). SimVODIS outperforms or matches state-of-the-art performance in pose estimation, depth map prediction, object detection, and instance segmentation while completing all the tasks in a single thread. We expect SimVODIS to enhance the autonomy of intelligent agents and enable them to provide effective services to humans.
- Author(s)
- Kim, Ue-Hwan; Kim, Se-Ho; Kim, Jong-Hwan
- Issued Date
- 2022-01
- Type
- Article
- DOI
- 10.1109/TPAMI.2020.3007546
- URI
- https://scholar.gist.ac.kr/handle/local/8707
- Publisher
- Institute of Electrical and Electronics Engineers
- Citation
- IEEE Transactions on Pattern Analysis and Machine Intelligence, v.44, no.1, pp. 428-441
- ISSN
- 0162-8828
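
The abstract states that photometric consistency between input frames supplies the self-supervision signal for the pose and depth branches, but it does not spell out the loss. As a rough, non-authoritative illustration, the PyTorch sketch below shows one common SfMLearner-style formulation: predicted depth and relative pose warp a source frame into the target view, and the L1 difference between the warped and target images supervises both branches. The function name `photometric_loss`, the tensor shapes, and the warping details are assumptions made for illustration, not the paper's exact implementation.

```python
import torch
import torch.nn.functional as F

def photometric_loss(target, source, depth, pose, K):
    """Illustrative (assumed) photometric consistency loss.

    Warps `source` into the target view using predicted depth and
    relative pose, then compares it to `target` with an L1 error.

    target, source: (B, 3, H, W) image tensors
    depth:          (B, 1, H, W) predicted depth of the target frame
    pose:           (B, 3, 4) relative camera pose [R | t], target -> source
    K:              (B, 3, 3) camera intrinsics
    """
    B, _, H, W = target.shape

    # Pixel grid in homogeneous coordinates: (B, 3, H*W).
    ys, xs = torch.meshgrid(
        torch.arange(H, dtype=target.dtype, device=target.device),
        torch.arange(W, dtype=target.dtype, device=target.device),
        indexing="ij",
    )
    ones = torch.ones_like(xs)
    pix = torch.stack([xs, ys, ones], dim=0).view(1, 3, -1).expand(B, -1, -1)

    # Back-project pixels to 3-D points in the target camera frame.
    cam = torch.linalg.inv(K) @ pix * depth.view(B, 1, -1)

    # Transform into the source frame and project with the intrinsics.
    cam_h = torch.cat(
        [cam, torch.ones(B, 1, H * W, dtype=cam.dtype, device=cam.device)], dim=1
    )
    proj = K @ (pose @ cam_h)                       # (B, 3, H*W)
    uv = proj[:, :2] / proj[:, 2:].clamp(min=1e-6)  # perspective divide

    # Normalize to [-1, 1] for grid_sample and warp the source image.
    u = 2.0 * uv[:, 0] / (W - 1) - 1.0
    v = 2.0 * uv[:, 1] / (H - 1) - 1.0
    grid = torch.stack([u, v], dim=-1).view(B, H, W, 2)
    warped = F.grid_sample(source, grid, padding_mode="border", align_corners=True)

    return (warped - target).abs().mean()
```

In practice such a loss is usually paired with a depth smoothness term and with masking of occluded or moving regions; the paper's precise formulation may differ from this sketch.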
Appears in Collections:
- Department of AI Convergence > 1. Journal Articles
Items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.