Robots for Constructing Multitype Real-World Datasets and Augmenting Indoor Spaces: Using RGB-D Camera and Ultra-Wideband Communication for Intuitive Human–Robot Interaction
- Abstract
- As indoor robots are developed and deployed into the real world, the need for human–robot interaction is increasing. Developing such interaction between humans and robots plays an essential role in integrating robots into our daily lives and supports solving various practical problems. In this work, we contribute to developing human–robot interaction for mobile ground and flying robots in three aspects. First, we propose a labeling framework that enables a human to guide a mobile ground robot in creating multitype datasets for objects in the robot’s surroundings. Our labeling framework requires no labeling tools (e.g., software); instead, it relies on direct, hands-free, gesture-based interaction between humans and robots, reducing the effort and time required to collect and label two-dimensional and three-dimensional data. Our system was implemented using a single RGB-D sensor to interact with the mobile robot, position feature points for labeling, and track the mobile robot’s movement. Several Robot Operating System (ROS) nodes were designed to give our labeling framework a compact structure. We assessed the different components of our framework, demonstrating its effectiveness in generating high-quality real-world labeled data for color images and point clouds, and showing how it can be used to solve object detection problems for mobile robots. Moreover, to evaluate our system with respect to human factors, we conducted a user study in which participants compared our framework with conventional labeling methods. The results show significant improvements in usability factors and confirm our framework’s suitability for helping a regular user build custom knowledge for mobile robots effortlessly. Second, we present an indoor interactive system using a mobile ground robot through which a human can customize and interact with a screen projected onto surrounding surfaces.
An ultra-wideband (UWB) wireless sensor network was used to assist human-centered interaction design and to navigate the self-actuated projector platform. We developed a UWB-based calibration algorithm to facilitate interaction with the customized projected screens, and a hand-held input device was designed to perform mid-air interactive functions. Sixteen participants were recruited to evaluate the system’s performance. A prototype-level implementation was tested inside a simulated museum environment, where the self-actuated projector provided interactive explanatory content for the on-display artifacts under the user’s command. Our results demonstrate that interactive screens can be designated efficiently indoors and that users can interact with the augmented content with reasonable accuracy and relatively low workload. Third, to reduce the complexity of video broadcasting during large-scale indoor events, we introduce a UWB-based lighter-than-air indoor flying robot for user-centered interactive applications. To explore user interaction with the robot at long distances, dual interactions (i.e., user footprint following and user intention recognition) were proposed by equipping the user with a hand-held UWB sensor. Experiments were conducted inside a professional arena to validate the robot’s pose tracking, in which its 3D positioning was compared against a 3D laser sensor, and to demonstrate the applicability of user-centered autonomous following of the robot according to the dual interactions.
- Author(s)
- Ahmed Ibrahim Ahmed Mohamed Elsharkawy
- Issued Date
- 2023
- Type
- Thesis
- URI
- https://scholar.gist.ac.kr/handle/local/19647
- Availability and License
-
- File List
-
Items in Repository are protected by copyright, with all rights reserved, unless otherwise indicated.