Mobile Robot's Assignment


[Name of the Institute]


Introduction

When robots share living space with humans, and particularly when they cooperate on certain tasks, they should first identify target humans, approach them in a stable manner, and maintain a safe distance. This study assumes that humans are engaged in their usual activities in a living environment. The objective is to provide mobile robots with the ability to accurately recognize their own position (self-position) as well as the positions of obstacles to be avoided and targets (humans) to be followed. An important issue in the development of such human-following robots is the sensing technology that makes it possible to detect a human target and track it without losing visual contact. In Ref. 1, a mobile robot equipped with a laser range scanner performs view-based tracking of human targets by matching leg templates against the sensor data.
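To make the leg-template idea concrete, the sketch below clusters a 2-D laser scan and keeps clusters whose width matches a human leg. The scan format, the 10 cm clustering gap, and the 5-25 cm leg-width range are illustrative assumptions, not values taken from Ref. 1.

```python
import math

def detect_leg_candidates(angles, ranges, gap=0.10, leg_width=(0.05, 0.25)):
    """Cluster consecutive laser points and keep clusters whose chord width
    matches an assumed human leg diameter (5-25 cm)."""
    # Convert polar scan readings to Cartesian points, dropping spurious ranges.
    points = [(r * math.cos(a), r * math.sin(a))
              for a, r in zip(angles, ranges) if 0.1 < r < 10.0]
    clusters, current = [], [points[0]] if points else []
    for p, q in zip(points, points[1:]):
        if math.dist(p, q) < gap:      # adjacent points on the same object
            current.append(q)
        else:                          # gap found: close the current cluster
            clusters.append(current)
            current = [q]
    if current:
        clusters.append(current)

    legs = []
    for c in clusters:
        width = math.dist(c[0], c[-1])     # chord length across the cluster
        if leg_width[0] <= width <= leg_width[1]:
            # Use the cluster centroid as the leg candidate position.
            cx = sum(x for x, _ in c) / len(c)
            cy = sum(y for _, y in c) / len(c)
            legs.append((cx, cy))
    return legs
```

In practice, Ref. 1 matches templates rather than only thresholding cluster width, but the clustering stage shown here is a common front end for either approach.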

In Ref. 2, a mobile robot is equipped with a camera to detect the legs of the target human, providing an inexpensive and safe solution. In Ref. 3, human targets are followed using color and other information, and system design is simplified by implementing the tracking functions as RT-components. During continued human following, the robot is navigated according to the human position. However, self-position estimation by the robot is required when the target human is lost from sight, or when the robot provides position-based services. For a human-following robot, the tracking target is always close by, and self-position estimation based on methods such as map generation or landmark search performed simultaneously with tracking is likely to be difficult under such operating constraints.
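As a rough illustration of "navigating according to the human position", the following proportional controller drives the robot toward the detected target while keeping a safe following distance. The gains, the 0.8 m safe distance, and the velocity limits are assumptions for illustration only; the paper does not specify a particular control law.

```python
import math

def follow_command(target_x, target_y, safe_dist=0.8,
                   k_v=0.6, k_w=1.5, v_max=0.7, w_max=1.2):
    """Return (linear, angular) velocity commands from the target position
    expressed in the robot frame (x forward, y left, metres)."""
    dist = math.hypot(target_x, target_y)
    bearing = math.atan2(target_y, target_x)
    v = k_v * (dist - safe_dist)       # close the range error to the target
    w = k_w * bearing                  # turn toward the target
    # Saturate the commands to the robot's assumed velocity limits.
    v = max(-v_max, min(v_max, v))
    w = max(-w_max, min(w_max, w))
    return v, w
```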

Robot self-position estimation performed solely by dead reckoning has limitations and must be corrected using information from other sensors. Existing methods estimate the self-position by measuring landmarks with robot-mounted cameras or laser range scanners, or apply the SLAM technique, which combines map generation with self-position estimation.
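The limitation of dead reckoning can be seen from the update itself: the pose is obtained by integrating odometry increments, so small wheel-slip and measurement errors accumulate without bound unless they are corrected externally. A minimal sketch of one integration step, assuming incremental travel and rotation from wheel encoders, is shown below.

```python
import math

def dead_reckoning_step(x, y, theta, d_dist, d_theta):
    """Update the pose (x, y, theta) with one odometry increment:
    d_dist metres of travel and d_theta radians of rotation."""
    # Midpoint heading reduces the discretization error of the integration.
    x += d_dist * math.cos(theta + d_theta / 2.0)
    y += d_dist * math.sin(theta + d_theta / 2.0)
    theta += d_theta
    return x, y, theta
```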

However, these methods do not guarantee consistent position estimation in complex scenes such as the human living environment, which is affected by various moving objects. In this context, researchers are considering the creation of an intelligent space with distributed intelligent devices capable of sensing, processing, and networking, aiming to implement human-following robots by monitoring the movements of humans and mobile robots and by supporting the robots' behavior with the monitoring data. In this paper, a human-following robot system is proposed based on exchanging position information between a robot and an intelligent space in which laser range scanners are arranged. A similar scheme was presented in Ref. 5, where a distributed vision system installed on the environment side was employed to track a mobile robot and its target, and to send the resulting position data to the robot for human-tracking control. However, environment-side sensing alone cannot continuously track robots and humans without losing sight of them in complicated scenes. Thus, we aim at a more robust human-following robot by combining odometric information and human detection on the mobile robot side with the position information provided by the intelligent space.
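The combination described above can be illustrated by a simple correction step: whenever the intelligent space reports the robot position observed by its laser range scanners, the dead-reckoned estimate is pulled toward that observation. The blending gain below is an assumption made for illustration; the paper does not specify the fusion filter.

```python
def fuse_with_intelligent_space(x, y, obs_x, obs_y, gain=0.3):
    """Blend the odometric estimate (x, y) with an external observation
    (obs_x, obs_y) received from the intelligent space."""
    # Move the estimate a fraction of the way toward the observation,
    # bounding the drift that pure dead reckoning would accumulate.
    x += gain * (obs_x - x)
    y += gain * (obs_y - y)
    return x, y
```

A Kalman filter weighted by the odometry and scanner uncertainties would play the same role more rigorously; the constant-gain form above only conveys the idea of environment-side correction.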