This is a demonstration of controlling the posture and position of a passive folding-handle cart pushed by an autonomous mobile robot. The position of the cart's handle bar and the feet of the person walking in front are measured with a laser range scanner (URG) mounted on the robot. The robot's contact position and angle on the cart are then adjusted by a feedback control system, which steers the cart to stay within a certain distance and angle of the person. In this way, the folding-handle cart is controlled by the robot so that it follows the person walking ahead.
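A minimal sketch of the kind of feedback law such a system could use, assuming the URG measurements have already been reduced to a range and bearing to the person; the function name, gains, and limits here are illustrative assumptions, not the demo's actual controller:

```python
def follow_step(person_range, person_bearing,
                target_range=1.0, k_v=0.8, k_w=1.5,
                v_max=0.6, w_max=1.0):
    """One step of a hypothetical person-following control law.

    person_range   -- distance (m) to the person's feet, from the URG scan
    person_bearing -- bearing (rad) of the person relative to the robot axis
    Gains and limits are illustrative values, not the demo's parameters.
    """
    # Drive forward or backward to hold the target following distance.
    v = k_v * (person_range - target_range)
    # Turn so the person stays centered ahead of the cart.
    w = k_w * person_bearing
    # Saturate commands to the platform's limits.
    v = max(-v_max, min(v_max, v))
    w = max(-w_max, min(w_max, w))
    return v, w  # linear (m/s) and angular (rad/s) velocity commands
```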
In this system, the robot (Yamabico Meros) moves along a free route designated by the orientation of markers placed on the floor. A camera captures an image of each marker, and the marker is segmented out and its pose analyzed using ARToolKit. A virtual arrow is then displayed over the marker, showing the direction in which the robot will move.
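As an illustration, once an ARToolKit-style detector has produced the marker's pose in the robot frame, steering along the direction the virtual arrow indicates could look like the sketch below; `command_from_marker` and its gains are hypothetical, and the pose extraction itself is assumed to happen elsewhere:

```python
def command_from_marker(marker_yaw, marker_x, k_w=1.2, k_x=0.5, v_cruise=0.3):
    """Steer toward the direction the floor marker points.

    marker_yaw -- orientation (rad) of the detected marker in the robot
                  frame, assumed to come from an ARToolKit pose estimate
                  wrapped elsewhere (hypothetical interface).
    marker_x   -- lateral offset (m) of the marker, used to stay on route.
    """
    # Blend heading alignment with lateral correction (illustrative gains).
    w = k_w * marker_yaw - k_x * marker_x
    return v_cruise, w  # forward speed (m/s) and turn rate (rad/s)
```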
In this demonstration, a 34 cm wide mobile robot equipped with a 240-degree laser range sensor (URG) travels along a 40 cm wide path and reverse-parks into a garage. The data acquired from the URG sensor is used to estimate the distance to the garage walls, which lie within the robot's blind spot. This enables the robot to maneuver into the garage by repeatedly turning left, turning right, and moving forward, much as a driver aligns with the side wall when parking a car.
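One plausible way to realize this, sketched under assumptions (the angular window, target clearance, and maneuver names are illustrative, not the demo's values): estimate the perpendicular distance to the side wall from the URG scan, then pick the next turn-or-forward maneuver from that estimate.

```python
import math

def side_wall_distance(ranges, angle_min, angle_inc,
                       window_deg=(80.0, 100.0), r_max=4.0):
    """Estimate perpendicular distance to the side wall from one URG scan.

    ranges    -- list of range readings (m); angle_min/angle_inc in rad,
                 with angles measured from the robot's forward axis.
    """
    dists = []
    for i, r in enumerate(ranges):
        ang = angle_min + i * angle_inc
        if window_deg[0] <= math.degrees(ang) <= window_deg[1] and 0.02 < r < r_max:
            # For beams near 90 degrees, r * sin(angle) approximates the
            # perpendicular distance to a wall parallel to the heading.
            dists.append(r * math.sin(ang))
    return sum(dists) / len(dists) if dists else None

def parking_step(wall_dist, target=0.35, tol=0.03):
    """Pick the next maneuver: align with the side wall, then creep back."""
    if wall_dist is None:
        return "stop"                 # wall not visible; hypothetical recovery
    if wall_dist > target + tol:
        return "turn_toward_wall"
    if wall_dist < target - tol:
        return "turn_away_from_wall"
    return "forward"
```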
Instead of using a laser scanner, human following is achieved with a camera alone. From the camera's color image, a color histogram is built from the area where the object of interest is located. The histogram is then back-projected onto the image so that only areas with a matching color distribution are extracted, after which the result is denoised. Human following is made possible by tracking the largest extracted region: the displacement of that region from the center of the camera image serves as the tracking signal. This method achieves human following reasonably well, but object recognition becomes difficult when the venue is overly bright or the object's color does not stand out distinctly.
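This pipeline maps closely onto OpenCV's histogram back-projection. The sketch below shows one plausible realization; the hue-only histogram, thresholds, and kernel size are assumptions, not the demo's actual parameters.

```python
import cv2
import numpy as np

def build_hist(frame_bgr, roi):
    """Build a hue histogram from the region of interest around the person."""
    x, y, w, h = roi
    hsv = cv2.cvtColor(frame_bgr[y:y+h, x:x+w], cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0], None, [32], [0, 180])
    cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
    return hist

def track_offset(frame_bgr, hist):
    """Return the horizontal offset of the biggest matching region from
    the image center, or None if nothing matches."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    back = cv2.calcBackProject([hsv], [0], hist, [0, 180], scale=1)
    # Noise reduction: morphological opening, then a binary threshold.
    back = cv2.morphologyEx(back, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    _, mask = cv2.threshold(back, 50, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    biggest = max(contours, key=cv2.contourArea)
    m = cv2.moments(biggest)
    if m["m00"] == 0:
        return None
    cx = m["m10"] / m["m00"]
    # The offset from the image center drives the robot's turn command.
    return cx - frame_bgr.shape[1] / 2.0
```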
By capturing the projected game screen with a webcam, the robot collects the data it needs to play the game Dance Dance Revolution. As the arrows flow from the bottom to the top of the screen, they are captured and recognized by the robot, which then sends the corresponding arrow directions to the PC running the game while dancing along.
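The text does not specify the recognition method; one plausible approach is template matching against arrow icons inside the judgement zone near the top of the captured screen, sketched below. The template images, strip coordinates, and threshold are hypothetical.

```python
import cv2

def detect_arrows(frame_gray, templates, strip=(40, 120), thresh=0.8):
    """Report which arrow icons currently sit in the judgement strip.

    templates -- dict mapping arrow names ('left', 'down', 'up', 'right')
                 to grayscale template images (hypothetical assets).
    strip     -- vertical pixel band near the top of the captured screen
                 (illustrative values for where arrows are judged).
    """
    band = frame_gray[strip[0]:strip[1], :]
    hits = []
    for name, tmpl in templates.items():
        # Normalized cross-correlation against each arrow template.
        result = cv2.matchTemplate(band, tmpl, cv2.TM_CCOEFF_NORMED)
        _, max_val, _, _ = cv2.minMaxLoc(result)
        if max_val >= thresh:
            hits.append(name)
    return hits  # these names would then be forwarded to the game PC
```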