Intelligent home appliances talk to each other in ways imperceptible to humans, which may evoke distrust and a sense of alienation in some users. Communication by audible sound might instead foster a sense of affinity, as with R2-D2. In this demonstration, two robots turn on electrical equipment according to a user's directions. The total power supply is protected by a circuit breaker, so each robot asks the other, by audible sound, for its power consumption to determine how much capacity remains. As necessary, a robot turns off some equipment before turning on the desired device to prevent the breaker from tripping.
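The load-shedding decision might look roughly like the following sketch. The breaker capacity, the largest-first shedding policy, and all names are assumptions for illustration; the abstract does not specify the actual protocol or values.

```python
# Hypothetical sketch of the power-budget check described above.
# Capacity, policy, and names are assumptions, not the demo's actual code.

BREAKER_CAPACITY_W = 1500  # assumed total capacity protected by the breaker

def can_turn_on(own_loads, peer_consumption_w, new_load_w):
    """True if switching on new_load_w keeps the total under the breaker limit."""
    total = sum(own_loads.values()) + peer_consumption_w
    return total + new_load_w <= BREAKER_CAPACITY_W

def turn_on_safely(own_loads, peer_consumption_w, name, new_load_w):
    """Shed this robot's own loads until the requested one fits, then turn it on."""
    # Shed the largest loads first (one plausible policy; the demo's is unstated).
    for load in sorted(own_loads, key=own_loads.get, reverse=True):
        if can_turn_on(own_loads, peer_consumption_w, new_load_w):
            break
        del own_loads[load]           # i.e. switch that equipment off
    if can_turn_on(own_loads, peer_consumption_w, new_load_w):
        own_loads[name] = new_load_w  # switch on the requested equipment
        return True
    return False
```

In this sketch the peer's consumption arrives over the audible-sound channel before the check runs; the acoustic encoding itself is outside the scope of the example.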
A baby is crying for her milk, but you're busy and your hands are full. What is needed is a robot system that can prepare the bottle (shake it) and hand it over to a smaller mobile worker robot, which can deliver it to the baby. The system is called AKACYANPION. It comprises a lightweight Speego Yamabico mobile robot, an Xtion RGB-D camera, a pan-tilt unit for versatile maneuvering of the camera, and an Exact Dynamics iArm manipulator. When the bottle is required, the AKACYANPION Coordinator computer commands the mobile robot to go and fetch it. The Object Detector program finds the bottle on the table; the iArm then picks it up and hands it over to the mobile robot, which delivers it to the crying baby.
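The Coordinator's sequence might be glued together roughly as below. Every interface name here is a hypothetical stand-in; the abstract names the hardware components but not their software APIs.

```python
# Hypothetical sketch of the AKACYANPION Coordinator sequence.
# All method names are assumed; the abstract gives no API details.

def fetch_bottle(mobile_robot, camera, arm):
    mobile_robot.navigate_to("table")   # send the Yamabico to the table
    pose = camera.detect_bottle()       # Object Detector on the Xtion RGB-D image
    if pose is None:
        camera.pan_tilt_scan()          # sweep the pan-tilt unit and retry
        pose = camera.detect_bottle()
    arm.pick(pose)                      # iArm grasps the bottle
    arm.shake()                         # prepare the milk
    arm.place_on(mobile_robot)          # hand over to the mobile robot
    mobile_robot.navigate_to("baby")    # deliver to the crying baby
```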
Taking a picture of yourself is difficult, so it is handy to have a robot take one for you automatically. In this demonstration, the robot automatically photographs a person's face with a digital camera. From the LRF data, the robot detects a shape resembling a human figure, then moves back and forth, adjusting until the figure fits completely within the angle of view. The LRF sensor is then rotated in the tilt direction to obtain 3D data of the figure. From these data the person's height can be measured, which determines the camera's angle of elevation for the full-length and close-up pictures, respectively. Because the robot measures a person's height and then calculates the position of the face, it can photograph anyone, regardless of height.
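The elevation-angle computation can be illustrated with simple trigonometry. The sensor mounting height, the face offset, and the function below are assumptions about one plausible implementation, not the demo's published geometry.

```python
import math

def camera_elevation_deg(person_height_m, face_offset_m, distance_m, camera_height_m):
    """Angle to tilt the camera up so it points at the face.

    person_height_m comes from the tilted LRF scan; the face is assumed to sit
    face_offset_m below the top of the detected figure. All parameters are
    illustrative.
    """
    face_height = person_height_m - face_offset_m
    return math.degrees(math.atan2(face_height - camera_height_m, distance_m))

# e.g. a 1.70 m person, face ~0.10 m below the top of the figure,
# standing 2.0 m away, camera mounted at 0.50 m:
# camera_elevation_deg(1.70, 0.10, 2.0, 0.50) ≈ 28.8 degrees
```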
In 2006, I developed a remote control system for a mobile robot operated via a Web browser on a mobile phone. This time I improved that system into an autonomous navigation system in which the destination is determined by recognizing the user's speech on a mobile phone. The processing flow is as follows: first, a smartphone (Google Android) recognizes the user's speech to determine the robot's destination. The destination is then sent to a PHP application on a Web server. The robot retrieves it from the server and plans a path to the destination with the A* algorithm, using a map generated in advance by Rao-Blackwellized Particle Filter SLAM. During autonomous navigation, Adaptive Monte Carlo Localization is performed, and the robot avoids obstacles with the Dynamic Window Approach.
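One plausible glue layer between the PHP server and the on-robot planner is sketched below. The URL, JSON format, and the planner/localizer/controller interfaces are all assumptions; the abstract names the algorithms but not the software interfaces.

```python
import json
import time
import urllib.request

SERVER_URL = "http://example.com/destination.php"  # placeholder; real URL unknown

def poll_destination():
    """Fetch the destination that the smartphone's speech recognizer posted to the server."""
    with urllib.request.urlopen(SERVER_URL, timeout=5) as resp:
        data = json.load(resp)               # assumed format: {"goal": "kitchen"} or null
    return data.get("goal") if data else None

def navigation_loop(planner, localizer, controller):
    """planner = A* on the RBPF-SLAM map, localizer = AMCL, controller = DWA
    (all three are assumed interfaces, shown only to make the flow concrete)."""
    while True:
        goal = poll_destination()
        if goal:
            pose = localizer.estimate()      # Adaptive Monte Carlo Localization
            path = planner.plan(pose, goal)  # A* on the pre-built map
            controller.follow(path)          # Dynamic Window Approach avoids obstacles
        time.sleep(1.0)
```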
In this demonstration, the robot collects cereal boxes placed in a 1.5 m x 3 m area. The robot estimates the position and orientation of each box from laser range sensor data and couples itself to the box using S-hooks. It then carries the box outside the area and uncouples it. To couple with a box, the robot needs an accurate estimate of the box's position and orientation, and most of my development time went into getting that estimate right.
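Estimating a box's pose from the scan can be sketched as fitting a line to the laser points on the box's visible face. This covariance-based fit is one workable method, assumed here for illustration; it is not necessarily the demo's actual approach.

```python
import math

def box_pose_from_scan(points):
    """Estimate (cx, cy, theta) of a box face from 2-D laser points on it.

    points: list of (x, y) hits belonging to the box's visible side, already
    segmented out of the full scan (segmentation not shown). The principal
    axis of the point covariance gives the direction of the face.
    """
    n = len(points)
    cx = sum(x for x, _ in points) / n
    cy = sum(y for _, y in points) / n
    # 2x2 covariance of the points around their centroid
    sxx = sum((x - cx) ** 2 for x, _ in points)
    syy = sum((y - cy) ** 2 for _, y in points)
    sxy = sum((x - cx) * (y - cy) for x, y in points)
    # orientation of the dominant eigenvector = direction of the box face
    theta = 0.5 * math.atan2(2.0 * sxy, sxx - syy)
    return cx, cy, theta
```

Given the face's center and direction, the approach pose for hooking on follows by offsetting along the face normal; that last step is straightforward geometry and is omitted here.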