A mobile robot is combined with LED tape and a servo motor to represent a flower.
The player uses a VR device to find the lost Pokemon with the help of a 360-degree camera, while an operator drives the robot with a controller. The detection pipeline consists of collecting 150+ pictures, labeling them, and training a YOLO model.
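As a sketch of how such a training pipeline can be set up (the file names, split ratio, and model checkpoint below are illustrative assumptions, not the team's actual configuration):

```python
import random

def split_dataset(image_names, val_ratio=0.2, seed=0):
    """Shuffle the labeled images and split them into train/val lists."""
    names = list(image_names)
    random.Random(seed).shuffle(names)
    n_val = max(1, int(len(names) * val_ratio))
    return names[n_val:], names[:n_val]  # (train, val)

def train_detector(data_yaml="pokemon.yaml"):
    # Hypothetical training call using the ultralytics package;
    # data_yaml would point at the labeled train/val image lists.
    from ultralytics import YOLO
    model = YOLO("yolov8n.pt")  # start from a pretrained checkpoint
    model.train(data=data_yaml, epochs=100, imgsz=640)
```

With 150+ labeled pictures, a split like this leaves most images for training while keeping a held-out set to check the detector on unseen frames.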
We developed a game in which a path is projected on the floor with a projector and the player traces this projected path with a Yamabico. The rule is simple: whoever traces the path more accurately wins. Two Yamabico robots are used. One is an automatic machine that recognizes the path with a camera and traces it autonomously; the other is a manual machine operated with a controller. Various events are incorporated along the way to add color to the student's life, and the challenger can "graduate" by tracing the path more accurately than the automatic machine.
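One common way for the automatic machine to trace a projected path from a camera image (a minimal sketch, not necessarily the method used here) is to threshold the path pixels and steer toward their centroid:

```python
def steering_from_mask(mask):
    """Given a 2D binary mask (rows of 0/1) of detected path pixels,
    return a steering value in [-1, 1]: negative steers left,
    positive steers right, based on the centroid's horizontal offset."""
    xs = [x for row in mask for x, v in enumerate(row) if v]
    if not xs:
        return 0.0  # no path visible: hold course (or stop)
    width = len(mask[0])
    centroid_x = sum(xs) / len(xs)
    center = (width - 1) / 2
    return (centroid_x - center) / center
```

The steering value would then be scaled into an angular-velocity command for the robot each camera frame.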
We recreated the classic Wii tank game using Yamabico. A controller moves Yamabico in real space, while the artillery shells are simulated in virtual space. Due to time constraints, we implemented the game from a bird's-eye view, but we hope to mount a camera on Yamabico in the future so the game can be played from a first-person perspective.
A mobile robot detects a fire-alarm tone and then searches for a sleeping person in the room. If the person does not wake up, the robot sprays water to alert them so they can evacuate safely.
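Detecting the alarm tone can be done by checking for a dominant spectral peak near the alarm frequency; a minimal numpy sketch (the 3 kHz tone frequency and tolerance are assumptions, not the actual alarm's specification):

```python
import numpy as np

def is_alarm(samples, fs, target_hz=3000.0, tol_hz=100.0):
    """Return True if the dominant frequency of an audio frame lies
    within tol_hz of the expected fire-alarm tone."""
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / fs)
    peak = freqs[np.argmax(spectrum)]
    return abs(peak - target_hz) <= tol_hz
```

Running this on short microphone frames lets the robot trigger its search behavior only while the alarm is actually sounding.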
Two teammates each control a robot in a fishing competition held in a designated area, scoring points against each other. They then challenge the final boss. A camera recognizes the color of the catch brought back by each robot and identifies the robots' positions to award points.
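Scoring by color can be sketched as a hue lookup in OpenCV's 0-179 hue convention (the specific ranges below are illustrative assumptions):

```python
def classify_hue(hue):
    """Map an OpenCV-style hue value (0-179) to a coarse color name."""
    if hue < 10 or hue >= 170:
        return "red"    # red wraps around the ends of the hue circle
    if 35 <= hue < 85:
        return "green"
    if 100 <= hue < 130:
        return "blue"
    return "other"
```

In practice the camera frame would be converted to HSV, each detected blob's mean hue classified this way, and the point awarded to the robot whose position matches the blob.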
For people who can't wake up when their alarm clock goes off, we built an alarm that cannot be turned off without doing sit-ups. If you ignore it and keep sleeping, you get sprayed with mist.
A romance-simulation robot that uses ChatGPT and speech synthesis, both of which are very popular today. The ChatGPT API enables natural conversation with the user and provides a rating of seduction skill. This is a very practical robot in this age of declining birthrates, with young people drifting away from love and marriage. Regrettably, voice recognition was not available in time.
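A sketch of how the conversation loop might obtain a score from the model (the prompt, the model name, and the "Score: N/10" reply format are all assumptions for illustration):

```python
import re

def parse_score(reply):
    """Extract a 0-10 score from a reply like 'Nice line! Score: 7/10'."""
    m = re.search(r"(\d+)\s*/\s*10", reply)
    return int(m.group(1)) if m else None

def rate_line(user_line):
    # Hypothetical call to the ChatGPT API (requires the openai
    # package and an API key); asks the model to rate the line.
    from openai import OpenAI
    client = OpenAI()
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "Rate the user's pick-up line, ending with 'Score: N/10'."},
            {"role": "user", "content": user_line},
        ],
    )
    return parse_score(resp.choices[0].message.content)
```

Asking the model to end with a fixed score format keeps the rating machine-readable, so the robot can react to the number as well as speak the reply.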
We dressed the laboratory robot Yamabico beego in yankee (Japanese delinquent) cosplay and implemented wheelie driving!! The wheelie is maintained with PID control by converting the distance to the floor, obtained from a 2D range sensor, into a posture angle. Motorcycle sounds are also played according to the driving speed. All the cosplay costumes are handmade from items purchased at 100-yen stores.
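The balance loop can be sketched as a textbook PID controller on the posture-angle error (the gains below are illustrative, not the tuned values used on the robot):

```python
class PID:
    """Simple PID controller producing a command from the angle error."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def update(self, error, dt):
        self.integral += error * dt
        deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv
```

In the wheelie demo, the error would be (target pitch minus the pitch estimated from the floor distance), and the output a wheel acceleration command that keeps the robot balanced on two wheels.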
This system combines voice recognition with robot control. The user's voice is captured via the Web Audio API and processed by a Flask application server running in a Docker container, while a goal-determination PC recognizes QR codes and manages the entire system. This allows multiple robots to be controlled remotely by voice commands.
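The voice-to-command step can be sketched as a parser that the Flask server would apply to each recognized utterance (the command vocabulary and robot numbering are assumptions for illustration):

```python
def parse_voice_command(text):
    """Map a recognized utterance like 'robot two go forward' to a
    (robot_id, action) pair, or None if nothing matches."""
    words = text.lower().split()
    ids = {"one": 1, "two": 2, "three": 3}
    actions = {"forward": "forward", "back": "back",
               "left": "left", "right": "right", "stop": "stop"}
    robot = next((ids[w] for w in words if w in ids), None)
    action = next((actions[w] for w in words if w in actions), None)
    if robot is None or action is None:
        return None
    return robot, action
```

A Flask route would receive the Web Audio stream, run speech recognition on it, and forward the parsed (robot, action) pair to the corresponding robot.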
We developed a quiz game that is enjoyed through facial expressions, with the concept of bringing fun and smiles to everyone who devotes themselves to research every day. Participants line up in a row, and the robot comes up to them one by one and asks a question. The interesting part is that the robot reads each person's facial expression and changes its response accordingly: if the person smiles, the robot gives them a chance to answer, and if a participant looks angry, the robot skips them while acting frightened.
The robot uses skeletal recognition to determine whether the flag is raised or lowered. If the human side wins, the robot offers words of praise; if the player loses, the robot circles around the player and offers words of pity. The player must do their best not to lose to the robot! On the technical side, we used mediapipe for skeletal recognition and sound_play for audio output. The player's position is recognized as the nearest obstacle using obstacle_detector.
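Deciding whether a flag is raised can be sketched by comparing wrist and shoulder heights in mediapipe's normalized image coordinates, where y increases downward (the margin is an assumed threshold, and the landmark dict stands in for mediapipe's landmark list):

```python
def flag_state(landmarks, margin=0.05):
    """Classify the pose into 'both_up', 'left_up', 'right_up', or 'down'
    from a dict of normalized (x, y) landmarks (y increases downward)."""
    left_up = landmarks["left_wrist"][1] < landmarks["left_shoulder"][1] - margin
    right_up = landmarks["right_wrist"][1] < landmarks["right_shoulder"][1] - margin
    if left_up and right_up:
        return "both_up"
    if left_up:
        return "left_up"
    if right_up:
        return "right_up"
    return "down"
```

Comparing the classified state against the announced flag command then decides whether the player won the round.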
Participants switch the robot's target by placing a ball of the same color as their clothing in the box next to the robot; the ball placed most recently determines whom the robot pursues. The winner is whoever the robot is following when the time limit expires. Technically, ndt_scan_matching was used to estimate the robot's self-position, YOLOv8 + OC-SORT and OpenCV were used to recognize specific people and detect the colors of their clothes and the balls, and teb_local_planner was used for path planning.
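The target-switching rule itself reduces to a tiny state update: the most recently recognized ball color decides whom the robot chases (the color-to-player mapping below is illustrative):

```python
def update_target(current_target, ball_events, color_to_player):
    """Return the player the robot should pursue after a sequence of
    recognized ball placements; the last known color wins."""
    target = current_target
    for color in ball_events:
        if color in color_to_player:
            target = color_to_player[color]
    return target
```

The tracker's per-person IDs from YOLOv8 + OC-SORT would then be matched against this target so teb_local_planner can plan toward the right person.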
We developed a game in which participants find the wrong image while playing "Daruma-san ga koronda" (a Japanese version of "Red Light, Green Light"). Only one of the three Yamabico robots displays the wrong image, and participants input the number of that Yamabico into the first robot. However, the voice saying "Daruma-san ga koronda" plays continuously, and the robot can only move while the phrase is being spoken. The game is completed when the correct number is entered into the first robot while "Daruma-san ga koronda" is being played.
Nowadays, opportunities for robots and humans to collaborate are increasing rapidly. As a simple step toward this, we extended the functionality of the laboratory's Yamabico by installing an additional external sensor.
We created a robot that uses a camera to detect suspicious persons. Using YOLOv8, a person wearing both a mask and sunglasses at the same time is recognized as suspicious, and the robot approaches and warns them. Training was conducted on a total of about 200 images of ordinary and suspicious people.
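The "mask AND sunglasses on the same person" rule can be sketched by checking which accessory detections fall inside a person's bounding box (the (x1, y1, x2, y2) box format and center-containment test are assumptions about how the detections are post-processed):

```python
def center_inside(inner, outer):
    """True if inner box's center lies inside outer box (x1, y1, x2, y2)."""
    cx = (inner[0] + inner[2]) / 2
    cy = (inner[1] + inner[3]) / 2
    return outer[0] <= cx <= outer[2] and outer[1] <= cy <= outer[3]

def is_suspicious(person_box, detections):
    """detections: list of (label, box) pairs from the detector.
    Suspicious iff both a 'mask' and a 'sunglasses' detection sit
    inside the person's bounding box."""
    labels = {label for label, box in detections
              if label in ("mask", "sunglasses") and center_inside(box, person_box)}
    return labels == {"mask", "sunglasses"}
```

Associating accessories with a specific person box avoids false alarms when, say, one person's mask and another person's sunglasses appear in the same frame.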
The robot's camera detects the human skeleton and recognizes specific poses, which are used to control the robot. One pose makes the robot follow a person, so you can lead it to a specific location; similarly, another pose stops the robot's movement.