When you talk to someone in person, you communicate more than just words: every motion of your eyes and hands, your posture, and your facial expression carries information. Body language is deeply entrenched in our communication skills, but robots and animated characters lack it. Researchers at the University of Wisconsin-Madison, supported by NSF, have been developing algorithms and models to give these machines and virtual characters non-verbal communication abilities.
In one of their tests, the researchers had a robot ask participants to place specific items from a table into one of two boxes. When the robot shifted its gaze toward the item it was describing, participants found and sorted it faster than when the robot simply stared at them. In a second test, an animated character told a story set in China; when the character turned to look at a map of China at relevant moments, participants learned the story better.
The goal of this work is to improve the interface and interaction between humans and robots. Robots with such capabilities could prove especially useful in classrooms and hospitals, and anywhere else information is being conveyed from one person to another.