Soon, it’s going to be much easier for your computer to communicate with you. Researchers at Carnegie Mellon University’s Robotics Institute have trained a computer to translate the body language and movements of multiple people in real time, down to the pose of individual fingers.
“We communicate almost as much with the movement of our bodies as we do with our voice,” Yaser Sheikh, associate professor of robotics, said in a statement. “But computers are more or less blind to it.”
Interpreting body language and motion helps open up new ways for people and computers to interact. So, with the help of the Panoptic Studio, a two-story dome equipped with 500 video cameras, Sheikh and his team developed a method to track 2D human form and motion. They fed the computer the data they gathered to teach it to identify certain gestures and movements, a capability they call “real-time pose detection,” which works for a group of people simultaneously.
With “real-time pose detection,” computers not only track a person’s position but also understand what they are doing with their arms, legs, and head at each point in time.
To inspire collaboration and further research, Sheikh and his team have already released their code for both multi-person and hand pose estimation. They will also present their work at CVPR 2017, the Conference on Computer Vision and Pattern Recognition, later this month in Honolulu.
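For readers who want to tinker with the released code, its pose-estimation output is typically consumed as per-frame keypoints. The Python sketch below is a minimal, non-authoritative example assuming the flat (x, y, confidence) JSON layout the CMU code writes per frame; the file name and confidence threshold are illustrative assumptions, not part of the release.

```python
import json

# Hypothetical input: one frame of pose-estimation output, assuming the
# flat (x, y, confidence)-triple JSON layout. The file name is illustrative.
with open("frame_000000_keypoints.json") as f:
    frame = json.load(f)

# Each entry in "people" is one detected person in the frame.
for i, person in enumerate(frame.get("people", [])):
    flat = person["pose_keypoints_2d"]  # flat list: x1, y1, c1, x2, y2, c2, ...
    joints = [flat[j:j + 3] for j in range(0, len(flat), 3)]
    # Keep only joints the detector reports with reasonable confidence
    # (0.5 is an assumed threshold, not a value from the release).
    visible = [(x, y) for x, y, c in joints if c > 0.5]
    print(f"person {i}: {len(visible)} of {len(joints)} joints detected")
```

Running this over a sequence of frames would yield, per person, the trajectory of each joint over time, which is the raw material for the gesture interpretation the researchers describe.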