CMU researchers create a huge dome that can read body language

The Panoptic Studio is a new body scanner created by researchers at Carnegie Mellon University that will be used to understand body language in real situations. The scanner, which looks like something Doc Brown would stick Marty in to prevent him from committing fratricide, generates hundreds of videos of participants inside the massive dome interacting, talking, and arguing. The team has even released code to help programmers understand body positions in real time.

The dome contains 480 VGA cameras and 31 HD cameras as well as 10 Kinect sensors. It can create wireframe models of participants inside the dome. Why? To show computers what we are doing.

"We communicate almost as much with the movement of our bodies as we do with our voice," said associate professor Yaser Sheikh. "But computers are more or less blind to it."

In the video below, the researchers scanned a group haggling over an object. The computer can look at the various hand and head positions and, potentially, the verbal communication, and begin to understand when two people are angry, happy, or argumentative. It will also let the computer distinguish poses such as pointing, which means you can point to an object and the system will know what you're talking about.

Interestingly, the system can also be used to help patients with autism and dyslexia by deciphering their actions in real time. Finally, a system like this could be used in athletics, scanning multiple participants on a playing field and recording where every player was at any one time.

From the release:

Tracking multiple people in real time, particularly in social situations where they may be in contact with each other, presents a number of challenges. Simply using programs that track the pose of an individual does not work well when applied to each individual in a group, particularly when that group gets big. Sheikh and his colleagues took a bottom-up approach, which first localizes all the body parts in a scene (arms, legs, faces, etc.) and then associates those parts with particular people.
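The bottom-up idea described in the release can be sketched in a few lines: first collect every detected body part in the scene, then group parts into people. The real system learns part-affinity scores from image data; the snippet below substitutes a simple nearest-head distance heuristic purely for illustration. All names and coordinates here are hypothetical, not the researchers' actual code.

```python
# Toy sketch of bottom-up multi-person pose association.
# Assumption: a detector has already produced (part_name, x, y) tuples
# for every body part in the scene, without knowing whose part is whose.
from math import dist

# Hypothetical detections for a scene containing two people.
detections = [
    ("head", 0.0, 0.0), ("arm", 0.2, 0.5), ("leg", 0.1, 1.2),
    ("head", 5.0, 0.1), ("arm", 5.2, 0.6), ("leg", 4.9, 1.3),
]

def group_parts(detections, max_link=2.0):
    """Greedily attach each non-head part to the nearest head within
    max_link; this stands in for a learned part-affinity measure."""
    people = [[d] for d in detections if d[0] == "head"]
    for part in detections:
        if part[0] == "head":
            continue
        best = min(people, key=lambda p: dist(p[0][1:], part[1:]))
        if dist(best[0][1:], part[1:]) <= max_link:
            best.append(part)
    return people

for i, person in enumerate(group_parts(detections)):
    print(f"person {i}: {[p[0] for p in person]}")
```

Run on the sample detections, the heuristic correctly recovers two people with a head, an arm, and a leg each; the learned affinities in the actual system handle the hard cases this toy version cannot, such as overlapping or touching people.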

The Panoptic Studio isn't exactly ready for use at the Super Bowl or your local Denny's, but it seems to be a solid enough solution to tell what a few people are doing based on various point clouds of their appendages and actions. They've even been able to tell when you might be flipping somebody off.

"A single shot gives you 500 views of a person's hand, plus it automatically annotates the hand position," said researcher Hanbyul Joo. "Hands are too small to be annotated by most of our cameras, however, so for this study we used only 31 high-definition cameras, but still were able to build a massive data set."
