Emil Protalinski at The Next Web:
In short, Kinect captures the gestures, while machine learning and pattern recognition programming help interpret the meaning. The system is capable of capturing a given conversation from both sides: a deaf person who is showing signs and a person who is speaking. Visual signs are converted to written and spoken translations rendered in real time, while spoken words are turned into accurate visual signs.
Very cool work by Microsoft Research into sign language recognition technology. If it pans out, it could be a fantastic tool for the Deaf community. There’s more info on the technologies that power this system in a two-part series on the Microsoft Research Connections Blog.
Every now and again, you see the Kinect used in an experimental project like this, and it reminds you that it’s a technology filled with incredible potential; the gaming stuff is really just the tip of the iceberg.
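For the curious, here’s a rough sense of how gesture recognition of this kind can work. One classic pattern-recognition approach is to compare an incoming trajectory of tracked joints against pre-recorded sign templates using dynamic time warping. The sketch below is purely illustrative, with made-up templates and labels; it’s my own assumption of the general technique, not the actual code or method behind Microsoft’s system.

```python
# Minimal sketch: classify a hand trajectory by dynamic time warping (DTW)
# against labeled template gestures. The templates and labels here are
# invented for illustration; a Kinect skeleton stream would supply real
# per-frame joint coordinates.
import numpy as np

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """DTW distance between two trajectories of shape (frames, features)."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # skip a frame in a
                                 cost[i, j - 1],      # skip a frame in b
                                 cost[i - 1, j - 1])  # align the two frames
    return cost[n, m]

def classify(trajectory: np.ndarray, templates: dict) -> str:
    """Return the label of the template closest to the observed trajectory."""
    return min(templates, key=lambda label: dtw_distance(trajectory, templates[label]))

# Toy templates: two "signs" traced by a hand in 2-D.
templates = {
    "hello":  np.array([[0, 0], [1, 1], [2, 2], [3, 3]], dtype=float),
    "thanks": np.array([[0, 3], [1, 2], [2, 1], [3, 0]], dtype=float),
}

# A noisy, slightly off-tempo observation of the "thanks" gesture.
observed = np.array([[0.1, 2.9], [1.2, 2.1], [1.9, 1.1], [3.0, 0.2]])
print(classify(observed, templates))  # -> "thanks"
```

The appeal of DTW for this kind of problem is that it tolerates signs performed at different speeds, since the alignment stretches and compresses the trajectories before comparing them.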