Kinect and sign language translation

Emil Protalinski at The Next Web:

In short, Kinect captures the gestures, while machine learning and pattern recognition programming help interpret the meaning. The system is capable of capturing a given conversation from both sides: a deaf person who is showing signs and a person who is speaking. Visual signs are converted to written and spoken translation rendered in real-time while spoken words are turned into accurate visual signs.
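The flow the quote describes — gesture capture, a pattern-recognition step, then translation in both directions — might be sketched roughly like this. Everything here is hypothetical and illustrative (toy vocabulary, invented names); Microsoft Research's actual models and APIs are not public in the article:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class GestureFrame:
    """Simplified stand-in for Kinect skeletal-tracking data."""
    joint_code: tuple  # e.g. quantized hand-position coordinates

# Toy "trained model": maps a quantized gesture code to a word.
# A real system would use machine learning over continuous joint data.
SIGN_VOCABULARY = {
    (0, 1): "hello",
    (1, 0): "thank",
    (1, 1): "you",
}

def classify_gesture(frame: GestureFrame) -> str:
    """Pattern-recognition step: match the frame to a known sign."""
    return SIGN_VOCABULARY.get(frame.joint_code, "<unknown>")

def signs_to_text(frames: list) -> str:
    """One direction: visual signs -> written translation."""
    return " ".join(classify_gesture(f) for f in frames)

def text_to_signs(text: str) -> list:
    """Reverse direction: spoken/written words -> visual signs."""
    reverse = {word: code for code, word in SIGN_VOCABULARY.items()}
    return [reverse[w] for w in text.lower().split() if w in reverse]
```

In other words: two lookups wrapped around a shared vocabulary, which is the bidirectional part the article emphasizes — the interesting research is in making `classify_gesture` work on real, noisy skeletal data.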

Very cool work by Microsoft Research into sign language recognition technology. If it pans out, it could be a fantastic tool for the Deaf community. There’s more info on the technologies that power this system in a two-part series on the Microsoft Research Connections Blog.

Every now and again, you see the Kinect in an experimental project like this, and it reminds you it’s a technology filled with incredible potential; the gaming stuff is really just the tip of the iceberg.

Reckoner had its humble beginnings way back in June of 2013.

Founded by James Croft, along with Peter Wells and Anthony Agius, it grew into one of Australia’s most highly regarded and award-winning independent tech blogs.

With its uniquely Australian voice, Reckoner is committed to a “no-holds-barred” approach to its writing, beholden to no one but its audience. Its goal is to remain completely transparent and honour the trust it’s built with its faithful readership.
