Some industry analysts have predicted that voice will become the mainstream interface for interacting with technology. If that prediction comes true, deaf people risk being left behind: smart speakers can't understand people with hearing impairments who don't speak, and the responses from Google Assistant or Amazon Alexa go unheard.
A smart speaker with a screen can help by displaying information, but the interaction still falls short of a natural conversation.
To tackle this problem, Abhishek Singh, a developer who built Super Mario Bros. in augmented reality, has created a web app that enables deaf people to interact with smart speakers using sign language, the way they normally "speak."
The web app reads sign language through a camera and speaks the words aloud to a smart speaker. After the speaker responds, the app types the words out as text.
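In rough terms, that pipeline could be wired up in the browser with the standard Web Speech API: speak the recognized sign aloud, then listen for and transcribe the speaker's reply. The sketch below is one plausible way to do that, not Singh's actual code; classifySign() is a hypothetical stand-in for the gesture-recognition model.

```typescript
// Minimal sketch of the relay loop: sign -> spoken audio -> transcribed reply.
// classifySign() is a hypothetical hook into the gesture-recognition model.
declare function classifySign(): Promise<string>;

// Chrome exposes speech recognition under the webkit prefix.
declare const webkitSpeechRecognition: new () => {
  onresult: (event: {
    results: { [i: number]: { [j: number]: { transcript: string } } };
  }) => void;
  start(): void;
};

function speakToAssistant(text: string): Promise<void> {
  // Read the recognized sign aloud so the smart speaker can hear it.
  return new Promise((resolve) => {
    const utterance = new SpeechSynthesisUtterance(text);
    utterance.onend = () => resolve();
    window.speechSynthesis.speak(utterance);
  });
}

function transcribeReply(onText: (text: string) => void): void {
  // Listen for the assistant's spoken response and type it out on screen.
  const recognition = new webkitSpeechRecognition();
  recognition.onresult = (event) => onText(event.results[0][0].transcript);
  recognition.start();
}

async function relayOnce(): Promise<void> {
  const phrase = await classifySign();  // e.g. "what's the weather"
  await speakToAssistant(phrase);       // text-to-speech toward the speaker
  transcribeReply((reply) => {
    document.querySelector("#reply")!.textContent = reply;
  });
}
```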
“The project was a thought experiment inspired by observing a trend among companies of pushing voice-based assistants as a way to create instant, seamless interactions,” Singh told Fast Company.
“If these devices are to become a central way we interact with our homes or perform tasks, then some thought needs to be given to those who cannot hear or speak. Seamless design needs to be inclusive in nature.”
Singh trained the system using TensorFlow, a popular machine learning platform. He taught it what each sign looks like by performing the signs in front of his webcam over and over. He then added Google's text-to-speech capabilities to speak the recognized words to the speaker.
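The article doesn't include the code, but the training process described resembles a common TensorFlow.js pattern of transfer learning from webcam frames: run each frame through a pretrained network, store labeled activations, and classify new frames against them. The sketch below assumes that approach, using the published @tensorflow-models/mobilenet and @tensorflow-models/knn-classifier packages; the sign labels and frame counts are illustrative, not Singh's.

```typescript
import * as tf from "@tensorflow/tfjs";
import * as mobilenet from "@tensorflow-models/mobilenet";
import * as knnClassifier from "@tensorflow-models/knn-classifier";

async function trainAndRecognize(): Promise<void> {
  const net = await mobilenet.load();        // pretrained feature extractor
  const classifier = knnClassifier.create(); // lightweight classifier on top
  const webcam = await tf.data.webcam(
    document.querySelector("video") as HTMLVideoElement
  );

  // "Training": capture repeated examples of each sign from the webcam,
  // mirroring the over-and-over demonstrations described above.
  const signs = ["hello", "weather", "stop"]; // illustrative labels
  for (const sign of signs) {
    for (let i = 0; i < 50; i++) {
      const frame = await webcam.capture();
      const activation = net.infer(frame, true); // embedding vector
      classifier.addExample(activation, sign);
      frame.dispose();
    }
  }

  // Recognition: classify a fresh frame against the stored examples.
  const frame = await webcam.capture();
  const result = await classifier.predictClass(net.infer(frame, true));
  frame.dispose();
  console.log(result.label); // this label would feed the speech step sketched earlier
}
```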
Singh plans to open-source the code and share the full methodology behind the app. “So hopefully people can take this and build on it further or just be inspired to explore this problem space,” said Singh.
Singh's creation is only a first step, and for now it works only with Amazon's Alexa on Echo devices. Hopefully Amazon will be inspired to build similar features into its Echo Show.