How People With Down Syndrome Are Improving Google’s Voice Recognition Tech

The company's new project aims to make speech recognition software more accessible.

You’ve heard the phrases before: the questions asked and answered by those ubiquitous, futuristic devices. It’s all “OK, Google” and “Hey, Siri” out in the world of smartphone and voice assistant tech, where, by next year, half of all internet searches are widely expected to be conducted by voice command alone.

But the technology wasn’t built to understand people with Down syndrome.

Take the Google Home, for example: an engineer for the company revealed in the Project Understood introduction video that the device misses roughly 30 per cent of all words spoken by a person with Down syndrome.

An estimated 52 million Google Home devices have sold to date.
CoinUp via Getty Images

That means roughly every third word is misunderstood, an obvious problem given that most sentences, questions and commands run longer than three words.

Speech and language often present a challenge for people with Down syndrome, a condition associated with delays in language and in physical growth. Many people with the diagnosis have high-arched palates, small upper jaws, low muscle tone in the tongue and weak oral muscles, all of which can complicate communication.

This means they might have problems with breathing, pronunciation, and articulating certain sounds, owing to the timing and coordination of muscle movements that speech requires.

“By 2023, there will be 8 billion voice assistants in the world.”

“It’s tough when talking fast,” Matthew MacNeil, a 29-year-old man from Tillsonburg, Ont., told CBC News. “It doesn’t pick up my voice usually.”

And while its technology has struggled to listen, Google itself appears to have been paying attention, and is trying to address accessibility concerns. A year and a half ago, the company partnered with the Canadian Down Syndrome Society on “Project Understood,” an effort to teach its speech recognition systems to better understand people with different speech patterns.

Watch: An advocate at a congressional hearing demands better funding for Down syndrome research. Story continues below.


MacNeil is part of the new partnership, as are many other people with Down syndrome. Google has asked people with Down syndrome to “donate their voices,” which just means recording a variety of phrases and uploading them so the company’s algorithms can learn to understand different speech patterns.

When it began, the research group behind the project was working with people with amyotrophic lateral sclerosis (ALS). Eventually, they realized the approach could go much further than they’d initially imagined.

“It’s really a matter of having enough data,” Julie Cattiau, a project manager with Google AI, told CBC News. “The more examples [the algorithm] receives, the better it will get.”

The project is still in its early stages, but the hope is to build a database that can make voice technology more accessible for all users.

“Technologies that are activated by voice command are becoming a way of life,” Bob MacDonald, a technical program manager at Google, said in the video introducing Project Understood.

By 2023, there will be some eight billion voice assistants in the world, according to the video. Speech recognition is, to use a cliché, booming, and many companies have begun building the feature into their products.

There are voice-controlled thermostats, ceiling fans, precision cookers, video games and lighting systems. There are even cars with speech recognition, most of them built on technology from companies like Apple and Amazon, makers of the smart speakers that are so often in the headlines.

Smart home technology is becoming increasingly popular, but until speech recognition software improves, it will remain largely inaccessible to those with atypical speech.
Silas Bubolu via Getty Images

“For people where that doesn’t work, that must feel very disempowering, or like they’re being left behind,” said MacDonald.

As this technology becomes the norm, it’s important that its popularity be matched by an effort to make it more accessible. At present, it excludes not only people with atypical speech patterns, but also those with distinct accents; the technology has even been reported to struggle with recognizing “ethnic names.”

For people with Down syndrome, this sort of technology, if successfully improved, has the potential to produce a deeper, more meaningful effect than simply making it easier to check the weather. It can help people feel understood.
