Project Euphonia: Helping everyone be better understood

No one has ever collected large datasets of people whose speech is hard for others to understand. They're not used in training the speech recognition model. The game is to record things and then have it recognize things that you say that aren't in the training set.

Dimitri recorded 15,000 phrases. It wasn't obvious that this was going to work. He just sat there and he kept recording.

[Dimitri speaking]

You can see that it's possible to make a speech recognizer work for Dimitri. It should be possible to make it work for many people, even people who can't speak because they've lost the ability to speak. There's the work that Shanqing has done on voice utterances. [hum] From sounds alone you can communicate. But there might be other ways of communicating.

Most people with ALS end up using an on-screen keyboard and having to type each individual letter with their eyes. For me, communicating is sloooooooow. Steve might crack a joke, and it's related to something that happened a few minutes ago.

The idea is to create a tool so that Steve can train machine learning models himself to understand his facial expressions.

[basketball crowd cheering] [air horn sounds]

Michael: [laughs] It works!

To be able to laugh, to be able to cheer, to be able to boo: things that seem maybe superfluous, but actually are so core to being human.

I still think this is only the tip of the iceberg. We're not even scratching the surface yet of what is possible. If we can get speech recognizers to work with small numbers of people, we'll learn lessons which we can then combine to build something that really works for everyone.

♪♪

52 comments

  1. Awesome <3 Thanks Google!!! Go #Vegan for compassion !!! Watch What The Health on netflix! Namaste !

  2. OMG, Google you are a good company. This is so nice to help people with problems and don't go for the money. Thank you

  3. Seeing how anti free speech google is they're probably going to ban you if you try to say naughty words for "muh hate speech"

  4. There is actually already a large database of audio recordings of people who stutter. Here's the link. Please like this so that hopefully google can see it.
    https://fluency.talkbank.org/

  5. My grandfather's speech in his last days due to stroke really got a lot worse…we rarely understood what he wanted to say
    Since then I had imagined of such a technology but in India you've no opportunity to make dreams a reality

    Thanks to Google for giving future stroke patients and their families a ray of hope

  6. google Thanks from all people of pakistan ….hope you bring all your products outside usa and india someday

  7. [transliterated Arabic invocation] I seek refuge in Allah from the accursed Satan. In the name of Allah, the Most Gracious, the Most Merciful. There is no power and no strength except with Allah, the Most High, the Most Great.

  8. This is great. I think there’s a welcome space for people with Down Syndrome. Their speech sounds and grammatical patterns are predictable as a population category and typically very consistent as individuals. I assume that’s helpful in training speech output devices. Having speech output increases a person’s independence for socializing, work, taking transportation, as well as safety in the community. I hope you’d consider working with this group.

  9. You know what’s fun? Robbing and beating rich google employees on cannery row when they show up with their half million dollar sports cars. So much fun!

  10. I watched this during the keynote and I teared up. I hope this project helps a lot more people. I am so supportive of this. Thanks Google. 🙂

  11. Amazing work! Speech recognition should absolutely be made to work for people with any kind of speech impediment from multiple sclerosis to being scottish.

  12. This was the stand out feature of I/O for me, giving people back their voice who have had it taken from them.

    If anyone you know has problems with their speech please go to g.co/euphonia

  13. Thank You for doing this Google! This is absolutely awesome!!!
    If you need more test subjects (users) during development phase, please let me know. I'm nonverbal and I would be a great candidate for this!
    When you are ready for Alpha or Beta testing, I know a large group of special needs friends at our local university would gladly help out during the QA phase.

  14. OMG it's so amazing! I wonder if one can build a device which will understand what you're saying but also provide a feedback if you sound a little off. Such device would be very helpful for deaf people to learn how to sound "normal"

  15. [translated from Russian] Magnificent! The only thing is, in our country he would surely have drunk himself to death long ago.
