Let's talk about voice

  1. Let’s talk about voice
  2. Hello world! • Started in 2017 • Over 20 years' experience in web content and social media • Social media content, strategy, advertising, training and campaign management • Video editing and motion graphics • Chatbots and voice technology
  3. What we’ll cover • What is voice technology • How does voice technology work? • Why should we use it? • Who’s using it well? • Best practice in voice design • Practical demonstrations
  4. What is voice technology? • Amazon Alexa • Google Home • Microsoft Cortana • Apple Siri • Samsung Bixby
  5. Amazon Alexa devices
  6. Amazon Alexa devices
  7. Google Home devices
  8. Microsoft Cortana devices
  9. Apple Siri
  10. Samsung Bixby
  11. A little bit of history • Alan Turing was a pioneer of modern computing • He devised the Turing Test in 1950
  12. MIT AI Laboratory • Professor Marvin Minsky set up the research group in the early 1960s to explore artificial intelligence, machine learning and natural language processing
  13. ELIZA • One major project that emerged from the MIT AI Laboratory was ELIZA in 1964 • Essentially this was an early chatbot where individuals had a conversation with a computer • They were not told they were talking to a machine
  14. ELIZA, meet DOCTOR • ELIZA simulated conversations using pattern matching and substitution methodology, but did not understand the context of words • One of the most popular scripts ELIZA ran was DOCTOR, which simulated a psychotherapist
  15. ELIZA is tested • ELIZA attempted the Turing Test • It failed
  16. Back to the future • Bringing things up to date, AI and natural language processing can now understand context • The Turing Test has still not been passed, but we are getting closer
  17. Google Duplex • Google recently demonstrated its Duplex technology, which links voice technology to cloud services such as Google Calendar • A sophisticated recurrent neural network built in TensorFlow can process complex interactions and understand context
  18. Making progress • Duplex is said to be effective in 80% of situations, so it doesn't yet pass the Turing Test • Deep learning expert Andrew Ng predicts that once speech recognition is 99% accurate, voice will be the primary way we interact with computers
  19. The final 4% • Estimates suggest we are at around 95% currently • The final 4% is very challenging!
  20. Adding functionality • Amazon Alexa and Google Home devices can add new functionality via Skills and Actions • These give the devices new capabilities, and anyone can build them
  21. Powerfully simple • It is fairly quick and simple to create content for these devices • There are now over 40,000 Alexa Skills available with an active developer community
  22. How does voice technology work? • Voice technology uses Natural Language Processing to understand and interpret voice commands • This is underpinned by machine learning techniques
  23. Voice technology in action • Device listens for invocation → User gives wake word → Device returns welcome message → User gives intent → Device returns response
  24. Intents • An intent is used to trigger a response • For example, a Skill / Action could ask where you want to go on holiday: New York, Paris or Tokyo? • Each of these choices would be a separate intent and produce a different response
  25. Synonyms • Intents are really powerful and can include synonyms, so if users have a different name for something this can be handled gracefully • E.g. pavement / sidewalk • AI is used with NLP so phrases don't have to be exact
  26. Slots • You can also add slots to intents that request specific data be captured in a set order • This is particularly useful for retail / ecommerce
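
The three slides above describe intents, synonyms and slots in outline. As a rough illustration, an Alexa-style interaction model can be sketched as a Python dict along the following lines (the intent, slot and type names are illustrative assumptions, not taken from the deck):

      # Rough sketch of an Alexa-style interaction model as a Python dict,
      # reusing the holiday example above (names are illustrative assumptions)
      interaction_model = {
          "intents": [
              {
                  "name": "BookHolidayIntent",
                  # Sample utterances: spoken phrases that map to this intent;
                  # NLP means the user's phrasing does not have to match exactly
                  "samples": [
                      "I want to go to {destination}",
                      "book me a holiday to {destination}",
                  ],
                  # A slot captures a specific piece of data from the user
                  "slots": [{"name": "destination", "type": "DESTINATION_LIST"}],
              }
          ],
          "types": [
              {
                  "name": "DESTINATION_LIST",
                  "values": [
                      # Synonyms handle different names for the same thing,
                      # like the pavement / sidewalk example
                      {"name": {"value": "New York", "synonyms": ["NYC", "the Big Apple"]}},
                      {"name": {"value": "Paris"}},
                      {"name": {"value": "Tokyo"}},
                  ],
              }
          ],
      }
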
  27. Explicit and implicit invocations • Explicit invocation: "Alexa, open Coffee Wizard" • Implicit invocation: "Alexa, recommend a coffee for a sunny day"
  28. Discoverability • It's not always appropriate to use explicit invocations, as they can feel less conversational and more mechanistic • Alexa uses HypRank, a neural network, to rank Skills using natural language
  29. HypRank • It's not always appropriate to use explicit invocations, as they can feel less conversational and more mechanistic • Alexa uses HypRank, a neural network that uses contextual signals to rank Skills using natural language
  30. HypRank overview
  31. A few stats • Voice technology will be a $601 million industry by 2019 (Source: Technavio) • Over 21 million smart speakers in the US by 2020 (Source: Activate) • Google Assistant is now available on over 95% of Android devices and the majority of iOS devices (Source: Alpine AI)
  32. Creating Skills and Actions • Amazon and Google provide developer-friendly tools for building content • AWS with Lambda • Dialogflow with Firebase • These work with a variety of languages (Node.js, Java, Python, Go, etc.)
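
To make this concrete, here is a rough sketch of a raw AWS Lambda handler in Python for a hypothetical Skill, using the Alexa Skills Kit request and response JSON shapes; the "Coffee Wizard" name, intent and slot are assumptions carried over from the invocation example above, not from the deck:

      # Rough sketch of an AWS Lambda handler for a hypothetical "Coffee Wizard"
      # Alexa Skill (intent and slot names are illustrative assumptions)

      def build_response(text, end_session=True):
          """Wrap plain text in the Alexa Skills Kit response envelope."""
          return {
              "version": "1.0",
              "response": {
                  "outputSpeech": {"type": "PlainText", "text": text},
                  "shouldEndSession": end_session,
              },
          }

      def lambda_handler(event, context):
          request = event["request"]

          # LaunchRequest fires on an explicit invocation: "Alexa, open Coffee Wizard"
          if request["type"] == "LaunchRequest":
              return build_response("Welcome to Coffee Wizard. What would you like?",
                                    end_session=False)

          # IntentRequest carries the matched intent and any captured slot values
          if request["type"] == "IntentRequest":
              intent = request["intent"]
              if intent["name"] == "RecommendCoffeeIntent":
                  slots = intent.get("slots", {})
                  weather = slots.get("weather", {}).get("value", "any")
                  return build_response("For a " + weather + " day, try a cold brew.")

          return build_response("Sorry, I didn't catch that.")
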
  33. Using SSML • SSML (Speech Synthesis Markup Language) can be used to control the pronunciation, speed and pitch of phrases • For example, you can make Alexa pause, whisper or place emphasis on specific words
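
As a small illustration of the pause, whisper and emphasis effects mentioned above, an SSML response for Alexa might be built like this (a sketch only; it reuses the response envelope from the Lambda example, and the spoken wording is made up):

      # Rough sketch of an SSML response for Alexa using break, emphasis and
      # whispered-effect tags (the spoken text is made up for illustration)
      ssml = (
          "<speak>"
          "Here is your coffee recommendation. "
          '<break time="1s"/>'
          '<emphasis level="strong">Cold brew</emphasis> is perfect today. '
          '<amazon:effect name="whispered">It is also my favourite.</amazon:effect>'
          "</speak>"
      )

      response = {
          "version": "1.0",
          "response": {
              "outputSpeech": {"type": "SSML", "ssml": ssml},
              "shouldEndSession": True,
          },
      }
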
  34. Analytics • Both platforms offer detailed performance measurement tools to help monitor usage
  35. Ambient computing • Ubiquitous computing • Physical interface less vital • Cloud services • 5G rollout
  36. Entering a new era • Desktop era
  37. Entering a new era • Desktop era → Mobile era
  38. Entering a new era • Desktop era → Mobile era → Voice era
  39. Who's using it well? • BBC • Netflix • Ocado • HMRC
  40. Thanks for listening • I hope you have found this session helpful • Please visit dotkumo.com to learn more