Google I/O is a three-day developer event held near Google's main Mountain View offices. Most of the talks are available online for viewing – and there are a lot of them! Some provide an overview, while others go into greater depth.
One video that I particularly recommend is the 10-minute keynote summary. It is a great highlight reel of the “wow” announcements, like maps being combined with AR so you can look around the street where you are standing and get annotations on local stores and directions on which way to turn. Google Lens also showed image recognition, with product information coming up just by looking at items near you. The 10-minute video has so much in it that I recommend you just watch it – I am not going to summarize it all here.
This post contains my personal observations on areas of interest from sessions I attended. Note: there was one slot where I had 5 sessions I wanted to attend at the same time!!! I need to go back and watch a few more sessions myself. So the following is not exhaustive. Also, these are my personal notes with a personal bias towards e-commerce, not official announcements reviewed by the teams, so please excuse any errors. My goal is to give you a taste to help you decide whether to investigate further.
- 500,000,000 devices already support Google Assistant.
- 1,000,000 actions already exist. (I don’t know exactly what this includes.)
- Google Assistant is supported in more than 25 locales, with more coming.
- Google Duplex was presented in the keynote, showing Google Assistant making a phone call and using a human-sounding voice to make a booking.
- This created a bit of a tweet storm about “tricking” people into thinking they were talking to a person, and concerns that it could make automated phone spam even worse. This follow up post might help clarify.
- To me it is a natural progression that brands will want their own “voice” to represent their brand instead of the default Google voice (e.g. a Ronald McDonald clown voice in a McDonald's app).
- Vertical programs, such as “smart home”, continue to be a focus.
12 Design Tips
- There was a great session sharing a dozen design tips for building a Google Assistant application.
- 1. Make conversation as simple as possible, but not too simple.
- 2. Add personality.
- 3. Build a robust grammar by providing multiple phrases and synonyms. The assistant uses these as a starting point, but it can learn new phrases by itself over time.
- 4. Think beyond the happy path – worry about error conditions. If the user’s input does not match any expected input, ask for the information a couple more times, in a different way each time, in case the user does not understand what you are asking for. If there is no input (silence), think about the fallback intent. Worry about helping someone to the next step.
- 5. Evolve with the user. Remember previous interactions with the user (e.g. name, preferred colors during searches, etc.) and store them for later access. Be their personal assistant.
- 6. Enhance actions with sound and media. Lots of tools are available: a media player, text-to-speech algorithms, some control over the voice sound, background sound effects (a sound library is provided as well), etc.
- 7. Adjust to your device. A device could be voice only, visual only, or a mix. Remember even on a phone (sound and screen), the sound could be muted. If a screen is available, use graphical representations (e.g. charts) and display less text.
- 8. Test with users. Alpha testing (coming soon) allows up to 20 users to act as alpha testers. Alpha releases require fewer approval steps from Google, so you can iterate faster.
- 9. Use analytics to make your action better. Monitor the provided traffic, error, and retention metrics; use session flow statistics to see what users did; access user session transcripts to analyze what users are really trying to do. Learn and adjust.
- 10. Re-engage your users. Assistant supports “daily updates” and “push notifications”. Make updates personalized, descriptive, but don’t spam your users.
- 11. Make your action easy to discover. Tools include action links (deep URLs into your assistant app for sharing on Twitter etc.), the assistant directory with browsing (categories), and built-in intents (you can register your app against built-in intents relevant to you, to help the assistant find your app even when the user did not explicitly ask for it). Built-in intents will continue to grow over time.
- 12. Be helpful everywhere your users are. E.g. support multiple languages.
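Tip 4 above (handling the unhappy path) can be sketched in code. This is an illustrative, standalone version, not the Actions on Google API: the prompt texts, function name, and response shape are all my own assumptions. The idea is to escalate through differently-worded reprompts before exiting gracefully.

```javascript
// Sketch of tip 4: reprompt in different ways each time the user's
// input fails to match, then exit gracefully rather than loop forever.
// All names and prompt texts here are hypothetical.
const REPROMPTS = [
  "Sorry, what size did you want?",
  "I didn't catch that. Small, medium, or large?",
  "You can say a size like 'medium', or say 'cancel' to stop.",
];

function handleNoMatch(attempt) {
  // After the reprompts are exhausted, help the user to a next step
  // (here, a polite exit) instead of repeating the same question.
  if (attempt >= REPROMPTS.length) {
    return { prompt: "Let's try this another time. Goodbye!", endConversation: true };
  }
  return { prompt: REPROMPTS[attempt], endConversation: false };
}
```

In a real action the attempt counter would live in conversation state, and the final fallback might offer help or hand off to another surface rather than ending outright.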
More Design Ideas
- Design for voice is different from visual design, but there is still a worthwhile design process.
- Pick the persona of your app (and brand). Humorous? Laid-back surfer? Refined?
- Then think about what they would say, what adjectives they would use, etc.
- Choose a voice, synthesized or recorded. There are more quality synthesized voices than before. You can also use recorded voice clips, but these are less flexible as they cannot be automatically translated to different languages the way synthetic voices can.
- Think of voice like a stock ticker – a constant flow going past. It can be hard to remember all of what is said; the listener is focusing on the current part of the text. Visual displays can show everything at once, letting the user’s eye skim over it to discover and browse – adjust your interactions to the input/output device.
- To design a flow, start with simple real world conversation. Then model it.
- A new site with design guidance is now available at actions.google.com/design.
- There is growing demand for personalized, real-time feedback, everywhere (what is open now, available now, how far away, how do I get there, …). People are less interested in static information, more interested in getting tasks done.
- Understand the daily cycle of users: streamline people’s mornings; at night, people relax and look for entertainment.
- “Content actions”, such as news articles, recipes, and podcasts, can already be marked up via structured data (based on Schema.org markup), resulting in better assistant interactions. Expect more over time.
- “App actions” allows applications to link into assistant experiences.
- “Conversational actions” can be defined in dialog flows. A dialog flow maps speech to intents (concrete actions an application can take).
- “Earcon” was a new term for me. It’s like an icon, but is a distinctive sound (like when your mobile phone receives a message).
- “Routines” allow users to define a series of actions to take based on a trigger phrase like “Good morning” (turns on lights, plays news, turns on coffee maker).
- Good assistant applications now need to support a range of modes from voice only to display only. APIs exist to check the capabilities of the device, e.g. using conv.surface.capabilities to determine if the assistant has access to a screen, which can then be used to open a new surface on a screen.
- Defined terminology to help with classifications: voice only (e.g. Google Home), voice forward (e.g. screen in car dash), intermodal (e.g. mobile phone), visual only (muted phone, watch, …).
- Can now expose application to alpha testers (up to 20 nominated people).
- Beta testing is also supported, which can be useful for releasing a new app at a controlled time (e.g. aligned with a public event). Once beta tested, a product can go live at the time specified by the developer (rather than as soon as it passes Google testing).
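The capability check mentioned above can be sketched as follows. The real check in the Actions on Google Node.js client is `conv.surface.capabilities.has(...)`; this standalone version models capabilities as a plain array of strings, and the response shape is my own illustrative assumption.

```javascript
// Adapt a response to the device mode (voice only vs. screen available).
// The capability identifier strings match the Actions on Google ones;
// the buildResponse function and its return shape are hypothetical.
const SCREEN = 'actions.capability.SCREEN_OUTPUT';
const AUDIO = 'actions.capability.AUDIO_OUTPUT';

function buildResponse(capabilities, results) {
  if (capabilities.includes(SCREEN)) {
    // Visual-capable device: show a list and keep spoken text short.
    return { speech: `Here are ${results.length} results.`, list: results };
  }
  // Voice-only device: read out a concise summary instead.
  return { speech: `I found ${results.length} results. The top one is ${results[0]}.` };
}
```

The same fork covers the "voice forward" and "intermodal" cases: check what the surface supports, then decide how much to speak versus display.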
- Transactions (purchases) on Google Assistant (e.g. via phone) was launched November 2017, now in 7 countries, more coming.
- Typical transaction flow: build order, propose and confirm, send updates, with helper actions around location and account linking.
- Current transaction types include making a reservation/booking, purchasing goods and services with merchant’s payment system (e.g. Starbucks allows you to put money on account), and paying with Google Pay (directly linked into the assistant).
- An example flow would be to create a booking, then confirm the details with the user concisely before placing the booking and taking money.
- Can send updates as order moves through the flow (via reminders, push notifications, etc).
- Write your app to not start such flows if the device does not support transactions! (There is an API to check.)
- There is an API to get delivery address.
- Create a cart containing products, submit as a proposed order.
- Think about follow on actions such as reordering, cancelling order, upgrading product, suggesting accessories after customer has used product, etc.
- Smart screens are hitting the market in July 2018 from multiple vendors. Same APIs as phone.
- Transactions on voice are launching this week in the USA, with more countries in the coming weeks.
- Google has been measuring the friction of transferring the user to a phone to confirm account linking – 90% drop off!!! So Google is working on creating new accounts with sign-in via voice. (An API is provided; you get a token to validate.) This one was surprising for me personally.
- Focus areas: Identity (so you know who user is), payments (higher level trust of identity), receipts (records of payments made), helpers (reducing effort for developers).
- Fandango just launched an experience including payments.
- Developer preview of new class of transactions, “digital goods”. Same as in-app purchases on mobile, or upgrading your Spotify subscription. Similar flow to Android Play.
- There are many partners already leveraging payments in this way.
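The transaction flow above (check device support, build a cart, propose the order, confirm before charging) can be sketched roughly like this. All object shapes and function names here are my own assumptions for illustration, not the real Actions on Google transactions API.

```javascript
// Illustrative transaction flow: guard on device support, build a
// proposed order from the cart, then confirm concisely with the user
// before "taking money". Names and shapes are hypothetical.
function runTransaction(device, cart, confirmOrder) {
  if (!device.supportsTransactions) {
    // Per the session advice: never start the flow on unsupported devices.
    return { status: 'UNSUPPORTED_DEVICE' };
  }
  const proposedOrder = {
    items: cart.items,
    total: cart.items.reduce((sum, item) => sum + item.price, 0),
  };
  // Confirm the details with the user before placing the order.
  if (!confirmOrder(proposedOrder)) {
    return { status: 'CANCELLED' };
  }
  return { status: 'CONFIRMED', order: proposedOrder };
}
```

In a real action the confirmation step is a conversational turn (and payment goes through the merchant's system or Google Pay), and order updates would follow via push notifications as the order moves through fulfillment.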
Assistant app directory
- Structured data is already used by Google for marking up recipes, podcasts, etc. Structured data is also being used for the action directory.
- A structured data testing tool is available from Google at search.google.com/structured-data/testing-tool.
- You get a page created automatically when your app is accepted, but you can take ownership of your page to customize and enrich it, e.g. by adding a photo. You can get more analytics data there too, as well as user ratings of your app.
- Account linking is supported for authentication, and transactions for in-app payments.
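To make the structured data point concrete, here is a minimal Schema.org-style recipe markup sketch, built as a JavaScript object and serialized to JSON-LD. The property set is abbreviated and the recipe itself is made up; Google's structured data documentation lists the full required properties per content type.

```javascript
// Minimal JSON-LD sketch of Schema.org recipe markup (abbreviated).
const recipeMarkup = {
  '@context': 'https://schema.org',
  '@type': 'Recipe',
  name: 'Simple Pancakes',
  recipeIngredient: ['1 cup flour', '1 egg', '1 cup milk'],
  recipeInstructions: 'Mix the ingredients and fry in a hot pan.',
};

// This JSON would be embedded in the page inside a
// <script type="application/ld+json"> tag.
const jsonLd = JSON.stringify(recipeMarkup, null, 2);
```

Markup like this is what the structured data testing tool mentioned above validates, and it is how content actions (recipes, podcasts, news) surface in the assistant.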
This is long enough for a first post, so please keep your eyes open for follow up posts on topics such as AMP, web standards, VR/AR, Google Pay, and Google Cloud.