Google Speakable Annotations With Structured Data For Publishers

Jul 25, 2018 • 7:38 am | Filed Under Google Search Engine Optimization
 


Google announced new structured data annotations named Speakable yesterday. These annotations let you mark up sections of your news content that "are best suited for audio playback using text-to-speech (TTS)." Google wants the Google Assistant and Google Home devices to be able to speak the news to those who ask for it. So Google enabled publishers to "mark up sections of a news article that are most relevant to be read aloud by the Google Assistant."

The technical details can be found on both Schema.org and Google Developer docs. Google explains:

Adding markup allows search engines and other applications to identify content to read aloud on Google Assistant-enabled devices using TTS. Webpages with speakable structured data can use the Google Assistant to distribute the content through new channels and reach a wider base of users.

The Google Assistant uses speakable structured data to answer topical news queries on smart speaker devices. When users ask for news about a specific topic, the Google Assistant returns up to three articles from around the web and supports audio playback using TTS for sections in the article with speakable structured data. When the Google Assistant reads aloud a speakable section, it attributes the source and sends the full article URL to the user's mobile device through the Google Assistant app.
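In practice, the markup described above is a SpeakableSpecification nested inside the page's Article or WebPage JSON-LD, per the Schema.org and Google developer docs. A minimal sketch (the page name, URL, and CSS selector IDs here are hypothetical placeholders, not from Google's docs):

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "WebPage",
  "name": "Speakable Markup Example",
  "speakable": {
    "@type": "SpeakableSpecification",
    "cssSelector": ["#article-summary", "#key-points"]
  },
  "url": "https://www.example.com/news/speakable-example"
}
</script>
```

The cssSelector values point at the elements on the page whose text should be read aloud; the spec also allows an xpath property as an alternative way to select those sections.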

I really should give it a try here and see what happens, but right now it is still very much in beta. Google wrote, "This feature is in beta and subject to change. We are currently developing this feature and you may see changes in requirements or guidelines."

It is supported for sites like this one, which are based in the US and publish in English. So I think I will give it a try soon.

As I explained at Search Engine Land yesterday:

Here are the technical guidelines:

  • Don't add speakable structured data to content that may sound confusing in voice-only and voice-forward situations, like datelines (location where the story was reported), photo captions, or source attributions.
  • Rather than highlighting an entire article with speakable structured data, focus on key points. This allows listeners to get an idea of the story and not have the TTS readout cut off important details.

Here are the content guidelines:
  • Content indicated by speakable structured data should have concise headlines and/or summaries that provide users with comprehensible and useful information.
  • If you include the top of the story in speakable structured data, we suggest that you rewrite the top of the story to break up information into individual sentences so that it reads more clearly for TTS.
  • For optimal audio user experiences, we recommend around 20-30 seconds of content per section of speakable structured data, or roughly two to three sentences.
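That 20-30 second guideline can be sanity-checked before publishing. A rough heuristic, assuming an average TTS speaking rate of about 150 words per minute (the rate and the helper function are my own assumptions, not anything Google documents):

```python
def estimated_tts_seconds(text: str, words_per_minute: int = 150) -> float:
    """Rough estimate of how long a TTS engine would take to read `text`.

    Assumes ~150 words per minute, a commonly cited average speaking rate;
    real TTS timing varies by engine and voice.
    """
    word_count = len(text.split())
    return word_count / words_per_minute * 60


# A candidate speakable section of two to three short sentences:
section = (
    "Google announced Speakable structured data for publishers. "
    "The markup flags article sections suited for audio playback. "
    "The Google Assistant reads those sections aloud on smart speakers."
)
print(f"{estimated_tts_seconds(section):.1f}s")  # well under the 30-second ceiling
```

If a section estimates well over 30 seconds, that is a hint to trim it or split it into separate speakable sections.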

Google has been pushing publishers to do more with the Google Assistant recently, so I guess this is another push in that direction.

How would this work for end users? Google said:

When people ask the Google Assistant -- "Hey Google, what's the latest news on NASA?", the Google Assistant responds with an excerpt from a news article and the name of the news organization. Then the Google Assistant asks if the user would like to hear another news article and also sends the relevant links to the user's mobile device.

Forum discussion at Twitter.
