Last month, Google introduced the option for search questions to be answered “selfie-style” directly on the Search Engine Result Pages (SERPs). The feature is set to launch in 2018 and widens the range of possibilities for Google and other search engines to improve the user experience of voice search, and of conversational, spoken human-machine interaction in general.

Google questions answered selfie-style (Source: Google blog)

New Google Search feature: Questions answered directly in videos

Google interviewed celebrities such as Will Ferrell, Tracee Ellis Ross and Priyanka Chopra to answer their most frequently asked questions, which were:

  • How many languages does Priyanka Chopra speak?
  • Can Tracee Ellis Ross sing?
  • Can Will Ferrell really play the drums?

Google’s announcement states that this feature will be rolled out in 2018, beginning with mobile devices.

Voice Search mirrors real-life interactions

Voice Search will continue its strong growth throughout the upcoming year. As voice interaction is easier and more natural than the still more common method of typing questions and reading answers, ComScore estimates that by 2020, Voice Search will account for 50% of all searches. By May 2016, 20% of searches on the Google App were already carried out via voice, while “consumers turn to their devices in I-want-to-know, I-want-to-go, I-want-to-do, and I-want-to-buy micro-moments. […] And that’s just a preview of what’s to come”, as Google itself explains.

By providing original, highly credible answers, Google will boost user engagement within Google Search. If the answer’s original source, such as the particular celebrity or a business owner, responds to the question in place of Google Assistant, Voice Search further mirrors real interaction in any place at any time.

Google’s AI is already able to identify multiple visual objects, and its Machine Learning algorithms are continuously optimizing this capability. Furthermore, Google will collect increasing amounts of data and insight through its Google Home Assistant. Given such rapid technical development, Google is likely to offer new features that integrate video-recorded answers into the search results.

Respond to customers’ search queries with a video

It is likely that Google will aim for indexing voice and audiovisual snippets to provide them as answers for search queries where applicable. These snippets might be sourced from self-created recordings by users or businesses themselves, or via voice and visual recognition from videos already listed in Google’s video platform, YouTube, which already offers a transcription feature.

Businesses will thus have the opportunity to present themselves and personally address the questions of potential customers. In August 2017, Google introduced the Q & A feature for Google My Business local packs, where business owners can answer questions from customers and potential customers with written responses that are displayed directly in Google Search’s local pack. Furthermore, via Google Posts, restaurants can broadcast their daily menus, retail stores can present new products, and so on.

Additionally, in October 2017, Google introduced Dialogflow. Dialogflow allows developers to build their own conversational apps, which users can then add to their smart speaker or voice assistant. In the future, there could even be a platform where you upload your Q & A videos, from which Google creates audiovisual snippets to add to Google Search.
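To illustrate how a business could answer customer questions through such a conversational app, here is a minimal sketch of a Dialogflow (v2) webhook fulfillment handler in Python. The intent names and answer texts are hypothetical examples for a restaurant; a real deployment would expose this logic behind an HTTPS endpoint registered in the Dialogflow console.

```python
import json

# Hypothetical intent-to-answer mapping for a restaurant's Q & A agent.
ANSWERS = {
    "menu.today": "Today's menu features grilled salmon and a seasonal salad.",
    "opening.hours": "We are open daily from 11 am to 10 pm.",
}

def handle_webhook(request_body: dict) -> dict:
    """Map the intent matched by Dialogflow to a spoken/written answer."""
    intent = (
        request_body.get("queryResult", {})
        .get("intent", {})
        .get("displayName", "")
    )
    text = ANSWERS.get(intent, "Sorry, I don't have an answer for that yet.")
    # Dialogflow reads `fulfillmentText` back to the user on a smart speaker
    # or renders it as a chat reply.
    return {"fulfillmentText": text}

# Example webhook payload, trimmed to the fields used above:
sample = {"queryResult": {"intent": {"displayName": "menu.today"}}}
print(json.dumps(handle_webhook(sample)))
```

The same handler could serve both a Google Assistant action and a website chat widget, since Dialogflow routes all matched intents through one webhook.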

Ultimately, the future of search is led by new, highly enriched snippets of information for users to engage with, such as video and audio. As technical development continues at a rapid pace, we at OMMAX monitor such state-of-the-art innovations and keep you updated accordingly.