1. AI Will Support Content Creation but Will Not Replace It
Aleyda Solís shared ideas on how to use ChatGPT for SEO tasks, including generating FAQ questions, classifying keywords, and drafting meta descriptions. Nevertheless, she points out its limitations: “Relying completely on it will certainly generate issues sooner than later.”
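To make one of these tasks concrete, below is a minimal sketch of how drafting meta descriptions could be scripted against the OpenAI API. The model choice, prompt wording, and the draft_meta_description helper are illustrative assumptions, not Aleyda Solís's actual workflow, and the output is meant as a first draft for a human editor, not publish-ready copy.

```python
# Minimal sketch: drafting meta descriptions with the OpenAI API.
# Prompt, model choice, and helper name are illustrative assumptions.
# Requires the `openai` package and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def draft_meta_description(page_title: str, page_summary: str) -> str:
    """Ask the model for a first-draft meta description (to be edited by a human)."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumed model; any chat-capable model works
        messages=[
            {"role": "system",
             "content": "You write concise SEO meta descriptions under 155 characters."},
            {"role": "user",
             "content": f"Title: {page_title}\nSummary: {page_summary}\n"
                        "Write one meta description draft."},
        ],
        temperature=0.7,
    )
    return response.choices[0].message.content.strip()

if __name__ == "__main__":
    print(draft_meta_description(
        "AI and SEO in 2023",
        "Five predictions on how AI-generated content will affect organic search.",
    ))
```

The division of labor is the point here: the model supplies a draft, and the human decides what ships, which is exactly the limitation quoted above.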
Google recently released a statement that AI-generated content is not bad per se, as long as it satisfies the user intent, provides valuable and valid information, and demonstrates strong expertise on a topic. As I see it, the impact of AI on content generation will therefore not follow the golden 80:20 rule; a lot of human optimization is and will be needed, especially considering that Google just added Experience to E-A-T.
2. Google E-E-A-T Will Be a Firewall
As Lily Ray states, 15-20% of searches on Google are brand new at any given time, and AI will struggle to keep up with real-time knowledge. With Google having just reworked E-E-A-T as a content guideline, the two E’s, Experience and Expertise, will be very hard for AI to satisfy, flanked by Authoritativeness and Trustworthiness in my view.
In addition, I love how Eli Schwartz put it: “Prior to the advent of AI content, the world was already drowning in endless "content" no one would ever read. […] Soon enough smart content marketers are going to "disrupt" the content industry by creating hand-crafted content that is custom made for an audience. In the offline world, mass-produced disposable goods didn't displace fine hand-crafted alternatives, making them even more valuable.”
From a search engine perspective, this would mean that information-rich and valid content is upvoted, independent of whether it is human- or AI-generated.
3. When It Comes to Search, AI Will Be Hindered by Its Own Potential
With more and more people potentially using AI tools for content creation, the output will become increasingly similar, requiring in-depth training of proprietary language models to avoid duplicate content. Especially considering that tools like ChatGPT will be monetized going forward, the big question is how the ROI for businesses will compare to the tailor-made approaches Eli Schwartz mentions.
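To illustrate the duplicate-content risk, here is a small sketch that flags suspiciously similar drafts before publishing, using plain TF-IDF cosine similarity rather than any particular SEO tool; the 0.8 threshold and the helper name are arbitrary assumptions.

```python
# Sketch: flagging near-duplicate drafts with TF-IDF cosine similarity.
# scikit-learn is used for convenience; the 0.8 threshold is an arbitrary assumption.
from itertools import combinations
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def flag_near_duplicates(drafts: list[str], threshold: float = 0.8) -> list[tuple[int, int, float]]:
    """Return pairs of draft indices whose cosine similarity exceeds the threshold."""
    matrix = TfidfVectorizer(stop_words="english").fit_transform(drafts)
    sims = cosine_similarity(matrix)
    return [(i, j, float(sims[i, j]))
            for i, j in combinations(range(len(drafts)), 2)
            if sims[i, j] >= threshold]

drafts = [
    "Our guide explains how AI tools can support SEO content creation.",
    "This guide explains how AI tools can support SEO content creation.",
    "A hands-on review of hiking boots for alpine terrain.",
]
print(flag_near_duplicates(drafts))  # the first two drafts should be flagged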
4. Google Will Flex Its Muscles
Google will not just stand still and watch. After declaring code red, it will put all the power and data it has into fighting back. With ChatGPT not being “production ready” and the comparable Google product not yet released, it will be very interesting to see how Microsoft and Google adapt to the learnings from ChatGPT usage.
Christoph C. Cemper published an amazing study on the detection and watermarking of AI-generated content. One of its key statements is: “It really looks like Google is able to detect the difference between GPT3 and human content. The reason for that is that even simple GPT-2 models are able to detect some content as generated, but they are too weak. Originality.ai says they can reliably detect GPT-3 content, and indeed some examples look much better. But Google themselves have bigger LLMs than GPT3, like PaLM (560B vs. 175B parameters), so it is very likely that they are even better at generation and detection.”
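As a rough illustration of the kind of classifier the quote refers to, the sketch below scores a passage with a publicly hosted RoBERTa-based GPT-2 output detector. The checkpoint name and its label set are assumptions about what is available on the Hugging Face hub; this is neither Google's detector nor the exact method from the study.

```python
# Rough sketch: scoring a passage with a public GPT-2 output detector.
# The checkpoint name and its label set depend on what is hosted on the
# Hugging Face hub; this is an illustration, not the method from the study.
from transformers import pipeline

detector = pipeline("text-classification", model="roberta-base-openai-detector")

passage = (
    "Artificial intelligence is transforming the way content is created, "
    "enabling businesses to scale their marketing efforts efficiently."
)
print(detector(passage))  # e.g. [{'label': ..., 'score': ...}]; label names vary by checkpoint
```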
With Google’s subsidiary DeepMind releasing its own chatbot, Sparrow, competition among AI-powered solutions will increase. It is not yet clear if and how the battle of the chatbots, e.g. within Google or Bing search, will be regulated and which monopoly rules might come into play.
5. AI-Generated Content Will Be Regulated Going Forward
In Felix's opinion, the further rise of fake news, the sheer mass of information, and even environmental concerns will be valid arguments for regulation; the question, however, will be how to enforce it.
One weak spot of ChatGPT and similar language models so far is that they do not cite their sources, which will make it difficult to accept them as a source of truth; especially in the era of fake news, countermeasures can be expected. This is strongly connected to the tendency of large language models to make up facts, which is an inherent consequence of the current technology, as stated by leading AI researchers such as Yann LeCun.
In the study mentioned above, Christoph C. Cemper states that “it is likely that it will become a legal requirement for large language model (LLM) providers to introduce AI watermarking,” i.e. statistical patterns hidden in word combinations. However, this endeavor is challenging for technical reasons (watermarks can potentially be removed by subsequent manipulation of the text) as well as regulatory ones (how to prevent non-compliant entities from training watermark-free open-source language models). From a search engine perspective, it is highly likely that search engines will be able to detect AI-generated content for a long period of time, since they own the most advanced models.
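To make “patterns in word combinations” tangible, here is a toy sketch of green-list watermark detection in the spirit of Kirchenbauer et al. (2023): each token is deterministically assigned to a “green” list seeded by the preceding token, and text generated with the watermark over-uses green tokens. The word-level tokenization, hash construction, and 50:50 split are simplifications for illustration, not Cemper's or Google's method.

```python
# Toy sketch of green-list watermark detection (in the spirit of Kirchenbauer et al., 2023).
# Word-level tokens, a simple hash seed, and a 50:50 split are simplifications for illustration.
import hashlib

def is_green(prev_token: str, token: str, green_fraction: float = 0.5) -> bool:
    """Deterministically mark a (previous token, token) pair as 'green' with probability green_fraction."""
    digest = hashlib.sha256(f"{prev_token}|{token}".encode()).hexdigest()
    return int(digest, 16) / 16**len(digest) < green_fraction

def green_rate(text: str) -> float:
    """Fraction of tokens on the green list; watermarked text should score well above green_fraction."""
    tokens = text.lower().split()
    if len(tokens) < 2:
        return 0.0
    hits = sum(is_green(prev, tok) for prev, tok in zip(tokens, tokens[1:]))
    return hits / (len(tokens) - 1)

# Unwatermarked text should hover around the green fraction (0.5 here);
# a watermarking generator would bias sampling toward green tokens, pushing the rate higher.
print(round(green_rate("the quick brown fox jumps over the lazy dog"), 2))
```

A real detector would work on the model's tokenizer tokens and run a statistical test on the green-token count; the principle, though, is exactly the kind of pattern in word combinations Cemper describes, and removing it by paraphrasing is what makes the technical side hard.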
If you want to learn more about how to use AI to efficiently build high-performing content assets that sustainably generate high-ROI leads via organic Google Search, reach out to Dr. Felix Marcinowski.