Glue your pizza. Eat your rocks. And please die.
Google has had a bad year.
Months after Google revised its documentation, swapping “helpful content written by people, for people” for “helpful content created for people,” Gemini, the company’s own LLM platform, shared some dangerous culinary suggestions and outright threatened a college student cheating on his homework.
Google Search has struggled to adapt to and adopt AI. It hasn’t been alone in its uncertainty, and the growing pains are understandable as tremors from AI and LLMs shake the foundations of its monolithic search engine. However, Google’s prominence as the world’s library makes its struggles all the more pronounced and consequential.
The core of the struggle is Google’s difficulty balancing competing interests among its many entities. In 2020, Google tapped its head of Ads to run Search, and the Ads department’s priorities soon crept into Google’s search engine. Complaints followed, with “Google search is getting worse” becoming a common refrain.
Now add another competing interest to the mix. Google launched Bard in 2023 and a few months later rebranded it as Gemini, essentially entering the content creation business. Just as when Ads’ priorities crept into the search algorithms, Google’s search results are threatened by priorities driven by its AI business. For the first time in its history, Google’s spent the last couple of years playing catch-up in search. It’s been beaten to market for AI, and some users are turning to LLMs in lieu of traditional search. In the scramble, Google search quality has declined, eroding trust at the worst possible moment.
In essence, Google’s AI Overviews (AIOs) and newly released AI Mode have now emerged as an existential threat to its own product.
‘Why is Google search getting worse?’
At the start of 2022, no one was searching that question, at least not in a way that showed up in search volume. The first noticeable uptick came in April of that year, with 20 monthly searches. By March 2025, that number had climbed to 170. (Source: Semrush)
The frustration didn’t come out of nowhere, and some say it goes back further than most people realize.
In 2024, Edward Zitron, in his article “The Man Who Killed Google Search,” explained how Google’s declining search quality can be traced back to internal politics that came to a head in 2019. Specifically, he argues the company allowed concerns about ad revenue to bleed into its search algorithm. He pointed to internal emails, made public during the DOJ’s antitrust case, that highlighted the tension between the Ads and Search teams.
The Ads team wanted people to run more searches, which would lead to more ad impressions, clicks, and revenue. But the Search team’s job was to deliver the most useful, relevant answers, not to keep users searching longer but to help them find what they need. Eventually, the company put the head of Ads, Prabhakar Raghavan, in charge of Search. This shift, Zitron argued, marked a turning point, when user experience ceded ground to “engagement hacking,” which refers to designing search to keep users on the page longer.
Google founders Sergey Brin and Larry Page had warned of precisely this scenario in their original Google research paper, written in 1998 as the pair attended Stanford University. “Currently, the predominant business model for commercial search engines is advertising,” they wrote. “The goals of the advertising business model do not always correspond to providing quality search to users.” As such, “we expect that advertising funded search engines will be inherently biased towards the advertisers and away from the needs of the consumers.”
It’s hard to look at Google search today and not feel like they were right.
A 2024 study by a group of German researchers sparked a round of media coverage questioning the quality of Google’s results, and it likely deserves some credit for the surge in queries related to Google’s declining quality. (Queries for the term “scientific evidence that google search has become worse” spiked at 480 per month in 2024.)
But the trend in such queries predates the study:
Queries per month (source: Semrush)
| Query | Jan 2022 | Dec 2022 | Dec 2023 | Dec 2024 | Mar 2025 |
| --- | --- | --- | --- | --- | --- |
| why is google search getting worse | 0 | 20 | 110 | 170 | 170 |
| google search results getting worse | 10 | 20 | 140 | 70 | 70 |
| why has google search gotten worse | 0 | 0 | 20 | 50 | 70 |
| google getting worse | 0 | 20 | 20 | 30 | 30 |
| are google search results getting worse | 0 | 0 | 0 | 20 | 20 |
That’s quite a shift for a brand that, just a couple of decades ago, had earned so much trust that its name entered the common vernacular as a verb.
Now, as Google leans into a new business vertical centered around its AI platform, Gemini, there’s concern that history is repeating itself. Just as the push for ad revenue led to algorithm changes that seemed to make Google search worse, the pressure to promote Gemini and keep up with competing entities like OpenAI and Anthropic could lead Google to once again compromise user experience.
In fact, it’s already happened.
Google joins the AI race at the expense of Search
In late summer 2022, Google launched its Helpful Content Update (HCU), promising to “better connect people to helpful information.” The message to publishers was clear: “focus on people-first content” and “avoid creating content for search engines first.”
In other words, if you’re creating genuinely helpful content, Google will reward you. If you’re gaming the algorithm with low-effort fluff, expect to disappear from the results.
It hasn’t been perfect.
Critics have flagged examples where large brands with low-effort or mediocre content outrank smaller sites offering more in-depth, original content. But such bias is inherent in how Google works. Google doesn’t actually know what’s good. It relies on user signals to gauge if content is helpful and trustworthy. And users tend to choose the familiar over the unfamiliar. In other words, familiarity (and brand recognition) can outweigh substance.
Still, the HCU’s core goal of cleaning up low-value content was broadly welcomed. It came just a few months before ChatGPT began making waves in late 2022, and some believed the update would be a much-needed guardrail against an expected flood of low- and no-effort AI-generated content.
But in September 2023, a quiet change to the HCU documentation raised some eyebrows.
The original guidance said the HCU aimed “to ensure people see more original, helpful content written by people, for people, in search results.” A year later, that became “content created for people,” dropping the reference to who or what created it. When asked, a spokesman told Gizmodo that the revision reflected Google’s view that what matters is “the quality of content we rank vs how it was produced.”
Why walk back the language?
The original phrasing read like a clear stand against machine-spun content. If that wasn’t the intent, why mention “written by people” in the first place? The answer may lie in Google’s shifting business priorities.
Though Google first announced its large language model, LaMDA, in 2021, it was ChatGPT’s viral launch in late 2022 that forced the company into crisis mode. Google declared a “code red,” brought founders Larry Page and Sergey Brin back into the fold, and by early 2023 had launched Bard, soon rebranded as Gemini, its flagship AI product.
With that shift, Google was no longer just organizing content. It was creating it. And that led to new complications.
Gemini made inaccurate claims, outlandish suggestions, and outright threats. Google’s new role as a content creator jeopardized its Section 230 protections and prompted reports from Congress and commentary from the American Bar Association.
The HCU, which had once been its own system, a classifier designed to weed out low-value content, was folded into the core ranking algorithm in March 2024. With that change came new messaging: AI-generated content, in itself, was no longer a red flag. In an update to its spam policies, Google stated that automation wasn’t inherently problematic. What mattered was quality. The new policy targeted content produced at scale to manipulate rankings, regardless of whether it was made by humans, machines, or both.
> We’ve long had a policy against using automation to generate low-quality or unoriginal content at scale with the goal of manipulating search rankings. This policy was originally designed to address instances of content being generated at scale where it was clear that automation was involved.
>
> Today, scaled content creation methods are more sophisticated, and whether content is created purely through automation isn’t always as clear. To better address these techniques, we’re strengthening our policy to focus on this abusive behavior — producing content at scale to boost search ranking — whether automation, humans or a combination are involved. This will allow us to take action on more types of content with little to no value created at scale, like pages that pretend to have answers to popular searches but fail to deliver helpful content.
In other words, Google softened its stance on AI-generated content. If the content was helpful, it didn’t matter how it was made. Some publishers took it as a green light to mass-produce content using AI, ignoring Google’s warning about scaled content abuse. What resulted was a flood of AI-generated content that was anything but helpful.
In the immediate aftermath, complaints about declining search quality continued, and queries like “why is google search getting worse” kept climbing.
By fall 2024, Google appeared not to have made much headway against degrading search result quality. In October, it replaced Prabhakar Raghavan as Head of Search. A month later, in the November core update, Google was still combating the low-quality content issues it had tried to tackle in the earlier March update: “This update is designed to continue our work to improve the quality of our search results by showing more content that people find genuinely useful and less content that feels like it was made just to perform well on Search.” In essence, content lacking originality, experience, or insight would be downgraded, regardless of how it was created.
That direction became even clearer in the March 2025 core update. The update hit sites producing massive volumes of content, a move widely interpreted as a rebuke of low-value, AI-generated output at scale. Publishers relying heavily on bulk automation without adding meaningful value saw steep declines.
Timeline of how Google’s AI push backfired
- Aug 2022 – HCU (Helpful Content Update) launches
  - Google introduces a classifier aimed at rewarding original, people-first content.
- Dec 2022 – HCU becomes global
  - The classifier rolls out globally, now available in all languages.
- Sept 2023 – HCU classifier refined
  - Minor improvements to the system as LLMs start flooding the content space.
- Feb 2024 – Google signs Reddit data licensing deal
  - Google secures access to Reddit’s content to train its models and boost AI-generated insights in search.
- Mar 2024 – HCU folded into core algorithm
  - Google sunsets the HCU as a separate system. The principles remain but are now part of the broader algorithm.
- May 2024 – “AI Overviews” launch causes chaos
  - Google rolls out Gemini-powered summaries in search. The results are embarrassing: glue on pizza, “eat rocks,” and more. Public trust takes a hit.
- June 2024 – Perplexity.ai surges
  - Users begin migrating to Perplexity, which offers transparent, source-linked LLM search.
- July–Aug 2024 – Google quietly pulls back AI Overviews
  - AI Overviews begin disappearing from sensitive queries, with no official statement.
- Oct 2024 – Google replaces Head of Search
  - Prabhakar Raghavan is replaced after mounting criticism.
- Nov 2024 – Core update doubles down on March update
  - Google pledges to surface “genuinely useful” content again, echoing original HCU intent.
- Jan–Mar 2025 – Competitors gain ground
  - OpenAI integrates real-time search in ChatGPT. Arc Browser launches “Browse for Me.” Users continue looking beyond Google.
- Mar 2025 – Core update hits scaled AI content
  - Google reinforces that originality and insight matter most. Mass-produced AI content sees steep drops in rankings.
Google, for the first time, is now playing catch-up in Search
In the span of a year and a half, Google swung wildly from cracking down on low-value content, to seemingly encouraging users to produce it at scale, to punishing it again. AI wasn’t banned, but it had to be genuinely useful.
This pivot came not from principle, but from failure: Google’s attempt to keep up with AI rivals had compromised the quality of its most important product, Search. And in doing so, it left a gap that companies like Perplexity, OpenAI, and Arc rushed to fill.
At Google’s annual I/O event last month, the company unveiled AI Mode as its answer to the growing market of AI-powered search engines. The problem is that Google has now been beaten to market by several competitors who have already demonstrated far greater reliability than Google’s AI-generated search results.
Additionally, AI Mode appears to be a repackaging of Google’s legendary search engine in a more modern UI that mirrors its competition. While AI Mode was almost certainly in development far longer, it’s hard to ignore that the announcement came just weeks after Eddy Cue, an Apple VP, said the company was looking into using AI search engines in Safari in lieu of Google’s search engine (something Google pays Apple billions of dollars a year to keep as the default). Google’s stock dropped 7% the day of Cue’s remarks.
Google’s future: promise or ruin?
Google once reigned as the undisputed gateway to the internet. It stood as the definitive verb for search. However, in its bid to chase the AI wave and increase ad revenue simultaneously, the company has tripped over its own legacy. By prioritizing fiscal growth over user experience, it blurred the line between serving information and manufacturing it. The result: a search engine increasingly defined not by clarity, but confusion; not by trust, but erosion of it.
As users migrate to faster, cleaner, AI-native platforms like Perplexity and Arc, Google finds itself doing the unthinkable: playing catch-up in a market it once owned outright. In trying to stay ahead of AI, Google let AI kneecap the very product that made it a titan. Now, with AI Mode’s deployment (and eventual integration into Search), Google is trying to patch the cracks, but with trust waning, competitors rising quickly, and users fleeing, it’s on its back foot for the first time.
What all of this means for the future of search
As Google course-corrects, keep this front of mind: what matters to consumers still drives Google’s behavior. Even as it scrambles to close the AI gap and fend off OpenAI, its mission hasn’t changed: make information easy to access. Because that’s its value.
And what matters to consumers hasn’t changed either: trust, credibility, and quality. Whether it’s Google defending its reign atop search or ChatGPT channeling Google’s early-2000s playbook to lock down the market, the winner, all other business strategies aside, has to deliver on those values.
That leaves SEOs, publishers, and businesses in a tough spot. Google’s results may be slipping in quality, and its AI strategy still feels unsettled, but none of that stops visibility from dropping. Fair or not, we’re paying the price for it now and playing catch-up ourselves.
So here’s the real question: how do you prove to Google, ChatGPT, or any other platform that you’re trustworthy, credible, and deliver real quality?
SEOs will theorize, test, learn, and refine their strategies as they grapple with what leads to visibility in AI search. However, the foundation needs to remain firmly planted in earning trust, proving credibility, and delivering quality, online and offline.
Strategies, whatever they might be, based on that foundation will ultimately be positioned to win the day.