For most of its history, a Google core update meant the familiar scramble: check your E-E-A-T signals, audit your content, wait it out. But the March 2026 update carried something different beneath the surface. Alongside the standard ranking reshuffles, publishers and SEOs started noticing changes in which pages were being pulled into AI Overviews - and which ones were quietly dropped. Sites that had held steady AI Overview placements for months suddenly disappeared from them. Others, for no obvious reason, started appearing for the first time.
That’s what makes this update worth paying close attention to - it goes beyond where you rank and into whether Google’s AI systems see your content as eligible to represent an answer at all. For many sites, AI Overview visibility has become a real source of brand exposure and traffic, which means changes to eligibility criteria cut deeper than a traditional ranking drop.
This piece breaks down what we know so far about how the March 2026 core update seems to have shifted the criteria Google uses to decide AI Overview eligibility, what kinds of content gained or lost that visibility, and what you can realistically do about it now that the rollout is complete.
Key Takeaways
- March 2026 was Google’s most volatile core update ever, moving roughly 80% of top-3 search results.
- AI Overview eligibility now heavily weighs demonstrated topical expertise and user engagement signals like time-on-page.
- Pages missing Core Web Vitals thresholds (LCP, INP, CLS) became measurably less likely to earn AI Overview citations.
- Health, finance, and e-commerce sites lost AI Overview visibility, while niche expert and educational sites gained ground.
- Pages cited in AI Overviews earn 35% more clicks than traditional first-page rankings, making citation eligibility increasingly critical.
Why This Core Update Hit Harder Than December 2025
The December 2025 core update ran for 18 days, December 11 through December 29 - a long rollout by any standard, and site owners felt it. But March 2026 finished faster and still did more damage across the board.
The number that matters is 80% - the share of top-3 search results that moved during the March rollout. No previous core update has hit that mark, which is why analysts started calling it the most volatile update on record.
For anyone running a site, “volatility” means opening your analytics on a Tuesday morning to find your top-performing page has dropped from position 2 to position 14 overnight - it means the traffic you counted on to hit your monthly goals just disappeared without a visible reason on your end.

December 2025 caused disruption too. But it moved rankings in smaller increments across a wider window. Site owners had a bit more time to see patterns before the dust settled. March compressed that movement into a shorter period, so the swings looked sharper and the recovery window felt narrower.
A longer rollout doesn’t automatically mean a stronger one. Google can apply more aggressive changes in less time if the underlying criteria change - that’s what happened here, and the criteria that changed connect directly to how Google decides what content gets pulled into AI Overviews.
The table below shows a quick side-by-side of how the two updates compared across a few key dimensions.
| Factor | December 2025 Update | March 2026 Update |
|---|---|---|
| Rollout length | 18 days | Shorter window |
| Top-3 result movement | Moderate | ~80% of results moved |
| Volatility rating | High | Highest on record |
| AI Overview connection | Indirect | Direct eligibility changes |
That last row - the direct eligibility changes - is the thread the rest of this piece follows.
How AI Overview Eligibility Rules Changed After March 27
The rules for what gets cited inside an AI Overview shifted in a way that directly affects how you should think about organic traffic. Before March 27, Google pulled citations from a fairly wide pool of ranked pages. After the update, that pool became much smaller and more conditional.
Google started weighting two things more heavily than before: demonstrated expertise on the exact topic the user queried, and engagement signals showing that users actually stayed on the page. A page that ranked in position three but had weak time-on-page data became far less likely to earn a citation - a real change from how eligibility worked even a few months earlier.
AI Overviews cut organic click-through rates by 61% on the queries where they appear - a real loss for pages that used to pull steady traffic from those searches. But pages that get cited inside an AI Overview earn 35% more clicks than a traditional ranking alone on the same query.

A page one ranking now delivers fewer clicks than it used to. But a citation in the AI Overview delivers more clicks than a first-page ranking ever gave. Both things are happening at the same time, on the same search results page.
In practice, the line between “visible” and “invisible” moved - it used to run between page one and page two. Now it runs between cited in the AI Overview and everything else.
Google also tightened the relationship between citation eligibility and content freshness. Pages that hadn’t been updated in over six months saw a measurable drop in citation frequency across several content categories. This wasn’t a penalty in the traditional sense; it was more like a passive de-prioritization of content Google couldn’t confirm was still accurate. Updating older posts became much less optional after this shift.
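If you want a quick read on how stale your own library is, your sitemap is a low-effort starting point. Here is a minimal sketch, assuming a standard XML sitemap that exposes `<lastmod>` dates - the URL and the six-month cutoff are placeholders for illustration, not Google-documented thresholds:

```typescript
// audit-freshness.ts - flag sitemap URLs whose <lastmod> is older than six months.
// Assumes a standard sitemap with <url><loc>…</loc><lastmod>…</lastmod></url> entries.
// Run with Node 18+ (built-in fetch), e.g.: npx tsx audit-freshness.ts

const SITEMAP_URL = "https://example.com/sitemap.xml"; // placeholder - use your own
const SIX_MONTHS_MS = 1000 * 60 * 60 * 24 * 182;

async function main(): Promise<void> {
  const xml = await (await fetch(SITEMAP_URL)).text();

  // Naive regex extraction - fine for a quick audit, not a general XML parser.
  const entries = xml.matchAll(
    /<url>[\s\S]*?<loc>(.*?)<\/loc>[\s\S]*?<lastmod>(.*?)<\/lastmod>[\s\S]*?<\/url>/g
  );

  const cutoff = Date.now() - SIX_MONTHS_MS;
  for (const [, loc, lastmod] of entries) {
    if (new Date(lastmod).getTime() < cutoff) {
      console.log(`STALE since ${lastmod}: ${loc}`);
    }
  }
}

main().catch(console.error);
```

Anything this flags is a candidate for a substantive refresh - not just a changed date stamp.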
The net effect is that two pages can hold nearly identical rankings and look the same from the outside, yet see very different outcomes. One gets cited and earns outsized clicks. The other sits in position two and watches its traffic fall. The difference often comes down to signals most site owners weren’t actively tracking before this update changed what those signals were worth - and the reasons content underperforms are frequently tied to exactly these kinds of signals.
The Core Web Vitals Threshold That Now Separates Winners from Losers
Technical performance became a much bigger factor in AI Overview eligibility after the March rollout. Google raised the bar on what counts as a fast, stable page and started to use that bar more aggressively when picking which pages to pull into AI-generated answers.
The three benchmarks to know are LCP, INP, and CLS, and each one measures something a person actually feels when they load a page.
LCP stands for Largest Contentful Paint and it tracks how long it takes for the main content on a page to appear. If you’ve ever watched a page load and seen the main image or headline pop in late, that’s LCP. Google’s updated threshold is under 2.5 seconds, and pages that took longer than 3 seconds lost 23% more traffic than their faster competitors during the rollout period.

INP stands for Interaction to Next Paint - it measures how fast a page responds when someone clicks or taps something. The threshold is under 200 milliseconds, which is fast enough that the response feels instant to a human. Anything above that starts to feel like lag.
CLS stands for Cumulative Layout Shift - the one where you go to tap a button and the page jumps right before your finger lands. Google wants this score to stay under 0.1, which in practice means nothing on the page should move around after it loads.
| Metric | What It Measures | Target Threshold |
|---|---|---|
| LCP | How fast the main content loads | Under 2.5 seconds |
| INP | How fast the page responds to clicks | Under 200ms |
| CLS | How stable the layout is while loading | Under 0.1 |
What changed in March 2026 was not the metrics themselves. Google has used Core Web Vitals for years. What changed is how tightly they are tied to AI Overview inclusion now versus before.
Pages that hit all three thresholds had a measurably better chance of appearing in AI Overviews than pages that missed even one - a harder line than Google had previously drawn between technical performance and featured placements. If you’re also seeing traffic fluctuations around this period, it’s worth checking whether Google’s AI Overview is listing your information correctly as well.
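To know where you stand against those thresholds, measure from the field rather than relying only on lab scores. Google’s open-source `web-vitals` library reports all three metrics from real visitors; here is a minimal sketch, where the `/vitals` endpoint is a placeholder for wherever you collect analytics:

```typescript
// Collect field Core Web Vitals from real visitors with Google's web-vitals library.
// npm install web-vitals
import { onLCP, onINP, onCLS, type Metric } from "web-vitals";

function report(metric: Metric): void {
  // Thresholds from the table above: LCP < 2.5s, INP < 200ms, CLS < 0.1.
  // metric.rating is "good", "needs-improvement", or "poor" against those same lines.
  const body = JSON.stringify({
    name: metric.name,
    value: metric.value,
    rating: metric.rating,
  });

  // sendBeacon survives page unload; "/vitals" is a placeholder endpoint.
  if (navigator.sendBeacon) {
    navigator.sendBeacon("/vitals", body);
  } else {
    fetch("/vitals", { method: "POST", body, keepalive: true });
  }
}

onLCP(report);
onINP(report);
onCLS(report);
```

Field data matters here because it comes from real devices and connections, which is much closer to what Google itself measures than any single lab run.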
Content Signals Google Rewarded and Penalized in the March 2026 Rollout
When close to 80% of top-3 positions reshuffled during this rollout, something had to fill that space. The sites that moved up weren’t just technically cleaner - they had a different relationship with their subject matter altogether.
Topical authority emerged as the biggest dividing line. Google’s systems seemed to reward sites that covered a subject in genuine depth across multiple related pages rather than sites publishing single standalone articles on trending queries. A page about medication interactions, for example, seemed to fare better when it lived on a domain with dozens of interconnected pages on pharmacology than on a general health blog covering everything from recipes to fitness.
Sourcing also took on more weight. Pages that cited verifiable references - published studies, government data, named experts - held their AI Overview placements better than pages that made claims without any visible basis. This lines up with what the SEO community observed in the weeks after the rollout: content built around traceable, checkable information seemed to earn more trust from Google’s quality evaluation layer.

On the penalty side, thin elaboration was a problem. This doesn’t mean short content - it means content that restates the same point multiple times without adding anything helpful. A 2,000-word page that spends 1,400 words on preamble and repetition behaved more like a 600-word page in terms of how Google assessed its depth.
| Content Signal | Rewarded or Penalized | What It Looked Like in Practice |
|---|---|---|
| Topical authority across a domain | Rewarded | Multiple connected pages covering a subject at different angles |
| Traceable sourcing and citations | Rewarded | Links to studies, named contributors, verifiable data |
| Padded or repetitive content | Penalized | High word counts with low informational return |
| Shallow single-topic pages on broad domains | Penalized | One expert article on a site with no related depth |
Google’s quality rater guidelines also seemed to place more emphasis on content that demonstrated firsthand experience or direct knowledge. If you want your content to perform well, a solid SEO-friendly content checklist is a good place to start.
The underlying pattern is less about following a checklist and more about whether a page earns its place in a subject area.
Which Site Categories Saw the Biggest AI Overview Gains and Losses
The March 2026 rollout hit some verticals much harder than others, and the patterns that emerged from the SEO community tell a consistent story.
The Categories That Lost Ground
Health and finance sites took the hardest hits on AI Overview citations. These are the classic YMYL (“Your Money or Your Life”) categories - content where Google has always applied extra scrutiny - and the March update made that scrutiny even more visible in AI Overview results. Sites in these spaces that lacked verifiable author credentials, cited sources, or up-to-date medical and financial information saw their AI Overview presence drop sharply.
News publishers also reported a big pullback. Google seems to want AI Overviews to pull from sources that actually broke the story or added something new to it.
E-commerce sites with heavy product description pages and minimal editorial content fared poorly too. A page that reads like a catalog entry instead of a resource was treated as one by the March update.

The Categories That Gained Ground
Independent and niche expert sites saw some of the clearest gains. A solo legal practitioner publishing thorough explainers, a registered dietitian with a nutrition blog, or a certified financial planner with a content-rich site - these profiles appeared more often in AI Overviews after the rollout.
Educational and how-to content from established institutions also gained ground. Universities, government agencies, and long-standing nonprofits with deep content libraries and strong domain trust all benefited.
| Site Category | AI Overview Trend Post-March 2026 | Key Factor |
|---|---|---|
| Health (YMYL) | Losses for unverified sources | Author credentials and citation depth |
| Finance (YMYL) | Losses for thin or outdated content | Accuracy signals and recency |
| News and Media | Losses for rewrites, gains for original reporting | Firsthand sourcing |
| E-commerce | Losses for catalog-style pages | Editorial depth and usefulness |
| Niche Expert Sites | Gains across the board | Demonstrated topical authority |
| Educational and Government | Consistent gains | Institutional trust and content depth |
If your site falls into one of the losing categories, that context matters quite a bit for what happened to your traffic after March 27.
What to Actually Do Now That the Dust Has Settled
Prioritize ruthlessly. Not every page needs an overhaul, and spreading effort thin doesn’t recover lost ground. Focus on content that already ranks in positions where AI Overviews appear - those pages are closest to eligibility and carry the highest traffic upside.

From there, build a repeatable process for keeping Core Web Vitals in range and content signals strong, because this update will not be the last. Google has made clear that AI Overview criteria will continue to evolve, and the sites that recover fastest will be the ones that have stopped treating these updates as emergencies and started treating them as expected terrain.
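To build that priority list concretely, a Search Console performance export is enough to start. A minimal sketch, assuming a CSV with `page`, `clicks`, and `position` columns - the filename, column names, and the position-5 cutoff are assumptions for illustration, not documented eligibility rules:

```typescript
// prioritize.ts - surface pages from a Search Console CSV export that already
// rank near the top, where AI Overview eligibility work has the highest upside.
// Assumed columns: page,clicks,position - adjust to match your actual export.
import { readFileSync } from "node:fs";

const POSITION_CUTOFF = 5; // assumption: citations tend to come from top-ranked pages

const [header, ...rows] = readFileSync("gsc-export.csv", "utf8").trim().split("\n");
const cols = header.split(",");
const col = (name: string): number => cols.indexOf(name);

const candidates = rows
  .map((row) => row.split(",")) // naive split - fine unless your URLs contain commas
  .filter((r) => Number(r[col("position")]) <= POSITION_CUTOFF)
  .sort((a, b) => Number(b[col("clicks")]) - Number(a[col("clicks")]));

for (const r of candidates) {
  console.log(`pos ${r[col("position")]}  clicks ${r[col("clicks")]}  ${r[col("page")]}`);
}
```

The output is a ranked to-do list: pages already close enough to eligibility that freshness, sourcing, and Core Web Vitals work can plausibly move them into citations.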
March 2026 changed the rules. But it also made the underlying criteria more legible than they have been in years. That is worth something. Sites that are fast, authoritative, and legitimately helpful to readers are not chasing an algorithm - they are building something durable. Start there, measure consistently, and adjust as the criteria evolve. That is the whole job.