LLMs Ate My Organic Traffic: An SEO Survival Guide for 2026

AI Overviews crushed my reference content. Experience-based articles held. I rebuilt my SEO strategy around what LLMs can't replicate.

My Google Search Console numbers for January 2026 told a story I'd been expecting but dreading: organic clicks were down 34% year-over-year on Grizzly Peak Software. Not because my content got worse. Not because a competitor outranked me. Because a significant portion of the people who would have clicked through to my articles were getting their answers directly from ChatGPT, Claude, Perplexity, and whatever Google's AI Overview decided to synthesize from my content without sending me a single visitor.

The short version of what I learned over the next three months: factual lookups are dying, experience-based content is holding, and the solo publishers who adapt to this will be fine. The ones who keep chasing keyword volume with interchangeable articles won't be.

I've spent the months since adapting my approach. Some of what I tried worked. Some didn't. Here's the full breakdown.


Update (March 2026): Since the January numbers I reference throughout this article, the strategy changes I describe here started paying off. Over the last three months compared to the prior three, Grizzly Peak Software saw a 530% increase in clicks (959 vs. 152), impressions grew from 21K to 500K, and average position improved from 17 to 8. The biggest winners were experience-based articles and opinionated comparisons: exactly the content types I argue for below. The playbook works. It just takes a few months to show up in the data.


The Content That Lost and the Content That Held

Let me be specific about what happened, because vague complaints about "AI killing SEO" aren't useful.

The traffic that disappeared falls into a clear pattern: informational queries with short, definitive answers. Things like "what is an API gateway," "how to parse JSON in Python," or "difference between REST and GraphQL." These are queries where an LLM can synthesize a perfectly adequate answer from its training data without the user ever needing to visit a source.

Google's AI Overviews made this worse. For a query like "how to set up Express middleware," Google now shows a generated answer at the top of the results page that's good enough for most people. The ten blue links still exist below it, but click-through rates on those links have cratered. Industry data from early 2026 shows organic CTR dropping by as much as 61% on queries where AI Overviews appear.

Here's the thing that took me a while to accept: this is actually reasonable behavior from the user's perspective. If I search "what port does MongoDB use" and the answer is right there in the AI overview, why would I click through to someone's blog post that spends 800 words building up to telling me it's port 27017? I wouldn't. And neither would you.

The traffic that survived is different. It's people searching for opinions, experiences, comparisons, and implementation details that require context. "Should I use Postgres or MongoDB for my SaaS" still drives clicks because people want to hear from someone who's actually done both. "How I deployed my Node.js app on DigitalOcean" still drives clicks because the answer is long, nuanced, and benefits from a specific person's experience.


What Google Search Console Reveals When You Segment by Content Type

If you haven't looked at your Search Console data through this lens, you should. Here's what I found when I segmented my own content:

Tutorial-style articles with code examples: Down 20-40% in clicks, but impressions mostly stable. People are seeing the listings but clicking less because the AI overview already gave them the gist. The ones that held up best are tutorials for uncommon tool combinations: things the LLMs aren't great at synthesizing because there isn't enough training data.

Opinion and strategy pieces: Roughly flat or slightly up. An article about why I stopped using microservices for side projects actually gained traffic. My theory is that as LLMs handle the factual queries, people are using Google more deliberately for opinion and analysis content.

"How I built X" case studies: Up 15-25%. These are inherently resistant to LLM summarization because the value is in the specific details of one person's experience. An LLM can tell you how to build a job board in general. It can't tell you the specific mistakes I made building mine.

Reference and glossary content: Down 50-60%. This is the category that got crushed hardest. If you built a bunch of "What is X?" pages hoping to capture top-of-funnel traffic, those pages are largely worthless now.
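The segmentation itself is easy to script. Here's a minimal sketch that buckets rows from a Search Console export by URL pattern and computes year-over-year click change. The bucket patterns and field names are assumptions about my own site structure, not anything Search Console gives you directly — adapt them to your URLs.

```javascript
// Sketch: bucket Search Console export rows by content type and
// compare year-over-year clicks. Patterns are hypothetical examples.
const buckets = [
  { name: "tutorial",  pattern: /\/tutorials?\// },
  { name: "opinion",   pattern: /\/opinion\// },
  { name: "casestudy", pattern: /\/how-i-/ },
  { name: "reference", pattern: /\/what-is-/ },
];

function bucketFor(url) {
  const hit = buckets.find(b => b.pattern.test(url));
  return hit ? hit.name : "other";
}

// rows: [{ url, clicksThisYear, clicksLastYear }, ...]
function yoyByContentType(rows) {
  const totals = {};
  for (const row of rows) {
    const key = bucketFor(row.url);
    totals[key] ??= { thisYear: 0, lastYear: 0 };
    totals[key].thisYear += row.clicksThisYear;
    totals[key].lastYear += row.clicksLastYear;
  }
  const report = {};
  for (const [key, t] of Object.entries(totals)) {
    report[key] = {
      ...t,
      // fractional change, e.g. -0.4 means down 40%
      change: t.lastYear ? (t.thisYear - t.lastYear) / t.lastYear : null,
    };
  }
  return report;
}
```

Run it against two exports (this year and last year, joined on URL) and the pattern above jumps out of your own data immediately.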

The takeaway: the type of content you produce matters more than ever. SEO used to be a game you could win with volume and keyword targeting. Now the content itself has to offer something an LLM can't replicate.


The New SEO Playbook (What's Actually Working)

This isn't theory. These are the changes I made on my own sites that drove the recovery I mentioned in the update above.

Write From Experience, Not Research

The single biggest change I've made is this: I stopped writing articles based on research and started writing articles based only on things I've actually done. If I haven't personally implemented it, debugged it, or shipped it, I don't write about it.

This sounds obvious, but the old SEO model incentivized the opposite. You'd find a high-volume keyword, research the topic, and write a comprehensive article about it even if your personal experience with the topic was shallow. That model produced a lot of content that was technically accurate but fundamentally interchangeable. Any competent writer could produce the same article. And now any competent LLM can too.

Experience-based content has a moat. When I write about the specific challenges of running a Node.js application on DigitalOcean, I'm drawing on months of actual deployment experience. The edge cases I mention, the gotchas I warn about, the specific configuration decisions I made: those details can't be synthesized from general documentation. An LLM might give you the documentation answer. I can tell you what actually happens at 2 AM when your app runs out of memory.

Target the Query After the AI Answer

Here's a pattern I've noticed: people ask an LLM a question, get a general answer, and then search Google for something more specific. The initial query goes to the LLM. The follow-up query, the one where they need depth, goes to Google.

So instead of targeting "how to implement rate limiting in Express," I target "Express rate limiting production issues" or "rate limiting Redis vs memory store tradeoffs." These are the queries people type after the LLM gave them the basics.

In practice, this means my keyword research now starts with: "What would someone search for after getting the generic answer from ChatGPT?" The answer is usually something more specific, more opinionated, or more experience-based than the original query.

Build Entity Authority, Not Just Page Authority

LLMs don't just pull from random web pages. They synthesize information and, increasingly, they attribute it. Being a recognized entity in your niche matters more now than it did in the pure-Google era.

What does this mean practically? Start with structured data on every article:

// JSON-LD structured data — every article gets this
const structuredData = {
  "@context": "https://schema.org",
  "@type": "Article",
  "author": {
    "@type": "Person",
    "name": "Shane Larson",
    "url": "https://grizzlypeaksoftware.com",
    "sameAs": [
      "https://x.com/grabordev",
      "https://www.linkedin.com/in/yourprofile",
      "https://www.amazon.com/stores/Shane-Larson/author/B0DX2LNMMZ"
    ],
    "jobTitle": "Software Engineer",
    "worksFor": {
      "@type": "Organization",
      "name": "Grizzly Peak Software",
      "url": "https://grizzlypeaksoftware.com"
    }
  },
  "publisher": {
    "@type": "Organization",
    "name": "Grizzly Peak Software",
    "url": "https://grizzlypeaksoftware.com",
    "logo": {
      "@type": "ImageObject",
      "url": "https://grizzlypeaksoftware.com/images/logo.png"
    }
  },
  "headline": article.title,
  "datePublished": article.publishDate,
  "dateModified": article.updatedDate || article.publishDate,
  "description": article.synopsis,
  "mainEntityOfPage": {
    "@type": "WebPage",
    "@id": article.canonicalUrl
  }
};

Structured data tells search engines and LLMs who you are, not just what your page says. The sameAs links connect your identity across platforms. The author information establishes you as a real person with verifiable credentials.
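Getting that object onto the page is one line of templating, but it's worth doing carefully. Here's a minimal sketch, assuming a server-rendered setup: the one real gotcha is escaping `<` so a stray `</script>` inside a string value (say, in a headline) can't terminate the tag early.

```javascript
// Serialize a structured-data object into a JSON-LD script tag.
// Replacing "<" with its \u003c escape keeps the JSON valid while
// preventing string values from breaking out of the <script> element.
function jsonLdTag(data) {
  const json = JSON.stringify(data).replace(/</g, "\\u003c");
  return `<script type="application/ld+json">${json}</script>`;
}
```

Drop the returned string into the page `<head>` during render, once per article.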

But structured data alone isn't enough. You need consistent identity signals: the same name, the same bio, the same areas of expertise across your website, social profiles, Amazon author page, and anywhere else you show up. LLMs are trained on all of these sources. The more consistent your identity, the more likely you are to be recognized as an authority in your space.

For articles that naturally answer common questions, add FAQ schema as well. It won't change your ranking, but it can increase the visual footprint of your search listing and improve CTR:

// FAQ schema — add to articles with natural Q&A structure
const faqSchema = {
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [
    {
      "@type": "Question",
      "name": "Should you use AI to generate SEO content?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "AI-generated content without genuine expertise behind it is a bad long-term bet. Use AI for outlining, editing, and formatting. The insights and experience have to come from you."
      }
    },
    {
      "@type": "Question",
      "name": "What type of content still gets clicks from Google?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "Experience-based content, opinionated comparisons, and detailed case studies. Factual lookups and reference content have lost the most traffic to AI Overviews."
      }
    }
  ]
};

Let AI Crawlers Read Your Content

This one is easy to overlook. If you're blocking AI crawlers in your robots.txt, LLMs can't read your content, and they definitely can't cite it. Given that LLM citations are becoming a real traffic source, you need to decide whether you want to be part of that ecosystem or shut out of it.

Here's my approach: I allow the major AI crawlers. The tradeoff is real. Yes, they're training on my content. Yes, they might synthesize answers that reduce my clicks. But they also cite sources, and being the source that gets cited when someone asks Claude or ChatGPT about Node.js deployment or API development is worth more than the traffic I lose to AI Overviews.

Check your robots.txt and make a deliberate decision:

# robots.txt — deliberate AI crawler policy
# Allow major AI crawlers so your content can be cited
User-agent: GPTBot
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: PerplexityBot
Allow: /

User-agent: Google-Extended
Allow: /

# Block crawlers that scrape without attribution
User-agent: CCBot
Disallow: /

This isn't a blanket recommendation. If your business model depends entirely on pageviews and you have no other monetization, blocking AI crawlers might make sense. But for most publishers, being invisible to LLMs is worse than being summarized by them.
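If you want to sanity-check the policy you've written, robots.txt group matching is simple enough to script. This is a deliberately minimal sketch, not a spec-complete parser: it assumes one User-agent line per group and applies the longest matching Allow/Disallow prefix.

```javascript
// Minimal robots.txt check: find the group for a user-agent and
// apply the longest matching Allow/Disallow rule. A simplification
// of the real spec, good enough for a quick sanity check.
function isAllowed(robotsTxt, agent, path) {
  const groups = {};
  let current = [];
  for (const raw of robotsTxt.split("\n")) {
    const line = raw.replace(/#.*/, "").trim();
    const m = line.match(/^(user-agent|allow|disallow):\s*(.*)$/i);
    if (!m) continue;
    const [, field, value] = m;
    if (field.toLowerCase() === "user-agent") {
      current = groups[value.toLowerCase()] ??= [];
    } else {
      current.push({ allow: field.toLowerCase() === "allow", prefix: value });
    }
  }
  const rules = groups[agent.toLowerCase()] || groups["*"] || [];
  let best = null;
  for (const rule of rules) {
    if (rule.prefix && path.startsWith(rule.prefix)) {
      if (!best || rule.prefix.length > best.prefix.length) best = rule;
    }
  }
  return best ? best.allow : true; // no matching rule means allowed
}
```

Feed it your actual robots.txt and the crawler names you care about before you deploy, rather than finding out from your server logs.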

Diversify Beyond Google

This is the one that took me the longest to act on, even though it's the most important.

If Google sends you 60% of your traffic and LLMs eat 30% of that, you just lost 18% of your total traffic with no way to get it back through traditional SEO. The math doesn't work anymore if Google is your only channel.

What I'm doing instead:

X (Twitter) as a distribution channel. I post threads summarizing key insights from articles, then link to the full piece. The engagement-to-click ratio is surprisingly good for technical content. More importantly, X content gets indexed quickly, shows up in search results, and feeds into LLM training data.

Email newsletter. Old school, but it's traffic you own. Nobody can algorithmically decide to stop sending your newsletter to your subscribers. I should have started this years ago.

YouTube. I'm not a natural video creator, but even simple screen recordings of me walking through code get decent views. YouTube search is a different ecosystem from Google web search, and it's less affected by LLM competition because people searching YouTube specifically want video content.

RSS syndication to developer platforms. Dev.to, DZone, Hacker News, and daily.dev all accept syndicated content. Each platform has its own audience that may never find you through Google. The effort-to-reach ratio is excellent: one article, multiple distribution points.

Direct traffic through brand building. The percentage of my traffic that comes from people typing "grizzlypeaksoftware.com" directly has increased. This is the most resilient traffic source possible. Nobody can take it from you.
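One concrete piece of the syndication setup above is the RSS feed itself, since platforms like Dev.to and daily.dev ingest by feed URL. Here's a minimal RSS 2.0 sketch; the field names on the article objects are assumptions about your own data model.

```javascript
// Sketch: build a minimal RSS 2.0 feed for syndication targets.
// Escape XML special characters so titles and synopses can't
// break the feed markup.
function escapeXml(s) {
  return String(s).replace(/[<>&'"]/g, c =>
    ({ "<": "&lt;", ">": "&gt;", "&": "&amp;", "'": "&apos;", '"': "&quot;" }[c]));
}

function rssFeed(site, articles) {
  const items = articles.map(a => `
    <item>
      <title>${escapeXml(a.title)}</title>
      <link>${escapeXml(a.url)}</link>
      <guid>${escapeXml(a.url)}</guid>
      <pubDate>${new Date(a.publishDate).toUTCString()}</pubDate>
      <description>${escapeXml(a.synopsis)}</description>
    </item>`).join("");
  return `<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0">
  <channel>
    <title>${escapeXml(site.title)}</title>
    <link>${escapeXml(site.url)}</link>
    <description>${escapeXml(site.description)}</description>${items}
  </channel>
</rss>`;
}
```

Generate it at build time, serve it at a stable URL, and every syndication platform you add later is one form submission away.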


AI-Generated Content for SEO: Where the Line Is

Everyone asks this: should you use LLMs to generate SEO content at scale?

My position: AI-generated content that doesn't have genuine human expertise behind it is a bad long-term bet. Google's spam detection has improved dramatically. More importantly, if the content an AI generates is the same content any other AI could generate, what's your competitive advantage? You're producing commodity content in a market that's being flooded with commodity content.

Where AI helps me with content: outlining, editing, catching errors, formatting code examples, generating structured data, building social distribution posts. Where it doesn't: the actual insights, opinions, and experiences that make content worth reading. Those have to come from me.

The people I know who made money with pure AI content at scale in 2025 have mostly moved on. The ones still standing are the ones who used AI to accelerate their own expertise, not replace it.


The Metrics That Replace Vanity Traffic Numbers

I've changed what I measure. Old metrics like "total organic clicks" and "average position" are still useful, but they're no longer the primary indicators of success.

What I track now:

Engaged traffic. Not just clicks, but people who stay, scroll, and interact. A visitor who reads your entire article and bookmarks it is worth more than ten visitors who bounce after reading the AI overview snippet. GA4's engagement rate and average engagement time per page are what I look at first now, not sessions.

Email signups per article. This tells me whether the content is compelling enough to earn trust. If someone reads your article and gives you their email address, that content is working regardless of what your click count says.

Revenue per visitor. As total traffic decreases, revenue per visitor needs to increase. This means better monetization, better affiliate placement, better calls to action. Fewer visitors spending more is a viable model. Fewer visitors spending the same is a death spiral.

Brand search volume. Are more people searching for you by name? This is the ultimate signal that your content strategy is building something durable. I track branded queries in Search Console as a separate report.

LLM citations. This is new and still imperfect, but I check periodically whether ChatGPT and Claude mention Grizzly Peak Software or my articles when asked relevant questions. Being cited by LLMs is the new form of organic reach. Tools are starting to emerge that track this systematically, but for now I do it manually.
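The branded-query report mentioned above is another thing worth scripting rather than eyeballing. Here's a minimal sketch that splits Search Console query rows into branded and non-branded clicks; the brand patterns are examples for my own site.

```javascript
// Sketch: split Search Console query rows into branded vs.
// non-branded clicks. Brand patterns are site-specific examples.
const brandPatterns = [/grizzly\s*peak/i, /grizzlypeaksoftware/i];

// rows: [{ query, clicks }, ...]
function brandedSplit(rows) {
  const out = { branded: 0, nonBranded: 0 };
  for (const { query, clicks } of rows) {
    const isBranded = brandPatterns.some(p => p.test(query));
    out[isBranded ? "branded" : "nonBranded"] += clicks;
  }
  return out;
}
```

Track the branded share month over month: if it's rising, the brand-building work is landing even when total clicks are flat.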


The Uncomfortable Truth (and Why I'm Still Optimistic)

Here's what I think a lot of SEO-focused content creators don't want to hear: the era of building a business primarily on Google organic traffic is over. It's not coming back. LLMs will continue to improve. AI Overviews will expand to more query types. The percentage of searches that result in a click to an external website will continue to decline.

This doesn't mean SEO is worthless. It means SEO is necessary but not sufficient. You need it, but you can't rely on it alone.

But here's the flip side that the doom-and-gloom crowd misses: total search activity, combining traditional engines and LLMs, is actually growing. People are searching more than ever. The pie is getting bigger even as Google's slice of it changes shape. The challenge isn't that demand for information disappeared. It's that the delivery mechanism shifted.

The content creators who will thrive in this environment are the ones who have genuine expertise, build audiences they own, diversify their traffic sources, and produce content that can't be replicated by an LLM trained on the same documentation everyone else reads.

That's a higher bar than "write a 2,000-word article targeting a keyword with 5,000 monthly searches." But it's also a more defensible position. If your content strategy is built on things only you can write, because only you have the experience, then LLMs are a tailwind, not a headwind. They handle the commodity information. You provide the insights that actually matter.

My traffic numbers are better now than when I wrote the first version of this article. The strategy described here is what drove that recovery. It takes longer than the old keyword-volume approach, but what you build is harder to take away.


Shane Larson is a software engineer and the founder of Grizzly Peak Software, a technical resource hub for software engineers, written from a cabin in Caswell Lakes, Alaska. His books on AI, LLM training, and software development are available on Amazon.