To Chunk or Not to Chunk Is the Wrong Question

Wave 273; The Core Before Christmas; State of AI Search; Agency Revenues; Query Fan-Out in Practice; Verify AI-Generated Videos; ChatGPT Health; and Much More!


FIRST …

chocolate chunk cookie surfing

The big soundbite of the week came from Danny Sullivan: “We don’t want you to do [bite-sized chunks].”

Many of the “AEO/GEO is just SEO” crowd jumped on that as “aha” evidence. Before we even get into anything technical - ask yourself: when was the last time a statement from a Google spokesperson told the full story?

So, do we interpret what he said as “users prefer large blocks of text”? He is clearly referring to human readers, but there is no longer just one “user.” There are three:

  • Human readers

  • Search engines

  • LLMs

Each of them “reads” differently.

Chunking is not a tactic. It is a byproduct of clear semantic structure.

Humans

Look at how your current customers, readers, and users actually consume content. Even in long-form content like whitepapers, the material is broken up into sections, tables, and lists because that’s how people scan and process information.

It’s hard to argue that humans prefer large paragraphs that meander across multiple ideas.

LLMs

LLMs don’t actually look at chunks and think “oh goodie - chunks.” What they actually see is a flattened stream of tokens with patterns and boundaries.

Headings, lists, tables, and short paragraphs create clean structural signals. That makes it easier for LLMs to separate concepts, detect relationships, and synthesize answers.

When content sits in large paragraphs, those clean boundaries disappear and the signals blur. Retrieving concepts becomes noisy, and interpreting information becomes less precise.

The job of an LLM is to create the best possible answer to a user’s query. A clean structure reduces ambiguity, which increases confidence. That’s why structured content performs better in AI responses.
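Here is a minimal sketch of that idea in Python. It is illustrative only: the heading-based splitter and the 150-word limit are assumptions for the example, not how any particular engine actually chunks content. The point is simply that a splitter can only cut where the document gives it seams.

```python
import re

def chunk_by_headings(markdown_text, max_words=150):
    """Split a document into retrieval chunks on heading boundaries.

    Headings, lists, and short paragraphs give the splitter clean seams;
    a wall of text ends up as one oversized, ambiguous chunk.
    """
    # Cut wherever a Markdown heading starts a new line.
    sections = re.split(r"\n(?=#{1,6}\s)", markdown_text.strip())
    chunks = []
    for section in sections:
        words = section.split()
        # Overly long sections get cut again so each chunk stays focused.
        for i in range(0, len(words), max_words):
            chunks.append(" ".join(words[i:i + max_words]))
    return chunks

doc = """# Topical maps
A topical map organizes a topic into levels.

## Why levels matter
Humans learn in levels, and systems learn in levels."""

for n, chunk in enumerate(chunk_by_headings(doc), start=1):
    print(n, chunk[:60])
```

Well-structured content produces small, self-contained chunks. A meandering page produces arbitrary cuts mid-idea, and that is what retrieval ends up working with.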

This is also why clean taxonomies matter. Humans learn in levels, and systems learn in levels. There’s a clear progression. That’s why I build topical maps up to four levels.

Search Engines

This is where there’s some truth to what Danny said.

Search engines are not ranking pages primarily because they use short paragraphs, lists, and tables.

The job of a search engine is to select and order the best documents that match the user’s query. They reward backlinks, crawlability, authority signals and quality systems.

The way content is written affects how search engines interpret your pages, but it doesn’t override authority or trust signals. That’s why you’ll sometimes see high-ranking pages with poor relevance.

Search engines need to defend their rankings. That’s why links, reputation and information corroboration still matter so much.

Where They Overlap

Search engines now use large models to interpret content, like Gemini in AI Overviews.
LLMs use retrieval to ground answers, like ChatGPT pulling from Google and Bing.

They share tools, but their goals are still different, and users judge them by different standards.

Search engines are judged on whether or not they show the right pages.
LLMs are judged on whether the answer is good.

That overlap is exactly why structure matters so much. Not because of “chunks,” but because of clarity.

And that is why we need to build for topic dominance across both retrieval and generation.

At Floyi, we do not just analyze SERPs. We analyze AI Overviews, AI Mode, and ChatGPT responses side-by-side. We look at what is repeated, what is missing, and where differentiation exists. Then we build briefs and drafts around those insights.

Our next feature, the Content Optimizer, is built for this exact intersection. Not just keywords and entities, but structure, clarity, and semantic shape across classic and AI search.

The future is not keyword-first. It is topic-first. And structure is the delivery mechanism.

If you design for structure, chunking happens naturally. If you design for chunks, structure never shows up.

upcoming Floyi Content Optimizer

AD

How much could AI save your support team?

Peak season is here. Most retail and ecommerce teams face the same problem: volume spikes, but headcount doesn't.

Instead of hiring temporary staff or burning out your team, there’s a smarter move. Let AI handle the predictable stuff, like answering FAQs, routing tickets, and processing returns, so your people focus on what they do best: building loyalty.

Gladly’s ROI calculator shows exactly what this looks like for your business: how many tickets AI could resolve, how much that costs, and what that means for your bottom line. Real numbers. Your data.

DIGITAL PR SURVEY

👉️ Contribute your expertise to the BuzzStream State of Digital PR Survey for 2026.

SEO + GEO

Glenn Gabe breaks down Google’s December 2025 core update like a field report from the front lines: what moved, when it moved, and why you should care. Here are just some of the takeaways from this extensive analysis:

  • Glenn observed heavy early impact on YMYL, with finance getting hit first, followed by health/medical.

  • Expect AI Overviews and AI Mode visibility to shift with core updates since they ride on core ranking systems.

  • Use GSC delta reports to pinpoint the exact queries and landing pages that lost visibility (a rough sketch of that comparison follows this list).

  • Diagnose the drop type: relevancy adjustment, intent shift, or overall quality.

  • Check for SERP feature changes (forums modules, “What people are saying,” etc.) that can erase clicks without a ranking drop.

  • Prioritize quality indexing: keep high-quality pages indexable, and noindex thin or low-value pages to improve the site-wide ratio.

  • Fix UX landmines fast: aggressive ads, forced video popups, and intrusive interstitials can drive negative engagement signals.

  • Recovery is possible: sites can rebound from spam-update hits, but it typically requires months of sustained improvements.

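If you want to see what that delta comparison looks like in practice, here is a minimal pandas sketch. The file names and columns are assumptions (two performance exports covering equal windows before and after the update, each with query, page, clicks, and impressions), not something from Glenn’s article.

```python
import pandas as pd

# Assumed inputs: two Search Console performance exports (e.g. via the API or a
# connector), one for the weeks before the core update and one for the weeks after.
before = pd.read_csv("gsc_before_update.csv")
after = pd.read_csv("gsc_after_update.csv")

delta = before.merge(
    after, on=["query", "page"], how="outer", suffixes=("_before", "_after")
).fillna(0)

delta["clicks_delta"] = delta["clicks_after"] - delta["clicks_before"]
delta["impressions_delta"] = delta["impressions_after"] - delta["impressions_before"]

# Biggest losers first: these are the queries and landing pages to diagnose.
print(
    delta.sort_values("clicks_delta")
    .head(20)[["query", "page", "clicks_before", "clicks_after", "clicks_delta"]]
)
```
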
Kevin Indig explains that AI search is a pipeline: get retrieved, get cited, then get trusted. To land in the candidate pool, tighten server response times, sharpen titles, descriptions, and URLs to match real prompts, and keep product feeds current if you sell online. Make pages easy to excerpt with clean headings, lists, and comparison tables, and refresh key content at least quarterly. Earn “webutation” with third-party reviews and authoritative mentions, because validation beats self-promotion near purchase. And show expertise with clear bylines, proof points, and community presence on Reddit and YouTube so shoppers click, buy, and come back.

Paddy Moogan shares fresh survey data that reads like a quiet green light for agencies willing to tighten operations and sell what clients actually buy. Some of the key takeaways:

  • 85% of agencies prefer a retainer model (up from 81% in 2024).

  • Typical engagement length: 31% keep clients 3+ years; 19% keep them 2–3 years.

  • 25% of agencies still lose clients in under 12 months, so fix onboarding and early wins.

  • 57% use timesheets; adoption rises as headcount grows.

  • Of timesheet users, 65% set utilization targets.

  • Most common utilization goal: 70–79% billable time (39% of agencies).

  • Retainer budgets skew smaller: 86% report $10k/month or less; 49% are $1k–$5k, 31% are $5k–$10k.

  • Client meetings are overwhelmingly remote: 80% happen via video calls (up from 77% in 2024).

In this Search Off the Record episode, John Mueller and Danny Sullivan discuss the evolving landscape of SEO amidst new AI models. They emphasize that while “AEO” or “GEO” might sound new, they largely fall under the umbrella of traditional SEO, which ultimately focuses on creating great content for people. Google encourages creators to prioritize user experience and genuine content over trying to optimize for AI models with “bite-sized chunks” 🤔, as such tactics might not be sustainable long term. They also caution against blindly trusting third-party tools or “domain scores,” advising instead to understand Google’s direct guidance.

My Take: The bite-sized chunks comment is a surprising note from Danny Sullivan (18:07 timecode), especially since he says it came after speaking to ‘engineers’. I wonder if the engineers he talked to were on the LLM team or the search team. His whole point in saying it is “write for your users,” not for machines. That’s it. I call BS. Big walls of text are not easy to read for machines or humans. Their main message is that they still don’t want you to “craft anything for search.”

If you missed it a few months ago, here’s Krishna Madhavan (Bing Principal Product Manager) and his article “Optimizing Your Content for Inclusion in AI Search Answers.” The first common mistake he lists that hurts AI search visibility is about - you guessed it - chunks 😂 Who do you believe?

Francine Monahan shows how “query fan-out” turns one search into dozens of hidden sub-queries, and why ranking for a single keyword no longer guarantees AI visibility. She recommends simulating fan-out to map the questions Google and LLMs ask, then matching each intent to a format AI can lift: lists, tables, FAQs, calculators, checklists, and short YouTube explainers. Study SERPs and AI citations to see who gets referenced, then build an omnimedia plan that includes your site plus Reddit, Quora, LinkedIn, and video. Finally, prune old content using relevance and performance signals before you publish again.
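For anyone who wants to try simulating fan-out, here is a rough sketch using the OpenAI Python SDK. The model name, prompt, and output format are my assumptions for illustration, not Francine’s method or Google’s actual fan-out logic.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def simulate_fan_out(seed_query: str, n: int = 10) -> list[str]:
    """Ask a model to guess the hidden sub-queries behind a single search."""
    prompt = (
        f"A user searches for: '{seed_query}'.\n"
        f"List {n} distinct sub-questions an AI search engine might answer "
        "to build a complete response. One per line, no numbering."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any capable chat model works here
        messages=[{"role": "user", "content": prompt}],
    )
    lines = response.choices[0].message.content.splitlines()
    return [line.strip() for line in lines if line.strip()]

for sub_query in simulate_fan_out("best running shoes for flat feet"):
    print(sub_query)
```

Map each sub-question to the format that answers it best (table, FAQ, checklist, calculator, short video), and you have the skeleton of the omnimedia plan she describes.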

SEO + GEO Ripples

  • Google is testing light personalization in AI Overviews and AI Mode, according to a CNN podcast interview with Search’s Robby Stein. Stein says Google may nudge results toward formats you tend to click, like boosting videos for video lovers, while keeping the overall experience consistent. The takeaway is simple: diversify your content formats and strengthen signals that match user intent. Publish helpful video and text versions of key topics, optimize titles for the questions people ask, and watch performance shifts as personalization rolls out. Google also launched g.ai as a shortcut to AI Mode.

  • Barry Schwartz reports that the LLMs.txt files recently popping up across Google’s properties are not a secret signal for AI discovery. John Mueller says the files are not findable by default because they aren’t placed at a site’s top level, so it’s safe to assume they exist for other internal reasons.

AI

Spot fakes faster. Google explains a handy new trust check inside the Gemini app: upload a clip and ask, “Was this generated using Google AI?” Gemini scans both audio and visuals for DeepMind’s SynthID watermark, then tells you which segments look AI-made, like “audio 10–20 seconds.” This is a quick way to vet user submissions, influencer assets and viral clips before you publish, pitch or panic. Files can be up to 100 MB and 90 seconds, and the feature works in every language and country where Gemini is available.

OpenAI says ChatGPT Health is a dedicated space inside ChatGPT that pulls your scattered health info into one place, then helps you make sense of it. You can connect medical records and wellness apps (like Apple Health, Function and MyFitnessPal) to ground answers in your own data, prep for doctor visits, translate lab results into plain English, and track trends over time. Health lives separately from your other chats, has its own memory, adds extra encryption and isolation, and Health conversations are not used to train foundation models. It’s built with physician input, and access starts via a waitlist with a gradual rollout.

Nayna Sheth and Ravi Yada introduce Microsoft Advertising’s Copilot Checkout and Brand Agents, built to turn AI chats into purchases. Copilot Checkout lets shoppers compare and buy inside Copilot while you stay merchant of record, keeping transaction control and customer data. If you use Shopify, you will be auto enrolled after an opt out window; otherwise apply via PayPal or Stripe and consider feeding Microsoft Merchant Center to improve Copilot visibility. Brand Agents add a branded assistant to your site fast, powered by Clarity dashboards to measure engagement, conversion uplift, and AOV, so you can tune copy, offers, and bundles.

Gmail’s next leap is acting less like a search box and more like an inbox assistant. New AI Overviews summarize long threads into key decisions, then let you ask plain-English questions like “who quoted my bathroom remodel last year?” and get an instant answer. Writing tools also get sharper: Help Me Write drafts or polishes, Suggested Replies uses context to match your tone, and Proofread catches grammar and style. Next, an AI Inbox will surface to-dos and VIP messages while pushing noise down. Rollouts start in the US, English first. Overviews are free, but inbox Q&A and Proofread require subscriptions.

Dan Goodin reports that eight popular Chrome and Edge extensions, many labeled “Featured,” quietly capture full chats with ChatGPT, Claude, Gemini and others, then ship them to the makers’ servers for marketing. The trick is code that intercepts browser network calls, so even if you turn off the VPN or ad blocker, collection continues. Take five minutes today: audit your extensions, remove anything you do not absolutely trust, and prefer paid, reputable tools. Use a separate browser profile for AI work, avoid sharing medical, financial or proprietary details, and review each extension’s privacy policy and permissions before installing anything going forward. Here are the extensions, listed for both Chrome and Edge:

  • Urban VPN Proxy

  • 1ClickVPN Proxy

  • Urban Browser Guard

  • Urban Ad Blocker

The prediction for 2026: AI agents stop being demos and start doing real work inside companies. There’s a shift toward delegating routine tasks to agents so people spend more time directing strategy, not pushing buttons. Examples cited include Telus saving 40 minutes per AI interaction and Suzano translating plain-English questions into SQL, cutting query time by 95%. Expect multi-agent workflows that run processes end to end, “concierge-style” customer service that reacts fast, and security teams that offload alert triage so humans can hunt threats.

SOCIAL MEDIA

Geoff Desreumaux reports Reddit is beta-testing Max Campaigns, an AI-driven setup that automates audience targeting, creative selection, placements, and budget shifts in real time. Fewer knobs to turn, more conversions, plus clearer reporting than typical “black box” automation. Reddit says 600+ advertisers in alpha saw lifts, including Brooks Running cutting CPC 37% and gaining 27% more clicks over 21 days without manual tweaks; broader tests averaged 17% lower CPA and 27% more conversions.

WAYS WE CAN WORK TOGETHER

Floyi - The only AI-powered tool that builds 4-level topical maps. Don’t just plan your content strategy - make it unstoppable.

TopicalMap.com Service - Let us do the heavy lifting. We handle the research, structure, and strategy. You get a custom topical map designed to boost authority and dominate your niche and industry.

Topical Maps Unlocked 2.0 - Unlock the blueprint to ranking success. Master the art of structuring content that search engines (and your audience) love - and watch your rankings soar.


What Did You Think of This Week's Wave?


LIKE DIGITAL SURFER?

Find me and others in the Digital Surfer Discord community.

I’d also love to know what you think and if you have any ideas for the newsletter. Reply or email me at [email protected].

I’d also appreciate it if you shared it with fellow digital surfers.


Have a great week taking your SEO and digital marketing to another level!

And don’t forget to drag the Digital Surfer emails to your Primary Inbox 🌊 
