Everything changed. Saturday didn't.
Topical Authority is Not About Coverage; Homepage Matters; Back Button Hijacking is Spam; The Other AEO; Claude Design; Alibaba 3D Worlds; and Much More!
FIRST …
A couple weeks back, I ran a reader survey. Thanks to everyone who filled it out. The answers got me thinking back to how this whole thing started, almost six years ago now.
The newsletter launched in November 2020 as Niche Surfer. Affiliate/Content site plays, niche gaps, different era of SEO. I rebranded to Digital Surfer in May 2024 because the scope had already outgrown the name.
Along the way I tried other things. Midweek deep-dives. Most of them didn't stick, because doing them well on top of clients and Floyi turned out to be more than a free newsletter could carry. The one midweek exception is the Black Friday issue once a year, and even then the Saturday issue still goes out that week.
What did stick was Saturday. I picked it early on because almost nobody sends newsletters on Saturday, and because I asked myself when I'd actually want to read something like this.
The answer was the weekend, with coffee, with room to think. Not midweek, squeezed between meetings.
Every Saturday, for 287 weeks, this has landed in your inbox.
The industry used to change three or four times a year. Now it seems to change three or four times a week 😅
Here are this week’s industry changes …

AD
Turn Google Ads into predictable customer acquisition
Echelonn manages over $15M/mo in Google and YouTube Ads for 300+ ecommerce brands. One vertical. One channel. Total focus. If you're not confident your account is performing the way it should, book a free audit.
SEO + GEO
Luke Harsel reports on Semrush clickstream data showing that ChatGPT is not replacing traditional search so much as becoming another layer in the path users take before they click through to the wider web. The data makes the main pattern clear: ChatGPT is increasingly a referrer, but one that still sends a huge amount of attention back toward Google and a small set of dominant domains.
Outbound referral traffic from ChatGPT to the rest of the web grew 206% in 2025.
More than 20% of ChatGPT's referral traffic went to Google.
More than 30% of all ChatGPT referral traffic went to just 10 domains.
ChatGPT enabled web search on only 34.5% of queries by February 2026.
The share of prompts that matched traditional search language nearly doubled from 18.9% to 34.9% between October 2025 and February 2026.
Average queries per ChatGPT session jumped 50% in the final four months of the study period, reaching 1.75 by February 2026.
ChatGPT referred traffic to roughly 71,000 unique domains per month in October 2024, peaked around 260,000 in October 2025, and ended near 170,000 by February 2026.
Yoyao explains that topical authority was never about publishing the most pages on a subject, but about becoming the site and brand the right buyers trust on a topic area of expertise. He argues that AI has not broken content strategy so much as exposed weak strategy, because cheap production (especially by generic AI-writers) only amplifies vague positioning, fuzzy audience definition and shallow buyer understanding. The article lays out a stricter sequence for authority building, moving from brand foundation and target audience to buyer personas, topical maps, detailed briefs, drafts and measurement, with the brief positioned as the quality-control layer most teams skip. The real point is that content scale only compounds when the upstream structure is strong enough to turn expertise into something buyers can actually trust and choose.
My Take: This is one of the reasons every topical map we create for clients is unique. Every client is different, even if two of them are both plumbing companies. Plumbing in New York is different from plumbing in Southern California. People use AI because responses are personalized to their specific situation. If your content isn’t clear on who it’s for, it won’t get shown by AI, or even by Google, since they’re also personalizing what’s in the SERPs.
Marcus Miller suggests that AI-assisted research is reducing clicks to deep informational pages and sending more users back through branded search, which makes the homepage more important again as a conversion and orientation layer. He says that while traditional SEO taught us to treat every page as a possible entry point, AI tools are now doing more of the comparison and summarization work upstream, so users arrive warmer but with less visible context about what exactly they need. That raises the value of clear information architecture, stronger homepage signposting and site structures that work for both front-door human visitors and back-door AI and search crawlers. It’s not that inner pages stop mattering. It’s that homepage UX and architecture are becoming part of SEO performance again instead of something teams can safely treat as mostly brand territory.
My Take: I was initially surprised by the headline because I wondered, “when did homepages not matter for SEO?” 😅 I see his point and the logic for why some might treat them as less important. I always just treated the homepage as an important page. I mean, it sits at the top of sitemaps for a reason.
Dixon Jones argues that AI recommendation visibility depends less on how many pages a brand publishes and more on the strength of the relationships a model has learned between that brand and a topic. He builds the case through research on relational knowledge in language models, showing that LLMs retrieve brands more confidently when associations are tight and repeatedly reinforced, and far less reliably when brands sit in crowded, many-to-many categories with weak differentiation. He then connects that theory to practice by arguing that Share of Voice alone is too shallow, and that teams also need to understand their topical presence, meaning what topics AI actually associates them with and how concentrated those associations are.
My Take: This is a useful framing because it explains why some brands keep showing up in AI answers even when others have published just as much content. AI visibility is becoming a knowledge-graph and brand-association problem as much as a content production problem. The missing piece is not always page count. It is often whether the market, the web and the model all connect your brand to a specific problem space with enough consistency. That is exactly why brand positioning, entity clarity and focused topical depth matter more now than another round of generic keyword coverage.
Google is making back button hijacking an explicit spam policy violation under its malicious practices rules. The behavior covers sites that interfere with browser history so users cannot return normally to the previous page, and instead get pushed into ads, recommendations or pages they never intended to visit. Google says it has seen more of this behavior recently and will begin enforcement on June 15, with sites at risk of manual spam actions or automated demotions if they do not remove the scripts, libraries or ad platform configurations causing it.
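Google hasn’t published its detection method, but the abuse leans on the browser History API. As a rough illustration only (the patterns below are my own naive heuristic, not Google’s enforcement logic), a site audit could grep page source for the history-manipulation calls typically involved. Note that single-page apps use these APIs legitimately, so anything flagged is a candidate for manual review, not proof of spam:

```python
import re

# Naive patterns that often appear in back-button hijacking scripts.
# Illustrative heuristic only; legitimate SPAs also use these APIs.
HIJACK_PATTERNS = [
    r"history\.pushState\s*\(",      # injecting extra entries into browser history
    r"history\.replaceState\s*\(",   # rewriting the current history entry
    r"onpopstate\s*=",               # intercepting the back button
    r"addEventListener\s*\(\s*['\"]popstate",
]

def flag_history_manipulation(page_source: str) -> list[str]:
    """Return the suspicious history-API patterns found in a page's source."""
    return [p for p in HIJACK_PATTERNS if re.search(p, page_source)]

# Hypothetical snippet resembling the behavior Google describes:
html = (
    "<script>history.pushState(null, '', location.href);"
    "window.onpopstate = () => location.replace('/ads');</script>"
)
print(flag_history_manipulation(html))
```

If a third-party script, library or ad platform configuration trips checks like these, that is the code Google says must be removed before June 15.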
SEO + GEO Ripples
Google Maps blocked 292 million reviews and removed 13 million fake Business Profiles in 2025, while also blocking 79 million inaccurate or unverified edits and restricting more than 783,000 abusive accounts. Google is leaning more on Gemini-powered systems plus proactive alerts to business owners.
Wikipedia deleted Barry Schwartz’s page as non-notable, sparking backlash across the SEO community and a public defense from Danny Sullivan. It is a weirdly revealing reminder that industry influence, citation history and actual community significance do not always map neatly to how institutional notability gets judged online.
Google confirmed the recent Search Console impression emails were a bug after site owners got messages suggesting data collection had just started on properties verified long ago. If you got one of these notices (I did), they were a mistake, and no underlying Search Console reset or tracking change actually happened.
AI
Addy Osmani says that technical documentation now needs to serve AI coding agents as a first-class audience, not just human readers, because agents fetch, parse and discard docs in ways traditional analytics rarely capture. He lays out a practical stack for making docs more agent-ready.
Audit robots.txt so AI agents are not silently blocked from your documentation.
Add llms.txt at the root to give agents a structured, low-token index of key documentation.
Track and surface token counts so agents can judge whether a page fits within context limits.
Write skill.md files that explain what each API or service can do, not just how to call it.
Serve clean Markdown versions of docs so agents do not waste context on HTML noise.
Front-load each page with the outcome, core capability and first-step information in the opening tokens.
Add “Copy for AI” buttons so developers can pass clean context into assistants without navigation clutter.
Monitor AI traffic directly through referrers and agent HTTP fingerprints instead of relying on client-side analytics.
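As a minimal sketch of the first and third items above (the agent names and the four-characters-per-token heuristic are illustrative assumptions, not from Osmani's piece), here is a Python check that tests whether common AI crawlers can fetch a docs URL under a given robots.txt and roughly estimates a page's token cost:

```python
from urllib.robotparser import RobotFileParser

# Example AI crawler user-agents to audit; adjust to the agents you care about.
AI_AGENTS = ["GPTBot", "ClaudeBot", "PerplexityBot", "Google-Extended"]

def audit_robots(robots_txt: str, url: str = "https://example.com/docs/") -> dict:
    """Report which AI agents may fetch `url` under this robots.txt."""
    parser = RobotFileParser()
    parser.parse(robots_txt.splitlines())
    return {agent: parser.can_fetch(agent, url) for agent in AI_AGENTS}

def estimate_tokens(text: str) -> int:
    """Very rough token estimate (~4 characters per token for English prose)."""
    return max(1, len(text) // 4)

# A robots.txt that silently blocks one AI agent from the docs:
robots = """\
User-agent: GPTBot
Disallow: /docs/

User-agent: *
Allow: /
"""
print(audit_robots(robots))  # GPTBot blocked from /docs/, the rest allowed
```

Running a check like this across your documentation paths is a cheap way to catch the "silently blocked" failure mode before it costs you agent traffic.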
My Take: This applies to developer documentation on websites, not the homepage. Further into the future, non-documentation pages may need more consideration too. Coincidentally, Cloudflare has a test below that covers much of the same ground.
André Jesus and Vance Morrison introduce Cloudflare's new Agent Readiness score, a tool and Radar dataset designed to show how well websites support AI agents across discovery, content access, bot controls and protocol support. The early data is revealing: 78% of sites have robots.txt, but only 4% declare AI usage preferences through Content Signals, 3.9% support markdown content negotiation, and emerging standards like MCP Server Cards and API Catalogs appear on fewer than 15 sites in Cloudflare's 200,000-domain sample.
My Take: Don’t freak out if you put in your homepage and score low. Note that Cloudflare’s test is also geared more towards documentation pages. Their test for their developer documentation (https://developers.cloudflare.com/) scores 100, while their homepage scores 67, because many of the tests aren’t relevant to marketing pages.
Anthropic is launching Claude Design, a research preview product that lets users create visual work like prototypes, slides, one-pagers and marketing assets by collaborating with Claude through prompts, comments, direct edits and adjustable controls. The tool is powered by Claude Opus 4.7 and is being positioned as both a way for non-designers to produce polished visual output and a way for designers to explore more directions faster, with extras like design system onboarding, web capture, organization sharing and export to formats like PPTX, PDF, Canva and HTML. One of the best features is the handoff flow to Claude Code, where finalized designs can be packaged into a bundle for implementation.
My Take: It’s Floyi for Design 😂 Claude uses the brand context for everything it designs before handing off to Claude Code, creating a much tighter workflow than bouncing between disconnected tools. I did use it to try to redesign homepages, but it seems overloaded, with many actions simply stopping and requiring me to send repeated “Resume” messages to get tasks completed. And I hit the limit pretty quickly, even with a Max account, so I need to wait 24 hours now. It could’ve been the initial injection of brand files, though, because I gave it a full GitHub repository to analyze.
Alibaba has launched Happy Oyster, a world model that generates interactive 3D environments and videos in real time instead of producing a single finished clip from one prompt. The model runs in two modes, with Directing focused on continuous scene generation that can be steered midstream through text, voice or images, and Wandering allowing users to explore an expanding world in first-person with stable physics and camera control. Alibaba says Directing can generate up to three minutes of continuous video at 480p or 720p, while Wandering currently supports one-minute generations at 480p.
My Take: AI video generation is starting to blur into simulation and playable world-building. This feels more important than another text-to-video release because it pushes toward persistent environments instead of isolated clips. Once models can maintain world coherence while users intervene in real time, the line between video generation, prototyping and lightweight game creation gets much thinner. That opens up very different use cases than just making prettier AI footage.
Google has released a native Gemini app for macOS, giving users a desktop version that can be summoned with Option + Space and used without tab switching. The app can also take context from whatever is on screen, including local files, so users can ask for summaries, explanations or help tied directly to the window they are viewing.
My Take: Gemini finally has a desktop app, something ChatGPT (with two apps) and Claude have had for a while. Desktop presence is table stakes for chatbots now.
Perplexity is expanding Perplexity Computer from a cloud-based workflow tool into something that can operate across your own files, native apps, connectors and the web on your machine. The pitch is that the system can understand an objective, carry work across interfaces and stay available for longer-running workflows, with the Mac mini positioned as an especially useful always-on host. Perplexity also emphasizes local control and security, saying actions are auditable, reversible and designed to keep users in the loop on sensitive steps.
My Take: Everyone’s got a desktop app. Now the question is which one will be most useful for the work you need to do.
Codex is expanding from a coding assistant into a broader desktop agent that can use your computer, browse in-app, generate images, remember preferences and handle longer-running work across the software development lifecycle. The update adds background computer use, multiple parallel agents on macOS, deeper PR review and terminal workflows, remote devbox access over SSH, more than 90 new plugins and a preview of memory and recurring automations. They’re framing it as a workspace agent that can carry context, operate across apps and keep development work moving over time.
My Take: This moves Codex closer to being an operating layer for developer work instead of a feature you dip into when you need help writing code. Once memory, plugins, browser control and recurring automations start working together well, the stickiness goes up fast. The winner here may be the tool that best holds context across days, tools and unfinished work, not just the one that writes the cleanest function.
Google is upgrading AI Mode in Chrome so people can explore the web side-by-side with AI help instead of bouncing constantly between search tabs and webpages. The new setup lets clicked links open next to AI Mode, while a new plus menu can pull in recent tabs, images and PDFs as context for follow-up questions. Google is also extending access to tools like Canvas and image creation wherever that plus menu appears, which turns AI Mode into more of a persistent browser layer than a standalone search experience.
My Take: The ChatGPT Atlas browser and Claude’s Chrome extension offer similar-ish features, but Chrome and AI Mode are better integrated. This is one of the clearest product signs that browser UX and Search are being rebuilt around context carryover instead of isolated queries. Once users can keep pages, tabs, files and follow-up questions in the same working surface, the search box stops being the main event. What I like about ChatGPT Atlas is that conversations are saved and can be continued anywhere, whereas chats in Claude’s extension are lost when its sidebar closes (or at least I can’t find them).
Shopify has released an AI Toolkit that connects tools like Claude Code, Codex, Cursor, Gemini CLI and VS Code to Shopify's developer platform and store management workflows. The toolkit gives developers several ways to wire this in, including plugins, agent skills and a local MCP server, with the core promise being that AI agents can use Shopify docs, API schemas, code validation and store execution capabilities instead of guessing.
My Take: This is exactly the kind of move more platforms will make over the next year. If developers are increasingly building through agents, then the platform that exposes the cleanest docs, schemas and execution paths to those agents gets a real adoption advantage.
AI Ripples
Claude Opus 4.7 is now generally available, with Anthropic positioning it as a stronger model for advanced software engineering, better high-resolution vision and more reliable long-running work than Opus 4.6.
Gemini 3.1 Flash TTS adds more expressive speech controls, including audio tags for pace, tone and delivery, while supporting 70+ languages and SynthID watermarking.
Google is rolling out Skills in Chrome, which lets users save repeatable prompts as one-click AI workflows that can run against the current page and selected tabs. Google is also launching a library of prebuilt Skills for common tasks, while keeping editing and remixing available so people can adapt saved workflows to their own needs.
Gemini can now use Personal Intelligence and Google Photos for image generation, making Nano Banana 2 more personalized by pulling from connected apps and labeled photos instead of relying on long prompts and manual uploads. Google is trying to turn private context into a creative input layer while keeping that data inside its own ecosystem.
OpenAI has started rolling out ads in Australia, New Zealand and Canada for Free and Go users, while paid tiers remain ad-free. That is a small geographic launch on paper, but it is a meaningful signal that conversational AI ads are moving from experiment toward product reality.
MARKETING
Reva Minkoff walks through a real Google Ads Manager Account hack, detailing how attackers gained access through a compromised employee email, removed users, changed allowed domains, altered billing setups and created chaos across linked client accounts. It goes beyond the incident itself and spells out a recovery sequence: contact Google fast, file account takeover forms, coordinate with clients to disconnect affected accounts, reset billing and use change history to trace every malicious action. She also lays out practical prevention steps like limiting admin access, forcing password resets, enabling stronger 2FA, using multi-party approval and avoiding direct bank account connections.
CONTENT
Lily Ray explains how fabricated AI-generated information can get repeated across enough low-quality sites that other AI systems begin treating it as fact, using a fake September 2025 Google “Perspectives” core update as the clearest example. She traces the loop from one made-up article to multiple AI-generated sites, then into AI answers from tools like Perplexity, ChatGPT, AI Mode and AI Overviews that repeat the fiction with confidence because citation volume gets mistaken for truth. The article also points to a deeper structural issue: free-tier and mass-market AI products still give billions of users answers from models that are more prone to error, while the more reliable reasoning layers are often reserved for paid users. AI misinformation is no longer just a content quality issue. It is becoming an infrastructure problem for how knowledge gets manufactured, cited and believed at scale.
My Take: This is why I keep saying that AI visibility work cannot be separated from information quality anymore. Once enough bad content starts citing itself, the retrieval layer gets polluted and even smart users can be led in the wrong direction. The brands that benefit long term will be the ones creating high-delta content with real verification, because that is one of the only ways to resist the feedback loop instead of feeding it.
LINK BUILDING
Vince Nero explains that the shift from traditional link building to digital PR is less about changing tactics than changing what teams optimize for. Instead of chasing link volume through scalable outreach, he says that digital PR requires fewer, better pitches built around timely stories, stronger journalist fit and data-led assets that reporters cannot produce easily on their own. The piece also pushes a reporting mindset change, noting that nofollow links, unlinked mentions and social placements matter more now because AI visibility tracks brand mentions as much as classic backlinks.
SOCIAL MEDIA
Jill Raines says LinkedIn is expanding AI-powered people search to all U.S. members, moving a feature that previously launched for Premium users into the broader product. The update lets people search in natural language, improves search when users only remember fragments like a nickname or partial spelling, personalizes suggestions based on profile and search history, and adds verification badges plus AI-generated profile summaries directly in results.
Andrew Hutchinson reports that X is adjusting its creator revenue share program to push more money toward original authors and less toward aggregator accounts that ride the reach of reposted material. Nikita Bier says X is experimenting with tools to identify original creators, has already cut aggregator payouts to 60% this cycle, and plans another 20% reduction in the next one. The change matters because it is one of the clearest admissions yet that X's own monetization system has been rewarding repost velocity and bait behavior more than actual creation.
TOOLS AND RESOURCES
Anu Adegbola reports that Google will begin rolling out mandatory MFA for the Google Ads API starting April 21, with enforcement expanding over the following weeks for users generating new OAuth 2.0 refresh tokens through standard authentication flows. Existing refresh tokens will keep working, but new authentications will require a second factor by default, while service account workflows are not affected. The change also extends beyond the API itself to tools like Google Ads Editor, Scripts, BigQuery Data Transfer and Data Studio.
My Take: This sounds like one of those updates where teams using Ads API tools should get their OAuth tokens now instead of waiting for the MFA requirement to break a workflow at the worst possible time. If you have internal tools, dashboards or automations that depend on fresh user-based credentials, this is the moment to audit them.
WAYS WE CAN WORK TOGETHER
Floyi - Build Topical Authority that wins in Google and AI Search. Don’t just plan your content strategy - make it unstoppable.
TopicalMap.com Service - Let us do the heavy lifting. We handle the research, structure, and strategy. You get a custom topical map designed to boost authority and dominate your niche and industry.
Topical Maps Unlocked 2.0 - Unlock the blueprint to ranking success. Master the art of structuring content that search engines (and your audience) love - and watch your rankings soar.
AD
Your inventory doesn't wait for you to check a dashboard.
Viktor sends daily inventory and reorder alerts to your team's Slack channel. If a SKU is trending toward stockout, you know before it happens.
Your content calendar and social posting run on autopilot. Brand monitoring runs in the background. Viktor handles the recurring work across ops and marketing so your team focuses on growth.
5,700+ teams. 3,000+ integrations.
What Did You Think of This Week's Wave?
LIKE DIGITAL SURFER?
Find me and others in the Digital Surfer Discord community.
I’d also love to know what you think and if you have any ideas for the newsletter. Reply or email me at [email protected].
I’d also appreciate it if you shared it with fellow digital surfers.
You currently have 0 referrals, only 3 away from receiving a LinkedIn shout-out.
Have a great week taking your SEO and digital marketing to another level!
And don’t forget to drag the Digital Surfer emails to your Primary Inbox 🌊






