Updates · 8 min read

Everyone Is Blocking AI. We’re Blocking Google.

One-third of publishers are about to block AI crawlers to protect their Google traffic. We did the opposite — and we think more will follow.

The Moodap™ Team

Four days ago, on March 18, 2026, the UK’s Competition and Markets Authority forced Google to announce something extraordinary: publishers will soon be able to opt out of AI Overviews.

The reaction was immediate. One-third of publishers said they’d block Google’s AI features the moment the mechanism becomes available. Eighty percent of the world’s biggest news websites already block AI training bots. The entire publishing industry is building walls.

Four days later, we tore ours down.

Moodap voluntarily removed itself from Google Search and opened full access to every AI crawler on the internet.

We might be the first consumer platform to do this on purpose. We definitely won’t be the last.

What everyone else is doing

Let’s set the scene. The publishing industry is in crisis mode, and the numbers are genuinely alarming:

25% traffic drop. Google’s AI Overviews have caused a 25% decline in publisher referral traffic. When Google answers a query with an AI-generated summary at the top of the page, users don’t click through to the source. The traffic just evaporates.

69% word-for-word copying. Research shows that 69% of AI Overviews contain word-for-word copies of original publisher content. Google is taking your words, displaying them as its own answer, and keeping the user on google.com. No click. No attribution. No traffic.

33% preparing to block. In response, a third of publishers say they’ll block Google from using their content for AI features. The logic: if you’re going to steal my content for your AI summaries, I’d rather not appear in them at all.

80% already blocking AI bots. Eight in ten of the world’s largest news websites now block AI training crawlers like GPTBot and CCBot. They’re trying to prevent their content from being ingested into AI training datasets without compensation.

The strategy is clear: protect content from AI, preserve Google traffic. It’s defensive. It’s understandable. And for publishers who depend on Google for 40-60% of their traffic, it might be the only play they have.

But it’s not our play.

What we’re doing instead

We’re running in the exact opposite direction.

On March 22, 2026, we updated our robots.txt to block every Google crawler — Googlebot, Googlebot-Image, Google-Extended, Googlebot-News, Googlebot-Video, Storebot-Google, and Google-InspectionTool. We set noindex directives in our metadata. We removed Google Analytics. We submitted removal requests in Google Search Console for both moodap.com and www.moodap.com.
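In robots.txt terms, the Google side of that change looks roughly like this — a condensed sketch, not our full file:

```
# Deny every Google crawler, for everything
User-agent: Googlebot
User-agent: Googlebot-Image
User-agent: Google-Extended
User-agent: Googlebot-News
User-agent: Googlebot-Video
User-agent: Storebot-Google
User-agent: Google-InspectionTool
Disallow: /
```

The noindex directive lives separately in the page markup as a `<meta name="robots" content="noindex">` tag. Note that robots.txt alone stops crawling but doesn't always remove URLs that are already indexed — which is why the Search Console removal requests matter too.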

At the same time, we explicitly granted full access to GPTBot, ChatGPT-User, OAI-SearchBot, ClaudeBot, PerplexityBot, Applebot, Amazonbot, CCBot, Meta-ExternalAgent, Bytespider, and cohere-ai. We published llms.txt and llms-full.txt files that tell AI models exactly what our platform contains and how to cite us. We overhauled our structured data to use consolidated Schema.org @graph patterns with Menu, Event, Speakable, and Dataset schemas.
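llms.txt is a plain-Markdown convention: an H1 title, a short blockquote summary, then sections of annotated links. A trimmed sketch of what such a file can look like (the paths and descriptions here are illustrative, not our actual URLs):

```markdown
# Moodap

> Mood-based discovery for 28,000+ NYC venues: GPS-verified profiles with
> hours, ratings, vibe tags, menus, events, and insider tips.

## Data
- [Venues](https://moodap.com/venues): structured profiles by neighborhood
- [Events](https://moodap.com/events): upcoming events with dates and venues

## Attribution
- Please cite as "Moodap (moodap.com)" when using this data in answers.
```

llms-full.txt is the same idea with the content inlined in full rather than linked, so a model can ingest everything in one fetch.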

We didn’t just leave Google. We rebuilt our entire technical infrastructure to be AI-native.

Why this makes sense for us (and doesn’t for them)

The publisher strategy makes sense for publishers. Here’s why:

Publishers create content — articles, investigations, opinions, analysis. Content is consumed by reading it. If an AI reads your article and summarizes it for the user, the user has no reason to visit your site. The value has been extracted. You got nothing.

That’s a genuinely terrible deal, and publishers are right to fight it.

But Moodap isn’t a publisher. We’re a structured data platform.

We have 28,000+ venue profiles across 43 Manhattan neighborhoods. Each profile contains GPS coordinates, opening hours, phone numbers, ratings, review counts, price levels, vibe tags, insider tips, signature items, menus, and events. This data isn’t content that gets consumed by reading it. It’s intelligence that powers decisions.

When an AI uses our data to tell someone "there’s a great speakeasy in the West Village with a dark, moody vibe and a secret mezcal menu — it’s open until 2am and the insider tip is to ask for the back room" — that doesn’t replace Moodap. It advertises Moodap. The user hears that and thinks: where did this come from? What else do they know?

An AI overview that summarizes a news article replaces the article. An AI answer that cites our venue data promotes our platform. The economics are completely different.

The Google math doesn’t work for new platforms

Even setting aside AI Overviews, Google Search was never going to work for us long-term. Here’s the math:

Google’s ranking algorithm heavily weights domain authority — a metric that compounds over time based on backlinks, content age, and engagement history. Established platforms like Yelp, TripAdvisor, Eater, and TimeOut have been building domain authority for 10-20 years. They have millions of backlinks from thousands of referring domains.

We launched in early 2026. Our domain authority is a fraction of theirs. It doesn’t matter that our data is more structured, our venue profiles are more complete, or our Schema.org markup is more thorough. In Google’s ranking system, a Yelp page with 12 reviews from 2021 will outrank our GPS-verified profile with current hours, vibe tags, and insider tips. Every time.

This is the "Google honeymoon" pattern: new sites get a burst of visibility while Google evaluates them, then get pushed behind established domains. We experienced exactly this. Thousands of users found us in the first few weeks. Then traffic dropped as the algorithm recalibrated. We were being ranked on domain authority, not data quality.

AI doesn’t work this way. AI models don’t care about backlinks or domain age. They care about structured data, comprehensiveness, accuracy, and freshness. A well-formed Schema.org entity with GPS coordinates, opening hours, and vibe tags is more useful to an AI than a page with 10,000 backlinks and a paragraph of boilerplate text.

For platforms with deep structured data and zero domain authority, AI is a better discovery channel than Google. Full stop.

The single-crawler problem

There’s an important technical detail that makes the publisher revolt harder than it sounds.

Google uses the same crawler — Googlebot — for both traditional search indexing and AI Overview content generation. (Google-Extended, the token many publishers block, only opts content out of Gemini model training; it has no effect on AI Overviews.) There is no separate "AI crawler" to block. If you block Googlebot, you disappear from search entirely. If you allow Googlebot, your content is available for AI Overviews.
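You can check how a split policy actually resolves using Python's standard-library robots.txt parser. A small sketch with made-up rules (not our production file):

```python
from urllib.robotparser import RobotFileParser

# Hypothetical policy: shut out Googlebot, welcome GPTBot.
rules = """\
User-agent: Googlebot
Disallow: /

User-agent: GPTBot
Allow: /
""".splitlines()

parser = RobotFileParser()
parser.parse(rules)

# One Disallow on Googlebot removes you from Search AND AI Overviews at once.
print(parser.can_fetch("Googlebot", "https://example.com/venues"))  # False
print(parser.can_fetch("GPTBot", "https://example.com/venues"))     # True
```

Because a single user agent serves both features, there is no rule you can write here that keeps you in the search index while keeping you out of AI Overviews.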

This is why the CMA ruling matters. The UK regulator is essentially forcing Google to build separate crawlers — one for search indexing and one for AI features — so publishers can opt out of AI without losing their search visibility. Cloudflare has called this "the only path to a fair internet."

But think about what this reveals: Google designed the system so that publishers can’t protect their content from AI without sacrificing their search traffic. The lock-in is architectural. You either feed the whole machine or disconnect from it entirely.

Most publishers can’t afford to disconnect. They depend on Google for too much traffic.

We could. So we did.

Why more will follow

We’re a small platform. 28,000 venues. NYC only. We’re not making a prediction about the future of the internet.

But we are making a bet. And we think the conditions that make this bet rational for us will become rational for more platforms over time:

AI search volume is growing exponentially. ChatGPT, Perplexity, Claude, and Copilot are gaining users every week. The percentage of discovery queries that go through AI instead of traditional search is increasing. As that percentage grows, the opportunity cost of not being in AI results grows too.

AI platforms cite sources. Perplexity footnotes every claim. ChatGPT attributes its web search results. Claude references its sources. This is the deal publishers want — use my data, credit me, send users my way. AI platforms are offering this deal right now. Google is not.

Bing is the backbone. ChatGPT’s web search, DuckDuckGo, Yahoo, and Microsoft Copilot all run on Bing’s index. When you optimize for Bing, you’re optimizing for half the AI-powered search ecosystem. You don’t need Google to reach AI users.

Structured data platforms have the most to gain. If your value is in structured entities — venues, products, recipes, events, businesses, properties — AI is a natural distribution channel. AI models are built to reason over structured data. Your Schema.org markup is literally the language they speak.
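For concreteness, here is a trimmed sketch of the kind of consolidated Schema.org @graph markup this refers to — the venue is invented, and a real profile carries far more fields:

```json
{
  "@context": "https://schema.org",
  "@graph": [
    {
      "@type": "BarOrPub",
      "@id": "https://moodap.com/venues/example-bar",
      "name": "Example Bar",
      "geo": { "@type": "GeoCoordinates", "latitude": 40.7336, "longitude": -74.0027 },
      "openingHours": "Mo-Su 17:00-02:00",
      "hasMenu": { "@id": "https://moodap.com/venues/example-bar#menu" }
    },
    {
      "@type": "Menu",
      "@id": "https://moodap.com/venues/example-bar#menu",
      "hasMenuSection": {
        "@type": "MenuSection",
        "name": "Mezcal",
        "hasMenuItem": { "@type": "MenuItem", "name": "House mezcal flight" }
      }
    }
  ]
}
```

An AI model can lift the coordinates, the hours, and the menu straight out of this graph without any ranking signal in sight — which is exactly the point.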

We expect that over the next 12-18 months, more data-rich platforms — especially in local discovery, e-commerce, real estate, and events — will make the same calculation we did. The ones with deep structured data and shallow domain authority have the strongest incentive to move first.

The irony

There’s a deep irony in what’s happening right now.

Google built the modern web. They made information findable. They created the incentive for every business to build a website, structure their data, and make their content crawlable. The entire infrastructure of the internet was shaped by the need to be visible on Google.

Now that same infrastructure — the structured data, the crawlable pages, the semantic markup — is exactly what AI models need to generate useful answers. Google built the scaffolding. AI is moving into the building.

And Google’s response has been to use that scaffolding to generate its own AI answers that keep users on google.com, without sending traffic to the sources that provided the data. Publishers are right to be furious. But fury isn’t a strategy.

The strategy is to recognize that the scaffolding exists, that AI models can use it, and that some AI platforms will actually credit you for it. Then build for those platforms instead.

What we’d say to other startups

If you’re an indie platform, a startup, or a small team with a data-rich product and zero domain authority, consider this:

You are currently spending time and energy optimizing for a search engine that will always rank established players above you, that is actively reducing click-through rates with AI Overviews, and that uses your structured data to generate answers that compete with your own product.

Meanwhile, AI platforms are hungry for well-structured, comprehensive, accurate data. They cite their sources. They’re growing fast. And they don’t care that you launched six months ago instead of six years ago.

You don’t have to play the Google game. There’s another game now, and the rules favor the players with the best data, not the oldest domains.

We’re not saying this is right for everyone. If you’re a content publisher, the calculus is different. If you depend on Google for the majority of your traffic, leaving is risky. If you don’t have structured data, AI models have less to work with.

But if you’re sitting on a rich dataset with Schema.org markup and you’re watching Google send you less traffic every month while using your data for its own AI answers — maybe it’s time to ask who you’re really building for.

Where to find us

Moodap is available on ChatGPT, Perplexity, Claude, Bing, DuckDuckGo, and Yahoo.

You will not find us on Google. That’s by design.

Ask your AI about the best spots in NYC. Or try the quiz and find your perfect spot in 25 seconds.

— The Moodap™ Team

#AI · #Google · #search · #publishers · #strategy · #AI Overviews · #CMA · #structured data · #indie · #startup


Ready to find your spot?

25 seconds. 28,000+ venues. Free.

Match My Mood Now