The web is splitting into two classes: sites that ChatGPT, Perplexity and Bing Copilot can read, and sites that are invisible to that world. AI-ready web design is the blueprint for class one: semantic HTML5 as the skeleton, Schema.org JSON-LD as the meaning layer, agents.json and llms.txt as the AI table of contents, robots.txt explicitly opened for GPTBot, ClaudeBot and PerplexityBot. Our own site currently pulls 1,500 AI citations in thirty days, verified through Bing Webmaster Tools. The average Wix template delivers none of these layers.
Anyone building a website in 2026 and only thinking about humans is planning for 2022. Search behaviour is shifting. In March 2025 Google referrals to news sites fell roughly nine percent compared to January, according to Cloudflare. April was worse, down fifteen percent. At the same time OpenAI's GPTBot more than doubled its share of total AI crawler traffic from 4.7 to 11.7 percent, Anthropic's ClaudeBot grew from around six to ten percent. Anthropic crawls on average 38,000 times for every real visitor it sends back, OpenAI 1,091 times. Translation: your website is being read by AIs long before any human sees it. The only question is what the bots find when they get there.
A website now has two audiences
Until 2024 web design was relatively simple to think about. You build a page, it looks good, it loads fast, Google ranks it on keywords and backlinks, a human clicks. The whole SEO discipline spent twenty-five years polishing that one pipeline.
That pipeline still exists, but it is shrinking. A second pipeline has grown up next to it that works differently. When someone asks ChatGPT, Perplexity or Gemini "who builds websites in Mallorca?", the AI does not search the web in the classical sense. It reads training data and live-fetched content from a curated source list. It looks for structured data, an agents.json, an llms.txt, Schema.org markup, clearly written answers to specific questions. If your website does not have that, you do not appear in the answer. This is not about position eleven instead of position one, it is about whether you are in the answer at all.
A modern website therefore has two audiences. Humans, who scroll, click and maybe fill out a form. And machines that parse content, understand structure and decide whether your site is worth citing as a source. Both need the same foundation, but they read it on two different layers.
What actually changed in 2025 and 2026
Cloudflare publishes data on AI crawler traffic twice a year, and the trajectory is clear. By mid-2025 about 80 percent of AI bot traffic was driven by training, up from 72 percent the year before. Comparing July 2024 to July 2025: GPTBot rose from 11.9 to 28.1 percent of crawler share. ClaudeBot grew from 15 to 23.3 percent. Meta-ExternalAgent jumped from 2.4 to 17.7 percent. Bytespider collapsed from 37.3 to 5.8 percent. Googlebot still anchors the picture at 39 percent, but the field behind it is four times more diverse than a year ago.
The second shift is the crawl-to-visitor ratios. Anthropic now sends 38,000 crawls for every real referral visitor (July 2025, down from half a million in January). OpenAI 1,091. Perplexity is at 194 crawls per visitor, falling. These numbers matter because they show how much work the AIs put into content that never directly converts to clicks. They read, they process, they cite, without your analytics seeing any of it. Anyone trying to measure visibility here needs a different yardstick than clicks alone.
That data is from summer 2025. In H1 2026 the picture has only sharpened. We measure it for our own domain through Bing Webmaster Tools, more on that further down.
The AI-ready stack: five layers, no magic
An AI-ready website has five layers that work together. None of them alone delivers the effect, all together do. Here they are in the order we build them on every project.
Layer 1: semantic HTML5
This is the least glamorous and most important layer. Instead of a desert of generic divs, a semantic site uses the HTML tags the web has had since 2014: article, section, header, footer, nav, main, aside, h1 through h6 in clean hierarchy. Banal, but the GEO research from the dev.to community published in late 2025 is unambiguous: AI models rank content that signals clean structure higher. A page that says "this is an article from this date by this author" through an article tag, an h1 and a time element gets more trust from an LLM than the same content buried in fourteen nested divs. The dev.to authors call the anti-pattern by name: div soup. Audit tools like Lighthouse and axe-core flag it as critical.
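What that looks like in practice, a minimal sketch with placeholder content:

```html
<!-- Semantic: the crawler sees an article, its title, its date, its sections -->
<article>
  <header>
    <h1>AI-ready web design in 2026</h1>
    <p>Published <time datetime="2026-04-30">April 30, 2026</time></p>
  </header>
  <section>
    <h2>Why structure matters</h2>
    <p>An LLM crawler can map this hierarchy without guessing.</p>
  </section>
</article>

<!-- Div soup: the same content, no machine-readable structure -->
<div class="w"><div class="b"><div class="t">AI-ready web design in 2026</div></div></div>
```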
Custom-built sites get this right from day one because a developer makes the choice. Wix, Squarespace and most WordPress themes generate exactly that div soup automatically because their builders think layout-first, semantics-second.
Layer 2: Schema.org JSON-LD
The second layer is the explicit meaning layer. You tell the machine in a JSON block what your page actually is: Organization with address and language, WebSite with internal search, Article with author and publication date, FAQPage if you answer questions, BreadcrumbList for navigation. A January 2026 LinkedIn analysis by Lawrence McKenzie cites a Semrush study covering five million URLs referenced by ChatGPT Search and Google AI Mode, finding that structured data is a consistent driver of citations. A Discovered Labs study on AI citation patterns shows ChatGPT pulls 47.9 percent of its sources from Wikipedia and Perplexity 46.7 percent from Reddit. Both of those sources have carried Schema.org markup from day one. That is not coincidence, that is the same mechanism.
We deploy Schema.org as JSON-LD in the head, not as Microdata in the HTML, because JSON-LD is parsed more reliably by LLM crawlers and does not bleed into the visible code.
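A minimal sketch of such a head block; all values are placeholders:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "AI-ready web design in 2026",
  "datePublished": "2026-04-30",
  "inLanguage": "en",
  "author": { "@type": "Person", "name": "Jane Doe" },
  "publisher": { "@type": "Organization", "name": "Example Studio", "url": "https://example.com" }
}
</script>
```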
Layer 3: llms.txt as AI table of contents
llms.txt is a small text file at the root of your domain, sitting next to robots.txt but written for AI readers. In two sections it tells an AI what the site is and which URLs are most important. The trick is this: without llms.txt an AI has to crawl the sitemap and guess what matters. With llms.txt it reads the first paragraph, knows the context and jumps directly to the central pages.
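A minimal sketch of those two sections, following the llmstxt.org proposal; names and URLs are placeholders:

```markdown
# Example Studio

> Web design studio building AI-ready websites for Mallorca and the DACH region.

## Key pages

- [AI-ready web design](https://example.com/ai-ready): the five-layer stack explained
- [Pricing](https://example.com/pricing): monthly plans and one-off builds
```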
We pair llms.txt with a longer llms-full.txt that contains the full site content as Markdown. Think of it as a PDF for machines. Vercel describes a similar pattern in its knowledge base: in addition to the HTML version of a page, also serve a .md endpoint via content negotiation. AI crawlers prefer the Markdown variant because they do not have to parse a DOM.
Layer 4: agents.json + agent-card.json
This is where it gets interesting. agents.json is an OpenAPI-like description of which actions an AI can perform on your site. agent-card.json follows the A2A protocol and describes your site's skills at the highest abstraction level. A restaurant website might have tools like get_menu, get_opening_hours, make_reservation. A law firm has get_specializations, request_callback. A web design agency has get_pricing, request_quote, list_portfolio.
This is not theory, it works. When ChatGPT finds your agents.json while answering a question about your restaurant, it can quote the actual current menu instead of guessing. We build this into every customer project, from a Mallorca boat school to law firms to our own brand. For dito-cafe.es we ship four tools (get_menu, get_info, get_events, get_reviews) with a locale parameter, plus an A2A card with concrete example calls.
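The format is still settling, so take this as an illustrative sketch of the pattern rather than a spec-exact file; the field names and URLs below are our shorthand, not a frozen standard:

```json
{
  "agentsJson": "0.1.0",
  "info": {
    "title": "Example Cafe",
    "description": "Menu, hours, events and reservations for a cafe in Palma."
  },
  "tools": [
    {
      "name": "get_menu",
      "description": "Returns the current menu.",
      "parameters": {
        "locale": { "type": "string", "enum": ["es", "en", "de"], "default": "es" }
      },
      "endpoint": "https://example.com/api/menu"
    },
    {
      "name": "get_opening_hours",
      "description": "Returns today's opening hours.",
      "endpoint": "https://example.com/api/hours"
    }
  ]
}
```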
Layer 5: robots.txt for AI bots
The last layer is banal, but it decides everything else. If your robots.txt does not explicitly allow the AI crawlers, the rest of the stack can be irrelevant. You need allow entries for GPTBot, ChatGPT-User, anthropic-ai, ClaudeBot, PerplexityBot, Google-Extended (Google's separate AI training bot, distinct from Googlebot) and Applebot-Extended for Apple Intelligence. Plus the classic crawlers. Default setups often either block all of this or none of it, and both are wrong. You want to invite the AIs explicitly while still controlling what they cannot see.
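A minimal sketch; the disallowed path and sitemap URL are placeholders:

```
# Invite the AI crawlers explicitly
User-agent: GPTBot
User-agent: ChatGPT-User
User-agent: anthropic-ai
User-agent: ClaudeBot
User-agent: PerplexityBot
User-agent: Google-Extended
User-agent: Applebot-Extended
Allow: /

# Everyone else may crawl too, but nobody sees internals
User-agent: *
Allow: /
Disallow: /admin/

Sitemap: https://example.com/sitemap.xml
```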
Why Wix, Squarespace and standard themes fail at this
This is where it gets honest. The hard study numbers that say "Wix sites get cited X percent less" do not exist. What does exist are structural reasons it will look that way. On Wix you have no real access to robots.txt, just a simple toggle. You cannot host an agents.json or llms.txt endpoint because the platform does not expose those routes. Your Schema.org coverage is whatever a plugin or built-in setting gives you, often the bare minimum. The HTML output is builder-generated and rarely comes out semantic.
Squarespace has similar limits. WordPress can cover everything with five or six plugins (RankMath, Yoast, an agents.json plugin, an llms.txt plugin, a robots.txt editor), but you end up with a five-vendor patchwork with five update paths and five places it can break. A custom build consolidates this into three files.
Bleyldev put it cleanly in April 2026: "Wix is a solid platform for fast launches, simple sites, and businesses that prioritize convenience over performance ceilings. A custom website (...) makes sense when unique functionality or scale justifies the investment." If you need a three-page business card without conversion ambition, Wix is fine. If you want to stand in the ChatGPT answer for your industry, Wix is the wrong layer.
Our proof: 1,500 AI citations in 30 days
We build our own studio according to this stack and we measure publicly. Current state as of April 30, 2026, pulled from Microsoft Bing Webmaster Tools (live screenshot at studiomeyer.io/proof/bing-ai-citations-current.png):
1,500 Bing Copilot AI citations in the last thirty days. 15 avg cited pages. On April 13 we were at four avg cited pages, on April 22 at nine. That is a 275 percent jump in seventeen days. For scale: in April 2025 we logged 187 citations in the whole month. Today it is 1,500, plus 702 percent.
In Google Search Console we see the classic search lever in parallel: 14,670 impressions in 28 days, plus 364 percent against April 12. The top Bing Copilot grounding queries in April 2026, with Spanish in front: "tendencias diseño web 2026" with 35 citations (ES), "Webdesign Trends 2025 2026" with 29 (DE), "KI Automatisierung technische Service Anfragen" with 28 (DE), "web design trends 2026" with 27 (EN), "tendencias actuales diseño web 2026" with 25 (ES).
Put differently: a single blog article with the right stack pulls more than a hundred AI citations in a month. A Wix site on the same topic pulls none.
What the process looks like for you
If you want to make your site AI-ready, we work in three steps.
Step one is an audit. We pull your existing site through a few checks: Lighthouse for performance and accessibility, a scan of which Schema types are present, a check whether agents.json, llms.txt and an AI-friendly robots.txt exist, a semantic HTML review. The audit takes about an hour, after which you know what is in place and what is missing. For existing customers this is often included.
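You can run the endpoint part of that check yourself in under a minute, with your own domain in place of example.com:

```bash
# 200 means the layer exists, 404 means it is missing
curl -so /dev/null -w "%{http_code}  robots.txt\n"  https://example.com/robots.txt
curl -so /dev/null -w "%{http_code}  llms.txt\n"    https://example.com/llms.txt
curl -so /dev/null -w "%{http_code}  agents.json\n" https://example.com/agents.json

# Lighthouse covers the performance and accessibility part
npx lighthouse https://example.com --output html --output-path report.html
```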
Step two is the stack build. On a custom-built site we typically need a few days to two weeks depending on the codebase. On a Wix or Squarespace site the honest recommendation is: migrate to a custom setup. The template platform will not let you go far enough.
Step three is validation. We set up Bing Webmaster Tools and Google Search Console, register the site with IndexNow, push sitemaps and llms.txt to the relevant services, and watch the next few weeks for how citations and impressions develop. Inside thirty to sixty days you have a first baseline.
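The IndexNow push itself is a single request once your key file is hosted at the domain root; the URL and key below are placeholders:

```bash
# Submit a changed URL to the shared IndexNow endpoint (Bing and others pick it up)
curl "https://api.indexnow.org/indexnow?url=https://example.com/ai-ready&key=YOUR_INDEXNOW_KEY"
```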
What it costs, what it brings
Web design with us starts at 199 euro per month or 2,500 euro one-off. AI-ready is standard in every tier, never an upsell. What it brings: in Q3 and Q4 2026 sales funnels are going to end with "who is the best web design agency for the German-speaking market" or "who handles AI visibility in Mallorca" as a ChatGPT answer. If you are in the answer, you get inquiries that no human would have ever found through Google. If you are not in the answer, you do not exist for those customers. This is not theory, it happens daily.
We have been building exactly this kind of visibility for Mallorca and the DACH region since early 2026. If that sounds relevant, let us talk. The first conversation is always free and non-binding. You can book directly at booking.studiomeyer.io/matthias or run our website audit first, which gives you a baseline score plus a concrete next-steps recommendation.
