next:ssr:article:best-practice:vanilla

Renderability mapping: what nine LLM surfaces actually see across six Next.js rendering modes


We are measuring what crawlers and on-demand LLM fetch tools actually receive when they request the same page rendered through six different Next.js patterns. This is the first report from a 12-week experiment that runs every probe against the same content surface.

Why this question matters now

Most of the renderability literature dates from 2019 or earlier. Since then, three things have changed materially: AI crawlers (GPTBot, ClaudeBot, PerplexityBot) have become the dominant source of crawl traffic in some niches; on-demand fetch tools (ChatGPT browse, Claude Web, Perplexity Sonar) have become a parallel retrieval pathway with different rules; and Bing's rendering behaviour has been a black box since the last MERJ tests.

Phase 1 of this experiment fixes a single framework (Next.js) and varies only the rendering mode, page type, and image pattern. Every probe lands on a known cell with a deterministic UUID embedded in three locations of the HTML, so we can attribute behaviour to the rendering decision, not to content quality or topical authority.
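For concreteness, here is a minimal sketch of what a marker page could look like as a Next.js App Router server component. The component name, the UUID value, and the three placements (a meta tag, a data attribute, visible body text) are illustrative assumptions, not the experiment's actual markup.

  // page.tsx -- a hedged sketch, not the experiment's real code
  export const dynamic = "force-dynamic"; // opt this route into per-request SSR

  const MARKER = "123e4567-e89b-42d3-a456-426614174000"; // deterministic per cell

  export const metadata = {
    // Location 1: a <meta> tag emitted into <head>
    other: { "render-marker": MARKER },
  };

  export default function SsrArticlePage() {
    return (
      // Location 2: a data attribute on the page wrapper
      <article data-render-marker={MARKER}>
        <h1>Best-practice SSR article</h1>
        {/* Location 3: visible body text, so text-only extractors also surface it */}
        <p>Render marker: {MARKER}</p>
      </article>
    );
  }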

Method, in two sentences

Six rendering modes, five page types, two variants, and three image patterns yield roughly 48 cells (the matrix is not a full cross-product of all four factors). Each cell is hit by 20+ user-agent classes (batch crawlers like Googlebot, Bingbot, Applebot, GPTBot, ClaudeBot, PerplexityBot, plus on-demand fetch surfaces from ChatGPT, Claude, Perplexity, Gemini, and Bing Copilot), and every hit is logged with the marker UUID it actually saw.
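To make the batch versus on-demand split concrete, here is a hedged TypeScript sketch of how a hit could be bucketed by pathway; the function name and the deliberately partial token lists are this article's assumptions, not the tracker's actual classifier.

  type Pathway = "batch" | "on-demand" | "unknown";

  // Publicly documented user-agent tokens; these lists are examples, not exhaustive.
  const BATCH_BOTS = ["Googlebot", "Bingbot", "Applebot", "GPTBot", "ClaudeBot", "PerplexityBot"];
  const ON_DEMAND_FETCHERS = ["ChatGPT-User", "Claude-Web", "Perplexity-User"];

  export function classifyUserAgent(ua: string): { agent: string; pathway: Pathway } {
    const batch = BATCH_BOTS.find((token) => ua.includes(token));
    if (batch) return { agent: batch, pathway: "batch" };
    const onDemand = ON_DEMAND_FETCHERS.find((token) => ua.includes(token));
    if (onDemand) return { agent: onDemand, pathway: "on-demand" };
    return { agent: "other", pathway: "unknown" };
  }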

What gets recorded

  • Request side: user-agent class and pathway (batch vs on-demand), Cloudflare colo and ASN, rDNS verification when available.
  • Response side: which of the three marker locations were present, image patterns rendered, structured data in SSR, antipattern signals (broken variants only).
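A hedged sketch of what one logged hit might look like, with the fields above collected into a single record; the field names and types are illustrative, not the tracker's actual schema.

  interface ProbeHit {
    // Request side
    userAgentClass: string;                          // e.g. "GPTBot", "ChatGPT-User"
    pathway: "batch" | "on-demand";
    cfColo: string;                                  // Cloudflare data-centre code, e.g. "IAD"
    asn: number;                                     // origin autonomous system number
    rdnsVerified: boolean | null;                    // null when reverse DNS was not attempted
    // Response side
    markersSeen: ("meta" | "attribute" | "body")[];  // which of the three marker locations appeared
    imagePatternsRendered: string[];
    structuredDataInSsr: boolean;
    antipatternSignals: string[];                    // populated for broken variants only
  }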

What gets compared

  • Same content, different mode: does Bingbot treat /ssr/article differently from /csr/article?
  • Same mode, different pathway: does GPTBot (batch) treat the page differently from ChatGPT-User (on-demand)?
  • Same cell, over time: does indexation lag correlate with rendering mode?
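As an illustration of the first comparison (same content, different mode), here is a hedged sketch of the grouping step, reusing the illustrative ProbeHit record sketched above and assuming the rendering mode can be read off the requested path (e.g. /ssr/article).

  // Group hits by (user-agent class, rendering mode) and count how often all
  // three marker locations were seen. Purely illustrative analysis code.
  function markerVisibilityByMode(hits: (ProbeHit & { path: string })[]) {
    const table = new Map<string, { total: number; fullyVisible: number }>();
    for (const hit of hits) {
      const mode = hit.path.split("/")[1] ?? "unknown";   // "/ssr/article" -> "ssr"
      const key = `${hit.userAgentClass}:${mode}`;
      const row = table.get(key) ?? { total: 0, fullyVisible: 0 };
      row.total += 1;
      if (hit.markersSeen.length === 3) row.fullyVisible += 1;
      table.set(key, row);
    }
    return table;
  }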

Pre-registered hypotheses

Five hypotheses were committed to the repository before any data was collected, to avoid retrofitting findings to whatever the data happens to show:

  1. For batch AI crawlers, all server-rendered modes (SSR, SSG, ISR, RSC, Edge) produce identical marker visibility; CSR produces a different signature.
  2. Googlebot reaches every mode eventually, but time-to-render differs by mode.
  3. On-demand fetch tools show measurable per-LLM differences for the same cell.
  4. Bing's rendering of CSR mode is systematically worse than Google's — a 2019 finding that needs updating.
  5. JS-injected lazy images are invisible to non-JS crawlers regardless of rendering mode.

Each hypothesis will be confirmed, refuted, or refined in Weeks 5-6 against the data collected in Weeks 2-4.
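To make hypothesis 5 concrete, here is a hedged sketch of the two extremes it contrasts; the component names and the /probe.png path are illustrative, not the experiment's actual image variants.

  "use client";
  import { useEffect, useState } from "react";

  // Pattern A: the <img> is present in the server-rendered HTML, so any crawler
  // that parses HTML sees it, whether or not it executes JavaScript.
  export function StaticImage() {
    return <img src="/probe.png" alt="probe" width={640} height={360} />;
  }

  // Pattern B: the <img> exists only after client-side JavaScript runs, so a
  // non-JS crawler sees an empty <div> no matter which rendering mode served the page.
  export function JsInjectedLazyImage() {
    const [mounted, setMounted] = useState(false);
    useEffect(() => setMounted(true), []);
    return <div>{mounted ? <img src="/probe.png" alt="probe" loading="lazy" /> : null}</div>;
  }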

Status at time of writing

The tracker is live. The first mode (SSR homepage) is deployed publicly and started receiving real bot traffic within minutes of going up: GPTBot and at least one US-based commercial scanner pinged the domain before this article was written. Markers render correctly in all three locations, the delivery pipeline retains 100% of hits, and per-request Cloudflare metadata (colo, ASN, ray ID) is logged for every probe.
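For reference, here is a hedged sketch of how that per-request Cloudflare metadata can be read in a Workers-style fetch handler; the cf properties (colo, asn) and the cf-ray header are standard Cloudflare surface, but the handler itself and the console.log destination are stand-ins for the real delivery pipeline.

  import type { IncomingRequestCfProperties } from "@cloudflare/workers-types";

  export default {
    async fetch(request: Request): Promise<Response> {
      const cf = (request as Request & { cf?: IncomingRequestCfProperties }).cf;
      const record = {
        colo: cf?.colo ?? null,                    // Cloudflare data-centre code
        asn: cf?.asn ?? null,                      // origin autonomous system number
        rayId: request.headers.get("cf-ray"),      // per-request ray ID
        userAgent: request.headers.get("user-agent"),
        path: new URL(request.url).pathname,
      };
      console.log(JSON.stringify(record));         // stand-in for the real delivery pipeline
      return fetch(request);                       // pass the request through to the page
    },
  };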

Three more cells (this article, SSG homepage, CSR homepage) go live with this commit. The next three modes (ISR, RSC, Edge SSR) plus the remaining four page types follow in Week 3.