Changelog

Every change to how we measure and report. Changes that affect historical comparability are flagged.

Panel changelog

Coverage expansion and calibration progress

Manual crawler run processed 1,205 of 2,154 queued URLs in 90 minutes. Coverage by engine increased from 2.8–5.7% to 37.9–58.9%. Training rows expanded from 781 to 9,115.

Key day-one findings

  • Median CI width tightened from 1.11 to 0.34, a 69% reduction. The calibration period continues.
  • Two features show directional signals approaching statistical significance: has_faqpage (+0.12, CI [−0.03, +0.28]) and last_modified_known (+0.09, CI [−0.03, +0.27]). Neither is yet significant at the 95% level. Pre-registered hypotheses for these features will be filed before week 3 to formalise testing.
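The width metric above is simply the distance between interval endpoints, with the median taken across per-feature estimates. A minimal sketch, using illustrative intervals and function names (the panel's full per-feature CI list is not reproduced here):

```python
from statistics import median

def ci_width(lower: float, upper: float) -> float:
    """Width of one confidence interval."""
    return upper - lower

def median_ci_width(cis: list[tuple[float, float]]) -> float:
    """Median width across a set of per-feature interval estimates."""
    return median(ci_width(lo, hi) for lo, hi in cis)

# Illustrative intervals only, not the panel's actual estimates.
sample = [(-0.03, 0.28), (-0.03, 0.27), (-0.10, 0.05)]
print(round(median_ci_width(sample), 2))  # → 0.3
```

As training rows accumulate, the same metric recomputed daily gives the tightening curve reported above.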
Methodology: v1.0
Sample: Run ID 4

Crawler pipeline operational

First 5-minute validation run fetched and feature-extracted 105 URLs; 2,154 URLs are queued. Fetched HTML is stored compressed in Cloudflare R2.
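The storage step reduces to a gzip round-trip; a minimal sketch, with the actual R2 upload (S3-compatible API) omitted and function names that are illustrative, not the crawler's internals:

```python
import gzip

def compress_html(html: str) -> bytes:
    # Gzip before upload; the R2 put itself (S3-compatible API) is not shown.
    return gzip.compress(html.encode("utf-8"))

def decompress_html(blob: bytes) -> str:
    return gzip.decompress(blob).decode("utf-8")

page = "<html><body>fixture page, not crawler output</body></html>"
blob = compress_html(page)
assert decompress_html(blob) == page  # lossless round-trip
```

Compressing before upload trades a little CPU per page for substantially smaller stored objects, since HTML compresses well.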

Methodology: v1.0

Attribution views live

Property citation views deployed against live panel data. Day-one baselines: Amvia 3.3% ChatGPT / 6.5% Perplexity / 25.8% Google AI Mode; CompareFibre 11.1% ChatGPT / 0% Perplexity; Surfaceloop 0% on all engines, establishing the emergence-study baseline.
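A citation-rate view of this kind can be sketched as follows, assuming a simplified observation shape of (engine, query, cited_urls) tuples; the schema, the amvia.co.uk domain, and the fixture data are all illustrative, not the production pipeline:

```python
from collections import defaultdict
from urllib.parse import urlparse

def _host(url: str) -> str:
    return urlparse(url).hostname or ""

def citation_rate(observations, property_domain):
    """Share of each engine's query runs whose citations include the property."""
    runs = defaultdict(int)
    hits = defaultdict(int)
    for engine, _query, urls in observations:
        runs[engine] += 1
        # Match the property's apex domain or any subdomain of it.
        if any(_host(u) == property_domain or _host(u).endswith("." + property_domain)
               for u in urls):
            hits[engine] += 1
    return {engine: hits[engine] / runs[engine] for engine in runs}

# Hypothetical fixture: amvia.co.uk stands in for a tracked property domain.
obs = [
    ("chatgpt", "q1", ["https://amvia.co.uk/guide"]),
    ("chatgpt", "q2", ["https://example.com/"]),
    ("perplexity", "q1", []),
]
print(citation_rate(obs, "amvia.co.uk"))  # → {'chatgpt': 0.5, 'perplexity': 0.0}
```

Matching on hostname rather than substring avoids counting unrelated domains that merely contain the property name.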

Methodology: v1.0
Sample: 123 queries × 3 properties × 4 engines

Panel v3.1 first run complete

First clean panel run under the v3.1 shared-panel architecture. 486 observations across 4 engines (ChatGPT, Perplexity, Google AI Mode, Claude baseline). 97.8% success rate. 2,640 URLs queued for crawling.

Key day-one findings

  • Google AI Mode redirect URLs (vertexaisearch.cloud.google.com/grounding-api-redirect/) are opaque: the real destination domain is not recoverable from the URL itself. Fixed at write time in the engine adapter, which now resolves redirects before storage. Observations before 2026-05-03 retain unresolved Google AI Mode citations.
  • Claude baseline (non-grounded) returned zero citations across all three tracked verticals (telecoms, fibre comparison, EASM). This is expected: Claude without web search does not retrieve URLs. It is included in the panel as a non-grounded control.
  • Cross-engine URL overlap across telecoms vertical queries: ChatGPT cited 141 unique URLs across 99 domains. Perplexity cited 887 unique URLs across 589 domains. Claude baseline cited 59 unique URLs across 52 domains. Google AI Mode data pending redirect resolution.
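The write-time fix itself requires following each redirect with an HTTP request, which is not shown here. A minimal sketch, assuming only the URL shape quoted above, of how unresolved Google AI Mode citations can be flagged:

```python
from urllib.parse import urlparse

OPAQUE_HOST = "vertexaisearch.cloud.google.com"
OPAQUE_PATH = "/grounding-api-redirect/"

def is_opaque_google_redirect(url: str) -> bool:
    """Flag Google AI Mode grounding redirects, whose destination domain
    cannot be read off the URL without following the redirect."""
    parts = urlparse(url)
    return parts.hostname == OPAQUE_HOST and parts.path.startswith(OPAQUE_PATH)

assert is_opaque_google_redirect(
    "https://vertexaisearch.cloud.google.com/grounding-api-redirect/AbC123")
assert not is_opaque_google_redirect("https://example.com/page")
```

A check like this is useful for excluding pre-2026-05-03 observations from domain-level aggregates until their redirects are backfilled.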
Methodology: v1.0
Observations: n=486
Sample: 123 queries × 4 engines, daily slim panel

Methodology changelog

v1.1 · Last updated 2026-05-04

5 pre-registered hypotheses added

Added a pre-registered hypothesis framework and 5 initial hypotheses with tamper-evident SHA-256 hashes. Panel composition updated from 5 engines to 4 (Claude grounded mode removed; Claude baseline retained as a non-grounded control). Panel architecture updated to the shared model (123 shared queries across 4 engines, 486 daily observations). Hashes: H1 (internal_link_count) df682cf1…5d84, H2 (external_link_count) 5252acfd…3676, H3 (is_forum) ea2c6b89…47a5, H4 (pathway_informational) 2c4c8f72…ecff8d, H5 (vertical_telecoms) 8376a846…fd4b.

Affects historical comparability: No effect on historical data. Pre-registration is forward-looking; no existing metrics change. The panel engine-count change reflects the architecture already deployed since 2026-05-03.
Migration: N/A.
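Tamper-evident pre-registration reduces to hashing the exact hypothesis wording and publishing the digest before analysis. A minimal sketch, with a hypothetical statement string (the registered wording and any canonicalisation rules are not reproduced in this changelog):

```python
import hashlib

def hypothesis_hash(statement: str) -> str:
    """SHA-256 hex digest of the exact hypothesis wording; publishing the
    digest before analysis makes later edits to the wording detectable."""
    return hashlib.sha256(statement.encode("utf-8")).hexdigest()

# Hypothetical wording, not the registered H1 text.
h1 = hypothesis_hash("H1: internal_link_count is positively associated with citation rate.")
assert len(h1) == 64          # 256-bit digest, hex-encoded
assert h1 == hypothesis_hash( # deterministic: same bytes, same digest
    "H1: internal_link_count is positively associated with citation rate.")
```

Because the digest covers the byte-exact statement, even a one-character edit to a hypothesis after registration produces a different hash.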
v1.0 · Last updated 2026-05-12

Initial release

First public methodology document.

Affects historical comparability: N/A (initial).
Migration: N/A.