
How to Build a Citation Regression Test Suite for AI Visibility


Citation drops in AI answers are silent by default. This guide shows how to build a regression test suite — query library, baselines, automated diffs, and alert routing — so you catch visibility losses before they affect pipeline.

  • Category: AI Visibility
  • Use this for: planning and implementation decisions
  • Reading flow: quick summary now, long-form details below


Software teams run regression tests to catch the moment something breaks. Most companies haven't brought that discipline to AI visibility yet, so citation drops go undetected for weeks, sometimes months, until a sales rep notices fewer inbound leads mentioning the brand by name.

This guide walks through building a citation regression test suite: a defined set of queries you run on a schedule, compare against baselines, and alert on when your brand’s presence in AI answers slips. The mental model is borrowed straight from software testing — baseline, run, diff, alert — applied to AI answer engines.

You can wire it up using Claude Code agent workflows, OpenClaw skills, and a visibility layer like BotSee to handle the LLM polling and result normalization.


Why this matters now

Most teams track web analytics and SEO rankings weekly. Practically none track whether ChatGPT, Claude, Gemini, or Perplexity mentioned them this week versus last week on the queries their buyers actually use.

That gap matters more every month. Buyers increasingly open product research inside AI assistants rather than search engines. If your brand drops out of those answers — because a competitor published something better, because an LLM updated its weights, or because a key source that cited you went dark — you lose consideration before anyone visits your site.

The deeper problem: drops are silent. No rank notification fires. No traffic alert triggers. The first signal is usually a softening in pipeline that takes weeks to trace back to a visibility problem.

Regression testing makes the drop audible.


The core concept: query sets with baselines

Here’s how citation regression testing works in practice:

  1. Define a query set — the specific prompts your buyers use
  2. Capture a baseline — what AI answers say about your brand today, across multiple runs
  3. Run the same queries on a schedule
  4. Compare each run to baseline
  5. Alert when citation presence drops below threshold

One difference from software testing worth flagging: AI outputs aren’t binary pass/fail. Answers vary by phrasing, model version, and session. So rather than exact-match assertions, you measure presence rate (how often your brand appears across N runs), position (early vs. late mention), and framing (positive, neutral, or negative context around the mention).
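
The presence-rate idea can be sketched in a few lines of Python. This is illustrative only: the `answer` field and the data shapes are assumptions, not the output format of any particular engine.

```python
def presence_rate(runs: list[dict], brand: str) -> float:
    """Fraction of runs whose answer text mentions the brand (case-insensitive)."""
    if not runs:
        return 0.0
    hits = sum(1 for r in runs if brand.lower() in r["answer"].lower())
    return hits / len(runs)

# Hypothetical normalized answers from N runs of the same query
runs = [
    {"answer": "Tools like BotSee track citations across AI engines..."},
    {"answer": "Several monitoring tools exist for this problem."},
    {"answer": "BotSee and similar tools poll answer engines on a schedule."},
]
print(presence_rate(runs, "BotSee"))  # 2 of 3 runs mention the brand
```

Position and framing can be measured the same way, as aggregates over N runs rather than pass/fail checks on a single answer.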


Step 1: Build your query library

Query quality is the foundation. A mediocre query set produces misleading data regardless of how well the rest of the system is built. Start with three tiers:

Tier 1 — Brand-direct queries

Queries that name your category and imply a product recommendation:

  • “What tools track AI citations?”
  • “Best software for monitoring brand mentions in ChatGPT”
  • “How do I know if my company shows up in AI answers?”

Tier 2 — Problem-first queries

How buyers search when they don’t know your brand yet:

  • “Why isn’t my website showing up in ChatGPT recommendations?”
  • “How do companies measure AI search visibility?”
  • “What’s a good way to track share of voice in LLM answers?”

Tier 3 — Competitive queries

Queries that tend to surface head-to-head comparisons:

  • “BotSee vs [Competitor] — what’s the difference?”
  • “Best alternatives to [Competitor] for AI visibility monitoring”
  • “Compare tools for AI answer engine tracking”

Twenty to forty queries across the three tiers is a solid starting point. Keep them in a structured format — YAML or JSON — so agents can iterate over them without any manual coordination.

# citation-query-library.yaml
queries:
  - id: q001
    tier: brand-direct
    text: "What tools help track brand citations in AI answers?"
    engines: [chatgpt, claude, perplexity]
  - id: q002
    tier: problem-first
    text: "Why isn't my company showing up in ChatGPT recommendations?"
    engines: [chatgpt, perplexity]
  - id: q003
    tier: competitive
    text: "Best tools for monitoring AI search share of voice"
    engines: [chatgpt, claude, gemini, perplexity]
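
Since the library can live in JSON just as well as YAML, a stdlib-only loader with light validation might look like the sketch below. The required-key set mirrors the fields shown above; everything else is an assumption.

```python
import json
from pathlib import Path

REQUIRED_KEYS = {"id", "tier", "text", "engines"}

def load_query_library(path: str) -> list[dict]:
    """Load the query library (JSON variant of the YAML shown above)."""
    queries = json.loads(Path(path).read_text())["queries"]
    for q in queries:
        missing = REQUIRED_KEYS - q.keys()
        if missing:
            raise ValueError(f"query {q.get('id', '?')} is missing {sorted(missing)}")
    return queries
```

Validating at load time means a malformed entry fails the whole run loudly instead of silently skewing one week's numbers.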

Step 2: Capture baselines

The baseline is a structured snapshot of what each AI engine says for each query, taken before your first regression run. Every subsequent run gets compared to it.

Store, at minimum:

  • Query ID
  • Engine
  • Whether your brand was mentioned (boolean)
  • Position of first mention (early / middle / late, or a 1–5 scale)
  • First excerpt that includes your brand
  • Run timestamp

{
  "queryId": "q001",
  "engine": "perplexity",
  "brandMentioned": true,
  "positionTier": "early",
  "excerpt": "Tools like BotSee track how often and where brands appear...",
  "runAt": "2026-03-07T09:15:00Z",
  "baselineVersion": "v1"
}

Run each query/engine pair 5–10 times at baseline — across different sessions and times of day — to account for natural variation. Use the median presence rate, not a single snapshot.
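
The median-over-sessions idea, sketched with made-up data (three baseline sessions for one query/engine pair; the booleans mark whether the brand appeared in each run):

```python
from statistics import median

# Illustrative baseline: each inner list is one session's mention results
sessions = [
    [True, True, False],   # morning session: mentioned in 2 of 3 runs
    [True, True, True],    # afternoon session
    [False, True, True],   # next-day session
]

rates = [sum(s) / len(s) for s in sessions]
baseline_rate = median(rates)  # robust to one unusually good or bad session
print(f"per-session rates: {rates}, baseline presence rate: {baseline_rate:.2f}")
```

The median is deliberately preferred over the mean here: one lucky session shouldn't set an inflated baseline that every future run then "regresses" against.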


Step 3: Automate with Claude Code and OpenClaw skills

Manual querying across 40 queries and 4 engines means 160+ LLM calls per cycle. That’s not a job for a spreadsheet.

A workable agent pattern:

OpenClaw skill: citation-checker

Build a skill that takes a query and engine as input, submits the query, and returns normalized output: brand mentioned yes/no, position tier, excerpt.

# From a Claude Code agent step
openclaw run citation-checker \
  --query "What tools track brand citations in AI answers?" \
  --engine perplexity \
  --brand "BotSee" \
  --output json

Claude Code agent: citation-regression-runner

The agent iterates over your query library, calls the skill per entry, and writes results to a versioned JSONL file.

# Simplified pseudocode
for query in load_query_library("citation-query-library.yaml"):
    for engine in query["engines"]:
        result = run_citation_checker(query["text"], engine, brand="BotSee")
        store_result(query["id"], engine, result, run_timestamp=now())

Schedule this weekly via a cron trigger. Storing results in JSONL lets you diff any two runs without a database.
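
Appending each result as one JSON object per line keeps the store greppable and diffable. A sketch of the write path, with field names following the baseline record above (the `result` dict shape is an assumption about what the skill returns):

```python
import json
from datetime import datetime, timezone

def store_result(path: str, query_id: str, engine: str, result: dict) -> None:
    """Append one normalized run result as a single JSONL line."""
    record = {
        "queryId": query_id,
        "engine": engine,
        "brandMentioned": result["brandMentioned"],
        "positionTier": result.get("positionTier"),  # None when not mentioned
        "runAt": datetime.now(timezone.utc).isoformat(),
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")
```

One file per run (e.g. `runs-2026-03-07.jsonl`) makes "diff any two runs" a matter of loading two files and joining on `queryId` + `engine`.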


Step 4: Diff runs against baseline

The regression check is a comparison job, not a reporting job. For each query/engine pair:

| Metric          | Baseline | Current  | Status        |
|-----------------|----------|----------|---------------|
| Brand mentioned | Yes      | No       | ⚠️ REGRESSION |
| Position tier   | Early    | Middle   | 🔔 DEGRADED   |
| Mention count   | 3/5 runs | 5/5 runs | ✅ IMPROVED   |

The logic:

  • Regression: mentioned in baseline, not mentioned now → alert immediately
  • Degraded: still mentioned, but position slipped → flag for review
  • Improved: mentioned more frequently or earlier → surface to content team
  • Stable: no meaningful delta → no action

Run the diff as an agent step that outputs a Markdown report. Human-readable output matters here — if nobody reads the report, nothing gets fixed.
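
The four-way status logic can be a small pure function. The status names match the list above; the record shape (`mentioned`, `position`, `rate`) is illustrative, not a fixed schema.

```python
def classify(baseline: dict, current: dict) -> str:
    """Compare one query/engine pair's current run to its baseline."""
    tiers = {"early": 0, "middle": 1, "late": 2}
    if baseline["mentioned"] and not current["mentioned"]:
        return "REGRESSION"   # alert immediately
    if current["mentioned"] and not baseline["mentioned"]:
        return "IMPROVED"     # surface to content team
    if baseline["mentioned"] and current["mentioned"]:
        if tiers[current["position"]] > tiers[baseline["position"]]:
            return "DEGRADED"  # still cited, but later in the answer
        if (tiers[current["position"]] < tiers[baseline["position"]]
                or current["rate"] > baseline["rate"]):
            return "IMPROVED"
    return "STABLE"            # no meaningful delta

print(classify({"mentioned": True, "position": "early", "rate": 0.6},
               {"mentioned": True, "position": "middle", "rate": 0.6}))  # DEGRADED
```

Keeping the classifier pure (no I/O) makes it trivial to unit-test against invented baseline/current pairs before trusting its alerts.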


Step 5: Connect a monitoring layer

For ongoing production use, manual diffs aren’t sufficient. You need trend data, alert thresholds, and cross-engine comparison that persists week over week.

That’s where BotSee fits into this workflow. Instead of building storage, normalization, and dashboards from scratch, BotSee runs queries on a schedule and surfaces presence rates, trend lines, and competitor comparisons through a dashboard and API. The regression test suite layers on top of that: you control the custom query sets and diff logic; BotSee handles the broader monitoring and alert infrastructure.

Other tools in the space include Profound, Goodie, and Authoritas — each takes a different approach to AI visibility, so it’s worth evaluating against your team’s size and reporting needs.

A practical division of responsibility:

  • BotSee — ongoing brand monitoring, competitive benchmarking, trend tracking
  • Your regression suite — targeted queries mapped to specific buyer journeys, with custom diff logic
  • Claude Code + OpenClaw — test execution, diff generation, report formatting

Step 6: Define thresholds and assign owners

A regression suite that doesn’t route alerts is a report nobody reads. Decide upfront:

Threshold levels:

  • P1 (immediate): Any Tier 1 query drops from mentioned to not mentioned
  • P2 (same-week review): Position degrades two tiers on a Tier 1 or Tier 2 query
  • P3 (weekly digest): Any Tier 3 change; Tier 2 position changes

Routing:

  • P1 and P2 → direct message to content lead and visibility owner
  • P3 → weekly digest in team channel

Your OpenClaw skill can handle the routing after the diff step — Slack, Telegram, a Mission Control card, or wherever your team actually pays attention.
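
The threshold rules above translate into a small amount of code. This is a sketch under assumptions: the router is a placeholder (swap in your real Slack/Telegram call), and regressions on Tier 2/3 queries fall to P3 because the rules as written don't classify them higher.

```python
def priority(query_tier: str, status: str, tier_slip: int = 0) -> str:
    """Map one diff result to P1/P2/P3 per the threshold rules above."""
    if status == "REGRESSION" and query_tier == "brand-direct":
        return "P1"  # Tier 1 dropped from mentioned to not mentioned
    if (status == "DEGRADED" and tier_slip >= 2
            and query_tier in ("brand-direct", "problem-first")):
        return "P2"  # position degraded two tiers on Tier 1/2
    return "P3"      # everything else lands in the weekly digest

def route(pri: str, message: str) -> str:
    """Placeholder router: returns a string instead of posting anywhere."""
    channel = "direct-message" if pri in ("P1", "P2") else "weekly-digest"
    return f"[{pri}] -> {channel}: {message}"

print(route(priority("brand-direct", "REGRESSION"),
            "q001 dropped on perplexity"))
# [P1] -> direct-message: q001 dropped on perplexity
```

Whatever channel you choose, the routing decision should live in one function so the thresholds can be tuned without touching the diff logic.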


Step 7: Treat regressions as content bugs

When a regression fires, investigate before publishing a fix. Symptom-patching without root cause analysis leads to fixes that don’t hold.

Common causes:

  • Content decay — the page that earned citations went stale; update it
  • Competitor lift — a competitor published a stronger answer to the same query; identify what’s missing from yours
  • Source loss — a site that linked or cited you went offline or changed its content
  • Query drift — buyers are phrasing questions differently now; update your library
  • Model update — LLM weight changes are harder to act on directly, but publishing more authoritative content typically recovers position within a few weeks

Write a one-line root cause hypothesis for each regression before publishing the fix. Over time that log becomes a useful record of what actually influences your AI citation presence.
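
A lightweight way to keep that log is one JSONL line per fired regression. The field names and cause labels here are illustrative, chosen to mirror the cause list above.

```python
import json
from datetime import date

def log_root_cause(path: str, query_id: str, cause: str, hypothesis: str) -> None:
    """Append a one-line root cause hypothesis for a fired regression."""
    entry = {
        "date": date.today().isoformat(),
        "queryId": query_id,
        "cause": cause,          # e.g. "content-decay", "competitor-lift"
        "hypothesis": hypothesis,
    }
    with open(path, "a") as f:
        f.write(json.dumps(entry) + "\n")
```

Grouping the log by `cause` after a quarter tells you which failure mode dominates, which in turn tells you where content effort pays off.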


Quarterly library maintenance

The query library needs to evolve alongside your market. Plan a quarterly review:

  1. Add queries from sales call transcripts, support tickets, and buyer research — how do people describe your category to a peer?
  2. Retire stale queries that no longer reflect active buyer language
  3. Refresh baselines after major content updates or suspected model changes
  4. Re-tier queries as brand awareness and category dynamics shift

A static query library gives you a static picture. Maintenance is what keeps the suite predictive.


End-to-end workflow summary

  1. Define 20–40 queries across three tiers (brand-direct, problem-first, competitive)
  2. Capture a baseline — 5–10 runs per query/engine pair, store as structured JSON
  3. Automate weekly runs via Claude Code agent + citation-checker OpenClaw skill
  4. Diff each run against baseline; produce a human-readable Markdown report
  5. Route alerts by threshold (P1/P2/P3) to the right people
  6. Investigate regressions with root cause analysis before fixing
  7. Use BotSee for broader trend monitoring and competitive context
  8. Review and update the query library quarterly

Once it’s set up, the weekly cycle — query run to diff report to alert routing — takes roughly 15 minutes. That’s a reasonable cost for early warning on citation drift.


Key takeaways

  • Citation drops are invisible by default. Regression testing gives you a signal before the pipeline impact shows up.
  • Query quality drives the whole system. Spend the most time here.
  • Good baselines require multiple runs across sessions, not a single snapshot.
  • Claude Code and OpenClaw handle the automation; BotSee handles the monitoring layer.
  • Treat regression failures like bugs: root cause first, fix second.
  • Quarterly library maintenance keeps the data accurate as buyer behavior and AI models evolve.

Twenty queries, a baseline snapshot, and a weekly scheduled run is enough to start. The infrastructure is less work than it looks on paper.
