Most AI-citation-tracking tools overpromise. The marketing says “track every mention”; the reality is a 55–70% detection rate from any single tool.
Tools we actually use
SE Ranking AI Visibility — paid, EUR 79+/month. Best Google AI Overviews coverage, decent ChatGPT, weak Perplexity. Bulk-query support up to 500/week.
Profound — paid, EUR 199+/month. Strongest ChatGPT and Perplexity tracking, includes Claude and Gemini. Visual share-of-voice dashboard. The best single tool if you only have budget for one.
Internal manual reproduction — every Monday, our strategist runs 5–10 high-value queries manually across all 5 platforms. Catches what tools miss, especially edge cases.
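The Monday manual run is easy to structure as a small log that gets diffed against what the tools reported. A minimal sketch, assuming hypothetical platform names, an illustrative query, and a tool export reduced to a set of (platform, query) hits — none of these names come from any real tool API:

```python
from dataclasses import dataclass
from datetime import date

# Illustrative platform identifiers, one per platform we check by hand.
PLATFORMS = ["google_ai_overview", "chatgpt", "perplexity", "claude", "gemini"]

@dataclass(frozen=True)
class ManualCheck:
    """One Monday reproduction: a query run by hand on one platform."""
    day: date
    platform: str          # one of PLATFORMS
    query: str
    we_were_cited: bool    # did the answer cite our domain?

def tool_misses(manual: list[ManualCheck],
                tool_hits: set[tuple[str, str]]) -> list[ManualCheck]:
    """Citations we confirmed by hand that the tracking tools missed —
    the edge cases the Monday run exists to catch."""
    return [c for c in manual
            if c.we_were_cited and (c.platform, c.query) not in tool_hits]

# Example week: two hand-confirmed citations, tool only saw one of them.
checks = [
    ManualCheck(date(2025, 6, 2), "perplexity", "best swiss payroll software", True),
    ManualCheck(date(2025, 6, 2), "chatgpt", "best swiss payroll software", True),
]
tool_hits = {("perplexity", "best swiss payroll software")}
missed = tool_misses(checks, tool_hits)
print([c.platform for c in missed])  # ['chatgpt']
```

Anything in `missed` is a gap worth raising with the tool vendor or covering with extra manual checks.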
Per-platform reality
Google AI Overview — most predictable. Same query gives same source about 90% of the time. Daily monitoring is enough.
ChatGPT — least predictable. The same query returns different sources across runs (model randomness). Confirming a citation takes three reproductions in a day, not one.
Perplexity — surprisingly predictable. Same query, same sources about 80% of the time. Once cited, usually persists for weeks.
Claude / Gemini — middle ground. Twice-weekly reproduction enough.
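The per-platform cadences above boil down to one rule: a citation only counts once it appears in enough independent runs. A minimal sketch of that confirmation rule, assuming a hypothetical `runs` structure where each entry is the list of domains one reproduction cited (the thresholds mirror the stability observations above and are illustrative):

```python
# Reproductions per day needed to confirm a citation, per platform.
# Thresholds are illustrative, derived from the stability rates above.
RUNS_TO_CONFIRM = {
    "google_ai_overview": 1,   # ~90% stable: one daily check is enough
    "chatgpt": 3,              # least stable: require three runs
    "perplexity": 1,           # ~80% stable, citations persist for weeks
    "claude": 2,               # middle ground
    "gemini": 2,
}

def confirm_citation(platform: str, runs: list[list[str]], domain: str) -> bool:
    """Confirm a citation only if the domain shows up in at least the
    required number of independent reproductions for that platform."""
    needed = RUNS_TO_CONFIRM[platform]
    hits = sum(domain in sources for sources in runs)
    return hits >= needed

# ChatGPT cited our domain in 2 of 3 runs: not yet confirmed.
runs = [["example.ch", "nzz.ch"], ["nzz.ch"], ["example.ch", "wikipedia.org"]]
print(confirm_citation("chatgpt", runs, "example.ch"))  # False (2 < 3)
```

The same three runs would confirm on Google AI Overviews, where a single appearance is already meaningful.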
What we don’t trust
Tools that claim “real-time tracking across all AI platforms” without naming a specific reproduction methodology. Tools that don’t show source confidence levels. Tools that bundle SEO and AEO into one dashboard without separating the metrics.
Why featured citations matter most
A single citation in Wikipedia, Handelszeitung, NZZ or a major industry report often triggers 10–20 follow-on AI citations across all platforms. AI models cross-reference between authoritative sources. So tracking featured citations isn’t just a vanity metric — it’s a leading indicator of AI-citation volume two to four weeks out.