How to Track Whether Your Brand Shows Up in AI Answers

Most B2B SaaS companies have no idea whether ChatGPT mentions their product when a buyer asks a relevant question. They optimise for Google rankings, track organic traffic, and measure conversion — but the AI search layer is invisible to them. That is a problem, because it is increasingly where early-stage buying decisions happen.

This is a practical guide to finding out where you stand and what to do about it.

Step 1: Define the questions your buyers ask AI tools

Start by listing the 10–20 questions your buyers are most likely to ask an AI assistant during the research and consideration phase. Not vanity brand queries — actual problem-driven questions:

  • "What is the best content agency for B2B SaaS?"
  • "How do I automate my lead routing workflow?"
  • "What does a monthly content retainer include?"
  • "Is AI-generated content safe to publish?"
  • "What automation tools work with HubSpot?"

Google Search Console is a useful starting point. Filter your query data for question-format searches (who, what, how, why, best, compare). These are the queries most likely to trigger AI Overview results, and they are a strong proxy for the questions buyers put to AI tools directly.
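If your query list is long, this filter is easy to script. A minimal sketch, assuming you have a Search Console performance export as a CSV with a "query" column (the sample rows and filename are illustrative, not real data):

```python
import csv
import io

# Leading words that signal a question-format search.
QUESTION_WORDS = {"who", "what", "how", "why", "best", "compare", "is", "does", "can"}

def question_queries(rows):
    """Yield queries whose first word signals a question-format search."""
    for row in rows:
        q = row.get("query", "").strip().lower()
        if q and q.split()[0] in QUESTION_WORDS:
            yield row["query"]

# Sample rows standing in for a real Search Console CSV export.
sample = io.StringIO(
    "query,clicks\n"
    "best content agency for b2b saas,12\n"
    "acme pricing,40\n"
    "how to automate lead routing,7\n"
)
matches = list(question_queries(csv.DictReader(sample)))
```

Swap the `StringIO` sample for `open("your_export.csv")` to run it against a real export. Branded navigational queries like "acme pricing" fall out automatically.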

Step 2: Run the queries manually across AI platforms

Open ChatGPT, Perplexity, and Google's AI Overviews. Run each of your target questions. For each response, note:

  • Is your brand mentioned by name?
  • Is your content cited (with a link)?
  • Is a competitor mentioned where you should be?
  • What sources are being cited?

Do this in a private/incognito window to reduce personalisation bias. Document the results in a simple spreadsheet: query, platform, brand mentioned (yes/no), citation link (if any), competitors cited.
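If you prefer structured records over a raw spreadsheet, the same log can be kept as typed rows and exported to CSV later. A minimal sketch; the field names and example entries are illustrative:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class QueryCheck:
    """One manual check: a single query run on a single platform."""
    query: str
    platform: str                # e.g. "chatgpt", "perplexity", "ai-overviews"
    brand_mentioned: bool
    citation_link: Optional[str] # None if your content was not linked
    competitors_cited: List[str] = field(default_factory=list)

# Two example rows (hypothetical results, not real measurements).
log = [
    QueryCheck("best content agency for b2b saas", "chatgpt",
               brand_mentioned=False, citation_link=None,
               competitors_cited=["Competitor A"]),
    QueryCheck("what automation tools work with hubspot", "perplexity",
               brand_mentioned=True,
               citation_link="https://example.com/hubspot-automations"),
]
```

Keeping the schema explicit from day one makes the monthly re-runs in Step 5 directly comparable.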

Limitation to know: Manual checks are personalised. The answer you see is influenced by your location, search history, and device. They are a starting point, not a definitive measurement. Scale and objectivity require automated tooling — but manual checks are free and give you immediate signal.

Step 3: Find what AI models already know about you

Ask directly: "What do you know about [Your Company Name]?" in ChatGPT and Perplexity. The response tells you:

  • Whether you exist in their training data or retrieved sources at all
  • What description they surface (accurate or not)
  • What sources they cite as the basis for their answer

If the description is wrong or outdated, that is a content problem — the AI is drawing on stale or inaccurate sources. The fix is publishing clear, authoritative, up-to-date content that corrects the record, not trying to edit the AI directly.

Step 4: Identify the content gaps

For every query where a competitor is cited and you are not, ask: does this topic exist anywhere on your site? If yes — is the content specific, factual, and structured clearly? If no — it needs to be created.

The gap list becomes your content brief backlog. Each gap is a specific article, FAQ answer, or page that, once published and indexed, gives AI models a source to cite for that query.
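The gap list can be derived mechanically from the tracking spreadsheet: any query where a competitor was cited and you were not is a gap. A small sketch, assuming each row is a dict with the columns described in Step 2 (the sample rows are illustrative):

```python
def content_gaps(rows):
    """Return queries where a competitor was cited but the brand was not mentioned."""
    gaps = {row["query"]
            for row in rows
            if not row["brand_mentioned"] and row["competitors_cited"]}
    return sorted(gaps)

# Hypothetical tracked rows, matching the Step 2 spreadsheet columns.
tracked = [
    {"query": "best content agency for b2b saas",
     "brand_mentioned": False, "competitors_cited": ["Competitor A"]},
    {"query": "what does a content retainer include",
     "brand_mentioned": True, "competitors_cited": []},
]
gap_list = content_gaps(tracked)
```

Each entry in `gap_list` maps directly to one content brief in the backlog.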

Step 5: Measure over time

AI visibility is not static. Models are updated, new content gets indexed, and competitors publish their own. Run your query set monthly and track movement. The metrics that matter are simple:

  • Brand mention rate: percentage of target queries where your brand is named
  • Citation rate: percentage where your content is directly linked
  • Competitor displacement: queries where you replaced a competitor citation
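The first two metrics are straight percentages over the month's query checks. A minimal sketch, again assuming rows shaped like the Step 2 spreadsheet (sample data is illustrative):

```python
def visibility_metrics(rows):
    """Compute brand mention rate and citation rate over one month's checks."""
    total = len(rows)
    mentioned = sum(1 for r in rows if r["brand_mentioned"])
    cited = sum(1 for r in rows if r.get("citation_link"))
    return {
        "brand_mention_rate": mentioned / total if total else 0.0,
        "citation_rate": cited / total if total else 0.0,
    }

# Hypothetical month of results: 4 queries, 2 mentions, 1 citation.
month = [
    {"query": "q1", "brand_mentioned": True,  "citation_link": "https://example.com/a"},
    {"query": "q2", "brand_mentioned": True,  "citation_link": None},
    {"query": "q3", "brand_mentioned": False, "citation_link": None},
    {"query": "q4", "brand_mentioned": False, "citation_link": None},
]
metrics = visibility_metrics(month)
```

Competitor displacement needs two months of data: compare this month's competitor citations against last month's for the same query set.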

The compounding effect: each article that earns a citation creates surface area for adjacent queries. The brands that start building this foundation now will be structurally harder to displace in 12 months.

Need the content that fills these gaps?

Horizon Vera delivers 4–12 SEO articles per month, structured for AI citation and reviewed by humans. Plus workflow automations to act on the traffic they bring.

Start with Growth — €3,497/mo →