Editorial illustration: a workshop pegboard hung with legitimate measurement instruments — calipers, a ruler, a protractor, a magnifying glass — the FACT tag on the calipers rendered in brand amber, signifying that most free AI-visibility tools fail the transparency test that real instruments pass.
Measurement · Part 2 of 3

Which of the Free AI Visibility Tools Measure Up?

April 18, 2026 · By Liz Micik · 12 min read

The Short Version

  • Every marketing leader today is asking one of five questions about how their company is showing up in AI. Every free tool in the market is trying, with varying degrees of honesty, to answer one of them.
  • We matched 28 free and freemium tools against four transparency questions. Six scored 4/4. Ten fell into a partial-credit middle. Twelve are confidence theater. The gap is the story.
  • The honest ones will tell you where you have a problem. None of them, alone and free, will reliably tell you what to do about it.
  • Download the full cheatsheet and keep it handy for the next time your VP of Marketing asks, “Is there a tool we can use to quickly see if we’re visible to ChatGPT?”

Where we are, and why this piece exists

Early in April, a GoodFirms study showed that while 89% of brands are already appearing in AI-generated search answers, only 14% of marketers currently track what those answers say. The same study found 43% of marketers naming AI search optimization as a core 2026 strategy. In other words, budget is moving into the channel roughly three times faster than measurement is. That gap surprises none of the SEOs I’ve asked.

As marketers at every level in every company, we’ve come to depend on the over-abundance of user data we can glean online, and this new AI platform “black hole” is frustrating us to no end.

And we all know that where there is frustration, there is money to be made.

The number and variety of new AI-based analytics tools launched in the last 18 months is dwarfed only by the “free AI visibility tools” launched in the last two months.

The signal-to-noise ratio around these tools has gotten so poor that this series was created to help make sense of it all. In Part 1 we covered some of the highest-signal research being done and shared by the leaders of the SEO community.

This piece does the inventory work. We took a look at 28 free and freemium tools from three different angles:

  1. Which of the top five questions marketers have about AI is the tool trying to answer?
  2. Is the tool transparent about how it answers that question?
  3. How well does it do its stated job — is it giving you facts, vibes, or wrong answers?

Before we look at how the tools stack up against these three angles, one note on method.

Our methodology

We studied 28 free and freemium tools and came out the other side with a way of deciding which one matches the question you are trying to answer.

There are five question buckets we’re all asking about our sites and how they’re seen by AI:

  1. Are we showing up in AI results at all?
  2. When buyers ask AI about my category, do we come up?
  3. How do we stack up against competitors?
  4. When AI describes my brand, does it get the facts right?
  5. Is my site technically set up for AI to find and read me?

We evaluated each tool against whatever its vendor makes publicly visible: homepage claims, sales-page methodology notes, pricing pages, free-tier output, help docs. We did not get private demos or sales briefings.

Is the tool transparent enough to be trusted?

  • Source transparency — yes, the data source is named and verifiable.
  • Methodology transparency — yes, the scoring logic is visible without a sales call.
  • Reproducibility — yes, the results are consistent, or drift is explained.
  • Actionability — yes, the output provides specific steps for improvement.
The four-question transparency test every tool was graded against.

For every tool in our sample we asked these four transparency questions:

  1. Source transparency. Where does the data come from? Is the data source named, and can you verify it exists independently of the tool’s dashboard?
  2. Methodology transparency. How is the score computed? Is the formula, weighting, or scoring logic visible somewhere a customer can read without booking a call?
  3. Reproducibility. Run the same check twice, get the same answer? If the answer drifts, is the drift explained (refresh cadence, LLM non-determinism, query rotation)?
  4. Actionability. Does the output point you at a specific thing you can fix or change? Or does it stop at “your score is 42”?

Each tool got a checkmark or a blank for each question. The scoring is binary on purpose: either the information is visible to the customer or it is not.
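The binary scoring is simple enough to sketch as a checklist. The tool answers below are hypothetical, invented for illustration; they are not results from our sample.

```python
# Hypothetical sketch of the four-question binary transparency score.
# The example answers are made up for illustration, not real results.

QUESTIONS = ["source", "methodology", "reproducibility", "actionability"]

def transparency_score(answers: dict) -> int:
    """Count how many of the four questions get a visible 'yes'.

    Binary on purpose: the information is either visible to the
    customer or it is not -- there is no partial credit per question.
    """
    return sum(1 for q in QUESTIONS if answers.get(q, False))

example_tool = {
    "source": True,            # data source named and verifiable
    "methodology": True,       # scoring logic public, no sales call
    "reproducibility": False,  # results drift with no explanation
    "actionability": True,     # output names a specific fix
}

print(transparency_score(example_tool))  # 3 -> "transparent with one gap"
```

The payoff of keeping the score binary is that anyone can reproduce it from public pages alone, with no judgment calls about how much credit a half-disclosed methodology deserves.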

If you remember only one thing from this piece, remember these four questions. Apply them to any AI visibility tool you evaluate for the rest of this year, and 90 percent of the evaluation work is already done.

Angle 1 — The five questions you are actually asking

When a VP or CMO walks up to your desk to talk about AI visibility, she is asking one of five questions. The questions sound very similar, but the answers she’s trying to get at are usually very different. Just like your traditional SEO tools, if you can get just a little more specific about the question you’re asking, you can be a little more confident about the answer your tool gives. Every tool in our sample is really trying to answer one of those five.

The five AI visibility questions and the tool buckets that answer them:

  • Q1: Are we showing up in AI results at all? (the biggest bucket of tools)
  • Q2: When buyers ask AI about my category, do we come up? (synthetic-prompt trackers)
  • Q3: How do we stack up against competitors? (enterprise-tier tools)
  • Q4: When AI describes my brand, does it get the facts right? (near-empty bucket)
  • Q5: Is my site technically set up for AI to find and read me? (schema and readiness checkers)
The five AI visibility questions every VP is actually asking.

Question 1: Are we showing up in AI results at all?

This is the first question, and the one most VPs actually want a two-second answer to.

This is the biggest bucket in our sample. Tools that sit here (shown alphabetically) all claim to hand you a single 0–100 number for “your AI visibility.”

Question 2: When buyers ask AI about my category, do we come up?

This is the question the VP asks on the walk back to her office, after the quick-score check came back low.

This is the synthetic-prompt family. These tools run a structured set of prompts against the major AI platforms and report whether your brand comes up and how often.
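The core loop of a synthetic-prompt tracker is easy to sketch. Everything in this example is hypothetical: the `ask_ai` stub stands in for a real AI-platform API call, and the prompts and brand names are invented.

```python
# Hypothetical sketch of a synthetic-prompt mention tracker.
# ask_ai() is a stand-in for a real AI platform API; the canned
# answers below simulate what such a call might return.

def ask_ai(prompt: str) -> str:
    """Stub: a real tool would query ChatGPT, Gemini, etc. here."""
    canned = {
        "best crm for small teams": "Popular options include Acme CRM and BizSuite.",
        "top crm tools 2026": "Analysts often mention BizSuite and CloudDesk.",
    }
    return canned.get(prompt, "")

def mention_rate(brand: str, prompts: list[str]) -> float:
    """Fraction of prompts whose answer mentions the brand."""
    hits = sum(1 for p in prompts if brand.lower() in ask_ai(p).lower())
    return hits / len(prompts)

prompts = ["best crm for small teams", "top crm tools 2026"]
print(mention_rate("BizSuite", prompts))  # 1.0
print(mention_rate("Acme CRM", prompts))  # 0.5
```

The fragility is visible right in the sketch: the rate depends entirely on which prompts the vendor fabricated, and a real LLM would not return the same canned answer twice — which is why source and reproducibility disclosure matter so much for this family.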

Question 3: How do we stack up against competitors?

This is the question the CMO’s boss asks in the QBR.

This bucket is almost entirely the enterprise platforms we all already know and use. All of them are custom-priced and sales-gated. None are truly free tools — they are sales demos wearing a free-tool costume.

Question 4: When AI describes my brand, does it get the facts right?

This is the quieter fourth question, and the one that matters most when the answer is “no.”

This near-empty bucket is itself a finding. Most free tools will happily tell you whether you appear in AI answers. Almost none will tell you whether the AI is right about you.

Question 5: Is my site technically set up for AI to find and read me?

This is the question that should be asked first and almost never is.

Every tool in this bucket checks observable signals in your site’s code — schema validity, heading hierarchy, robots.txt, canonical URLs — against published rules.
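Because these checks read code against published rules, they are deterministic and anyone can reproduce them. Here is a minimal sketch; the HTML and the three rules are heavily simplified, and real validators check far more.

```python
# Minimal sketch of deterministic AI-readiness checks on a page's HTML.
# Simplified for illustration -- real validators cover many more rules.
import re

def readiness_checks(html: str) -> dict:
    return {
        # JSON-LD structured data block present?
        "schema": '<script type="application/ld+json">' in html,
        # Exactly one H1 at the top of the heading hierarchy?
        "single_h1": len(re.findall(r"<h1[\s>]", html)) == 1,
        # Canonical URL declared?
        "canonical": 'rel="canonical"' in html,
    }

page = """
<html><head>
<link rel="canonical" href="https://example.com/">
<script type="application/ld+json">{"@type": "Organization"}</script>
</head><body><h1>Example</h1><h2>Section</h2></body></html>
"""
print(readiness_checks(page))
```

Run the same input through twice and you get the same answer every time — inputs are code, not language models, which is what puts this bucket on the Fact side of the ledger.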

Angle 2 — How transparently each tool gets to its answer

Every tool in the sample takes one of four approaches to answering its chosen question. There is no inherently right or wrong approach; they are just different, and each has tradeoffs. The failure mode is not which base a tool uses; the failure mode is opacity about the base.

Synthetic base. Tools that run a fabricated set of prompts against one or more AI platforms and report what comes back. Fabricated prompts in, brand mentions out.

Real base. Tools that measure observable behavior — actual AI-platform user prompts, server logs, citation tracking, real referral data.

Code base. Tools that read your site’s own code; its schema markup, structured data, HTML signals, robots.txt, etc. and validate it against published rules. No AI prompts involved.

Unknown base. Tools that present a confident score without disclosing what their data source is or how the score is built. Their methodology is not visible at the public-page level. From the customer’s chair, the base may be synthetic, may be real, or may be a random number generator; you cannot tell.

Transparency scores across the 28 tools: 6 in the Transparent Six (4/4), 2 transparent with one gap (3/4), 8 with partial transparency (2/4), and 12 in confidence theater (0–1/4).
How the 28 tools scored on the four-question transparency test.

Here is the full inventory, ordered by their final transparency score (top to bottom), alphabetical within each tier, with the base each tool uses flagged:

4/4 — the transparent six

3/4 — transparent with one calibrated gap

2/4 — partial transparency

0–1/4 — confidence theater

The bottom line on transparency

At first glance you may ask: why does scoring well or badly on transparency matter? It’s a free tool. But just like any analytics data, if you can’t trust what comes out, why bother asking in the first place?

The eight tools (29%) that scored well on transparency measure something real and tell you what it is. Most of them are smaller than the enterprise platforms making the loudest noise. That is not a coincidence. Because of their sales-gated pricing structures, the enterprise tools were at a clear disadvantage from the start — but then, the “scrappy startup” story has been around as long as the Internet.

Unfortunately, 21 of the 28 tools (75%) present a confident score with no visible methodology.

They disclosed no source for the data, nor any explanation of how the score is computed. You are left with no way to verify the finding.

You paste a URL, you get a number, and the number arrives inside a confident-looking dashboard with brand colors and a recommended next step, which is usually to book a demo.

Angle 3 — Fact, vibe, or wrong: the verdict

If you read the Find, Understand, and Trust framework that underlies my work with clients, you will recognize our measurement categories: Fact, Vibe, and Wrong. They apply here too.

Verdicts across the 28 AI visibility tools: Fact, Vibe, or Wrong, with each Vibe tool tagged by reason (sample limit, drift, or opaque scoring).
Fact, Vibe, or Wrong — and the three reasons a Vibe tool is a Vibe tool.

A Fact tool measures a real, verifiable thing. Rerun the same test and the result is the same. Cross-check with an independent method and the answer lines up. Fact tools are deterministic — their inputs are code, not language models.

A Vibe tool asks an LLM a question, or wraps its output in an opaque score, or samples a tiny slice and calls it a population. A Vibe tool can be honest about why its answer is imprecise, and the honest ones are useful — the direction of the answer is often informative, the trend over time is often informative. The specific number is noise dressed up as signal.

A tool can be a Vibe tool for three different reasons: a sample limit, drift, or opaque scoring. We’ve tagged each one.

A Wrong tool presents a confident score with no visible methodology, or measures a thing that does not actually map to the claim on the box. The “AI search visibility score” that will not tell you which queries it ran, which platforms it tested, or how the score was computed — that is Wrong by definition. Not because the tool is malicious, but because the customer has no way to defend the number to anyone, which means the number is marketing.

The tally: Fact 4, Vibe 12, Wrong 10. The distribution is what you should expect from a category this young — the signal lives at the smaller, scrappier end, and the loudest vendors are the least inspectable.

The hidden cost of “free”

Free tools are not free. The cost is paid in one of three forms, and which form you’re paying in matters.

Email capture for nurture sequences. Roughly 45% of the free tools we tested follow this pattern. You enter a URL and an email to see the result. The score arrives, and so does a nurture sequence of blog posts, webinar invitations, and eventually a sales email. Unlike the 0/4 scoring lead-gen tools, this group usually offers reasonable value in exchange for the email. HubSpot’s AEO Grader, Insites, GoVISIBLE, Wordlift AI Audit, and Semrush’s free AI Visibility checker all fall into this bucket.

Sales-call required to see the full result. Roughly 31% of tools. The free tier shows you enough to confirm there is a problem and not enough to act on. Profound, BrightEdge, Conductor, seoClarity, and the short-trial tools (Peec AI’s 7-day trial, LLM Pulse’s 14-day trial) all use some version of this pattern. The cost here is time — yours and your team’s — spent on a qualifying conversation before you can see the data.

Eventual paid tier required. Roughly 24% of tools. Substantive free tier, clear paid tier, no qualifying call in between. Gumshoe (3 free reports), LucidRank, SE Ranking, Wordlift, LLM Pulse at its subscription tier, and Peec AI all fit here. This is the honest model. The free tier delivers real value and teaches you whether you want the paid tier. If the product is real, the upsell is consensual.

Match the cost model to the transparency score and a pattern emerges. The 4/4 tools are almost all in the “truly free” or “honest upsell” categories. The 0/4 tools are almost all in the “email capture” or “sales call required” categories. Transparency and cost discipline travel together.

Part 3 of this series picks up here, moving on to the paid tool stacks that actually help with the fixing.

Time to adjust your expectations

Here is the part most free-tool reviews do not say out loud.

All of these tools, even the four that scored Fact, can only help you identify whether you have a problem in the area they measure. None of them, alone and for free, will reliably tell you what to do about it.

If you use these tools as diagnostic instruments — instruments that read a signal, return a verdict, and point you at an area to investigate — they will serve you well.

If you use them as prescription instruments, you will be disappointed, and worse, you will probably paper over the disappointment with a score you share with leadership as if it were an answer.

It is worth stating the warning directly: a 73/100 AI visibility score you put in a board deck is not measurement. It is decoration. If someone on your team asks “what is that number actually measuring?” and you cannot answer past “the tool said so,” the number is costing you credibility, not earning it.

Take the help free tools can honestly give. Use them to identify where the problem lives. Then resolve to do the work of fixing it, or hire people whose job is to fix it. Do not substitute a score for a strategy.

What comes next

Part 3 is where the tool stacks that actually help with the drilling and fixing live. We will cover:

Until then, keep the cheatsheet on your desktop. It is this piece as a single-page scan, grouped by the five questions. Share it with anyone you are comparing notes with. Community helping community is how we keep the signal ahead of the noise in this landscape.

Part 3 (coming soon): How to combine free and paid tools into a working AI visibility stack. Includes two buyer scenarios drawn from the research and tool recommendations by role.

Want to know what AI is actually saying about you?

The Signal Check shows you what AI platforms actually say about your business across all three layers. The full Audit diagnoses every gap.

Agent Readiness Audit Free Signal Check