When AI validates AI, nobody learns

Written by Omer Frank
Published on February 17, 2026

You've got an idea for an AI product. So you do the smart thing: you ask ChatGPT to poke holes in it. You get back a thoughtful analysis with pros, cons, and a confidence score. You feel validated.

Here's the thing. You just asked one AI to evaluate another AI product. And everything it told you came from the same place: pattern-matched text on a screen. None of it came from a real person with a real problem in a real context.

That's not validation. That's a very convincing echo.

Where the good stuff actually lives

There's a simple way to think about this. Everything you know (and don't know) about your users falls into four buckets.

The stuff you know you know. Facts. Specs. Standards. AI is great here. Use it. Don't waste your time on things you can look up.

The stuff you know you don't know. These are your open questions. Will people pay? What triggers them to switch tools? AI can help you sharpen these questions, but the answers have to come from actual humans. "Users would pay $29/month" sounds great until a real person tells you their company will never approve the purchase.

The stuff you don't know you know. This is where it gets interesting. These are your hidden assumptions. Things like "users will trust AI output" or "speed matters more than accuracy." They feel like facts. They're not. They're guesses you forgot to question. And AI won't catch them, because when you bake an assumption into your prompt, AI just builds on top of it.

The stuff you don't know you don't know. This is where the gold is. The user who says "yes, I'd use that" and then never does. The janky spreadsheet that reveals what the real workflow looks like. The three-second pause before someone answers your question. You can't prompt your way to these insights. They only show up when you're sitting across from a real person, paying attention.

The pattern is simple: the messier the quadrant, the more valuable the insight. AI lives at the clean end. Breakthroughs live at the messy end.

"But I told it to play devil's advocate"

I hear this a lot. And I get it. You know AI tends to agree with you, so you tell it to push back.

The problem is that it's still working inside your frame. If your entire premise is off, if you're solving a problem that doesn't actually exist, AI has no idea. It'll happily debate the details of a solution nobody needs.

It also argues both sides with equal confidence. "Users might resist change" and "users are hungry for new tools" are both perfectly plausible to an AI. It doesn't know which one is true for your people. It's performing a debate, not doing discovery.

And the worst part: it gives you the feeling of rigor without any of the actual risk. You walk away thinking you stress-tested your idea. What you actually did was have a really articulate conversation with yourself.

Compare that to a real user who says "I would never use this." Or a prospect who goes quiet after your demo. Or someone who built an ugly workaround instead of adopting your tool. That feedback stings. And it carries real information, because it comes from someone with real needs and real alternatives.

So what should you actually do?

Don't stop using AI. Just stop using it for the wrong things.

Let AI gather facts and prepare your research. Then go talk to real people for everything else. Write down what you "just know" about your users and challenge every single item. And when you can, sit with real users and watch them work. Follow the contradictions. Follow the workarounds. The insight is always in what surprises you.

AI can help you get ready for discovery. It can't do the discovering for you.
