You're documenting insights, not making decisions

Why teams with hundreds of user interviews still ship the wrong features—and what to do instead.

Tucker Schreiber·February 28, 2026·4 min read

The research theater problem

I watched a PM present their quarterly roadmap last month. Thirty slides. Beautiful Figma mockups. A section titled "User Research" with a dozen quotes in carefully chosen fonts.

I asked: "What's the decision you made based on these interviews?"

Silence. Then: "Well, we validated that users want better collaboration features."

That's not a decision. That's documentation.

Most product teams have turned user research into an artifact collection game. They interview users, tag feedback, fill out templates, and file everything in Notion. They have the raw material. What they don't have is the connective tissue that turns observations into conviction.

The gap isn't doing research—it's deciding what it means

Here's what actually happens after most user interviews:

Week 1: You talk to 8 users. Everyone nods along. Some interesting quotes emerge. You put them in a doc titled "User Research Findings Q3 2024."

Week 2: Leadership asks what you learned. You say "users want X" (because 3 of the 8 mentioned it). But you're not really sure if X is the problem or just a symptom.

Week 3: Engineering asks for specs. You write a feature brief. Somewhere in the "Background" section, you paste a few quotes from Week 1.

Week 8: You ship the feature. Usage is disappointing. Someone says "we should probably talk to more users."

The research happened. Decisions happened too. But they never actually connected.

What's missing isn't more interviews. It's synthesis under pressure. Research only matters when you force yourself to argue about what it means—when two smart people look at the same interview transcript and come to different conclusions about what to build.

The three questions that expose whether research is real

I've started asking teams three questions when they show me user research:

1. What did you expect to hear that you didn't?

If every interview "validated" your hypothesis, you didn't do research. You did confirmation theater. Real research surprises you. It should change your mental model at least 20% of the time.

One team I worked with was convinced users wanted automated workflows. After 15 interviews, they realized users didn't trust automation—they wanted faster manual controls. Complete opposite direction. That's research working.

2. What are you still confused about?

Good research creates clarity and new questions. If you walk away from 10 interviews with zero confusion, you asked surface-level questions. The best research sessions end with "huh, I need to understand why they said that."

3. What are you betting on that contradicts what users said?

This is the hardest one. Sometimes users tell you to build X, and you decide to build Y anyway—because you've synthesized patterns they can't articulate. That's not ignoring research. That's using it.

Steve Jobs didn't ignore user research. He synthesized it differently than the users themselves would have. But he had to do the synthesis. It doesn't happen by osmosis.

What synthesis actually looks like

Real synthesis is arguing with your cofounder at 11pm about whether "easier collaboration" means real-time editing or async comments. It's sketching three different interpretations of the same user quote on a whiteboard. It's admitting you're not sure which of two problems is more urgent.

The teams that get this right do a few things differently:

  • They replay actual interview clips in roadmap meetings (not just quotes)
  • They keep a "conflicting signals" doc where they track tensions in the feedback
  • They have at least one person whose job is to argue the opposite interpretation
  • They revisit old research when new data comes in, updating their mental models

At Mimir, we see this gap constantly. Teams upload hundreds of feedback items, interview transcripts, support tickets. They have the data. The AI can surface patterns. But the actual work—the arguing, the tension, the "wait, does this mean we're solving the wrong problem?"—that still requires humans willing to sit in discomfort.

The false choice between research and intuition

The worst version of this gap is when teams swing to extremes. Either they become research fundamentalists ("we can't decide anything without 20 more interviews") or they abandon research entirely ("we're just going to build what we think is right").

Both are cop-outs.

Good product decisions are research plus intuition plus arguing until you've pressure-tested both. The research gives you raw material. Your intuition gives you pattern recognition. The arguing gives you conviction.

You need all three. And the arguing is the part nobody wants to do because it's uncomfortable and slow and means admitting you might be wrong.

But that discomfort is the actual work. Everything else is just collecting receipts for decisions you've already made.

Ready to make evidence-based product decisions?

Paste customer feedback into Mimir and get ranked recommendations in 60 seconds.

Try Mimir free