Customer discovery interviews are the cheapest tool a founder has and the most commonly misused. The mechanics look easy — find some people, ask some questions, take some notes — so most founders never stop to learn how to do them well. They come back from a week of interviews convinced their idea is great, build for six months, and then watch the launch land with no buyers.
The point of an interview is not to confirm an idea. It is to learn things you did not already know about how a real person spends their time, money, and attention. If you treat it that way, the rest of the playbook follows.
Decide what you are trying to learn
Before you contact anyone, write down — in one sentence — the question your interviews exist to answer. Not "do people like my idea." Something specific:
- "How do solo accountants currently handle client onboarding, and what do they hate about it?"
- "What do operations managers at independent restaurants actually do during a closing shift?"
- "How do parents of children with severe food allergies decide which restaurants to trust?"
Each of those points to a real population, a real activity, and a real source of pain. They also imply different recruiting strategies and different scripts. The first ten minutes you spend writing a sharper question save you days of wasted conversation.
If you have not done a validation pass yet, this is the first round. Your goal is to understand the problem space, not to pitch a solution.
Recruit people who would actually use the thing
The biggest source of bad data in early-stage interviews is the wrong sample. Friends and acquaintances will not say no to a 20-minute call. They will also lie to you politely.
Better sources, in rough order of signal quality:
- Strangers in the target population recruited from the venues those people already gather in — niche subreddits, professional Slack workspaces, LinkedIn groups, conference attendee lists, alumni networks. Cold outreach to a real practitioner is worth more than a warm intro to someone adjacent.
- People who already pay for an imperfect alternative. Anyone who has opened their wallet to solve the problem you are studying has done some of the thinking for you. They will describe the workaround, the friction, and what they wish someone would build. They are also the easiest converts later.
- Users of competitor products who churned. Ex-customers explain the gap between what was promised and what they got. That gap is your opening.
Avoid friends, family, classmates, and people in adjacent industries. Their feedback is contaminated by goodwill. If you have to talk to them — sometimes their network is your only door in — treat their input as anecdotes, not data, and discount it accordingly.
Write a script you do not actually follow
Scripts exist to keep you from forgetting your own goals when adrenaline kicks in mid-call. They are a backstop, not a track. A good early-stage script is short — five to seven open questions — and stays focused on past behavior.
Behavior questions ("what did you do the last time…") are far more reliable than opinion questions ("would you use a tool that…"). People are surprisingly good at remembering what they did and bad at predicting what they would do. Anchor your script in their actual recent past:
- "Walk me through the last time you had to do X. Where did you start?"
- "What was the most annoying part of that?"
- "What did you try to fix it? Did anything stick?"
- "How much time did the whole thing take you?"
- "If a magic wand could change one thing about that process, what would it be?"
The right closer is almost never "would you use this?" It is "who else should I talk to?" That question converts one good interview into three, and it filters for genuine engagement: people who actually care will tell you who else cares.
Listen for what you did not expect
The whole reason to do interviews instead of staring at a spreadsheet is to be surprised. If every interview feels like it confirms what you already thought, one of two things is happening: you are leading the witness, or you are interviewing the wrong people.
Useful surprises usually look like this:
- The problem you came to investigate is real, but a different problem in the same workflow is bigger.
- The person responsible for solving the problem is not who you assumed it was.
- The current workaround is shockingly elaborate — spreadsheets, group chats, sticky notes, hand-edited PDFs — which means the willingness to invest is already there.
- Nobody is doing anything about the problem at all, which usually means either (a) the pain is genuinely too small, or (b) you have found a real gap. Distinguishing between those takes follow-up.
Read signals, not encouragement
Polite interest is not signal. The interviewee saying "that sounds like a great idea" is the conversational equivalent of "let's grab coffee sometime" — easier to say than no, and meaningless.
Strong signals tend to look like:
- Unprompted mentions of money. "I'd pay for that," with a number attached, said before you ask.
- Unprompted introductions. They send your contact details to a colleague during the call.
- Specific past behavior. "Last quarter I spent two weekends building a spreadsheet to do exactly this."
- Visible frustration. They get faster, louder, or more detailed when describing the pain.
Weak signals — pleasant nods, "interesting," "I could see that being useful," "sure I'd try a free version" — are not data. Discard them.
Take notes a future-you can actually use
Write down quotes verbatim wherever you can. Your memory is going to compress everything to "they liked it" within 48 hours, and a verbatim quote is the only way to push back on that compression later. After every call, spend ten minutes capturing three things in writing:
- The exact words they used to describe the problem (in their language, not yours).
- What they currently do about it, including tools, people, and time.
- Any moment that surprised you — even if you cannot articulate why yet.
After ten or fifteen interviews, those notes start clustering. Themes that show up in different interviewees' own words, without prompting, are signal. Themes only you find interesting are not.
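If you keep those write-ups as plain-text files and tag each one by hand with the themes you heard, even a tiny script can show you which themes recur across different interviewees. Here is a minimal sketch, assuming one note file per call with a hand-written "themes:" line; the directory name, file layout, and tag format are illustrative, not part of any particular tool:

```python
# Tally which themes recur across interview write-ups.
# Assumes a directory of plain-text notes, one file per interviewee,
# each containing a line like "themes: pricing, onboarding, manual-reentry"
# that you add yourself when you write up the call.
from collections import Counter
from pathlib import Path

NOTES_DIR = Path("interview_notes")  # hypothetical directory of .txt write-ups

theme_counts = Counter()
for note_file in sorted(NOTES_DIR.glob("*.txt")):
    for line in note_file.read_text().splitlines():
        if line.lower().startswith("themes:"):
            tags = [t.strip() for t in line.split(":", 1)[1].split(",")]
            # Count each theme at most once per interviewee, so one
            # talkative person cannot manufacture a "pattern" alone.
            theme_counts.update({t for t in tags if t})
            break  # one themes line per file

for theme, n in theme_counts.most_common():
    print(f"{n:>3}  {theme}")
```

Counting each theme once per interviewee is the point: a theme that five different people raised in their own words is signal, while a theme one person mentioned five times is an anecdote.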
Common mistakes
- Pitching halfway through. Once you describe your solution, the rest of the call is theatre. Save the pitch for a later call, after you have learned what they actually need.
- Asking about the future instead of the past. "Would you use…" produces fiction. "Have you ever…" produces fact.
- Rewarding politeness. If you only finish calls with people who said nice things, you are filtering your own data.
- Stopping at five interviews. Patterns do not stabilize until somewhere between fifteen and thirty conversations. The first five will mislead you.
- Treating numbers as proof. "Eight out of ten said they would use it" is meaningless if you led the question. Behavior, not survey output, is the unit of evidence.
When to stop
Stop when new interviews stop changing your mental model — when, three minutes in, you can already predict what the interviewee is about to describe. That is saturation. It usually arrives sooner than founders expect, and signals that you have learned what this round can teach you. The next step is to design a small, falsifiable test of the most promising thread, not to run another twenty conversations.
Once you have that thread, the rest of the validation framework kicks in: a landing page written in your interviewees' own words, a small concrete test of willingness to pay, and a careful look at whether the early numbers support continuing.
Customer discovery is not a phase you finish. It is a habit you keep. The companies that stay close to their users — even at thousand-customer scale — keep running these conversations indefinitely. Done well, they remain the cheapest way to make better decisions.