Product market fit survey: how to know if you've built something people actually need

Article written by Shmiruthaa Narayanan

Growth Marketer

14 min read

25 March 2026

Superhuman delayed their public launch for years. Not because the product wasn't ready — the email client worked fine. They delayed because their product market fit score was 22%. Only 22% of users said they'd be "very disappointed" if the product disappeared.

The benchmark is 40%.

So Rahul Vohra and his team did something most founders won't: they listened to the data instead of the buzz. They segmented users, identified what the "very disappointed" group loved, figured out what held the "somewhat disappointed" group back, and rebuilt their roadmap around those two insights. Three quarters later, their PMF score hit 58%.

That's what a product market fit survey does. It's not a satisfaction check. It's a survival check. It tells you whether you've built something people genuinely need — or something they'd politely forget about by next Tuesday.

This guide covers the exact survey Sean Ellis created, how to score it, which follow-up questions turn a number into a roadmap, who to survey (and who to exclude), and how to use the results to actually improve your score. Whether you're pre-launch, post-pivot, or scaling into a new market — the framework is the same.

What is a product market fit survey?

A product market fit survey measures how dependent your users are on your product. Not how satisfied they are — how dependent. There's a meaningful difference.

The survey was created by Sean Ellis, the growth strategist behind Dropbox, LogMeIn, and Eventbrite. After benchmarking nearly 100 startups, he found one question that predicted which products would scale and which would stall:

"How would you feel if you could no longer use [product]?"

Respondents pick one answer:

  • Very disappointed
  • Somewhat disappointed
  • Not disappointed (it really isn't that useful)
  • N/A — I no longer use it

That's it. One question. The percentage of "very disappointed" responses is your PMF score.

Why disappointment instead of satisfaction? Because asking "do you like our product?" invites polite, positive answers. Asking "how would you feel if it vanished?" reveals how necessary it is. People will say they "like" dozens of products they'd never miss. The disappointment framing separates must-haves from nice-to-haves.

How to write survey questions that get honest answers → 

The 40% rule — how to score your PMF survey

Your product market fit score is calculated as:

PMF Score = (Number of "Very Disappointed" responses ÷ Total valid responses) × 100

Exclude "N/A — I no longer use it" from the denominator. Those respondents aren't active users, so their answers don't reflect current fit.

What the score means

40% or higher — You likely have product-market fit. A meaningful share of your users consider the product a must-have. This is the green light to invest in scaling growth. Ellis found that products clearing 40% almost always achieved sustainable traction. Products below 40% almost always struggled.

25–39% — Getting closer, but not there yet. Users see value, but your product isn't essential enough to enough people. This is where the follow-up questions (covered below) become critical — they tell you what to fix.

Below 25% — Product-market fit hasn't been achieved. The product either isn't solving a real problem, isn't solving it well enough, or isn't reaching the right users. This doesn't mean the idea is dead — it means you need to iterate before scaling.

Worked example

You survey 200 active users. The results:

  • Very disappointed: 88 (44%)
  • Somewhat disappointed: 72 (36%)
  • Not disappointed: 32 (16%)
  • N/A — no longer use: 8 (excluded)

PMF Score = 88 ÷ 192 × 100 = 45.8%

You're above the 40% threshold. Your next move: understand why the 44% love it (so you can double down) and what's holding the 36% of "somewhat disappointed" users back (so you can convert them into your next wave of must-have users).
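The scoring arithmetic above can be sketched as a small helper (the function name and signature are illustrative, not part of any library):

```python
def pmf_score(very, somewhat, not_disappointed, no_longer_use):
    """PMF score: 'very disappointed' share of valid responses.

    Respondents who no longer use the product are excluded from the
    denominator, per the scoring rule above.
    """
    valid = very + somewhat + not_disappointed  # N/A answers excluded
    return round(very / valid * 100, 1)

# Worked example from the text: 200 respondents, 8 excluded as N/A
print(pmf_score(very=88, somewhat=72, not_disappointed=32, no_longer_use=8))
# → 45.8
```

Note that the bullet percentages above (44%, 36%, 16%) are shares of all 200 respondents, while the score itself divides by the 192 valid responses — which is why 44% of respondents yields a 45.8% score.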

Real-world benchmarks

Company | PMF Score | Context
Slack (2015) | 51% | Hiten Shah surveyed 731 Slack users — over half said "very disappointed"
Superhuman (initial) | 22% | First survey before product refinement
Superhuman (3 quarters later) | 58% | After segmenting users and rebuilding roadmap based on survey data
Buffer (early stage) | ~40% | Surveyed most engaged users to validate core value

These aren't lucky numbers. They're the result of surveying the right users, reading the data honestly, and acting on what it says.

Who to survey (this is where most teams get it wrong)

The most common mistake with PMF surveys isn't the question — it's the audience. Survey the wrong people and your score is meaningless, high or low.

Sean Ellis recommends surveying users who meet all three criteria:

1. They've experienced the core of your product. Not people who signed up and never logged in. Not people who poked around the homepage. Users who've actually used the thing your product exists to do. If you're Uber, that means people who've taken a ride — not people who downloaded the app.

2. They've used your product at least twice. One-time users haven't had enough exposure to form a real opinion. Two uses suggest intentional return, which means their feedback reflects genuine experience.

3. They've used your product in the last two weeks. Recency matters. A user who loved your product six months ago but hasn't touched it since is telling you something different from a user who used it yesterday.

If your sample doesn't meet these criteria, your score will be skewed. Survey power users only and you'll get an inflated number. Survey everyone who ever signed up and you'll get a deflated one. Neither is useful.

How many responses do you need? Aim for at least 100 valid responses. Below that, the 40% threshold is too noisy to be reliable. For segmented analysis (breaking results down by user type, plan, or use case), you'll want 40–50 responses per segment.
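To see why fewer than 100 responses makes the 40% threshold unreliable, a back-of-envelope confidence interval helps (a normal-approximation sketch, not a substitute for proper statistics; the function is illustrative):

```python
import math

def pmf_confidence_interval(score_pct, n, z=1.96):
    """Approximate 95% confidence interval for a PMF score
    (a proportion) measured from n valid responses."""
    p = score_pct / 100
    margin = z * math.sqrt(p * (1 - p) / n)  # normal approximation
    return (round((p - margin) * 100, 1), round((p + margin) * 100, 1))

# With 100 responses, a measured 40% could plausibly be anywhere
# from ~30% to ~50% — enough to flip the pass/fail verdict:
print(pmf_confidence_interval(40, 100))   # → (30.4, 49.6)
# Quadrupling the sample roughly halves the noise:
print(pmf_confidence_interval(40, 400))   # → (35.2, 44.8)
```

The same logic explains the 40–50 responses-per-segment guidance: below that, segment-level differences are mostly noise.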

15+ PMF survey questions (beyond the core question)

The "very disappointed" question tells you the score. The follow-up questions tell you what to do about it.

The core question (always include this)

Q1: "How would you feel if you could no longer use [product]?"

  • Very disappointed
  • Somewhat disappointed
  • Not disappointed (it really isn't that useful)
  • N/A — I no longer use it

Understanding what drives the score

Q2: "What is the main benefit you receive from [product]?" (open-ended)

This reveals your actual value proposition — not the one you wrote on your landing page, but the one your users experience. If 60% of "very disappointed" users say "it saves me 3 hours a week on reporting," that's your real positioning.

Q3: "What type of person do you think would benefit most from [product]?" (open-ended)

Your users describe your ideal customer profile better than you can. Their language tells you who to target and how to talk to them.

Q4: "How can we improve [product] to better meet your needs?" (open-ended)

This is your roadmap input. Pay special attention to answers from "somewhat disappointed" users — they're the group most likely to convert into "very disappointed" fans if you address their specific friction.

Q5: "Please tell us why you selected your answer to question 1." (open-ended)

The "why" behind the score. A user who says "very disappointed" because "I've built my entire workflow around it" is telling you something different from one who says "I just like the interface." Both are promoters, but the retention dynamics are different.

Competitive and switching questions

Q6: "What would you use as an alternative if [product] were no longer available?" (open-ended)

This maps your competitive landscape from the user's perspective. If most users name the same alternative, you know exactly who you're competing against — and what to differentiate on.

Q7: "Have you tried any alternatives to [product]? If yes, how did they compare?" (open-ended)

Q8: "What made you choose [product] over other options?" (open-ended)

Usage and engagement questions

Q9: "How often do you use [product]?" (daily / several times a week / weekly / monthly / rarely)

Q10: "Which features do you use most?" (select all that apply or ranking)

Q11: "Are there features you expected to find but didn't?" (open-ended)

Recommendation and advocacy questions

Q12: "Have you recommended [product] to anyone?" (yes / no)

Q13: "How likely are you to recommend [product] to a friend or colleague?" (0–10 NPS scale)

Running the NPS question alongside the PMF question gives you two complementary data points: PMF tells you if the product is a must-have, NPS tells you if users will actively promote it. The two don't always align — a product can be essential but frustrating.

Q14: "What would you tell a friend about [product]?" (open-ended)

The language users use here is your best marketing copy. It's how real people describe your product to other real people — no jargon, no positioning framework, just the truth.

Pricing and value perception

Q15: "How do you feel about the price of [product] relative to the value you receive?" (much too expensive → great value for the price, 5-point scale)

Q16: "Would you pay more for [product] if it included [specific feature]?" (yes / no / depends — explain)

How to analyze your PMF survey results (the Superhuman method)

Collecting the score is step one. What separates teams that improve their PMF from teams that just measure it is how they analyze and act on the data. Here's the framework Superhuman used — and it works for any product.

Step 1 — Segment by disappointment level

Split all responses into three groups: very disappointed, somewhat disappointed, and not disappointed. Analyze each group's open-ended answers separately. You're looking for patterns within each group, not across the whole sample.
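A minimal sketch of this segmentation step, using made-up responses and hand-picked keyword themes (real analysis would use proper text clustering or an AI pass over the comments):

```python
from collections import Counter, defaultdict

# Hypothetical records: (answer to Q1, open-ended answer to Q4)
responses = [
    ("Very disappointed", "more keyboard shortcuts please"),
    ("Somewhat disappointed", "needs a mobile app"),
    ("Somewhat disappointed", "mobile app is the only thing missing"),
    ("Not disappointed", "add a calendar view"),
]

# Step 1: split open-ended feedback by disappointment level
by_segment = defaultdict(list)
for level, feedback in responses:
    by_segment[level].append(feedback)

# Count recurring themes *within* each segment, not across the sample
themes = ("mobile", "shortcuts", "calendar")
for level, comments in by_segment.items():
    counts = Counter(t for c in comments for t in themes if t in c.lower())
    print(level, dict(counts))
```

Even this toy example surfaces the pattern the steps below rely on: the "somewhat disappointed" group clusters around one missing capability.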

Step 2 — Understand what "very disappointed" users love

Read every open-ended response from this group. What benefit do they mention most? What features come up repeatedly? These are your product's actual strengths — double down on them.

For Superhuman, the "very disappointed" group consistently mentioned speed, keyboard shortcuts, and focused inbox. That became half their roadmap: more speed, more shortcuts, more automation.

Step 3 — Understand what holds "somewhat disappointed" users back

This group sees value but isn't locked in. Their improvement suggestions (Q4) are your highest-leverage fixes. If "somewhat disappointed" users consistently say "I love the product but it doesn't have a mobile app," building a mobile app converts them into must-have territory.

The other half of Superhuman's roadmap came from here: mobile app, integrations, better attachment handling.

Step 4 — Ignore feedback from "not disappointed" users

This is counterintuitive but critical. Users who wouldn't miss your product are not your target customer. Their feature requests will pull you away from the users who actually need you. Don't let the loudest non-customers hijack your roadmap.

Step 5 — Track quarterly, not once

PMF isn't a one-time achievement. Markets shift, competitors improve, and user expectations change. Run the survey every quarter with a fresh sample. Track your score as a trend line, not a snapshot. A score that drops from 48% to 35% over two quarters is a signal that something has changed — even if your revenue hasn't caught up yet.

How to build a Voice of Customer program → 

NPS survey questions and best practices → 

When to run a product market fit survey

After your first 100 active users. Not signups — active users who've experienced the core product at least twice. Below 100 responses, the 40% threshold is statistically noisy.

After a major pivot. You've changed direction. The survey tells you whether the new direction resonates. A pivot that doesn't produce 40%+ within 6 months is a pivot that didn't work — even if early usage looks promising.

When entering a new market or segment. You might have PMF in your core market but not in the new one. Run the survey separately for each segment. Your overall score could mask that you're crushing it with startups but missing the mark with enterprise.

After major feature launches. Buffer learned this the hard way. They launched their Daily app to great PR and ProductHunt buzz — but when they finally checked whether users were coming back, they weren't. A PMF survey after launch would have caught this in weeks, not months.

Quarterly, once you've achieved PMF. Even after hitting 40%, your score can drift. New competitors, changing expectations, or feature bloat can erode fit over time. Treat PMF like a vital sign, not a graduation certificate.

Product market fit survey template (ready to use)

Here's a 7-question survey you can deploy today. It covers the Sean Ellis core question, the essential follow-ups, and the competitive context — without being so long that users abandon it.

Question 1: How would you feel if you could no longer use [product]? (Very disappointed / Somewhat disappointed / Not disappointed / N/A — I no longer use it)

Question 2: What is the main benefit you receive from [product]? (Open-ended)

Question 3: What type of person do you think would benefit most from [product]? (Open-ended)

Question 4: How can we improve [product] to better meet your needs? (Open-ended)

Question 5: What would you use as an alternative if [product] were no longer available? (Open-ended)

Question 6: How likely are you to recommend [product] to a friend or colleague? (0–10 NPS scale)

Question 7: Is there anything else you'd like to share about your experience? (Open-ended)

This takes under 3 minutes to complete. The first question gives you your PMF score. Questions 2–4 give you your roadmap. Question 5 maps your competition. Question 6 gives you a complementary loyalty metric. Question 7 catches what you didn't think to ask.

Here's a ready-to-use template for you:

Product-Market Fit Survey Template


CogniVue analyzes open-ended responses automatically. Questions 2–5 generate qualitative data that's useless unless someone reads it. CogniVue scans every response for sentiment, recurring themes, and key drivers. Instead of reading 200 comments, you see "47% of 'very disappointed' users mention workflow automation as their primary benefit." That's a roadmap insight in 30 seconds.

SmartReach AI optimizes delivery. Send the survey via email, SMS, or WhatsApp — SmartReach picks the channel and timing that each user is most likely to respond to. Higher response rates mean more reliable data.

Recurring surveys for quarterly tracking. Set it up once. SurveySparrow sends the PMF survey to a fresh sample every quarter automatically. Your trend line updates itself.

Segmentation built in. Filter results by plan type, usage frequency, signup date, or any contact property. See your PMF score for enterprise customers vs. self-serve users without exporting to a spreadsheet.

CTA (strong): Ready to measure your product-market fit? Create your PMF survey in minutes — conversational format, AI analysis, and quarterly tracking built in. [Start your free 14-day trial — no credit card required. No strings attached. →]

See SurveySparrow's AI survey builder → 

How CogniVue analyzes customer feedback → 

PMF survey vs. NPS vs. CSAT — what's the difference?

These three metrics answer different questions. Using the wrong one leads to the wrong conclusions.

Metric | What it measures | When to use it | Key question
PMF survey | Whether your product is a must-have | Pre-scale, post-pivot, quarterly tracking | "How would you feel if you could no longer use this?"
NPS | Whether users would recommend you | Ongoing loyalty tracking | "How likely are you to recommend us?"
CSAT | Whether a specific interaction was satisfactory | After support, purchase, or onboarding | "How satisfied were you with this experience?"

PMF is a leading indicator. It tells you whether you have a foundation worth scaling. NPS and CSAT are health metrics — they track how well you're serving users who already have the product. You can have a great NPS and terrible PMF (users are satisfied but not dependent). You can have great PMF and mediocre NPS (the product is essential but frustrating to use).

Run PMF to decide whether to scale. Run NPS to track loyalty over time. Run CSAT to monitor specific touchpoints. They complement each other — they don't replace each other.

Complete guide to transactional NPS surveys → 

CSAT survey questions for every touchpoint → 

Frequently Asked Questions (FAQs)

What is a product market fit survey?

A product market fit survey measures how dependent users are on your product. Created by Sean Ellis, it centers on one question: "How would you feel if you could no longer use this product?" If 40% or more of respondents say "very disappointed," you've likely achieved product-market fit. The survey also includes follow-up questions about key benefits, ideal users, improvement suggestions, and competitive alternatives.

What is the 40% rule?

The 40% rule states that if 40% or more of surveyed users say they'd be "very disappointed" without your product, you likely have product-market fit. Sean Ellis established this benchmark after studying nearly 100 startups. Products above 40% almost always achieved sustainable growth. Products below 40% almost always struggled to gain traction.

Who should you survey?

Survey users who have experienced the core of your product, have used it at least twice, and have used it within the last two weeks. This ensures respondents have enough experience to give meaningful feedback while their impressions are still fresh. Aim for at least 100 valid responses for statistical reliability.

What's the difference between a PMF survey and NPS?

PMF measures whether your product is a must-have (a leading indicator of traction). NPS measures whether users would recommend you (an ongoing loyalty metric). You can have high NPS with low PMF — users might like your product without being dependent on it. Both are valuable, and many teams run them together for a complete picture.

How often should you run a PMF survey?

Run it at key milestones: after your first 100 active users, after pivots, and when entering new markets. Once you've achieved PMF, run it quarterly to track whether your score holds. Markets shift, competitors improve, and user expectations change. A quarterly trend line is worth more than any single measurement.

What should you do if your score is below 40%?

Don't panic — and don't scale. Analyze the open-ended responses from "somewhat disappointed" users to understand what's holding them back. Focus half your roadmap on strengthening what "very disappointed" users already love and the other half on fixing what "somewhat disappointed" users need. Superhuman went from 22% to 58% in three quarters using this exact approach.

Can you run a PMF survey before launch?

Not in the traditional sense — the Sean Ellis question requires users who've actually used the product. But you can run a modified version during beta or closed testing. As long as participants have experienced the core product at least twice, their responses are valid. Pre-launch PMF data is some of the most valuable feedback you'll ever collect.
