Customer Satisfaction Score (CSAT) is the most versatile customer experience metric available. Unlike NPS — which measures long-term loyalty — CSAT measures satisfaction with a specific interaction, product, or moment in the customer journey. Ask the right questions at the right time, and CSAT data becomes one of the most actionable inputs you have. Ask the wrong questions, and you get noise.
Here are 20 customer satisfaction survey questions organized by context, with an explanation of why each one works and when to use it.
Understanding CSAT Before You Survey
CSAT is calculated by asking customers to rate their satisfaction on a 1–5 scale (1 = Very Dissatisfied, 5 = Very Satisfied). The score is the percentage of respondents who answered 4 or 5:
CSAT = (Number of 4s and 5s ÷ Total responses) × 100

A CSAT score of 78% means 78% of respondents were satisfied or very satisfied. Above 75% is generally good; above 85% is excellent. Industry benchmarks vary — SaaS support typically runs 78–82%, retail around 80%.
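If you're computing CSAT from a raw export of ratings, the formula is a few lines of code. A minimal sketch in Python (the function name and sample data are illustrative, not tied to any particular survey tool):

```python
def csat_score(ratings: list[int]) -> float:
    """CSAT = percentage of responses rated 4 or 5 on a 1-5 scale."""
    if not ratings:
        raise ValueError("No responses to score")
    satisfied = sum(1 for r in ratings if r >= 4)
    return round(satisfied / len(ratings) * 100, 1)

# Example: 7 of 9 respondents answered 4 or 5 -> 77.8%
print(csat_score([5, 4, 4, 3, 5, 2, 4, 5, 4]))  # 77.8
```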
CSAT vs. NPS vs. CES at a glance:
- CSAT measures satisfaction with a specific touchpoint.
- NPS measures overall loyalty and likelihood to recommend.
- CES (Customer Effort Score) measures how hard it was to accomplish something.
The strongest CX programs use all three at different stages of the customer journey.
The Core Question (Always Include This)
1. The Overall Satisfaction Question
"How satisfied are you with [your experience / this interaction / this product]?" (1–5 scale)
Why it works: This is the CSAT baseline — the one question every CSAT survey should include. Keep the scale consistent across all surveys so you can track trends over time. Changing the scale mid-program breaks your trend data.
2. The Open-Ended Follow-Up
"What is the primary reason for your score?" or "What could we have done better?"
Why it works: The combination of one rating plus one open-ended follow-up is the CSAT gold standard. The number tells you what; the text tells you why. Without the qualitative follow-up, you cannot prioritize which improvements to make first.
Post-Support Questions
3. Resolution Satisfaction
"How satisfied are you with how your issue was resolved?"
Why it works: Separates satisfaction with the outcome from satisfaction with the agent. A customer can be satisfied with a friendly agent but still dissatisfied with an unresolved problem — or vice versa. This distinction drives different actions.
4. Resolution Speed
"Did we resolve your issue in a reasonable amount of time?" (Yes / No)
Why it works: Speed is one of the top drivers of support satisfaction. The binary format eliminates rating-scale ambiguity and makes the data immediately actionable: if 40% say no, investigate your first-response and resolution time targets.
5. Agent Knowledge
"How knowledgeable was the support representative you worked with?" (1–5)
Why it works: Training and knowledge base gaps show up in this metric before they appear in overall satisfaction scores. Use it to identify agents or topics that need additional support resources.
6. Ease of Getting Help (CES-style)
"How easy was it to get the help you needed?" (Very Difficult → Very Easy)
Why it works: Customer Effort Score is the strongest predictor of support-related churn. If customers have to try multiple channels or repeat themselves, they leave — even when the final outcome was positive.
Post-Purchase Questions
7. Product Quality Satisfaction
"How satisfied are you with the quality of [product name]?"
Why it works: Directly measures whether the product delivered on its promise. Send this 3–7 days after delivery, not immediately, so customers have had a chance to use the product first.
8. Expectation Match
"Did the product meet your expectations?" (Yes, exceeded / Yes, met / No, fell short)
Why it works: Expectation mismatches are a leading cause of returns and churn. This question reveals whether the problem is with the product itself or with how it was marketed and described.
9. Purchase Experience
"How easy was it to complete your purchase?"
Why it works: Friction in checkout is a preventable conversion killer. This question flags UX issues in the purchase flow that analytics alone might not surface — like confusing form fields, unexpected shipping costs, or payment method limitations.
10. Repurchase Intent
"How likely are you to purchase from us again?" (Very Unlikely → Very Likely)
Why it works: A forward-looking loyalty signal in a transactional context. It correlates strongly with customer lifetime value. If this score is low, investigate whether the issue is product quality, price, or delivery experience.
Product & Feature Satisfaction
11. Overall Product Satisfaction
"How satisfied are you with [product/service] overall?"
Why it works: The umbrella question for product experience. Use this as your top-line product health metric, tracked quarterly or after major releases to detect changes in perception.
12. Feature Usefulness
"How useful is [specific feature] for your needs?"
Why it works: Product teams can use this to prioritize the roadmap. Features that are consistently rated low-usefulness are candidates for redesign or deprecation. Features that surprise users with high usefulness are worth doubling down on.
13. Onboarding Satisfaction
"How satisfied are you with how easy it was to get started?"
Why it works: Onboarding friction is the silent killer of SaaS retention. Customers who struggle during onboarding churn disproportionately, even when the product is strong. This question, sent at the end of the onboarding flow, benchmarks your first-time experience.
14. Value for Price
"How would you rate the value for the price you paid?"
Why it works: Perceived value is independent of absolute price. A customer who paid $500 and feels they got $1,000 worth of value is more loyal than one who paid $50 and feels they overpaid. This metric predicts price sensitivity and upgrade/downgrade behavior.
Relationship & Brand Questions
15. Overall Experience
"How would you rate your overall experience with [company]?"
Why it works: The broadest relationship-level CSAT question. Use it in quarterly check-ins to get a top-line satisfaction trend that spans all touchpoints, not just the most recent interaction.
16. Need Fulfillment
"How well does our product meet your needs?"
Why it works: Separates product-market fit from satisfaction with execution. A product can be executed brilliantly but fail if it doesn't actually solve the right problem. Low scores here point to positioning or product strategy issues, not just service quality.
17. Communication Satisfaction
"How satisfied are you with how we communicate with you?"
Why it works: Covers email, in-app notifications, and proactive outreach. Useful for companies with high communication volume (e-commerce, SaaS) where notification fatigue or unclear updates cause friction.
18. Post-Complaint Recovery
"After your recent issue, how satisfied are you with how we handled it?"
Why it works: Service recovery, when done well, can produce higher satisfaction than if the problem had never occurred — a phenomenon called the service recovery paradox. This question measures whether your recovery process is capturing that opportunity.
Closing Questions
19. Willingness to Recommend (CSAT variant)
"Based on this experience, how likely are you to recommend us to a friend or colleague?" (0–10)
Why it works: A transactional NPS question. Different from relationship NPS — it captures advocacy intent from a specific interaction, not the overall relationship. Useful for identifying which touchpoints are generating referrals versus which ones are quietly damaging word-of-mouth.
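The 0–10 responses here are scored the standard NPS way rather than with the 4-or-5 CSAT formula: the percentage of promoters (9–10) minus the percentage of detractors (0–6). A quick sketch with made-up ratings:

```python
def transactional_nps(ratings: list[int]) -> int:
    """NPS = %promoters (9-10) minus %detractors (0-6).
    Passives (7-8) count toward the total but neither bucket."""
    if not ratings:
        raise ValueError("No responses to score")
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return round((promoters - detractors) / len(ratings) * 100)

# Example: 4 promoters, 3 passives, 3 detractors out of 10 -> NPS of 10
print(transactional_nps([10, 9, 9, 10, 8, 7, 7, 6, 5, 3]))  # 10
```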
20. Improvement Priority
"If you could change one thing about your experience, what would it be?"
Why it works: The single most actionable open-ended question available. Unlike "what could we do better?" (which invites wish lists), asking for the single most important change forces respondents to prioritize for you. Group the responses by theme and you have a direct roadmap input from your customers.
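Theme grouping doesn't need to be sophisticated to be useful. A minimal keyword-tagging sketch in Python; the themes and keywords below are hypothetical and would be tuned to your actual response text:

```python
from collections import Counter

# Hypothetical theme -> keyword mapping; refine from real responses
THEMES = {
    "shipping": ["shipping", "delivery", "arrived", "late"],
    "pricing": ["price", "expensive", "cost", "value"],
    "support": ["support", "agent", "help"],
}

def tag_themes(responses: list[str]) -> Counter:
    """Count how many responses mention each theme at least once."""
    counts = Counter()
    for text in responses:
        lowered = text.lower()
        for theme, keywords in THEMES.items():
            if any(kw in lowered for kw in keywords):
                counts[theme] += 1
    return counts

responses = [
    "Faster shipping, my order arrived late",
    "Lower the price",
    "Support took two days to reply",
]
print(tag_themes(responses).most_common())
```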
What Makes a CSAT Question Effective
- Single-topic: One question, one idea. "Was the product easy to use and did it meet your expectations?" is two questions — split them.
- Plain language: No jargon. If your customers wouldn't use the word in conversation, don't put it in a survey question.
- Non-leading: "How great was your experience?" biases responses upward. "How would you rate your experience?" does not.
- Consistent scale: Use the same 1–5 scale throughout your program so data is comparable across time and touchpoints.
- Actionable: Only ask about things you can actually change. Asking "how satisfied are you with shipping speed?" when you have no control over your carrier erodes trust.
Survey Length and Timing
For transactional CSAT surveys: aim for 3–5 questions total. One rating question + one open-ended follow-up is the minimum effective survey. Never exceed 10 questions for a touchpoint survey — completion rates drop significantly, and answers to later questions decline in quality.
Timing by trigger:
- Post-support ticket: Within hours of ticket closure — while the experience is fresh
- Post-purchase: 3–7 days after delivery — after the customer has had time to use the product
- Post-onboarding: At the end of the onboarding flow
- Quarterly relationship check: Every 3 months for a broader view across all touchpoints
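If your survey tool supports programmatic triggers, this schedule maps naturally to a small config. A sketch with hypothetical trigger names; the exact delays should follow the guidance above:

```python
from datetime import datetime, timedelta

# Hypothetical trigger names, each mapped to the send delay suggested above
SURVEY_DELAYS = {
    "support_ticket_closed": timedelta(hours=2),  # while the experience is fresh
    "order_delivered": timedelta(days=5),         # inside the 3-7 day window
    "onboarding_completed": timedelta(0),         # at the end of the flow
    "relationship_checkin": timedelta(days=90),   # quarterly cadence
}

def survey_send_time(trigger: str, event_time: datetime) -> datetime:
    """Return when the survey for a given trigger event should go out."""
    return event_time + SURVEY_DELAYS[trigger]

print(survey_send_time("order_delivered", datetime(2024, 6, 1)))  # 2024-06-06 00:00:00
```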
The Bottom Line
The best CSAT surveys are short, well-timed, and always paired with an open-ended question. Start with the core satisfaction question plus "what's the primary reason for your score?" — then add 1–3 more questions only if they map to specific things you can act on. Consistency matters more than sophistication: a simple 3-question survey run reliably every quarter will produce more insight than an elaborate 20-question survey run once a year.