Every PMM says they are customer-led. Very few can show you the specific customer phrases that shaped their last positioning decision.
Voice of Customer (VoC) analysis is the practice of collecting, organising, and extracting positioning insight from direct buyer language. Not surveys about satisfaction. Not NPS comments. Not edited case study quotes. The raw, unedited language buyers use when they describe their problems, their decision criteria, and their experience of your product — before you have had a chance to influence how they frame it.
That unfiltered language is the most valuable raw material in product marketing. It is also the most consistently ignored. This framework covers where to find it, how to organise it, and how to turn it into decisions.
The Three Tiers of VoC Data
VoC data exists at three levels of richness. Most PMMs only access the bottom level. All three are necessary for complete analysis.
Tier 1: Unprompted public language (lowest richness, highest volume)
This is what buyers say about your category without being asked: G2 reviews, Capterra ratings, Reddit threads, community discussions, LinkedIn comments, Trustpilot reviews of you and your competitors. The advantage is volume and authenticity — nobody is saying what they think you want to hear. The disadvantage is noise, context-free statements, and recency bias.
What to mine from Tier 1:
- The specific words buyers use to describe the problem (these become your headline copy candidates)
- The complaints buyers have about the category — including competitors — which reveal unmet needs
- The outcomes buyers celebrate when the product works — these are your proof points
- The objections buyers raise before purchase — these are your pre-purchase messaging gaps
Concrete example: A PMM working on a revenue intelligence platform searches G2 for three-star reviews of the top two competitors. Pattern across twelve reviews: buyers complain that "data is only accurate if reps update it manually." This is not a feature complaint — it is a problem statement in buyer language. It becomes the opening of the product's positioning: "Pipeline data that does not depend on rep discipline."
Tier 2: Internal captured language (moderate richness, variable volume)
This is buyer language captured inside your own systems: CRM notes, call recording transcripts (Gong, Chorus), support ticket descriptions, onboarding call notes, CS account notes, and email threads from the sales process.
Most companies have thousands of hours of Tier 2 data and almost no PMM access to it. Getting access is worth the political effort — this is where the most specific and context-rich buyer language lives.
What to mine from Tier 2:
- Discovery call transcripts: what did the buyer say when describing their trigger event and their problem?
- Demo call transcripts: what questions did the buyer ask? What did they react to positively?
- Loss notes: what was cited as the reason for going elsewhere?
- Onboarding notes: what was confusing? Where did buyers get stuck?
- CS account notes: what are long-tenure customers using the product for that PMM did not anticipate?
Practical access method: Ask RevOps or Sales Operations to export CRM notes and Gong/Chorus transcripts for the last 90 days of won and lost deals. A sample of 30-40 transcripts is usually enough to see patterns. Use a keyword search to surface mentions of the problem category, competitor names, and specific pain phrases. You do not need to read every transcript in full — search and sample.
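The search-and-sample step can be sketched as a short script. This is a minimal sketch, not a Gong or Chorus API client: it assumes the transcripts have already been exported as plain-text files into a local directory, and the keyword list is purely illustrative.

```python
import re
from collections import defaultdict
from pathlib import Path

# Illustrative keywords only -- swap in your own problem category,
# competitor names, and pain phrases.
KEYWORDS = ["manually", "accuracy", "spreadsheet"]

def find_mentions(transcript_dir, keywords):
    """Return, per keyword, every sentence that mentions it across all
    exported transcripts, prefixed with the source file name."""
    hits = defaultdict(list)
    for path in sorted(Path(transcript_dir).glob("*.txt")):
        text = path.read_text(encoding="utf-8")
        # Rough sentence split, so each hit carries enough context to quote.
        for sentence in re.split(r"(?<=[.!?])\s+", text):
            for kw in keywords:
                if kw.lower() in sentence.lower():
                    hits[kw].append(f"{path.name}: {sentence.strip()}")
    return dict(hits)
```

Running this over a 90-day export gives you a candidate list of exact quotes per pain phrase without reading any transcript in full — which is the point of search-and-sample.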
Tier 3: Prompted interview language (highest richness, lowest volume)
This is what buyers say in structured interviews — win/loss interviews, customer research interviews, and ICP discovery interviews. It is the most reliable source of positioning insight because you can probe, clarify, and sequence questions to surface information that would never appear unprompted.
Tier 3 data is the primary input for positioning decisions. Tier 1 and Tier 2 data confirm patterns at scale. Tier 3 data explains the mechanisms behind the patterns.
See the positioning research interview questions guide for the full interview framework.
The VoC Analysis Process
Collecting data is the easy part. Turning it into positioning decisions requires a structured analysis process. Here is the four-step method:
Step 1: Language harvest
From Tier 1, Tier 2, and Tier 3 sources, pull the raw language in which buyers describe their problem, their trigger, their criteria, and their experience. Do not paraphrase at this stage. Capture exact phrases.
Create a simple document with three columns: Source (where the quote came from), Context (what was being discussed), Quote (the exact phrase). Aim for 80-120 raw data points across all three tiers before starting analysis.
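The three-column harvest document can live in a plain CSV. A minimal sketch, assuming the column names above; the file path and helper name are illustrative, not a prescribed tool.

```python
import csv
import os

# Column names mirror the three-column harvest document described above.
FIELDS = ["source", "context", "quote"]

def append_quote(path, source, context, quote):
    """Append one exact buyer phrase to the harvest CSV -- no paraphrasing."""
    write_header = not os.path.exists(path)
    with open(path, "a", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if write_header:
            writer.writeheader()
        writer.writerow({"source": source, "context": context, "quote": quote})
```

Because each row is an exact phrase with its source, the same file later serves as the evidence link in the decision documentation step.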
Step 2: Language clustering
Group the raw phrases into thematic clusters. The clusters should emerge from the data, not be imposed on it. Common clusters that appear in B2B SaaS VoC analysis:
- Problem descriptions (what the buyer was trying to solve)
- Trigger events (what made it urgent)
- Status quo descriptions (what they were doing before)
- Decision criteria (what they used to evaluate options)
- Outcome language (how they describe value received)
- Objections (what almost stopped them from buying)
- Category beliefs (how they think about the space in general)
Within each cluster, note the frequency of specific phrases. A phrase that appears in eight of twelve interviews is a pattern. A phrase that appears once is an outlier.
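The frequency check within each cluster can be sketched as a simple count over normalised phrases. The pattern threshold here is an assumption — set it from your own sample size, in the spirit of the eight-of-twelve rule above.

```python
from collections import Counter

def cluster_frequencies(quotes_by_cluster, pattern_threshold=3):
    """For each cluster, count how often each normalised phrase appears
    and label it a pattern or an outlier. The threshold is a judgment
    call that should scale with your interview sample size."""
    out = {}
    for cluster, quotes in quotes_by_cluster.items():
        counts = Counter(q.strip().lower() for q in quotes)
        out[cluster] = [
            (phrase, n, "pattern" if n >= pattern_threshold else "outlier")
            for phrase, n in counts.most_common()
        ]
    return out
```

The exact-match normalisation is deliberately crude: near-duplicate phrasings still need a human pass, but the count separates recurring language from one-off remarks before you invest that effort.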
Step 3: Positioning gap analysis
Compare the clustered buyer language against your current positioning and messaging. Identify three types of gaps:
| Gap type | What it means | Action |
|---|---|---|
| Language gap | Buyers and your messaging use different words for the same concept | Update messaging to use buyer language verbatim where possible |
| Content gap | Buyers raise criteria or concerns that your messaging does not address | Add messaging that addresses the missing criterion or pre-empts the concern |
| ICP gap | Buyers who convert well describe themselves differently from your ICP definition | Update the ICP definition to reflect who is actually buying and why |
Step 4: Decision and documentation
For each gap identified, make a specific decision: what changes, what does not, and why. Document the decision with the supporting evidence. This creates an audit trail that prevents positioning drift — the slow, invisible shift that happens when individual team members start using their own language because the original decision was never recorded.
Write the decisions in a format the whole GTM team can access: "We changed [X] to [Y] because buyers consistently describe the problem as [Z]. The source data is [link to transcript/review]. This decision stands until we see contrary evidence in [specific signal]."
VoC Analysis Cadence
VoC analysis is not a quarterly project. It is a continuous input to messaging decisions. The cadence that works for most PMM teams:
- Monthly: Tier 2 scan — 30 minutes reviewing CRM notes and flagged call recordings from the past 30 days. Look for new language patterns or objections that were not present before.
- Quarterly: Tier 1 and Tier 3 deep analysis — four to six customer or prospect interviews plus a review of the top competitors' G2 reviews. Full gap analysis against current positioning.
- Annual: Full VoC cycle — twelve to fifteen interviews across wins, losses, and long-tenure customers. Complete positioning review based on findings.
Common VoC Mistakes
- Interviewing only happy customers: Happy customers confirm what is working. Churned customers and losses tell you what is missing. Your VoC sample needs both.
- Confusing satisfaction data with insight: NPS scores and CSAT ratings tell you how buyers feel. VoC analysis tells you what they think and why they decided. They are different questions with different data sources.
- Paraphrasing before analysis: The moment you paraphrase a buyer quote, you have introduced your own interpretation. Keep raw language raw until after the clustering step.
- Analysis without decision: VoC reports that summarise what customers said but do not make specific positioning decisions are research projects, not product marketing assets. Every analysis session should end with a list of decisions and who is making them.
- Ignoring Tier 2 data: Call recordings and CRM notes are the most specific and context-rich VoC data most companies own. Not using them because the access is awkward is an expensive miss.
The VoC Analysis Artefact
The output of a full VoC cycle should be a single document with four sections:
- Language library: The top 20-30 buyer phrases organised by cluster, with source and frequency noted. This is the raw material for copywriters, reps, and content teams.
- Pattern summary: Three to five key patterns from the analysis — what the data shows is true about how buyers experience the category and the product.
- Gap findings: The specific language, content, and ICP gaps identified in the gap analysis.
- Decisions: The specific changes to positioning, messaging, ICP definition, or enablement materials that will be made based on the findings, with rationale and timeline.
Keep the document short. The language library will grow over time. The pattern summary should be no more than one page. The decisions section should be no more than five bullet points. If you cannot summarise the key decisions in five bullets, you have not prioritised rigorously enough.
Recommended Tools for VoC Data Collection
The tools matter less than the discipline, but these are the most reliable sources for each data tier:
- Call recordings: Gong and Chorus are the most widely used. Both allow you to search transcripts by keyword and filter by deal stage or outcome. If you do not have either, a Zoom recording library with manual review works for early-stage teams.
- Review mining: G2 and Capterra are the primary sources for B2B SaaS. Search competitor reviews alongside your own — buyers describing what a competitor does well are describing your positioning gap.
- Interview scheduling and note-taking: Calendly for scheduling, Otter.ai or Fireflies for transcription. The goal is to remove the manual burden of note-taking so you can focus on listening and probing during the interview itself.
- Pattern clustering: A shared spreadsheet with a column per theme is sufficient for most teams. Dedicated tools like Dovetail are worth considering once you are running more than twenty interviews per quarter.