PMMs get asked the same question in every leadership review: "What is product marketing contributing to revenue?"
Most PMMs answer with activities. Decks produced. Campaigns shipped. Content published. The question was about revenue. The answer was about effort. That gap is why product marketing teams lose budget and headcount fights.
A KPI tree solves this by mapping PMM activity to business outcomes in a traceable chain. It does not require PMM to own revenue directly — nobody expects that. It requires PMM to own the leading indicators that reliably predict revenue, and to measure them consistently enough to build a credible record of cause and effect.
This guide walks through building that tree: the metric hierarchy, the leading indicators that matter, and how to present it to leadership without overreaching on attribution.
Why a KPI Tree, Not a KPI List
A list of metrics — win rate, MQL volume, NPS, message adoption — does not tell a story. It is a dashboard, not a narrative. A KPI tree shows the structure: which metrics drive which outcomes, at what lag, and through what mechanism.
The tree structure makes three things clear:
- Which metrics PMM controls versus influences versus monitors
- Where the chain breaks (if win rate drops but message adoption is high, the problem is not PMM output — it is sales execution)
- How to prioritise measurement investment (not every metric deserves equal tracking effort)
The Four Levels of the PMM KPI Tree
Structure the tree as four levels: the North Star metric at the top, then revenue contribution metrics, then PMM-owned metrics, then leading indicators at the base.
Level 1: North Star (Revenue)
Revenue is the outcome. PMM does not own it. But PMM contributes to it through a traceable chain. At Level 1, you are simply anchoring the tree to what the business cares about:
- Net new ARR from new logos
- Net new ARR from expansion (existing customers)
- Net Revenue Retention (NRR)
These are the numbers leadership reports to investors. PMM's job is to show how the metrics they own contribute to these outcomes.
Level 2: Revenue Contribution Metrics (Influenced)
At Level 2, you track the metrics PMM directly influences — not owns, but influences in a traceable way. This is where most PMM teams stop short: they report activity (Level 4) but skip the influence metrics (Level 2), which is why the chain is invisible to leadership.
- Win rate on competitive deals: PMM owns the battlecards, the competitive intelligence, and the messaging that arms reps in head-to-head evaluations. A rising win rate against named competitors is directly attributable to PMM output if message adoption is also tracked.
- Sales cycle length: Faster sales cycles are often a signal of clearer positioning. When buyers understand immediately why you are the right choice, the evaluation compresses. Track average cycle length by segment and by deal type.
- Average deal size: Positioning that clearly frames value (rather than just capability) supports higher deal sizes. If your messaging consistently leads with ROI and outcome, buyers bring a larger budget conversation sooner.
- Net Promoter Score: NPS at 30, 60, and 90 days post-onboarding is a leading indicator of retention. PMM owns the onboarding messaging and the initial value delivery narrative. Low early NPS often signals a positioning-promise gap.
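The Level 2 metrics above are simple aggregates over deal data. As a minimal sketch — the field layout of the CRM export is a hypothetical assumption — competitive win rate and average cycle length can be computed like this:

```python
from statistics import mean

# Hypothetical CRM export: (competitor named in deal, won?, cycle length in days)
deals = [
    ("Competitor X", True, 62),
    ("Competitor X", False, 91),
    (None, True, 48),          # no named competitor -> excluded from win rate
    ("Competitor Y", True, 70),
]

# Win rate on competitive deals only (Level 2 metric)
competitive = [d for d in deals if d[0] is not None]
win_rate = sum(1 for d in competitive if d[1]) / len(competitive)

# Average sales cycle length across all deals (Level 2 metric)
avg_cycle = mean(d[2] for d in deals)

print(f"Competitive win rate: {win_rate:.0%}")    # 2 of 3 -> 67%
print(f"Average cycle length: {avg_cycle:.0f} days")
```

In practice you would segment both numbers (by deal type, by competitor) rather than report a single figure, per the guidance above.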
Level 3: PMM-Owned Metrics (Controlled)
At Level 3, you track what PMM owns directly. These are the metrics you can act on if they move in the wrong direction — because you control the inputs.
- Message adoption rate: What percentage of sales calls use the positioning language and frameworks PMM has defined? Track this through call recording review (Gong, Chorus) or rep self-report, validated through manager observation. Target: 70%+ adoption within 60 days of new messaging launch.
- Asset utilisation rate: What percentage of sales assets (decks, one-pagers, battlecards) are being used in actual deals? Measure through CRM attachment data, Highspot or Showpad analytics, or Sales self-report. An asset library nobody uses is an expensive failure.
- Launch engagement rate: For product launches, track the percentage of target accounts that engaged with launch content (opened email, attended webinar, visited launch landing page) within 14 days. This is a leading indicator of pipeline generated by the launch.
- Competitive intelligence currency: How many days since each competitive battlecard was last updated? Stale battlecards are a measured risk. Set a maximum age (30 days for top three competitors, 90 days for tier-two competitors) and track against it.
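The battlecard currency check is mechanical enough to automate. A minimal sketch, using the 30-day and 90-day maximums described above (the competitor names and tier labels are hypothetical):

```python
from datetime import date

# Maximum allowed battlecard age in days, per the tiers described above
MAX_AGE = {"tier1": 30, "tier2": 90}

def stale_battlecards(cards, today):
    """Return competitors whose battlecard exceeds its tier's maximum age.

    cards maps competitor name -> (tier, date last updated).
    """
    flagged = []
    for competitor, (tier, last_updated) in cards.items():
        age_days = (today - last_updated).days
        if age_days > MAX_AGE[tier]:
            flagged.append(competitor)
    return flagged

# Hypothetical data: X is a top-three competitor, Y is tier two
cards = {
    "Competitor X": ("tier1", date(2024, 3, 1)),
    "Competitor Y": ("tier2", date(2024, 2, 1)),
}
print(stale_battlecards(cards, date(2024, 4, 15)))  # -> ['Competitor X']
```

Running a check like this weekly turns "stale battlecards are a measured risk" into an actual measurement rather than an intention.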
Level 4: Leading Indicators (Activity)
At Level 4, you track the activity signals that predict whether Level 3 metrics will hit target. These are early warning signals, not success metrics.
- Number of win/loss interviews completed per month (target: minimum 4)
- Number of sales rep enablement sessions delivered per quarter (target: 1 per major messaging update)
- Days to ship post-launch messaging update following a product release (target: under 5 business days)
- Number of new proof points collected per quarter (customer quotes, case study metrics, referral signals)
Level 4 is where you track your own operational discipline. If Level 3 metrics are slipping, the first diagnostic is: are Level 4 activities being completed at the right cadence?
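The four levels can be represented as a simple nested structure, which makes the tree reviewable and versionable alongside the metrics themselves. This is an illustrative sketch, not a prescribed schema — the metric names and targets are examples only:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Metric:
    name: str
    target: float
    current: Optional[float] = None  # latest measured value, if captured yet

@dataclass
class Level:
    label: str
    metrics: List[Metric] = field(default_factory=list)

# Hypothetical KPI tree following the four-level hierarchy above
tree = [
    Level("Level 1: North Star (revenue)",
          [Metric("Net new ARR, new logos", 800_000)]),
    Level("Level 2: Revenue contribution (influenced)",
          [Metric("Competitive win rate", 0.55)]),
    Level("Level 3: PMM-owned (controlled)",
          [Metric("Message adoption rate", 0.70)]),
    Level("Level 4: Leading indicators (activity)",
          [Metric("Win/loss interviews per month", 4)]),
]

for level in tree:
    for m in level.metrics:
        print(f"{level.label} -> {m.name} (target {m.target})")
```

Keeping the tree in one structure like this also enforces the discipline argued for below: a metric that does not fit one of the four levels has no place to live.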
A Concrete Example: Launch KPI Tree
For a major product launch, the KPI tree looks like this:
| Level | Metric | Target (example) | Review cadence |
|---|---|---|---|
| North Star | Net new ARR from launch-sourced pipeline | £200k in Q1 | Monthly |
| Revenue contribution | Win rate on deals where new feature was mentioned | 55%+ | Monthly |
| PMM-owned | Launch email open rate (target accounts) | 35%+ | Weekly (first 4 weeks) |
| PMM-owned | Feature adoption rate (existing customers, 30 days post-launch) | 40%+ | Monthly |
| Leading indicator | Sales reps trained on new messaging (pre-launch) | 100% of AEs | One-time (T-5 days) |
| Leading indicator | Battlecard updated with new competitive response | Done by launch day | One-time |
What Not to Measure
The KPI tree only works if it is disciplined. Vanity metrics dilute attention and weaken the revenue narrative. Remove these from your PMM dashboard:
- Content views and page sessions: Unless directly tied to a conversion event (demo request, email capture), these measure reach, not impact.
- Social impressions from thought leadership: Useful for brand, not for PMM performance review.
- Number of assets produced: This is an input metric, not an outcome metric. Measuring it incentivises volume over quality.
- Satisfaction scores from internal stakeholders: Being liked by Sales is not the same as being effective. Measure message adoption and win rates, not relationship quality.
The Attribution Problem: How to Handle It Honestly
PMM cannot claim sole attribution for win rate improvements or deal size increases. Multiple variables affect these metrics. The right framing is: "We track these metrics because our work is designed to influence them. When they move in the right direction, we look for correlation with our activity. When they move in the wrong direction, we diagnose whether the cause is in PMM or elsewhere." This is honest, defensible, and credible to leadership teams who understand that attribution in GTM is always shared.
What Good Measurement Looks Like: A Worked Example
A PMM team of three at a Series B B2B SaaS company (£4M ARR, moving upmarket from SMB to mid-market) builds the following KPI tree for Q2:
- Level 1 (North Star): New ARR from mid-market accounts (target: £800k for Q2).
- Level 2 (Revenue contribution): Win rate on mid-market deals (target: 32%, up from 26%). Deal cycle length in mid-market (target: reduce from 78 days to 65 days via sharper qualification collateral).
- Level 3 (PMM-owned): Message adoption rate on mid-market deck (target: 90% of AEs using new deck by week 3).
- Level 4 (Leading indicators): Mid-market battlecard delivered by end of week 1. Win/loss interviews with four mid-market losses completed by end of month one. Sales certification on new ICP messaging completed by all AEs by week 3.
Each Level 4 activity has a named owner, a deadline, and a data source. Each Level 2 and Level 3 metric has a defined measurement method (call recording review for message adoption; CRM stage timestamps for deal cycle). Level 1 metrics are pulled directly from the revenue dashboard. At the end of Q2, the PMM team can show a clear line from what they shipped to whether revenue moved.
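The "named owner, measurement method, data source" requirement can be captured as a small record per metric. A hypothetical sketch — the owners, tools, and cadences here are illustrative, and the level numbers refer to the four-level hierarchy above:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class MetricSpec:
    name: str
    level: int      # position in the four-level hierarchy
    target: str
    method: str     # how the number is produced
    source: str     # where the data lives
    owner: str      # named person or role accountable
    cadence: str    # review frequency

# Hypothetical specs mirroring the worked example
specs = [
    MetricSpec("Message adoption rate", 3, "90% of AEs by week 3",
               "call recording review", "Gong", "PMM lead", "weekly"),
    MetricSpec("Mid-market deal cycle length", 2, "65 days (from 78)",
               "CRM stage timestamps", "Salesforce", "RevOps", "monthly"),
]

for s in specs:
    print(f"L{s.level} {s.name}: {s.target} via {s.method} ({s.source}), "
          f"owner: {s.owner}, reviewed {s.cadence}")
```

A metric that cannot fill in every field of a record like this — no method, no source, no owner — is not ready to go on the dashboard.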
Presenting the KPI Tree to Leadership
The format that works in leadership reviews is a one-page view of the tree, with traffic-light status on each metric and a two-line narrative explaining any red or amber status.
Example narrative for a red metric: "Win rate on competitive deals is down 8 points this quarter. Call recording review shows reps are not using the Competitor X battlecard. Root cause: battlecard was updated six weeks ago and enablement session has not yet been delivered. Scheduled for [date]."
This narrative format does three things: it shows you know the cause, it shows you have a solution, and it demonstrates that your measurement system is catching problems early rather than explaining them retrospectively.
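The traffic-light status itself can be derived mechanically from current value versus target. A minimal sketch — the 10% amber band is an assumption for illustration, not a standard; set the band per metric:

```python
def traffic_light(current, target, amber_band=0.10):
    """Green if at or above target, amber if within 10% below target, red otherwise."""
    if current >= target:
        return "green"
    if current >= target * (1 - amber_band):
        return "amber"
    return "red"

# Hypothetical readings against targets from earlier in this guide
print(traffic_light(0.72, 0.70))  # message adoption above target -> 'green'
print(traffic_light(0.52, 0.55))  # win rate slightly under -> 'amber'
print(traffic_light(0.47, 0.55))  # win rate 8 points down -> 'red'
```

Deriving status this way keeps the one-page view consistent between reviews: the colour is a function of the numbers, and the two-line narrative is reserved for explaining cause, as in the example above.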
The Decision Trade-off: Breadth vs Depth of Measurement
Every PMM team has a measurement bandwidth limit. Tracking twenty metrics poorly produces noise. Tracking five metrics rigorously produces a narrative. The trade-off:
- Broad measurement (15+ metrics): Better for spotting unexpected correlations. Requires dedicated analytical resource. Risks drowning stakeholders in data.
- Deep measurement (5–7 metrics): Better for accountability and action. Produces a cleaner narrative. Risk of missing signals that fall outside the tracked set.
For most PMM teams under five people, five to seven metrics — two from Level 2, two from Level 3, and two from Level 4 — is the right balance. Add metrics as the team grows and measurement capacity increases. Never add a metric unless you have a plan for how you will act when it moves.
Building the KPI Tree: Implementation Steps
- Identify the two or three revenue outcomes your company cares most about this year. These become Level 1.
- Map the PMM activities that plausibly influence each revenue outcome. These become the Level 2 and Level 3 candidates.
- Test the causal logic. "If PMM message adoption increases, what mechanism would cause win rate to improve?" Make the mechanism explicit. If you cannot articulate it, the metric is not in the right place in the tree.
- Select two metrics from each level. Set targets based on current baseline plus realistic improvement — not aspirational stretch goals.
- Define measurement method for each metric. Where does the data come from? How often is it captured? Who pulls it?
- Present the tree to your VP or CMO for alignment before you start tracking. Agreement on the tree prevents disputes about measurement methodology later.
- Review monthly. Quarterly, ask: is the tree still reflecting the right causal chain, or have business priorities shifted?