After exploring the data, I developed a scoring methodology that fairly compares BD-Sales pairings while accounting for inherent lead quality differences. This phase defines the metrics, establishes baseline comparisons, and classifies performance.
After evaluating multiple candidates, four key metrics were selected for analysis. Each metric needed to be: (1) Actionable (can we change behavior?), (2) Measurable (reliably calculated from the data), (3) Impactful (correlates with revenue), and (4) Independent (provides unique information).
Win Rate
Definition: % of decided opportunities (Won or Lost) that closed won
Why included: Direct measure of conversion success
Early Death Rate
Definition: % of lost opportunities that died within 14 days
Why included: Indicates lead qualification quality (lower is better)
Stale Pipeline Rate
Definition: % of open deals inactive for 90+ days
Why included: Measures deal momentum and pipeline health (lower is better)
Average Deal Size
Definition: Mean deal value across all opportunities
Why included: Revenue impact per opportunity (higher is better)
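As a rough sketch, the four metrics above might be computed per BD-Sales pairing from an opportunities table as follows. The column names (`stage`, `days_to_death`, `days_inactive`, `amount`) are illustrative assumptions, not taken from the actual dataset:

```python
import pandas as pd

def pair_metrics(df: pd.DataFrame) -> pd.Series:
    """Compute the four metrics for one BD-Sales pairing.

    Assumed (hypothetical) columns:
      stage          - 'Won', 'Lost', or 'Open'
      days_to_death  - days a deal was open before being marked Lost
      days_inactive  - days since last activity on an open deal
      amount         - deal value
    """
    decided = df[df["stage"].isin(["Won", "Lost"])]
    lost = df[df["stage"] == "Lost"]
    open_deals = df[df["stage"] == "Open"]

    return pd.Series({
        # % of decided opportunities that closed won
        "win_rate": (decided["stage"] == "Won").mean() if len(decided) else float("nan"),
        # % of lost opportunities that died within 14 days (lower is better)
        "early_death_rate": (lost["days_to_death"] <= 14).mean() if len(lost) else float("nan"),
        # % of open deals inactive for 90+ days (lower is better)
        "stale_pipeline_rate": (open_deals["days_inactive"] >= 90).mean() if len(open_deals) else float("nan"),
        # mean deal value across all opportunities (higher is better)
        "avg_deal_size": df["amount"].mean(),
    })
```

In practice this would be applied with a `groupby` over the BD and Sales Rep columns to score every pairing at once.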
The critical decision: How do we fairly compare pairings when different BDs pass different quality leads?
Each BD-Sales pair is compared to THAT BD's average performance (not the company-wide average).
Why this matters: Different BDs pass leads of different quality and difficulty. Enterprise BDs naturally have lower win rates than SMB BDs. Comparing each pairing to its own BD's baseline controls for this, making comparisons fair.
Example: percentage deviation accounts for baseline difficulty. A Sales Rep who performs +30% above baseline is impressive whether that baseline is 18% (hard leads) or 32% (easy leads). The percentage normalizes for difficulty level, making scores comparable across different BD lead sources.
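A minimal sketch of this deviation score, assuming the formula is (pair value − BD baseline) / BD baseline, with the sign flipped for metrics where lower is better (the exact formula and sign convention are assumptions):

```python
# Metrics where a lower value is better, so a negative raw deviation
# should count as a positive (good) score. Assumed, per the definitions above.
LOWER_IS_BETTER = {"early_death_rate", "stale_pipeline_rate"}

def pct_deviation(pair_value: float, bd_baseline: float, metric: str) -> float:
    """Percentage deviation of a pairing's metric from its own BD's baseline."""
    dev = (pair_value - bd_baseline) / bd_baseline
    return -dev if metric in LOWER_IS_BETTER else dev
```

For example, a pairing at a 23.4% win rate against an 18% baseline and a pairing at 41.6% against a 32% baseline both score +0.30, which is what makes scores comparable across lead sources.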
Scores are used at TWO different levels for different purposes
LEVEL 1: BD-Specific Rankings (For Routing)
LEVEL 2: Population Classification (For Context)
Why percentiles? Data-driven (not arbitrary), adaptive to score distribution, and easily explainable to stakeholders.
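The population-level classification can be sketched with percentile cutoffs on the score distribution. The quartile thresholds and the labels below are illustrative assumptions, not the actual cutoffs used:

```python
import numpy as np

def classify(scores: np.ndarray) -> list[str]:
    """Label each pairing by its position in the score distribution.

    Assumed (hypothetical) scheme: top quartile = High Performer,
    bottom quartile = At-Risk, everything between = Typical.
    """
    p25, p75 = np.percentile(scores, [25, 75])
    labels = []
    for s in scores:
        if s >= p75:
            labels.append("High Performer")
        elif s < p25:
            labels.append("At-Risk")
        else:
            labels.append("Typical")
    return labels
```

Because the cutoffs come from the observed distribution rather than fixed thresholds, the classification adapts automatically as new data arrives.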
After normalization and weighting, how do pairings score?
Identifying the best and worst BD-Sales combinations
Visual overview of all BD-Sales combinations
Metric contribution analysis: High Performers vs At-Risk Pairs
I initially expected Win Rate to be the primary differentiator. However, Stale Pipeline Rate showed the largest separation. This suggests that training and coaching should focus on deal momentum management, not just closing techniques. High-performing pairs keep deals progressing decisively rather than letting them languish.
The complete analytical framework
✓ Four Key Metrics Selected: Win Rate, Early Death, Stale Pipeline, Deal Size (25% each)
✓ BD Baseline Comparison: Controls for lead quality differences across BDs
✓ Percentage Deviation Scoring: Normalizes for difficulty, makes scores comparable
✓ Confidence Multiplier Applied: Accounts for sample size reliability
✓ Two-Level Classification: BD-specific for routing + Population for context
→ Ready for Business Recommendations (Dashboard 3)
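The confidence multiplier in the checklist above can be sketched as shrinkage toward zero for small samples. The linear-ramp form and the 20-opportunity full-confidence threshold are illustrative assumptions:

```python
def confidence_multiplier(n_opportunities: int, full_confidence_n: int = 20) -> float:
    """Down-weight scores from pairings with few opportunities.

    Hypothetical form: a linear ramp reaching 1.0 at full_confidence_n
    opportunities, so small-sample scores are pulled toward zero.
    """
    return min(1.0, n_opportunities / full_confidence_n)

def adjusted_score(raw_score: float, n_opportunities: int) -> float:
    """Apply the sample-size confidence multiplier to a raw pairing score."""
    return raw_score * confidence_multiplier(n_opportunities)
```

Under this sketch, a +0.30 score backed by 10 opportunities would be reported as +0.15, while the same score backed by 40 opportunities would stand at +0.30.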