Before the Leaderboard — What the Team Looked Like
Three SDRs. Four months of running cold outreach to home services businesses in the UK — plumbers, electricians, landscapers. Consistent enough to know the business model worked. Inconsistent enough to know something in the operation was not clicking.
The problem was not bad SDRs. All three were capable. The problem was that nobody had full visibility into what anyone else was doing, including me. I was managing by check-in — asking each SDR at the end of the day how many calls they made, how many interested responses, any closes. I was getting honest answers. But I was also getting sanitised answers — numbers that were probably a little higher than the actual count, because who wants to tell their manager "I only did 40 calls today" after saying 55 the day before?
The thing I kept missing was that I was the only person who could see everyone's output — and I was seeing it retrospectively. The SDRs were working in isolation. Sarah had no idea James had done 60 dials that morning. James had no idea Priya had just closed a plumber. There was no shared reality to react to.
I had read about sales leaderboards. I had also read about how they can make teams toxic. I spent two weeks overthinking it and then, on a Tuesday in February, I turned it on.
Week One — What Actually Happened Day by Day
Launch day — quiet at first
I turned the leaderboard on at 9am and said nothing except "the leaderboard is live — you can all see each other's numbers in real time now." Then I left them to it. First hour: no visible change. Everyone called at their normal pace. By 11am Sarah had 34 dials, James had 28, Priya had 31. Normal-ish numbers. Then James's total jumped — 12 dials in 40 minutes after he saw Sarah at 34. He pulled level by noon. Nothing said. Nobody asked.
Team total: 147 dials · +18% above Monday baseline

The first competitive moment
James closed a plumber at 10:40am — £1,800 deal. His commission updated on the leaderboard immediately: £270 earned. By 11:15am Sarah had closed a landscaper at £1,200 — commission £156 on the board. By 3pm Priya had logged 71 dials — more than she had ever logged in a session. She did not close that day but she had 5 new warm leads. Nobody talked about competition. Everyone was just looking at the screen and responding to what they saw.
2 closes on day 2 of leaderboard. Pre-leaderboard average: 0.5 closes per day across team.

The first difficult moment
Tom — our fourth SDR who joined the previous month — had 38 dials by end of Thursday while Sarah had 94. The gap was on the screen for everyone including Tom. I was worried this would demoralise him. Instead Tom messaged me at 5pm: "I need to talk about my calling time — I think I'm starting too late." He had identified his own problem by watching the leaderboard. That conversation took 8 minutes. I had been trying to figure out how to have it for three weeks.
Unexpected: Tom self-diagnosed using leaderboard data. Zero coaching prompt from me.

End of week 1 — the numbers
Total dials across the team for the week: 1,034. Pre-leaderboard four-week average for a full week: 770 dials. That is 34% more dials without a single instruction to dial more. Three verified closes. Commission paid: £680. The week-end conversation I had with each SDR was not "how many dials did you do?" — it was specific: "Priya, you have 8 warm leads going into next week — let's talk about callback prep for Monday." Different conversation entirely.
Week 1 total: 1,034 dials · 3 verified closes · £680 commission · 34% dial increase

The Numbers — Week 1 Before vs After
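For anyone who wants to check the arithmetic behind the headline figure, here is a minimal sketch (Python; the function name is mine, the numbers are the ones quoted above):

```python
def pct_increase(before: float, after: float) -> int:
    """Percentage increase over a baseline, rounded to the nearest whole number."""
    return round((after - before) / before * 100)

# Week 1 with the leaderboard vs the four-week pre-leaderboard weekly average
baseline_dials = 770
week1_dials = 1034
print(pct_increase(baseline_dials, week1_dials))  # 34 — the "34% dial increase" above
```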
What Each SDR Did With the Leaderboard
Four people. Four different responses. All four improved — but in completely different ways, for completely different reasons. This is the part no general sales leaderboard article tells you.
Sarah

Before: Strong performer but no external reference point. Did not know she was the top performer — nobody had told her explicitly.
After: The leaderboard confirmed her position and gave her something to protect. Her dial count went up because she was actively maintaining first place, not because she needed more closes. Commission visibility showed her £1,200 earned in week 1 — the largest on the team. She had been doing good work for months with no real-time financial signal that it was paying off.
James

Before: Inconsistent — brilliant days followed by quiet days. No clear motivation pattern, and check-ins never surfaced what the lever was.

After: The leaderboard showed James something specific: on any given day he could reach first place with 2 more closes than Sarah. That specific, visible, closeable gap was the motivation he had been missing. His output went from inconsistent peaks to consistent weeks — three consecutive weeks in second place.
Priya

Before: High dial volume, lower immediate close rate. Feeling undervalued because the check-in only asked about closes — not the 8 warm leads she was building every week.
After: The leaderboard's pipeline count column showed Priya's warm pipeline was consistently the highest on the team. She was building for next week while others were optimising for today. Visible pipeline count changed how I talked to her — we focused on callback conversion, not dial volume. Her close rate from callbacks went from 22% to 38% over four weeks.
Tom

Before: Lowest performer. I was building up to a difficult conversation about whether the role was right. I thought the issue was pitch quality.
After: Tom saw his session start times on the leaderboard relative to others. He was consistently starting 90 minutes later than Sarah. Nobody told him this was a problem — the data made it obvious. He moved his start time. His dial count increased 40% the following week. The issue was never pitch quality. It was 90 minutes of lost session time every day that neither of us could see without the leaderboard.
The thing I did not expect: The leaderboard gave the SDRs information about themselves that they could act on without me. Tom figured out his problem at 5pm on a Thursday by looking at a screen. I had been trying to diagnose the same problem for three weeks through end-of-day conversations that were too polite and too vague to surface the real cause. The leaderboard was more honest than I was.
Month One — The Permanent Changes
By the end of week four, the leaderboard had changed three things permanently — and one of them was not what I expected at all.
Permanent Change 1 — Management Conversations Became Specific
Before the leaderboard, my daily check-in was general: "How was your day? How many calls? Any closes?" The answers were general: "Good. About 50. One close, maybe." After the leaderboard, I stopped doing daily check-ins entirely. The leaderboard answered all three questions without asking. My management conversations became weekly and specific: "Priya, your callback close rate this week is 38% — up from 28% last week. What changed on your callbacks?" That is a fundamentally different conversation. It is about performance improvement rather than performance reporting.
Permanent Change 2 — Commission Disputes Disappeared
Before the leaderboard and the connected sale verification system, month-end commission was occasionally contentious. An SDR would remember a close that I had not verified. I would remember rejecting a close that the SDR had expected to count. The leaderboard with the verification queue eliminated this entirely. Every pending close sat visible. Every verified close updated commission immediately. At month end, commission payment was a report — not a negotiation.
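Mechanically, the verification queue is a simple idea. This is not our actual system — the class and field names below are illustrative, and the 15% rate is implied by the £270-on-£1,800 figure from week one — but a minimal sketch of the flow: every close is logged as pending and visible immediately, and commission only counts once a close is verified.

```python
from dataclasses import dataclass, field

@dataclass
class Close:
    sdr: str
    amount: float        # deal value in £
    rate: float          # commission rate on this deal
    verified: bool = False

@dataclass
class CommissionBoard:
    closes: list = field(default_factory=list)

    def log_close(self, sdr: str, amount: float, rate: float) -> Close:
        # Every close appears on the board immediately, starting as pending
        close = Close(sdr, amount, rate)
        self.closes.append(close)
        return close

    def verify(self, close: Close) -> None:
        # Verification flips the flag; commission reflects it on the next read
        close.verified = True

    def pending(self, sdr: str) -> list:
        # The visible queue that makes month-end disputes impossible
        return [c for c in self.closes if c.sdr == sdr and not c.verified]

    def commission(self, sdr: str) -> float:
        # Month-end payout is a sum over verified closes — a report, not a negotiation
        return sum(c.amount * c.rate for c in self.closes
                   if c.sdr == sdr and c.verified)

board = CommissionBoard()
deal = board.log_close("James", 1800, 0.15)
print(board.commission("James"))  # 0.0 — logged but still pending
board.verify(deal)
print(board.commission("James"))  # 270.0 — the figure that appeared on the board
```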
Permanent Change 3 — The One Nobody Warned Me About
What We Learned — 5 Things Nobody Told Us
The leaderboard surfaces problems faster than management does
Tom's session start time problem was visible on the leaderboard from the moment it went live. It had been invisible to me through three weeks of daily check-ins. Data surfaces truth faster than conversation because data is not polite. The leaderboard showed Tom's dial count relative to his start time in a way that a check-in could not.
Commission visibility is more motivating than commission amount
Our commission rates did not change when we added the leaderboard. The amounts were the same. What changed was when SDRs could see them. A £270 commission showing on the board at 10:40am on Wednesday is more motivating than a £270 commission listed on a month-end statement on March 31st. Same money. Completely different psychological effect. Immediacy is the mechanism — not generosity.
The pipeline count column matters as much as the close count column
Without the warm pipeline column, Priya looked like she was underperforming in week 1 — 0 closes versus Sarah's 2. With the pipeline column showing her +8 warm leads versus Sarah's +3, the picture was completely different. Priya was building next week's closes right now. I almost had the wrong coaching conversation because the close count column alone was misleading about her actual trajectory.
The toxic leaderboard risk is real but avoidable with one decision
The risk everyone warns about — leaderboards making teams toxic — only materialised in one moment. Week 2, James made a comment about Tom's dial count in the team group chat. Brief, not malicious, but pointed. I responded immediately and clearly: the leaderboard is for personal motivation and team coaching — not for commentary on teammates' numbers. That was the only incident in six weeks. The boundary needed stating once. It has not been crossed since.
The leaderboard changed what SDRs asked for
Before the leaderboard, SDRs asked for more leads and better lists. After it, they asked for coaching on specific metrics. "How do I improve my interested rate?" is a better question than "can you find me better lists?" because it is asking about skill rather than circumstance. The leaderboard gave SDRs the data to know where their specific gap was — and that changed the nature of the development conversations they initiated.
"I had been managing by gut feel for four months. I thought I knew who was performing and who was not. The leaderboard showed me I was mostly right — but also showed me three specific things about my team that I had completely misread. Tom's timing issue. Priya's pipeline quality. James's competitive response to a visible gap. None of those came from a check-in. All three came from the screen."
— Hamid Khan, CEO & Co-Founder, Get Map Leads
Would We Do It Differently?
Three things I would change if I were starting the leaderboard from day one.
Set the one boundary publicly from day one. The James/Tom comment in week 2 was small — but it could have been bigger. I would state the leaderboard culture expectation on day one: the board is for personal motivation and coaching, not for public commentary on teammates. One sentence. Say it before it is ever needed.
Add the pipeline column immediately. I did not have the warm pipeline count column on the leaderboard for the first two weeks. I added it in week 3. Priya's performance looked different the moment that column appeared. If you implement a leaderboard for a web agency cold outreach team, the pipeline count is not optional — it is the leading indicator that makes the close count column interpretable.
Turn commission visibility on from day one. I delayed showing commission updates in real time for the first four days because I was worried about what SDRs would think when they saw the amounts. It was unnecessary. The amounts were fair. Real-time commission visibility is the single highest-ROI change in the leaderboard setup — and I delayed it for no good reason.
The one thing I would not do again: Two weeks of overthinking before turning it on. Every concern I had — toxicity, resentment, unfair comparison, gaming — either did not materialise or was resolved in under 5 minutes. The cost of not having the leaderboard for those first four months was roughly 40% of the closes we should have made. The cost of the one incident we did have was a one-sentence message in a group chat. I optimised for the wrong risk.
