OKRs Scoring: A Brief History
Your approach to scoring Key Results is one of your most important deployment parameters. My recommended scoring system is the coolest thing I’ve created and applied with my OKR clients. You can find an updated approach to scoring in the book I wrote with Paul Niven on OKRs in 2016 that removes the “0.5” as it often does not add value when defining scoring upfront. This post includes a brief history of scoring in OKRs. I hope it gives you context for thinking about how best to score your OKRs! Note: I do not recommend scoring Objectives. If you wonder why, please post a comment and let’s get a conversation going!
Part 1: Binary Scoring
When I first got going with OKRs about 5 years ago, we did not apply a scoring scale. Each Key Result was either achieved or not. Things were simple. It was binary. If your Key Result was “10 new customers by end of quarter” and you ended the quarter with 9 new customers, the Key Result was not met.
In fact, it was assumed that you’d hit 10 customers midway through the quarter, cross out the 10 and raise the bar to 15 and then end the quarter with 20. This approach is sometimes referred to as “set the bar high and overachieve.”
It was an unwritten rule that teams that achieved their Objectives were celebrated, and their members were more likely to be promoted. Your team was successful to the extent that its OKRs were achieved. To be clear, individuals on teams that achieved their Objectives were more likely to get a bonus. After all, shouldn't a bonus be tied to success?
This system didn't always work well, nor was it perfect. Suppose a team had the Key Result "10 new customers" but ended up with 9. Given the binary scoring system, there would be a sense of failure. In other words, 9 was interpreted as "falling short." Not by much, but still, the feeling was one of losing. In short, this approach to scoring OKRs was diametrically opposed to the culture of OKRs at Google, where the worst thing you can do is fully achieve every one of your OKRs.
Part 2: Google-style grading on a 0-1 scale
I later learned how Google grades OKRs, around the time the Google Ventures video came out in 2013. The idea is to standardize how all OKRs are scored across the organization. A score of "1" reflects complete achievement; a score of "0" means no progress. Google's culture values stretch goals, so much so that scoring all 1s on your OKRs means you didn't set your goals high enough. I recently heard a story about a Googler who set his goals very high and then went on to achieve all of them. Apparently, everyone assumed he had sandbagged. I'm not 100% sure this story is true, but given the number of people working at Google, this scenario has likely occurred multiple times.
I've found that Google's normalized scoring model can be very effective. It gives everyone a shared way of measuring success. While it may not be perfect, it likely solves far more problems than it creates. If nothing else, it standardizes conversations and streamlines communication about performance on Objectives.
Part 2a: Adding "pre-scoring" into the Google-style grading
As an OKRs coach, I find that most organizations that implement a scoring system score their Key Results either only at the end of the quarter or at several intervals during the quarter. However, they often do not define scoring criteria as part of the definition of each Key Result. If you want to use a standardized scoring system, scoring criteria for each Key Result SHOULD be defined as part of the creation of the Key Result. The conversation about what makes a ".3" or a ".7" is also not very interesting unless we translate the ".3" and the ".7" into plain English. After discussing this with Vincent Drucker (yes, Vincent is Peter Drucker's son), I arrived at the following guidelines, which my clients are finding very useful:
Key insight from OKRs coaching: Clearly define OKRs with a consistent scoring system for every Key Result
Grading Key Results
- When – At the beginning of the quarter, as part of defining each Key Result
- Why – To make the "rules of the game" clear
- How – Apply a 0-1 scale as follows:
Here’s an example showing the power of defining scoring criteria upfront for a Key Result.
Key Result: Launch new product ABC with 10 active users by end of Q3
- 0.3 = Prototype tested by 3 internal users
- 0.7 = Prototype tested and approved with launch date in Q4
- 1.0 = Product launched with 10 active users
Notice that the “1.0” is identical to the Key Result itself, so it need not be included. Also, the 0.5 is optional and is not typically used. This forces a conversation about what is aspirational versus realistic. The Engineering team may come back and say that even the 0.3 score is going to be difficult. Having these conversations before finalizing the Key Result ensures everyone’s on the same page from the start.
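To make the idea concrete, here is a minimal sketch of a Key Result with its scoring criteria defined upfront. The `KeyResult` class and its field names are illustrative assumptions, not part of any OKR tool or methodology described here:

```python
from dataclasses import dataclass, field

# Hypothetical sketch: a Key Result whose 0-1 scoring criteria are agreed
# upfront, so a score can always be translated back into plain English.
@dataclass
class KeyResult:
    statement: str
    # Maps a score on the 0-1 scale to the concrete outcome it represents.
    # The 1.0 entry restates the Key Result itself; 0.5 is optional and
    # omitted here, per the guidelines above.
    criteria: dict = field(default_factory=dict)

    def describe(self, score: float) -> str:
        """Return the predefined meaning of a score, if one was agreed upfront."""
        return self.criteria.get(score, f"No criterion defined for {score}")

kr = KeyResult(
    statement="Launch new product ABC with 10 active users by end of Q3",
    criteria={
        0.3: "Prototype tested by 3 internal users",
        0.7: "Prototype tested and approved with launch date in Q4",
        1.0: "Product launched with 10 active users",
    },
)
print(kr.describe(0.7))  # Prototype tested and approved with launch date in Q4
```

The point of the structure is that the "rules of the game" travel with the Key Result itself, so an end-of-quarter score is never open to interpretation.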
*An alternative approach to scoring Key Results, which I first heard about from a super-cool colleague, emphasizes the future rather than the past. For more on this, please refer to my white paper.
Part 3: Predictive scoring
Most organizations that approach me for help with OKRs do have some form of scoring. However, their scores focus exclusively on "progress to date." You wind up with a data point for each Key Result in the form of "X% complete." That figure has some value; however, more and more OKR users include a predictive element in their scoring. Let's go back to the "10 new customers" Key Result to see why predictive scoring is gaining so much traction.
Say you signed 6 customers in the first month of the quarter. Great, you're 60% complete! However, suppose you do not believe your team will sign any additional customers because the pipeline is dry or a key sales rep just left for a better gig. If you had a way of communicating that you've lost confidence and expect this Key Result to stall at 60%, you could alert your colleagues. Predictive scoring serves as an early-warning system that helps you manage expectations and keep leadership informed.
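The distinction between historical and predictive scoring can be sketched in a few lines. The function names and the 0.7 warning threshold below are illustrative assumptions, not part of any standard OKR practice:

```python
# Hypothetical sketch of predictive scoring: pair "progress to date" with a
# forward-looking predicted final score, so a loss of confidence surfaces
# early even when progress looks good.

def progress_to_date(achieved: int, target: int) -> float:
    """Historical view: fraction of the target achieved so far, capped at 1.0."""
    return min(achieved / target, 1.0)

def needs_attention(predicted_final: float, threshold: float = 0.7) -> bool:
    """Early warning: flag when the team expects to finish below the threshold,
    regardless of how good progress looks today."""
    return predicted_final < threshold

# 6 of 10 customers signed in month one, but the pipeline is dry, so the
# team predicts the score will stay where it is:
progress = progress_to_date(achieved=6, target=10)  # 0.6
predicted = 0.6
print(progress, needs_attention(predicted))  # 0.6 True
```

A progress-only report would show a healthy 60% here; it is the predicted final score that triggers the early warning and prompts a conversation with leadership.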
I predict scoring systems will continue to evolve. While some organizations first getting started with OKRs may not grasp the importance of scoring, my prediction is that scoring will continue to be one of the most critical variables to get right for your organization to ensure a successful OKRs deployment for the long term. Some organizations may wind up taking a hybrid approach that combines predictive and historical scoring for OKRs.
However you score OKRs, remember the intent is to communicate targets, manage expectations, and enable continuous learning. Please share your approach to scoring here or contact me, [email protected], if you’d like to discuss privately.
*My colleague is Christina Wodtke. She writes: “Status toward OKRs: If you set a confidence of 5 out of ten, has that moved up or down? Have a discussion about why.” Source: http://eleganthack.com/monday-commitments-and-friday-wins/