DORA metrics as leading and lagging indicators
Fair warning: in this article, I'll be using the terms "indicators", "measures" and "metrics" more or less interchangeably.
Terminology
Leading indicator
something that shows what a situation will be like in the future rather than showing what it is like now or has been like in previous weeks, months, etc. (ref)
Lagging indicator
something that shows what a situation has been like in previous weeks, months, etc., rather than showing what it is like now or will be like in the future. (ref)
DORA metrics
While I won't go into a deep explanation on DORA metrics because there are many great articles about them online, I'd like to briefly cover a couple of points.
DORA stands for DevOps Research and Assessment. DORA is an organisation founded by Nicole Forsgren, Jez Humble and Gene Kim that researches what makes technology teams highly performant.
You should, of course, give the founders' book, Accelerate, a read. Here's an article that explains what the 4 DORA metrics are; get familiar with them before reading on. (Worth mentioning that a 5th metric, Reliability, may be on its way.)
DORA metrics are all about speed and precision: speed of getting updates into the hands of users, speed of restoring service after a fault, and precision in shipping features without defects.
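To make that concrete, here's a minimal sketch of how the four metrics could be computed from raw deployment and incident records. It's illustrative only: the record shapes, field names and the use of medians are my assumptions, not an official DORA definition.

```python
from datetime import datetime
from statistics import median

# Hypothetical record shapes, for illustration only.
deployments = [
    # (commit_time, deploy_time, caused_failure)
    (datetime(2024, 5, 1, 9, 0), datetime(2024, 5, 1, 15, 0), False),
    (datetime(2024, 5, 2, 10, 0), datetime(2024, 5, 3, 11, 0), True),
    (datetime(2024, 5, 4, 8, 0), datetime(2024, 5, 4, 9, 30), False),
]
incidents = [
    # (detected_at, resolved_at)
    (datetime(2024, 5, 3, 12, 0), datetime(2024, 5, 3, 14, 0)),
]
period_days = 7

# Deployment frequency: deployments per day over the period.
deployment_frequency = len(deployments) / period_days

# Lead time for changes: from commit to running in production.
lead_time = median(deployed - committed for committed, deployed, _ in deployments)

# Change failure rate: share of deployments that caused a failure.
change_failure_rate = sum(failed for _, _, failed in deployments) / len(deployments)

# Time to restore service: from detection to resolution.
time_to_restore = median(resolved - detected for detected, resolved in incidents)

print(deployment_frequency, lead_time, change_failure_rate, time_to_restore)
```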
The DORA metrics are agnostic of your business or the feature you're building. They can track speed and precision just as well for useful features as they do for useless features. That’s why having strong Product direction is essential to ensure you’re not spinning your wheels or speeding towards a disaster.
DORA metrics as leading indicators
We may not like it, but luck (starting conditions, the environment, etc.) plays a big role in the success of a business, especially at the start-up stage. The number of experiments you run is a predictor of success. ref 1 (credits to my skip-level manager for the link), ref 2
It's simple: the more stuff you throw at the wall, the better the chance that something sticks. Conversely, the more time you spend planning, building and polishing a product before getting it in front of any sort of user, the greater the chance that what you've built will come up short.
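To put some numbers behind that intuition: if each experiment independently has probability p of succeeding, the chance that at least one of n experiments succeeds is 1 - (1 - p)^n, which climbs quickly with n. A quick back-of-the-envelope check (the 5% success rate per experiment is an arbitrary assumption):

```python
# Chance that at least one of n independent experiments succeeds,
# assuming each has probability p of success (p = 0.05 is arbitrary).
p = 0.05
for n in (1, 10, 30, 50):
    print(n, round(1 - (1 - p) ** n, 2))
# 1 0.05
# 10 0.4
# 30 0.79
# 50 0.92
```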
In this sense, the DORA metrics' focus on speed makes them a direct measure of how fast you can churn out experiments. They are among the leading indicators of your business's success.
DORA metrics as lagging indicators
When used as lagging indicators, DORA metrics enable you to:
- Track how a change you've deployed to the engineering department (or to a separate or auxiliary department) affects the critical path of your software delivery and operation processes. Examples of changes that have ramifications for your delivery or operation processes:
- You’ve deployed a new alerting system that lets teams know about faults as soon as they occur.
- Audit regulations require that “all new code must go through a ‘second pair of eyes’ review process”. Your teams were doing trunk-based development. You chose the GitHub flow as the solution that meets the auditors' requirements.
- You’ve implemented a 4-day workweek.
- Track the trend of your team's/department's performance over time. (When tracking trends, remember to account for factors such as environmental/external changes, regression towards the mean, etc.)
In this sense, DORA metrics are lagging indicators of the outcomes of your engineering strategy.
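As a sketch of what that looks like in practice, here's one simple (and deliberately naive) way to compare a metric before and after an intervention, such as the alerting-system rollout above. The data and the comparison of medians are illustrative assumptions, not a prescribed method.

```python
from datetime import date
from statistics import median

# Hypothetical weekly "time to restore" samples in hours, for illustration.
samples = [
    (date(2024, 4, 1), 6.0), (date(2024, 4, 8), 5.5), (date(2024, 4, 15), 7.0),
    (date(2024, 5, 6), 2.5), (date(2024, 5, 13), 3.0), (date(2024, 5, 20), 2.0),
]
intervention = date(2024, 5, 1)  # e.g. the new alerting system goes live

before = [hours for day, hours in samples if day < intervention]
after = [hours for day, hours in samples if day >= intervention]

print(f"median before: {median(before):.1f}h, median after: {median(after):.1f}h")
# median before: 6.0h, median after: 2.5h
# A drop after the intervention is suggestive, not proof: beware small
# samples, external changes and regression towards the mean.
```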
Conclusion
I’d like to leave you with the following questions to consider:
- Which type of indicator do you anticipate caring about more in your current role? What about other roles at your company?
- Which type of indicator would be most interesting for a team that is self-motivated to improve?
- What cadence would you set for the different personas to review these indicators? What format would you choose for presenting them?