DORA metrics are the gold standard for measuring software delivery performance. Originally developed by the DevOps Research and Assessment team, these four key metrics help engineering managers understand how effectively their teams deliver software and where to focus improvement efforts.
What Are DORA Metrics?
DORA metrics comprise four key measurements that collectively paint a picture of your software delivery performance: deployment frequency, lead time for changes, change failure rate, and mean time to recovery. These metrics were identified through years of research by Dr. Nicole Forsgren, Jez Humble, and Gene Kim, and have become the industry standard for assessing DevOps capability.
What makes DORA metrics particularly valuable for engineering managers is that they balance speed with stability. Deployment frequency and lead time measure your team's throughput, whilst change failure rate and mean time to recovery measure your stability. High-performing teams excel at both, debunking the myth that you must sacrifice quality for speed.
The annual State of DevOps reports consistently show that elite performers deploy on demand (multiple times per day), have lead times of less than one hour, change failure rates in the low single digits, and recover from incidents in under one hour. The exact thresholds shift slightly between report years, but these benchmarks provide a clear target for engineering organisations striving to improve.
How to Measure DORA Metrics Effectively
Measuring DORA metrics requires connecting data from your version control system, CI/CD pipeline, and incident management tools. Deployment frequency can be tracked through your deployment pipeline, counting the number of successful deployments to production per unit of time. Lead time for changes is measured from the first commit to when that code is running in production.
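As a rough illustration, the two throughput metrics can be derived from a list of deployment records. This is a minimal sketch with hypothetical data, assuming each record pairs the first-commit timestamp with the production-deploy timestamp; a real pipeline would pull these from your CI/CD and version control APIs.

```python
from datetime import datetime, timedelta

# Hypothetical deployment records pulled from a CI/CD pipeline: each entry
# pairs the first-commit time with the time that code reached production.
deployments = [
    {"first_commit": datetime(2024, 3, 4, 9, 0),  "deployed": datetime(2024, 3, 4, 15, 30)},
    {"first_commit": datetime(2024, 3, 6, 11, 0), "deployed": datetime(2024, 3, 7, 10, 0)},
    {"first_commit": datetime(2024, 3, 11, 8, 0), "deployed": datetime(2024, 3, 11, 9, 45)},
]

def deployment_frequency(deployments, window_days=7):
    """Average successful production deployments per window (default: per week)."""
    if not deployments:
        return 0.0
    times = sorted(d["deployed"] for d in deployments)
    span_days = max((times[-1] - times[0]).days, 1)
    return len(deployments) * window_days / span_days

def median_lead_time(deployments):
    """Median time from first commit to that code running in production."""
    leads = sorted(d["deployed"] - d["first_commit"] for d in deployments)
    return leads[len(leads) // 2]

print(f"Deploys per week: {deployment_frequency(deployments):.1f}")
print(f"Median lead time: {median_lead_time(deployments)}")
```

The median is used for lead time rather than the mean because a single long-lived branch would otherwise dominate the figure.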
Change failure rate is calculated as the percentage of deployments that result in a degraded service requiring remediation (such as a hotfix, rollback, or patch). Mean time to recovery measures how long it takes to restore service after an incident occurs. Both metrics require a well-defined incident tracking process to be measured accurately.
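The two stability metrics follow the same pattern. The sketch below assumes hypothetical deployment records flagged as failures and incident start/resolve timestamps from an incident management system; the field names are illustrative, not from any particular tool.

```python
from datetime import datetime, timedelta

# Hypothetical records: deployments flagged as requiring remediation,
# plus incident timestamps from an incident management system.
deploys = [
    {"id": 1, "failed": False},
    {"id": 2, "failed": True},   # required a rollback
    {"id": 3, "failed": False},
    {"id": 4, "failed": False},
]
incidents = [
    {"started": datetime(2024, 3, 5, 14, 0), "resolved": datetime(2024, 3, 5, 14, 50)},
    {"started": datetime(2024, 3, 9, 2, 0),  "resolved": datetime(2024, 3, 9, 4, 10)},
]

def change_failure_rate(deploys):
    """Share of deployments that degraded service and needed remediation."""
    return sum(d["failed"] for d in deploys) / len(deploys)

def mean_time_to_recovery(incidents):
    """Average time from incident start to service restoration."""
    durations = [i["resolved"] - i["started"] for i in incidents]
    return sum(durations, timedelta()) / len(durations)

print(f"CFR: {change_failure_rate(deploys):.0%}")
print(f"MTTR: {mean_time_to_recovery(incidents)}")
```

Note that both functions are only as good as the failure and incident definitions feeding them, which is why a consistent incident tracking process matters.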
- Use your CI/CD pipeline data to automatically track deployment frequency and lead time
- Define clear criteria for what constitutes a 'failure' to ensure consistent change failure rate measurement
- Integrate your incident management system to automatically calculate MTTR
- Track metrics weekly and review trends monthly to identify patterns
- Avoid gaming the metrics by focusing on the overall picture rather than individual numbers
DORA Benchmarks and Performance Tiers
The State of DevOps research categorises teams into four performance tiers: elite, high, medium, and low. Elite performers deploy multiple times per day with lead times under one hour, maintain change failure rates of 0-5%, and recover from incidents in under one hour. High performers deploy between once per day and once per week, with lead times between one day and one week.
Medium performers typically deploy between once per week and once per month, with lead times between one week and one month. Low performers deploy less frequently than once per month, with lead times stretching to a month or more. Understanding where your team falls helps you set realistic improvement goals and prioritise your engineering investments.
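The tier bands above can be turned into a rough self-assessment helper. This is an illustrative sketch, not an official DORA classification: the thresholds approximate the bands described in this section, and a team is placed in the lower tier if its two throughput inputs disagree.

```python
def performance_tier(deploys_per_month: float, lead_time_hours: float) -> str:
    """Rough tier classification using the benchmark bands described above.
    If the two inputs fall in different bands, report the lower tier."""
    def deploy_tier(n):
        if n >= 30:   # roughly multiple times per day
            return 3
        if n >= 4:    # between once per week and once per day
            return 2
        if n >= 1:    # between once per month and once per week
            return 1
        return 0      # less than once per month

    def lead_tier(h):
        if h < 1:            # under one hour
            return 3
        if h <= 24 * 7:      # up to one week
            return 2
        if h <= 24 * 30:     # up to one month
            return 1
        return 0             # a month or more

    tiers = ["low", "medium", "high", "elite"]
    return tiers[min(deploy_tier(deploys_per_month), lead_tier(lead_time_hours))]

print(performance_tier(60, 0.5))   # deploys twice a day, 30-minute lead time
```

Taking the minimum of the two bands reflects the idea that throughput and lead time must both be strong before a team genuinely sits in a higher tier.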
It is worth noting that these benchmarks are not one-size-fits-all. Context matters enormously. A team working on safety-critical systems may intentionally have lower deployment frequency with more rigorous change management. The key is to understand your current position and make deliberate choices about where you want to improve.
Strategies for Improving DORA Metrics
Improving deployment frequency starts with reducing batch sizes. Smaller, more frequent deployments are inherently less risky and easier to debug when issues arise. Invest in CI/CD automation, feature flags, and trunk-based development to make frequent deployments both safe and practical. Ensure your deployment process is fully automated and requires minimal manual intervention.
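Feature flags are what make trunk-based development safe: unfinished work can merge to trunk and ship to production dark. Here is a minimal sketch of the pattern; the flag name, environment variable, and checkout functions are all hypothetical, and a real system would use a flag service with per-user or percentage rollouts.

```python
import os

# Minimal feature-flag sketch: the flag name and env var are hypothetical.
# Unfinished code ships to production disabled, keeping deploys small.
FLAGS = {
    "new_checkout_flow": os.environ.get("FLAG_NEW_CHECKOUT", "off") == "on",
}

def is_enabled(flag: str) -> bool:
    """Unknown flags default to off, so a typo fails safe."""
    return FLAGS.get(flag, False)

def legacy_checkout(cart):
    return {"total": sum(cart), "flow": "legacy"}

def new_checkout(cart):
    # Merged to trunk but dark in production until the flag is turned on.
    return {"total": sum(cart), "flow": "new"}

def checkout(cart):
    if is_enabled("new_checkout_flow"):
        return new_checkout(cart)
    return legacy_checkout(cart)
```

Because the flag is evaluated at runtime rather than at deploy time, turning the new flow on (or rolling it back) requires no deployment at all.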
To reduce lead time, focus on eliminating bottlenecks in your development pipeline. Common culprits include lengthy code review queues, manual testing gates, and complex approval processes. Implement automated testing, establish clear code review SLAs, and empower teams to deploy independently. Pair programming and mob programming can also reduce the feedback loop significantly.
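A code review SLA is only useful if you can see when it is breached. The sketch below assumes hypothetical pull request records with an opened timestamp and a first-review timestamp; in practice these would come from your version control platform's API.

```python
from datetime import datetime, timedelta

REVIEW_SLA = timedelta(hours=4)  # assumed team SLA for a first review

# Hypothetical PR records; real data would come from your VCS API.
pull_requests = [
    {"id": 101, "opened": datetime(2024, 3, 4, 9, 0),
     "first_review": datetime(2024, 3, 4, 10, 30)},
    {"id": 102, "opened": datetime(2024, 3, 4, 13, 0),
     "first_review": datetime(2024, 3, 5, 9, 0)},   # waited overnight
]

def sla_breaches(prs, sla=REVIEW_SLA):
    """Return the IDs of pull requests whose first review exceeded the SLA."""
    return [p["id"] for p in prs if p["first_review"] - p["opened"] > sla]

print(sla_breaches(pull_requests))
```

Reviewing this list in a weekly check-in makes review-queue bottlenecks visible long before they show up in the aggregate lead time figure.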
- Start with the metric that has the most room for improvement in your team
- Invest in CI/CD automation to reduce manual toil and deployment friction
- Adopt trunk-based development and feature flags to enable smaller, safer deployments
- Establish code review SLAs to prevent pull requests from languishing
- Run blameless post-mortems to systematically reduce change failure rate and MTTR
Common Pitfalls When Using DORA Metrics
The most dangerous pitfall with DORA metrics is using them as individual performance measures. These are team-level and organisational-level metrics, not tools for evaluating individual engineers. Using them to compare individuals creates perverse incentives and erodes psychological safety. Instead, use them to identify systemic improvements that benefit the entire team.
Another common mistake is focusing on a single metric in isolation. Optimising deployment frequency without regard for change failure rate leads to an unstable production environment. Similarly, obsessing over change failure rate can lead to overly cautious deployment practices that slow delivery to a crawl. Always consider the four metrics as an interconnected system.
Finally, avoid treating the benchmarks as absolute targets. The goal is continuous improvement relative to your own baseline, not hitting arbitrary numbers. A team that moves from deploying monthly to deploying weekly has made tremendous progress, even if they are not yet at the 'elite' tier. Celebrate progress and maintain momentum.
Key Takeaways
- DORA metrics measure both speed (deployment frequency, lead time) and stability (change failure rate, MTTR) of software delivery
- Elite performers prove that speed and stability are complementary, not opposing forces
- Measure at the team and organisational level, never use these metrics to evaluate individual engineers
- Focus on continuous improvement from your baseline rather than chasing absolute benchmarks
- Invest in CI/CD automation, trunk-based development, and blameless post-mortems for the biggest improvements
Frequently Asked Questions
- How often should we review our DORA metrics?
- Track DORA metrics continuously but review them formally on a monthly basis. Weekly check-ins can help identify sudden changes, but meaningful trends require at least a month of data. Quarterly reviews are useful for assessing the impact of larger improvement initiatives and adjusting your strategy.
- Can DORA metrics be applied to non-software teams?
- Whilst DORA metrics were designed specifically for software delivery, the underlying principles of measuring throughput and stability can be adapted for other contexts. Data engineering, infrastructure, and platform teams can all benefit from tracking similar metrics tailored to their delivery processes.
- What tools can we use to track DORA metrics?
- Popular options include LinearB, Sleuth, Faros AI, and Jellyfish for automated DORA tracking. Many CI/CD platforms like GitLab and GitHub also offer built-in DORA metric dashboards. You can also build custom dashboards using data from your existing toolchain if you prefer a bespoke approach.
- Should we share DORA metrics with the wider organisation?
- Yes, sharing DORA metrics with stakeholders helps build understanding of engineering capability and the impact of technical investments. Present them in business-friendly terms: deployment frequency shows agility, lead time shows responsiveness, change failure rate shows quality, and MTTR shows resilience.
Download DORA Metrics Tracking Template
Get our free engineering management templates to start tracking and improving your team's DORA metrics today.