When looking for a way to measure their DevOps performance, most teams reach for Google’s DevOps Research and Assessment (DORA) metrics. These metrics will help you assess your team’s progress so far and benchmark against other teams. But once you understand how your team is performing right now, what next? This blog post outlines what ‘good’ results are, and how you can use your metrics to plan improvements to your process.
Why should we measure release performance?
Whether your release process is advanced or basic, making use of metrics and interpreting them correctly will allow you to mature your processes and boost your performance. Without understanding the current state of play, it’s difficult to identify obstacles and pick the best route forward for your team. Importantly, each team faces unique challenges and areas that can be improved — performance metrics help you understand where your own team currently stands and lets you tailor your goals and strategy, rather than hoping a blanket approach to success will pay off.
A more mature release process — one that incorporates DevOps practices — can accelerate your deployments, increase release velocity, and reduce downtime and errors. Ultimately, all of this means you’re serving your end users better.
How your DORA metrics point the way forwards
A DevOps process is built gradually, and not every team will end up with the same process — trying to achieve DevOps maturity in one gigantic leap will only lead to a ton of errors and blockers. Knowing how to interpret your DORA metrics is a key step in reaching your Salesforce release performance goals. There are lessons you can draw out of each of the four key DORA metrics:
- Deployment Frequency: how often an organization successfully releases to production
- Lead Time for Changes: the amount of time it takes a feature to get into production
- Mean Time to Recover: the time it takes to recover from a failure in production
- Change Failure Rate: the percentage of deployments causing a failure in production
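To make the four definitions above concrete, here's a minimal sketch of how they might be computed from deployment records. The data structures and field names (`committed`, `deployed`, `failed`) are illustrative assumptions, not the schema of any particular tool:

```python
from datetime import datetime, timedelta

# Illustrative deployment records: commit time, production deploy time,
# and whether the deployment caused a failure (field names are assumptions).
deployments = [
    {"committed": datetime(2024, 1, 1, 9), "deployed": datetime(2024, 1, 2, 9), "failed": False},
    {"committed": datetime(2024, 1, 3, 9), "deployed": datetime(2024, 1, 5, 9), "failed": True},
    {"committed": datetime(2024, 1, 8, 9), "deployed": datetime(2024, 1, 9, 9), "failed": False},
]

# Production failure windows: (outage start, service restored).
incidents = [
    (datetime(2024, 1, 5, 9), datetime(2024, 1, 5, 15)),
]

days_observed = 14

# Deployment Frequency: releases to production per week.
deploys_per_week = len(deployments) / (days_observed / 7)

# Lead Time for Changes: mean commit-to-production time.
lead_times = [d["deployed"] - d["committed"] for d in deployments]
mean_lead_time = sum(lead_times, timedelta()) / len(lead_times)

# Mean Time to Recover: mean time from failure to recovery.
mean_recovery = sum((end - start for start, end in incidents), timedelta()) / len(incidents)

# Change Failure Rate: percentage of deployments causing a failure.
failure_rate = 100 * sum(d["failed"] for d in deployments) / len(deployments)

print(deploys_per_week, mean_lead_time, mean_recovery, failure_rate)
```

With the sample data above, this team releases 1.5 times a week with a mean lead time of a day and a third — solid, but not elite, by the benchmarks discussed below.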
With the Deployment Frequency metric, you can see whether your team is managing to release more often. This metric is high if your team is releasing daily, but more than once a week is still good. A high Deployment Frequency is a good sign you’re on the right track: releasing smaller changes more frequently, a key aim within DevOps. Weekly or fortnightly deployments are very typical, and anything less frequent than that would be considered poor performance. If your team releases less often than weekly, you’re not alone — our 2022 report showed that only 10% of teams release at least once a day, so very few have reached an elite level of performance.
The next step is to understand why your Deployment Frequency is low. Are your deployments so slow and painful that they’re put off? If so, the team should invest in a smart deployment solution like Gearset. Following that comes a cultural change: releasing more often means releasing less at a time, which makes it easier to troubleshoot any issues that crop up. Understanding that can motivate a team to up its release cadence. As we discovered in our State of Salesforce DevOps Report 2022, the skyrocketing adoption of DevOps in the Salesforce ecosystem over the last few years has been driven by the pressing need to keep pace with the workload of an increasingly sophisticated Salesforce. This is because DevOps, simply put, offers a better approach to release management.
Lead Time for Changes
Keeping up with demand from the business depends not only on the frequency of deployments, but also on the speed of delivery. The next DORA metric, Lead Time for Changes, measures the time it takes for committed code to reach production. The most efficient DevOps teams have a short lead time for changes — between a day and a week is good; less than a day is excellent.
If your lead times are long, what should your team do? The problem could be that work takes a long time to get through the release pipeline. In this case, your team should aim to figure out where in the pipeline things slow down, and what can be done about it. For example, if your team is getting slowed down by merge conflicts, then your feature branches should be more focused and short-lived. Are bugs and errors being caught late in the process? Then testing earlier in the release cycle will help. Are CI jobs stalling? This could be all sorts of things, but a reliable automation solution will help!
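One way to find the slow stage is to record a timestamp at each pipeline step and compare the gaps between them. A minimal sketch, with made-up stage names and timestamps standing in for whatever your pipeline actually records:

```python
from datetime import datetime

# Hypothetical timestamps for one change moving through the pipeline.
stages = [
    ("committed",     datetime(2024, 1, 1, 9, 0)),
    ("pr_opened",     datetime(2024, 1, 1, 9, 30)),
    ("review_passed", datetime(2024, 1, 2, 16, 0)),
    ("ci_passed",     datetime(2024, 1, 2, 17, 0)),
    ("deployed",      datetime(2024, 1, 3, 10, 0)),
]

# Duration of each stage: the gap between consecutive timestamps.
durations = {
    f"{a} -> {b}": t2 - t1
    for (a, t1), (b, t2) in zip(stages, stages[1:])
}

slowest = max(durations, key=durations.get)
print(slowest, durations[slowest])
```

Here the bottleneck turns out to be code review, not CI — exactly the kind of finding that tells you whether to invest in smaller pull requests or faster automation.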
Mean Time to Recover
Mean Time to Recover is an essential metric because failures are never entirely avoidable — no matter how sophisticated your process. If your team takes less than a day to recover, you’re doing well, and more than one week leaves room for improvement.
Your team’s results for this metric will show how robust your rollback systems are, how closely you’re monitoring production, and whether you’re prioritizing recovery appropriately when there’s a failure. A vital step in surviving a data disaster is having a comprehensive backup solution in place ready for when you need it — not if. Since adopting DevOps also means adapting to a different mindset and finding the right tools and processes, focusing on the positives — building robust rollback systems, knowing you’re prepared for failures — will aid the cultural shift necessary to reach DevOps maturity, rather than ruminating on inevitable failures.
Change Failure Rate
To measure how your team’s time is divided between debugging, building new features, and testing, look at your Change Failure Rate. This metric is the percentage of releases that resulted in rollbacks or any other type of production failure. The lower the percentage, the more likely your team is producing high-quality work and stable deployments. According to our report, 0–10% is excellent, 10–25% is good, and more than 50% is bad.
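The calculation and banding above can be sketched in a few lines. Note the 25–50% range isn’t named in the report figures quoted here, so the "fair" label for it is an assumption:

```python
def change_failure_rate(failed_releases: int, total_releases: int) -> float:
    """Change Failure Rate: percentage of releases causing a production failure."""
    return 100 * failed_releases / total_releases

def band(rate_percent: float) -> str:
    """Band a Change Failure Rate per the thresholds in the text.

    The 25-50% range isn't labelled in the post; "fair" is an assumed name.
    """
    if rate_percent <= 10:
        return "excellent"
    if rate_percent <= 25:
        return "good"
    if rate_percent <= 50:
        return "fair"
    return "bad"

# e.g. 3 failed releases out of 40 deployments:
rate = change_failure_rate(3, 40)  # 7.5%
print(band(rate))
```

A team shipping 40 releases a quarter with 3 rollbacks lands at 7.5% — comfortably inside the excellent band.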
If your team is struggling with a high Change Failure Rate, testing is key — in particular, testing work in a dedicated environment, such as UAT or QA. A high Change Failure Rate might suggest that you need to get your team’s orgs more in sync. If the UAT or QA orgs don’t closely resemble production, then testing is less likely to catch all the edge cases the work will encounter in production. The metadata needs to be in sync between these orgs, but teams also need realistic data in those orgs for testing, which is where sandbox seeding is super useful.
Each Salesforce team is unique
You might look at teams with a mature Salesforce DevOps process and wonder how to replicate their journey. But every team is different — the right DevOps process varies between teams and the steps to get there are different.
In the DevOps world, to understand where you need to get to, you first need to understand where you’re coming from. The best place to start is to take our free Salesforce DevOps Assessment of your team’s DevOps maturity and performance. Input your current DORA metrics and development processes, and get an instant, tailored report analyzing your setup. You’ll see how well you’re performing, where you are on your DevOps journey, and what steps you can take next.