Let’s be clear: DevOps is a very technical topic, created by techies, for techies. Like most things in IT today, it provides a philosophy for the humans and a set of best practices for the tools.
While the name DevOps points toward a merger of the roles of developer (creating software) and operations (making software run), that is not really the case. What DevOps actually does is automate as much of the operations work as possible, in a way that is comfortable and usable for developers. You may have heard of infrastructure as code. If not, you will soon.
DevOps is deeply technical, which is why the term is of little use outside the dev teams, except in marketing. Marketing likes to write things like: “Our DevOps team is working on a new app that uses AI to generate big data results validated by blockchain.”
So how do you measure the success of a DevOps initiative? As always, let’s ask Google. The query “measuring DevOps success” returns 148,000 results. Impressive ... there is data. After reading a few of the articles, it becomes clear that only two or three provide valuable information; the rest are abstract, like modern art, or a copy-paste of someone else’s content. So what do the original articles say? They say you should focus on the following metrics:
- Deployment frequency
- Change volume
- Deployment time
- Lead time
- Customer tickets
- Automated test pass percentage
- Defect escape rate
- Service-level agreements
- Failed deployments
- Error rates
- Usage and traffic
- Application performance
- Mean time to detection (MTTD)
- Mean time to recovery (MTTR)
First, the obvious: fourteen metrics to measure whether a DevOps initiative is a success? Anything above three is a no-go for me. Dev loves lots of numbers; execs only love numbers about money or customers.
What these metrics point out is that you can expect new features more often (deployment frequency, change volume, deployment time, lead time) with fewer errors (customer tickets, automated test pass percentage, defect escape rate, service-level agreements, failed deployments, error rates) and, when there is an error, the dev team hopes to solve it fast (MTTD and MTTR).
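To make the delivery-and-error metrics concrete, here is a minimal sketch of how a few of them can be computed from raw records. The data shapes (deployment tuples with commit and deploy timestamps plus a failure flag, and incident tuples with detection and resolution timestamps) are assumptions for illustration, not any particular tool's format.

```python
from datetime import datetime

# Hypothetical deployment records: (deployed_at, committed_at, failed)
deployments = [
    (datetime(2023, 5, 1, 10), datetime(2023, 4, 29, 9), False),
    (datetime(2023, 5, 3, 15), datetime(2023, 5, 2, 11), True),
    (datetime(2023, 5, 8, 9),  datetime(2023, 5, 5, 16), False),
]

# Hypothetical incidents: (detected_at, resolved_at)
incidents = [
    (datetime(2023, 5, 3, 16), datetime(2023, 5, 3, 18)),
]

# Deployment frequency: deploys per day over the observed window
days = (deployments[-1][0] - deployments[0][0]).days or 1
deployment_frequency = len(deployments) / days

# Lead time: average hours from commit to deploy
lead_time = sum((d - c).total_seconds() for d, c, _ in deployments) / len(deployments) / 3600

# Failed deployments: share of deploys that failed
failed_rate = sum(1 for *_, failed in deployments if failed) / len(deployments)

# MTTR: average hours from detection to resolution
mttr = sum((r - d).total_seconds() for d, r in incidents) / len(incidents) / 3600

print(f"{deployment_frequency:.2f} deploys/day, lead time {lead_time:.1f} h, "
      f"{failed_rate:.0%} failed, MTTR {mttr:.1f} h")
```

The point is not the arithmetic; it is that every metric in the list above reduces to timestamps and counters the dev team already collects.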
As an executive, to check a DevOps initiative, I recommend one metric: customer utilization.
Are customers using all the new, quickly delivered features? Are more customers using the tool due to the new features?
What you want to know is whether the money spent on developing provides a return. In simple terms: Is Margareth, the accountant on the third floor, aware that with a click in an obscure submenu she can now change the colors of the application background? Is she using it often?
You can have an answer thanks to all the data collected at the dev level, including the cost of implementation.
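The same telemetry can answer the customer-utilization question directly: for each newly shipped feature, what share of active users actually touched it? A minimal sketch, assuming a hypothetical event log of (user, feature) pairs; the event names and log format are made up for illustration.

```python
# Hypothetical usage events collected by the app's telemetry: (user_id, feature)
events = [
    ("margareth", "change_background_color"),
    ("alice", "export_pdf"),
    ("bob", "export_pdf"),
    ("alice", "change_background_color"),
]

new_features = {"change_background_color", "export_pdf"}

# Adoption = distinct users of a feature / distinct active users overall
active_users = {user for user, _ in events}
adoption = {
    feature: len({u for u, f in events if f == feature}) / len(active_users)
    for feature in new_features
}

for feature, share in sorted(adoption.items()):
    print(f"{feature}: {share:.0%} of active users")
```

A feature sitting near zero adoption, cross-referenced with its implementation cost, is exactly the signal an executive needs from a DevOps initiative.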
Disclaimer: It is normal and healthy for the dev team to create new things that might or might not work. It is OK to make mistakes and to get priorities wrong. It is not OK when the majority of the code shipped goes unused.