Accelerate: Building and Scaling High Performing Technology Organizations, by Nicole Forsgren, Jez Humble, and Gene Kim
I was inspired to read this book because our project team at work is interested in making some changes to our infrastructure. Currently, we use outdated tools to track our software bugs and do our reporting. I think that transitioning to Atlassian’s Jira and updating some of our project management practices will have significant positive implications for the future. I expected “Accelerate” to articulate these benefits and make the case for improved DevOps practices by laying out principles, recommending how to apply them in the workplace, and perhaps including specific examples. That’s not what the book is mainly about. Rather, it was written to explain the results of DevOps research and to argue that those results are reliable. We have long understood that DevOps improves overall company performance, but that conclusion was anecdotal. The purpose of “Accelerate” is to present quantitative data that supports the perceived benefits of DevOps. To accomplish this, the authors first discuss the results of their research, then describe their specific research methods and argue why the results scientifically support DevOps practices.
The big idea is that software delivery performance affects overall organizational performance, for better or worse. Since we live in a technology-driven world and software delivery performance has a significant impact on a company’s profitability, we should all be committed to better software delivery practices. Broadly, the authors identified five statistically significant categories of capabilities that affect performance:
Continuous delivery
Architecture
Product and process
Lean management and monitoring
Culture
Each of these categories has more specific sub-categories, broken down below. I tried to evaluate our team’s performance in each sub-category and provide a brief description. Overall, I think we are performing well; however, there is always room for improvement, and this table helps identify some of those potential areas. Strong leadership drives continuous delivery practices, embodies lean management, improves software delivery performance, reduces deployment pain and rework, improves job satisfaction, and reduces burnout. It all starts with leadership.
Category | Sub-Category | Do we do this? | Description |
--- | --- | --- | --- |
Continuous Delivery | Version control | Yes | We use git and Bitbucket |
Continuous Delivery | Automate deployment process | Partially | Deployment should not require manual intervention. We can improve this |
Continuous Delivery | Continuous integration | Yes | Code is regularly checked in and a series of tests are automatically run |
Continuous Delivery | Test automation | Yes | Tests run automatically |
Continuous Delivery | Test data management | Yes | Ability to condition your test data, acquire test data on demand, and run subsets of tests |
Continuous Delivery | Include security | N/A | Security is an entirely different topic for our project |
Continuous Delivery | Continuous delivery | No | The software is in a deployable state to end users anywhere in its lifecycle. We still require manual steps to make changes deployable |
Architecture | Architecture | Yes | Allow teams to use whatever software enables them to be the most effective |
Product and Process | Gather customer feedback | Yes | We do this bi-weekly or monthly with Consortium meetings |
Product and Process | Make the flow of work visible | Partially | This can be improved and I think that Jira has great features to improve visibility |
Product and Process | Work in small batches | No | I think we do a poor job here. Work should be divided into small tasks that take approximately 1 week to complete |
Product and Process | Foster experimentation | Yes | Allow the team to experiment and keep code reviews inside the team |
Lean management and monitoring | Lightweight change approvals | Yes | Do not use external change approval boards. Use peer reviews |
Lean management and monitoring | Proactively check system health | Yes | Be proactive to identify infrastructure problems and other issues |
Lean management and monitoring | Limit work in progress (WIP) | No | Use something to limit the number of tasks that a single person is working on. We do not do this, and I think that it can become a problem |
Lean management and monitoring | Visualize work | No | Use visual displays such as dashboards, Kanban boards, or internal websites |
Lean management and monitoring | Encourage learning | Yes | I try to do this with my team |
Culture | Support collaboration | Yes | I try to encourage the team to ask questions and find help |
Culture | Provide resources and tools | Yes | This helps with job satisfaction to ensure that work is challenging and meaningful |
Culture | Transformational leadership | Yes | That's what I'm trying to do right now - develop a vision, communicate my inspiration, recognize strong performance, and provide intellectual stimulation |
To reach these conclusions, the authors used surveys. The second part of “Accelerate” discusses surveys and explains why the authors’ results can be trusted. I really enjoyed this part of the book! Under normal circumstances, I would not choose to read about surveys. But since it was included in this book, I decided to keep reading, and once I started reading, I became more and more interested.
I learned the difference between primary and secondary research. Secondary research relies on data collected by somebody else, whereas primary research requires the research team to collect new data. Primary research requires significantly more effort, and it is the form the authors used to generate their conclusions. Surveys often get a poor reputation because the researchers who design them do a poor job. Common errors include asking leading questions, asking loaded questions, using unclear language, and asking multiple questions in one. Because of these mistakes, survey results are often unreliable and biased. With well-planned questions, however, these errors can be avoided and reliable conclusions can be drawn. Overall, I was convinced that properly developed surveys yield valuable data. Even rogue respondents, who lie on the survey to manipulate the results, cannot sway a well-designed survey: for a rogue to have a significant impact, thousands of other rogues would have to coordinate their falsified responses so that they all align perfectly. The probability of that occurring is extremely low.
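To convince myself of this point, I sketched a quick simulation. This is my own illustration, not from the book, and the sample sizes and answer distribution are made up: even when 1% of respondents give extreme answers to drag the result down, the aggregate of a few thousand honest responses barely moves.

```python
import random

random.seed(42)

# 2,000 honest respondents answer a 1-5 Likert question,
# clustered around "agree" (4).
honest = [random.choice([3, 4, 4, 4, 5]) for _ in range(2000)]

# 20 rogues (1% of the sample) all give the lowest possible
# score to try to manipulate the result.
rogues = [1] * 20

combined = honest + rogues
mean_honest = sum(honest) / len(honest)
mean_all = sum(combined) / len(combined)

print(f"honest mean: {mean_honest:.3f}")
print(f"with rogues: {mean_all:.3f}")
print(f"shift:       {mean_all - mean_honest:+.3f}")
```

With 1% uncoordinated rogues, the mean shifts by only a few hundredths of a point; shifting the conclusion would take a large coordinated block of falsified responses, which is exactly the book's argument.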
An interesting idea was the employee Net Promoter Score (NPS), which was used to measure job satisfaction. It is based on the answer to a single question:
How likely is it that you would recommend your company/product/service to a friend or colleague?
NPS is simple, reliable, and well understood. The results of the NPS survey showed that high performers were 2.2 times more likely to recommend their company/product/service than low performers. This is significant because it shows that high performers have a sense of ownership and are more engaged. When employees feel connected to their work, and have values that align with their company’s values, those employees do better work and the company thrives.
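For reference, NPS is computed from the 0-10 responses to that question: 9-10 are "promoters," 0-6 are "detractors," and the score is the percentage of promoters minus the percentage of detractors. A minimal sketch (the sample responses are made up):

```python
def nps(scores):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

# Hypothetical survey responses on the 0-10 scale.
responses = [10, 9, 9, 8, 7, 7, 6, 5, 9, 10]
print(nps(responses))  # 5 promoters, 2 detractors out of 10 -> 30.0
```

Note that the 7s and 8s ("passives") count toward the total but not toward either group, which is why the score can range from -100 to +100.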