Following the famous mantra “you can’t manage what you can’t measure”, Scrum teams often have a set of metrics to monitor their activity. Velocity, the amount of work performed by a team during a single sprint, might be the most famous Agile metric. Doc Norton has written an interesting book about the downsides of velocity and about what might make a good metric for an Agile team.
Drawing on his experience, Doc Norton opens his book with a long discussion of the flaws of the velocity metric. From this base, he then explains the issues with metrics in general and proposes alternative measures (lead time, team joy, etc.) to monitor the evolution of Agile teams. My favorite part is the section that examines the problem from a customer perspective. The book is easy to read, mixing concepts with stories from the trenches.
I would naturally recommend this book to every member of a Scrum team, but it should also provide interesting ideas to any software development project manager or executive.
Reference: Escape Velocity – Better Metrics for Agile Teams, Doc Norton, https://leanpub.com/escapevelocity
Velocity is a poor metric that offers far less value than the angst that results from the perpetual misuse.
In agile software development, velocity is the rate at which a team delivers value to the customer. Value is quantifiable, and we can therefore say it has size. The direction is indicated by the traversal from idea to implementation. A team that completes lots of tasks but delivers no value to the customer should have a velocity of zero. I say “should” because I see an awful lot of teams that measure velocity based on criteria other than value to the customer. Some consider a story done when development is complete. Others are a tad more “mature” and count a story as done when it is ready to go to production. Not actually in production, mind you; that takes weeks – what with manual testing requirements and change control procedures and getting into the queue for the operations team… And some teams consider a story complete once it is actually in production. But even that only presumes value to the customer. What if you didn’t count a story as done until customers were actually using it and liked it? If velocity is the rate at which we deliver value to the customer, wouldn’t a true measure of velocity include verification of value delivery?
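The point above can be made concrete with a small sketch. The stories, fields, and point values below are invented for illustration; the only idea taken from the text is that the same sprint yields very different “velocities” depending on which definition of done you count.

```python
# Hypothetical sprint data: each story records how far it actually got.
stories = [
    {"points": 5, "dev_complete": True,  "in_production": True,  "value_verified": True},
    {"points": 3, "dev_complete": True,  "in_production": True,  "value_verified": False},
    {"points": 8, "dev_complete": True,  "in_production": False, "value_verified": False},
    {"points": 2, "dev_complete": False, "in_production": False, "value_verified": False},
]

def velocity(stories, done):
    """Sum the points of stories that satisfy the chosen definition of done."""
    return sum(s["points"] for s in stories if done(s))

# Same sprint, three definitions of done, three different numbers:
print(velocity(stories, lambda s: s["dev_complete"]))    # counts dev-complete work
print(velocity(stories, lambda s: s["in_production"]))   # counts shipped work
print(velocity(stories, lambda s: s["value_verified"]))  # counts verified value
```

A team counting dev-complete work would report 16 points here, while a team counting only verified customer value would report 5 – which is exactly why comparing velocities across teams with different “done” criteria is meaningless.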
Leaders (and teams) attempt to achieve velocity increases in numerous ways. Most, as you can imagine, have unintended side effects on the teams, and none significantly improves the actual flow or delivery of value to customers. Here we explore a few ways teams might be encouraged to increase their velocity. Unfortunately, no matter how well-intentioned, using such techniques still has a negative impact.
If you have to estimate (you probably don’t), then I recommend you use a reference story. Identify a story that the entire team can agree is 1 point. Identify a second story that the entire team can agree is 5 points. When you are estimating, keep these stories in mind and attempt to estimate all others relative to them. Then, at the end of the estimating session, compare the stories to one another and to your reference stories. Do the points still feel right? Compare the stories to random selections from prior iterations. Do the points still feel right? If not, adjust your new estimates as necessary.
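The end-of-session check described above can be sketched mechanically: pull the reference stories plus a random sample of prior estimates, and put them in front of the team for a “do the points still feel right?” comparison. All story titles and points below are invented for illustration.

```python
import random

# Hypothetical prior estimates and reference stories (title, points).
prior_stories = [
    ("Export report as CSV", 1),
    ("Add login audit trail", 3),
    ("Migrate billing service", 8),
    ("Rename dashboard widget", 2),
    ("Integrate payment gateway", 5),
]
reference_stories = [
    ("Fix typo on settings page", 1),
    ("Add search to order history", 5),
]

def calibration_sample(priors, references, k=3, seed=None):
    """Return the reference stories plus k random prior stories,
    as a comparison set for the end of an estimating session."""
    rng = random.Random(seed)
    return list(references) + rng.sample(priors, k)

for title, points in calibration_sample(prior_stories, reference_stories, seed=7):
    print(f"{points} pts  {title}")
```

The judgment itself stays with the team; the sketch only removes the temptation to cherry-pick flattering comparisons by sampling at random.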
Most teams I work with have three distinct roles: BA, developer, and QA. Most teams I work with have three distinct phases of work: gathering requirements, building, and verifying. Even on agile teams, these separations exist. There are clear delineations in the process and clear segregation of responsibilities. But this segregation contributes to erratic velocity. How many “agile” teams do you know of where the BA group is an iteration ahead of the developers, who are an iteration ahead of QA, leaving us with a three-iteration cycle time and significant lag in the feedback loops between the groups? Tighten the loops. Get people working together not only in close proximity, but in a close time frame. Involve the developers and QA in the formation of requirements. Push QA to the front and automate, automate, automate. Don’t let manual testing be a bottleneck. Start development before you’ve polished the requirements. And don’t wait until the end to test it all comprehensively.
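The three-iteration lag is easy to see with dates. A minimal sketch, assuming two-week iterations and a single hypothetical story whose timestamps are invented for illustration: each role picks the story up one iteration after the previous one, so calendar lead time balloons even though each phase is short.

```python
from datetime import date

# Hypothetical story moving through a staggered BA -> Dev -> QA pipeline,
# each role working one two-week iteration behind the previous one.
story = {
    "requirements_started": date(2023, 1, 2),   # BA iteration
    "development_done":     date(2023, 1, 30),  # Dev iteration, one behind
    "verified":             date(2023, 2, 13),  # QA iteration, one more behind
}

def lead_time_days(story):
    """Calendar days from the start of requirements to verification."""
    return (story["verified"] - story["requirements_started"]).days

print(lead_time_days(story))  # 42 days: three full two-week iterations
```

Tightening the loops, as argued above, attacks exactly this number: if all three roles touch the story within the same iteration, the same work lands in a fraction of the calendar time.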