In my previous blog post about the “Real Danger of Quick-and-Dirty Programming”, I criticized some Agile practices such as measuring the Velocity and drawing Burndown charts. In this post I would like to extend and clarify this criticism, supporting it with some very relevant references.
Here is the definition of Velocity from Wikipedia:
“Velocity is a capacity planning tool sometimes used in Agile software development. The velocity is calculated by counting the number of units of work completed in a certain interval, the length of which is determined at the start of the project. The main idea behind Velocity is to help teams estimate how much work they can complete in a given time period based on how quickly similar work was previously completed.”
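In code, this definition boils down to averaging recently completed work. Here is a minimal sketch (the function name, window size and sprint data are hypothetical examples, not part of the quoted definition):

```python
# Minimal sketch of velocity-based capacity planning:
# average the story points completed in the most recent sprints.

def velocity(completed_points_per_sprint, window=3):
    """Average story points completed over the last `window` sprints."""
    recent = completed_points_per_sprint[-window:]
    return sum(recent) / len(recent)

# Points the team actually finished in each past sprint (example data).
history = [21, 24, 18, 23, 22]

forecast = velocity(history)  # capacity estimate for the next sprint
print(f"Forecast capacity: {forecast:.1f} points")  # → 21.0
```

Note that nothing in this calculation says anything about the quality of the work counted — which is exactly the problem discussed below.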
In my opinion, the main problem with Velocity arises when it becomes a goal in itself. When software development teams are focused on keeping or increasing their Velocity, they become willing to make sacrifices.
Having Velocity as a goal introduces trade-offs:
- Should we make it faster or should we make it better?
- Do we have enough time for Refactoring?
- Why not accumulate some Technical Debt to increase our Velocity?
In contrast to the Agile term, here is the definition of the physical concept of velocity, also from Wikipedia:
“Velocity is a physical vector quantity; both magnitude and direction are needed to define it. The scalar absolute value (magnitude) of velocity is called ‘speed’, a quantity that is measured in metres per second (m/s) in the SI (metric) system.”
Thus, apparently, what we call Velocity in Agile should actually be called Speed, because there is no clear definition of a “direction”. Of course any User Story that is implemented should satisfy some basic quality attributes such as correctness. But what about other desirable quality attributes, such as low coupling, high cohesion, efficiency and robustness? Who is measuring them? Who is tracking them? Unfortunately, I have never heard of an Agile team drawing charts to track the average Cyclomatic Complexity of their code.
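Tracking such a metric would not even be hard. Here is a minimal sketch using Python’s standard ast module; it is a simplified approximation of McCabe’s Cyclomatic Complexity (the set of node types counted and the sample source are simplifying assumptions — real analysis tools count a few more constructs):

```python
import ast

# Decision points that add a branch. Counting them gives a simplified
# approximation of Cyclomatic Complexity (complexity = decisions + 1).
DECISION_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler, ast.BoolOp)

def cyclomatic_complexity(func_node):
    """Complexity of a single function definition node."""
    decisions = sum(isinstance(node, DECISION_NODES)
                    for node in ast.walk(func_node))
    return decisions + 1

def average_complexity(source):
    """Average complexity over all top-level function definitions."""
    tree = ast.parse(source)
    funcs = [n for n in ast.walk(tree) if isinstance(n, ast.FunctionDef)]
    return sum(map(cyclomatic_complexity, funcs)) / len(funcs)

sample = """
def f(x):
    if x > 0:
        return x
    return -x

def g(items):
    for i in items:
        if i % 2:
            print(i)
"""
print(average_complexity(sample))  # → 2.5 (f has complexity 2, g has 3)
```

Plotting this number sprint after sprint would give a team a chart of “direction”, not just speed.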
When the only thing you are really measuring is your Speed, you will attempt anything to “make progress” as fast as possible. Anything that causes you to move slower may be considered a burden. Including improving your code. Including Refactoring. Including avoiding Technical Debt.
To conclude this article, here are two opinions about Velocity from people we should listen to.
Sacrificing Quality For Predictability by Joshua Kerievsky
Joshua Kerievsky is the CEO of Industrial Logic and the author of the book “Refactoring to Patterns”. Below is an extract from his article “Stop Using Story Points”:
“Technical practices like test-driven development and refactoring are often the first things to be dropped when someone is trying to ‘make their estimate.’
We see this behavior emerge from the outset, starting in training: a team that completes all work at the end of a timebox (say, two hours) often does so by sacrificing quality.
For years, we addressed this common problem by helping students experience what it was like to deliver on commitments while also producing quality code.
We explained how great teams ultimately learned to maintain technical excellence while also sustaining a consistent velocity (the same number of story points completed per sprint).
Yet the fact is, such teams are rare and their consistency is often fleeting as their team size changes or other pressures take hold.
For too many years, I thought it was important to maintain a consistent velocity, since it was supposed to be helpful in planning and predicting when a product/system could be completed.
Yet I’ve seen too many teams sacrifice quality and face compounding technical debt solely to make or exceed their estimates and keep their burndown charts looking pretty.”
Velocity is Killing Agility by Jim Highsmith
Jim Highsmith is an executive consultant with ThoughtWorks and author of several books on Agile software development, including “Agile Software Development Ecosystems” and “Agile Project Management: Creating Innovative Products”. Here is an extract from his article “Velocity is Killing Agility!”:
“Over emphasis on velocity causes problems because of its wide use as a productivity measure. The proper use of velocity is as a calibration tool, a way to help do capacity-based planning, as Kent Beck describes in ‘Extreme Programming: Embrace Change’. Productivity measures in general make little sense in knowledge work—but that’s fodder for another blog. Velocity is also a seductive measure because it’s easy to calculate. Even though story-points per iteration are calculated on the basis of releasable features, velocity at its core is about effort.
While Agile teams try to focus on delivering high value features, they get side-tracked by reporting on productivity. Time after time I hear about comments from managers or product owners, ‘Your velocity fell from 24 last iteration to 21 this time, what’s wrong? Let’s get it back up, or go even higher.’ In this scenario velocity has moved from a useful calibration tool (what is our capacity for the next iteration?) to a performance (productivity) measurement tool. This means that two things are short-changed: the quality of the customer experience (quantity of features over customer experience) and improving the delivery engine (technical quality).”
What do you think? Should we continue measuring our Velocity? Please share your opinions in the comments below.
Awesome post. Thanks for sharing it. I’m not exactly one of the “speediest” developers there are, so I can recall at least one “agile” team where my “speed” did not fit with the rest of the members. If I reflect back, this “agile” mentality has gained ground in the industry because in business “time = money”. Nonetheless, I doubt that going fast allows you to deliver long-term quality results in 100% of the projects, let alone to really ponder whether the project(s) indeed went in the right direction.
I’ve always understood velocity to be an estimate, solely for the team concerned, of what they think they can achieve this iteration. Also, it will differ for each team and individual and that’s ok. It’s arbitrary and has no meaning beyond helping estimate sizes.
They know roughly how many points they can get done, and use the very rough points allocated to each task to work out what they can fit into the next iteration.
It’s an extremely crude, arbitrary measure to make sure you’re not overcommitting and that you can keep your promises. If a manager starts saying you need to “improve” an arbitrary measure, it’s really easy to shut them up: just multiply everything by 2 before you put it in their report, or create some tasks that are solely there to bump the number up and don’t take any time to do at all. Or educate them that the number has nothing to do with actual productivity.
In essence they haven’t understood what velocity is.
I tend to use T-shirt sizes – Small, Medium and Large – and say roughly how many of each type can get done. Large tasks are anathema anyway because their variability can completely skew everything (see http://www.amazon.co.uk/gp/product/B007TKU0O0). I then add contingency on afterwards, 30% across the whole thing. See http://www.amazon.co.uk/gp/product/B002LHRM2E for why you stick contingency on the whole project or iteration instead of each item. I do tend to work for people who understand this is necessary though. 🙂
But in essence, if people are using velocity as a stick they don’t understand that it’s completely arbitrary and team-specific. You can make the numbers add up to anything you please if you really want.