The Problem with Velocity in Agile Software Development

In my previous blog post, “The Real Danger of Quick-and-Dirty Programming”, I criticized some Agile practices such as measuring Velocity and drawing Burndown charts. In this post I would like to extend and clarify this criticism, supporting it with some very relevant references.

Here is the definition of Velocity from Wikipedia:

“Velocity is a capacity planning tool sometimes used in Agile software development. The velocity is calculated by counting the number of units of work completed in a certain interval, the length of which is determined at the start of the project. The main idea behind Velocity is to help teams estimate how much work they can complete in a given time period based on how quickly similar work was previously completed.”
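Taken at face value, this definition reduces capacity planning to an average-and-divide calculation. Here is a minimal sketch, with hypothetical numbers:

```python
import math

# Hypothetical sprint history: story points completed in each of the
# last four sprints.
completed_points = [21, 24, 18, 23]

# Velocity: average units of work completed per interval.
velocity = sum(completed_points) / len(completed_points)

# Forecast: how many sprints the remaining backlog should take.
backlog = 130  # estimated story points left
sprints_needed = math.ceil(backlog / velocity)

print(f"average velocity: {velocity:.1f} points/sprint")
print(f"forecast: about {sprints_needed} sprints remaining")
```

As the rest of this post argues, the danger begins when this forecast stops being a planning aid and becomes a target.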

In my opinion, the main problem with Velocity is when it becomes a goal in itself. When software development teams are focused on keeping or increasing their Velocity, they will be ready to make sacrifices.

Having Velocity as a goal introduces trade-offs:

  • Should we make it faster or should we make it better?
  • Do we have enough time for Refactoring?
  • Why not accumulate some Technical Debt to increase our Velocity?

In contrast to the Agile term, here is the definition of the Physical concept of velocity, also from Wikipedia:

“Velocity is a physical vector quantity; both magnitude and direction are needed to define it. The scalar absolute value (magnitude) of velocity is called ‘speed’, a quantity that is measured in metres per second (m/s) in the SI (metric) system.”

Thus, apparently, what we call Velocity in Agile should actually be called Speed, because no dimension of “Direction” is clearly defined. Of course any User Story that is implemented should satisfy some basic quality attributes such as Correctness. But what about other desirable quality attributes, such as low coupling, high cohesion, efficiency and robustness? Who is measuring them? Who is tracking them? Unfortunately, I’ve never heard of an Agile team drawing charts to track the average Cyclomatic Complexity of their code.
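Tracking such a metric would not even be hard. As an illustration (my own sketch, not a standard tool), here is a rough McCabe-style cyclomatic complexity counter for Python code: one plus the number of branching constructs. It undercounts chained boolean operators, but it is enough to draw a chart from.

```python
import ast

# Constructs that add a decision point (a rough approximation).
BRANCHES = (ast.If, ast.IfExp, ast.For, ast.While,
            ast.ExceptHandler, ast.And, ast.Or)

def cyclomatic_complexity(source: str) -> int:
    """McCabe-style complexity: 1 + number of branching nodes."""
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, BRANCHES) for node in ast.walk(tree))

sample = """
def classify(x):
    if x < 0:
        return "negative"
    elif x == 0:
        return "zero"
    return "positive"
"""
print(cyclomatic_complexity(sample))  # if + elif -> complexity 3
```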

When the only thing you are really measuring is your Speed, you will attempt anything to “make progress” as fast as possible. Anything that causes you to move slower may be considered a burden. Including improving your code. Including Refactoring. Including avoiding Technical Debt.

To conclude this article, I bring below two opinions about Velocity from people we should listen to.

Sacrificing Quality For Predictability by Joshua Kerievsky

Joshua Kerievsky is the CEO of Industrial Logic and author of the book “Refactoring to Patterns”. Below is an extract from his article “Stop Using Story Points”:

“Technical practices like test-driven development and refactoring are often the first things to be dropped when someone is trying to ‘make their estimate.’

We see this behavior emerge from the outset, starting in training: a team that completes all work at the end of a timebox (say, two hours) often does so by sacrificing quality.

For years, we addressed this common problem by helping students experience what it was like to deliver on commitments while also producing quality code.

We explained how great teams ultimately learned to maintain technical excellence while also sustaining a consistent velocity (the same number of story points completed per sprint).

Yet the fact is, such teams are rare and their consistency is often fleeting as their team size changes or other pressures take hold.

For too many years, I thought it was important to maintain a consistent velocity, since it was supposed to be helpful in planning and predicting when a product/system could be completed.

Yet I’ve seen too many teams sacrifice quality and face compounding technical debt solely to make or exceed their estimates and keep their burndown charts looking pretty.”

Velocity is Killing Agility by Jim Highsmith

Jim Highsmith is an executive consultant with ThoughtWorks and author of several books on Agile software development, including “Agile Software Development Ecosystems” and “Agile Project Management: Creating Innovative Products”. Here is an extract from his article “Velocity is Killing Agility!”:

“Over-emphasis on velocity causes problems because of its wide use as a productivity measure. The proper use of velocity is as a calibration tool, a way to help do capacity-based planning, as Kent Beck describes in ‘Extreme Programming: Embrace Change’. Productivity measures in general make little sense in knowledge work—but that’s fodder for another blog. Velocity is also a seductive measure because it’s easy to calculate. Even though story-points per iteration are calculated on the basis of releasable features, velocity at its core is about effort.

While Agile teams try to focus on delivering high value features, they get side-tracked by reporting on productivity. Time after time I hear about comments from managers or product owners, ‘Your velocity fell from 24 last iteration to 21 this time, what’s wrong? Let’s get it back up, or go even higher.’ In this scenario velocity has moved from a useful calibration tool (what is our capacity for the next iteration?) to a performance (productivity) measurement tool. This means that two things are short-changed: the quality of the customer experience (quantity of features over customer experience) and improving the delivery engine (technical quality).”

What do you think? Should we continue measuring our Velocity? Please share your opinions in the comments below.


On the Real Danger of Quick-and-Dirty Programming

[Image: quote by Edsger Dijkstra]

As in Dijkstra’s quote above, when people criticize Quick-and-Dirty programming they are in general focusing on its negative impact on the system being developed. Software that is built following a Quick-and-Dirty approach will certainly have some serious deficiencies, which are also called Technical Debt.

However the Quick-and-Dirty approach has a different consequence that is as important as its violation of software design principles: It destroys the morale of the development team. Programmers should be proud of their work, but when they adopt Quick-and-Dirty solutions they are sacrificing their own professionalism.

In one of my first posts in this blog, I claimed that “Nothing is More Effective Than Enthusiasm”. The idea is that the best programmers are always enthusiastic about their work. But working Quick-and-Dirty is actually an enthusiasm-killer. Eventually the developers will become so disgusted with the system they are building that they will prefer to leave the project rather than try to fix it.

Doing It Wrong

Besides the suffering caused by intentionally creating a bad product, programmers constantly working in Quick-and-Dirty mode will never have the opportunity to gain experience, improve their skills and actually become professional software developers. As in the quote by Frank Sonnenberg:

“Practice doesn’t make perfect if you’re doing it wrong.”

Worse yet, working Quick-and-Dirty may become a habit. In this case the poor and simplistic solutions may be chosen even when there is enough time to implement a proper design and avoid the creation of Technical Debt. Instead of being an emergency solution, Quick-and-Dirty may become the standard approach.

Unfortunately I think this phenomenon is very common in Agile projects: Programmers are motivated to adopt Quick-and-Dirty solutions to increase their own Velocity and display more progress in the Burn-Down charts. I discuss that in my article “The End of Agile: Death by Over-Simplification”.

In conclusion, if you are a software developer currently working on a project in which most of your work is Quick-and-Dirty, I seriously recommend that you consider finding another job, either on a different project in your company or in another company. Good luck!

What do you think? Do you agree? Please share your experience in the comments below.


Watch-It-Next: A Contextual TV Recommendation System

Last week my colleague Raz Nissim presented our joint work at the European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML PKDD), which was held at Porto, Portugal. This was part of a Yahoo Labs project in which we investigated Recommender Systems for TV shows, under the leadership of Ronny Lempel and in collaboration with Michal Aharon, Eshcar Hillel and Amit Kagian.

Title: Watch-It-Next: A Contextual TV Recommendation System


As consumers of television are presented with a plethora of available programming, improving recommender systems in this domain is becoming increasingly important. Television sets, though, are often shared by multiple users whose tastes may greatly vary. Recommendation systems are challenged by this setting, since viewing data is typically collected and modeled per device, aggregating over its users and obscuring their individual tastes.

This paper tackles the challenge of TV recommendation, specifically aiming to provide recommendations for the next program to watch, following the currently watched program on the device. We present an empirical evaluation of several recommendation methods over large-scale, real-life TV viewership data. Our extensions of common state-of-the-art recommendation methods, exploiting the current watching context, demonstrate a significant improvement in recommendation quality.

You can download the article from the publisher’s site.

You can also watch here a video (in English) of a previous presentation of our work.

Here are the slides of Raz’s presentation:

Feel free to share your comments below. Thanks!


On Wisdom and Happiness

I spend considerable time thinking about the meaning of Wisdom and Happiness:

  • What are the characteristics of a wise person?
  • What are the characteristics of a happy person?
  • Can wisdom and happiness be measured?

Of course I also think about the relationship between the two:

  • Are wise people happier?
  • Can we increase our happiness by increasing our wisdom?

In the Venn diagram below I try to capture one possible aspect of wisdom: The relationship between the “things we want” and the “things that are good for us”. The set W is the intersection of these two kinds of things. I believe that the number of items in W may be a measure of wisdom, meaning that a wise person is able to understand what is good for him/her.


Consequently, happiness would be the capacity to obtain these “things that we want and that are good for us”. In the Venn diagram below, the set H is the intersection between W and the “things we have”. I believe that the number of items in H may be a measure of happiness, meaning that a happy person has many of these things that he/she wants and that are good for him/her.


Of course I’m not talking only about material possessions. I believe that the “things you have” should consist mostly of the personal goals you achieved and the dreams you realized.

So what is the path to happiness? First, you increase the number of things in W by understanding which things are good for you and “wanting them”. Then you increase the number of things in H by obtaining the things that you want and that are good for you.
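For the programmers among my readers, these two definitions translate directly into set intersections. The items below are of course just hypothetical examples:

```python
# Things we want, things that are good for us, and things we have.
want = {"career growth", "fame", "health", "close friends"}
good = {"health", "close friends", "learning", "exercise"}
have = {"health", "fame", "learning"}

W = want & good  # wisdom: the wants that are also good for us
H = W & have     # happiness: of those, the ones we have obtained

print(f"wisdom score:    {len(W)}")   # 2 -> "health", "close friends"
print(f"happiness score: {len(H)}")   # 1 -> "health"
```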

In other words, before realizing your dreams, make sure you are dreaming the right dreams.

What do you think? Do you agree? Please share your comments below.


Resource Adaptive Software Systems

The International Association of Software Architects (IASA) in Israel organized a special event about Adaptive Software Systems. We invited Tom Mueck to give a talk about “Resource Adaptive Software Systems”.

Title: Resource Adaptive Software Systems

Abstract: DARPA issued an announcement about building Resource Adaptive Software Systems this past April. More specifically “to build survivable, long-lived complex software systems that are robust to changes in the resources provided by their operational environment”. The motivation behind this project has to do with the short shelf life of modern day software systems:

  • Orders of magnitude shorter than that of other engineering artifacts.
  • Strongly inversely correlated with the rate and magnitude of change.
  • When changes do occur they lead to high software maintenance costs and premature obsolescence of otherwise functionally sound systems.

All of these factors greatly impact an enterprise architecture, and every architect wishes there were better technology available.

This talk will introduce a new general computing abstraction named Resource-Oriented Computing (ROC) and discuss these areas of particular interest to architects:

  • Coupled, loosely coupled and decoupled systems
  • Resource-Oriented Architectures
  • Messaging
  • Risk surface / Security

Bio: Tom Mueck is the Director of Business Development at 1060 Research, which was spun out of Hewlett Packard Labs in Bristol. Tom has an MA in Asian Studies and an MBA from the University of Chicago. Previously he was a Political Risk and Financial Analyst, and Sales Manager at Hotsip AB (spun out of Ericsson).

These are the original slides of Tom’s presentation:

Here is the video of the talk (in English):

Feel free to share your comments below.


Introduction to Event Sourcing

The International Association of Software Architects (IASA) in Israel organized a special event about Adaptive Software Systems. We invited Vladik Khononov to give a talk about “Introduction to Event Sourcing”.

Title: Introduction to Event Sourcing

Abstract: Event sourcing is a pattern for modeling the application’s business logic. It states that all changes to application state should be defined and stored as a sequence of events. Its advantages are many:

  • Gives freedom to refactor the business logic, allowing better response to new requirements.
  • Suitable for building scalable, highly concurrent, distributed systems.
  • Stored events give the true history of a system, which is required by law in some industries.
  • The system’s state can be reversed to any point in the past for retroactive debugging.
  • The required infrastructure is simple – no monstrous databases are involved.

Vladik will also describe CQRS, an architecture that goes hand in hand with Event Sourcing.
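To make the pattern concrete, here is a minimal illustrative sketch (the names and the bank-account domain are my own, not taken from Vladik’s talk): every change is stored as an immutable event, and the current state is rebuilt by replaying the event log. Replaying only a prefix of the log shows the retroactive-debugging advantage mentioned above.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Deposited:
    amount: int

@dataclass(frozen=True)
class Withdrawn:
    amount: int

def apply_event(balance: int, event) -> int:
    """Pure transition function: current state + event -> next state."""
    if isinstance(event, Deposited):
        return balance + event.amount
    if isinstance(event, Withdrawn):
        return balance - event.amount
    return balance

def replay(events, upto=None) -> int:
    """Rebuild state from the event log; `upto` replays only a prefix."""
    balance = 0
    for event in events[:upto]:
        balance = apply_event(balance, event)
    return balance

log = [Deposited(100), Withdrawn(30), Deposited(50)]
print(replay(log))          # current state: 120
print(replay(log, upto=2))  # state after the first two events: 70
```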

Bio: Vladik Khononov is a software engineer with 15 years’ expertise in building web and enterprise applications for leading Israeli and global companies. Currently he serves as a solutions architect for Plexop, an Israel-based startup company, where he specializes in Domain-Driven Design, CQRS/ES, and the design of scalable, fault-tolerant, distributed systems.

These are the original slides of Vladik’s presentation:

Here is the video of the talk (in Hebrew):

Feel free to share your comments below.


Antifragile Software Design

The International Association of Software Architects (IASA) in Israel organized a special event about Adaptive Software Systems. I was glad to be invited to give a talk about “Antifragile Software Design”.

Title: Antifragile Software Design

Abstract: The concept of Antifragility was introduced by Nassim Taleb to describe systems that benefit from impacts and volatility.

In this talk we will discuss how this concept may be applied in the field of Software Design with the goal of developing Change-Resilient Systems.

In particular we will address two patterns which frequently appear in Antifragile systems:

1) The Barbell Strategy and the importance of the separation between high-level abstract elements and concrete implementation details.

2) The Componentization Strategy and its applications in SOA, Microservices and Software Product Lines.

Bio: Hayim Makabee was born in Rio de Janeiro. He immigrated to Israel in 1992 and completed his M.Sc. studies in Computer Science at the Technion. Since then he has worked for several hi-tech companies, including some start-ups. Currently he is a Predictive Analytics Expert at Pontis. He is also a co-founder of the International Association of Software Architects (IASA) in Israel. Hayim is the author of a book about Object-Oriented Programming and has published papers in the fields of Software Engineering, Distributed Systems and Genetic Algorithms.

These are the original slides of Hayim’s presentation:

Here is the video of the talk (in Hebrew):

Please see here my previous posts on Antifragility and Software Design.

Feel free to share your comments below.
