The End of Agile: Death by Over-Simplification

There is something fundamentally wrong with the current adoption of Agile methods. The term Agile was abused, becoming the biggest hype in the history of software development and generating a multi-million dollar industry of self-proclaimed Agile consultants and experts selling dubious certifications. People forgot the original Agile values and principles, and instead follow dogmatic processes with rigid rules and rituals.

But the biggest sin of the Agile consultants was to over-simplify the software development process and underestimate the real complexity of building software systems. Developers were convinced that the software design would naturally emerge from the implementation of simple user stories, that it would always be possible to pay the technical debt in the future, that constant refactoring is an effective way to produce high-quality code, and that agility can be assured by strictly following an Agile process. We will discuss each of these myths below.

Myth 1: Good design will emerge from the implementation of user stories

Agile development teams follow an incremental development approach, in which a small set of user stories is implemented in each iteration. The basic assumption is that a coherent system design will naturally emerge from these independent stories, requiring at most some refactoring to sort out the commonalities.

However, in practice the code does not have this tendency to self-organize. The laws governing the evolution of software systems are those of increasing entropy: as we add new functionality, the system tends to become more complex. Thus, instead of hoping for the design to emerge, software evolution should be planned through a high-level architecture that includes extension mechanisms.
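
To illustrate what such an extension mechanism might look like, here is a minimal Java sketch (all names here are invented for the example): the high-level architecture defines a stable extension point, and new functionality arrives as new implementations rather than as modifications to existing code.

    import java.util.HashMap;
    import java.util.Map;

    class Report { /* domain data omitted for brevity */ }

    // A stable extension point defined by the high-level architecture.
    interface ReportExporter {
        String format();              // e.g. "pdf" or "csv"
        void export(Report report);
    }

    // New functionality is added by registering new implementations;
    // the engine itself does not need to be modified.
    class ReportEngine {
        private final Map<String, ReportExporter> exporters = new HashMap<>();

        void register(ReportExporter exporter) {
            exporters.put(exporter.format(), exporter);
        }

        void export(Report report, String format) {
            ReportExporter exporter = exporters.get(format);
            if (exporter == null)
                throw new IllegalArgumentException("No exporter for format: " + format);
            exporter.export(report);
        }
    }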

Myth 2: It will always be possible to pay the technical debt in the future

The metaphor of technical debt became a popular euphemism for bad code. The idea of incurring some debt appears much more reasonable than deliberately producing low-quality implementations. Developers are ready to accumulate technical debt because they believe they will be able to pay this debt in the future.

However, in practice it is not so easy to pay the technical debt. Bad code normally comes together with poor interfaces and inappropriate separation of concerns. The consequence is that other modules are built on top of the original technical debt, creating dependencies on simplistic design decisions that were supposed to be temporary. When someone eventually decides to pay the technical debt, it is already too late: the fix has become too expensive.

Myth 3: Constant refactoring is an effective way to produce high-quality code

Refactoring became a very popular activity in software development; after all, it is focused on improving the code. Techniques such as Test-Driven Development (TDD) allow refactoring to be performed at low risk, since the unit tests automatically indicate whether some working logic has been broken by code changes.

However, in practice refactoring consumes an exaggerated share of the effort invested in software development. Some developers simply do not plan for change, believing that it will always be easy to refactor the system. The consequence is that some teams implement new features very fast in the first iterations, but at some point their work halts and they start spending most of their effort on endless refactorings.

Myth 4: Agility can be assured by following an Agile process

One of the main goals of Agility is to be able to cope with change. We know that nowadays we must adapt to a reality in which system requirements may change unexpectedly, and Agile consultants claim that we can achieve change-resilience by adhering strictly to their well-defined processes.

However, in practice the process alone is not able to provide change-resilience. A software development team will only be able to address changing system requirements if the system was designed to be flexible and adaptable. If the original design did not take into consideration the issues of maintainability and extensibility, the developers will not succeed in incorporating changes, no matter how Agile the development process is.

Agile is Dead, Now What?

If we take a look at the hype chart below, it is clear that, regarding Agile, we are past the “peak of inflated expectations” and getting closer to the “trough of disillusionment”.

[Figure: the Gartner hype cycle]

Several recent articles have proclaimed the end of the Agile hype. Dave Thomas wrote that “Agile is Dead”, and was immediately followed by an “Angry Developer Version”. Tim Ottinger wrote “I Want Agile Back”, but Bob Marshall replied that “I Don’t Want Agile Back”. Finally, what was inevitable just happened: “The Anti-Agile Manifesto”.

Now the question is: what will guide Agile through the “slope of enlightenment”?

In my personal opinion, we will have to go back to basics, to all the wonderful design fundamentals that were being discussed in the ’90s: the SOLID principles of OOD, design patterns, software reuse and component-based software development. Only when we are able to incorporate these basic principles in our development process will we reach a true state of Agility, embracing change effectively.

Another question: what will be the next step in the evolution of software design?

In my opinion: Antifragility. But this is the subject for a future post.

What about you? Did you also experience the limitations of current Agile practices? Please share with us in the comments below.


Coping with Change in Agile Software Development

Traditional software development methodologies, such as Waterfall, tried to follow a series of isolated steps: first you define all the system requirements, then you devise a detailed system design that satisfies these requirements, and then you implement the system according to this design. This approach is also called Big Design Up Front (BDUF), since the detailed design is completed before the implementation starts.

However, in practice software development teams must always face changes in requirements that invalidate parts of the original design. This fact is expressed in one of Lehman’s Laws of Software Evolution:

“Continuing Change — A system must be continually adapted or it becomes progressively less satisfactory.”

The modern approach of Agile software development understands that changes are inevitable and that investing in detailed plans is not effective. This is clearly expressed in one of the values from the Agile manifesto:

“Responding to change over following a plan”

This means that it is more important to be able to respond to change than to follow a detailed plan. For example, here is how the Scrum process increases our ability to respond to change, according to the Scrum Alliance:

“Many projects that have elaborate project plans with detailed Gantt charts have failed. We make the plans so complex and detailed that they’re difficult to modify when changes occur. Agile process replaces project plans with release schedules and burn-down charts that can accommodate change. We can still track progress and, in fact, progress is more transparent than in a typical Waterfall project plan.”

It should be noted that this description focuses entirely on the process: it explains how the Scrum development process is able to cope with changes more effectively than traditional processes.

Now the question is: is it sufficient to have a process that deals with change? Can we continue to design and implement our systems as we did before, or do we also need special design guidelines to make our systems more resilient to change?

I think that the proponents of Agile methods have done really good work in defining new processes and convincing us that BDUF is a bad idea. However, I do not think the question above has received enough attention.

Agile Design

Besides the Agile manifesto, there are also 12 principles that may help guide us in defining the characteristics of an Agile design:

“Welcome changing requirements, even late in development. Agile processes harness change for the customer’s competitive advantage.”

“Continuous attention to technical excellence and good design enhances agility.”

Therefore real agility, the capacity to cope with changing requirements, can only be achieved through investment in high-quality design. In other words, the design must be just as change-resilient as the development process. Trying to deal with constant change through a simplistic design is the most common cause of technical debt and continuous refactoring.

I believe that change-resilient systems must be planned to be adaptable. This means that the design should incorporate degrees of freedom, focusing on low coupling and high cohesion. These degrees of freedom do not necessarily introduce more complexity into the design. Actually, they are often the most effective way to defer decisions until the last responsible moment.

The Last Responsible Moment (LRM) is a lean development concept defined as:

“A strategy of not making a premature decision but instead delaying commitment and keeping important and irreversible decisions open until the cost of not making a decision becomes greater than the cost of making a decision.”

How do you defer a design decision until the last responsible moment? The answer is that you must introduce a placeholder: an abstraction that represents the module that will be implemented in the future. The introduction of this abstraction represents an additional degree of freedom, and it allows the rest of the system to be developed as if the specific decision had already been made.
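
Here is a minimal Java sketch of such a placeholder (the names are hypothetical): the persistence technology has not been chosen yet, so an abstraction plus a provisional implementation keep the decision open while the rest of the system moves forward.

    import java.util.ArrayList;
    import java.util.List;

    class Order { /* fields omitted for brevity */ }

    // The placeholder: an abstraction for a decision (how to persist
    // orders) that is deferred until the last responsible moment.
    interface OrderRepository {
        void save(Order order);
    }

    // A provisional implementation that lets development proceed
    // before the real storage technology is chosen.
    class InMemoryOrderRepository implements OrderRepository {
        private final List<Order> orders = new ArrayList<>();
        public void save(Order order) { orders.add(order); }
    }

    // Client code depends only on the abstraction, as if the
    // decision had already been made.
    class CheckoutService {
        private final OrderRepository repository;
        CheckoutService(OrderRepository repository) { this.repository = repository; }
        void checkout(Order order) { repository.save(order); }
    }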

When the design is adaptable, the changing requirements may be implemented through new versions of existing components, without the need to change the relationships among these components. A specific component is the realization of a low-level design decision, but the high-level abstractions are stable. Thus, there is no need to refactor the system: the components may evolve independently of each other.

Adaptable Design Up Front (ADUF) is an approach that promotes change-resilience through the introduction of degrees of freedom in software design. I explain and discuss the ADUF approach extensively in several of my previous posts.

In Summary

Agile software development has focused mostly on the definition of new processes to cope with continuously changing requirements. However, processes are not enough: it is also necessary to devise new software design approaches to produce systems that are indeed change-resilient. Adaptable Design Up Front is one such approach, based on the introduction of degrees of freedom in software design.

What is your personal experience? What are your design strategies to deal with changing requirements? Please share in the comments below.


IASA Israel hosts Prof. Dan Berry

In the last week of December 2013, we at the International Association of Software Architects (IASA) in Israel had the great pleasure of hosting Prof. Dan Berry for a talk on Requirements Engineering. Please find the details about the talk below, including links to research papers that Prof. Berry has published on this topic.

Title: The Impact of Domain Ignorance on the Effectiveness of Requirements Idea Generation during Requirements Elicitation and on Software Development in General

Abstract:

It is believed that the effectiveness of requirements engineering activities depends at least partially on the individuals involved. One of the factors that seems to influence an individual’s effectiveness in requirements engineering activities is knowledge of the problem being solved, i.e., domain knowledge. While a requirements engineer’s in-depth domain knowledge helps him or her understand the problem more easily, he or she can also fall for tacit assumptions of the domain and might overlook issues that are obvious to domain experts.

Historically, several researchers, including the speaker, have reported observations that sometimes ignorance of the domain in a software development project is useful for promoting the elicitation of tacit assumptions and out-of-the-box ideas.

Recently there have been attempts to confirm these observations with well-designed empirical studies.

This talk describes a controlled experiment to test the hypothesis that adding requirements analysts who are ignorant of the domain to a requirements elicitation team for a computer-based system in that domain improves the effectiveness of the team. The results show some support for accepting the hypothesis. The results were also analyzed to determine the effect of creativity, industrial experience, and requirements engineering experience.

The controlled experiment was followed up by a confirmatory case study of an elicitation brainstorm in a software-producing company by a team with four domain experts from the company and four domain ignorants from our university. According to the company domain experts, the brainstorm produced ideas that they would not have produced themselves.

A different empirical study tested the hypothesis that putting newcomers to a project to work on tasks that benefit from domain ignorance makes their immigration into the project smoother. First, a survey was conducted among software development managers of varying experience to determine which software development activities they thought were at least helped by domain ignorance. Second, transcripts of fourteen interviews with presumably domain-ignorant immigrants to new software development projects at one large company were examined to determine whether the activities performed by those with the smoothest immigrations were activities that are at least helped by domain ignorance. The conclusions are that ignorance can play an important role in software development, but there are many other factors that influence immigration smoothness.

Publications:

Paper: “The impact of domain knowledge on the effectiveness of requirements idea generation during requirements elicitation”, 20th IEEE International Requirements Engineering Conference, 2012

Paper: “An industrial case study of the impact of domain ignorance on the effectiveness of requirements idea generation during requirements elicitation”, 21st IEEE International Requirements Engineering Conference, 2013

Joint work with Ali Niknafs and Gaurav Mehrotra

Prof. Berry’s Bio:

Daniel M. Berry got his B.S. in Mathematics from Rensselaer Polytechnic Institute, Troy, New York, USA in 1969 and his Ph.D. in Computer Science from Brown University, Providence, Rhode Island, USA in 1974. He was on the faculty of the Computer Science Department at the University of California, Los Angeles, California, USA from 1972 until 1987. He was in the Computer Science Faculty at the Technion, Haifa, Israel from 1987 until 1999. From 1990 until 1994, he worked for half of each year at the Software Engineering Institute at Carnegie Mellon University, Pittsburgh, Pennsylvania, USA, where he was part of a group that built CMU’s Master of Software Engineering program. During the 1998-1999 academic year, he visited the Computer Systems Group at the University of Waterloo in Waterloo, Ontario, Canada. In 1999, Berry moved to the Cheriton School of Computer Science at the University of Waterloo. Prof. Berry’s current research interests are software engineering in general, and requirements engineering and electronic publishing in particular.

To be informed about future events of IASA in Israel, please join our LinkedIn group.

This meeting was organized in cooperation with ILTAM.


Adaptable Design Up Front talk at USP (in Portuguese)

This month I had the great pleasure of speaking about Adaptable Design Up Front (ADUF) at the University of Sao Paulo (USP). I was invited by my friend, professor Fabio Kon, who shares my interest in Software Architecture and Agile Methods.

Abstract: This talk tries to answer the question: “How much Design Up Front should be done in an Agile project?” I present my approach of Adaptable Design Up Front (ADUF), describing its rationale, applications in practice and comparison to other approaches such as Emergent Design.

You can see the video and the slides.

Your comments are welcome!


Attention Agile Programmers: Project Management is not Software Engineering

I’m happy to see so many software developers who are enthusiastic about Agile methods. It’s nice when people enjoy their work. However, I think there is a problem: some teams are placing too much emphasis on Project Management activities, while at the same time they appear to have forgotten some basic Software Engineering practices. In this post I will present some examples of this lack of balance between the two.

Drawing Burn-down Charts vs. Drawing UML Diagrams

Burn-down charts help us track the progress being made. They present clearly how much work has been completed and how much work is left until the end of an iteration. By taking a look at the chart, it’s easy to know whether the project is behind or ahead of schedule. But the burn-down chart tells us nothing about the quality of the software being produced, nor does it have any value after the iteration has been completed. Burn-down charts are a project management artifact; they are not a deliverable of the software development process.

UML diagrams are an essential part of the software development process. They are required by the person doing the design to conceive and compare several alternatives. They are needed to present these alternatives and allow design reviews. They serve as part of the documentation of the code that is actually implemented. Drawing UML diagrams is a software engineering activity, and these diagrams are part of the deliverables of the software development process.

However, it seems that some Agile programmers invest very little time drawing UML diagrams. Perhaps they think that these diagrams are only needed in Big Design Up Front, and not when you do Emergent Design or TDD. In my opinion this is a mistake. If you spend more time drawing burn-down charts than UML diagrams, then you are doing more project management than software engineering.

Estimating User Stories vs. Computing Algorithms Complexity

In an Agile project the User Stories are the basic unit of work, and it is very important to estimate the effort that will be required to implement these stories. Techniques such as Planning Poker were invented to make this estimation process as accurate as possible, since iterations are very short and there is not much margin for error. However, the time spent on effort estimation does not affect the quality of the software. Effort estimation is a project management activity.

Professional software developers must be able to compute the complexity of the algorithms they are adopting in their systems. Actually, they should base their design decisions on the different time and space complexities of diverse algorithms and data structures. They should understand the differences between best, worst, and average case complexities. Computing algorithm complexity is a software engineering activity, essential to assure the quality of the systems being developed.

However, it seems that some Agile programmers don’t ask themselves many questions about the complexity of the algorithms they are using. They just go to the library and choose the Collection with the most convenient API. Or they always choose the simplest possible solution, because of KISS and YAGNI. But if you spend more time estimating user stories than computing time and space complexity, then you are doing more project management than software engineering.
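
To make this point concrete, consider the following small Java example: two collections offer the same convenient contains() method, but with very different time complexities.

    import java.util.ArrayList;
    import java.util.HashSet;
    import java.util.List;
    import java.util.Set;

    class MembershipCheck {
        public static void main(String[] args) {
            int n = 1_000_000;
            List<Integer> list = new ArrayList<>();
            Set<Integer> set = new HashSet<>();
            for (int i = 0; i < n; i++) { list.add(i); set.add(i); }

            // Same method name, very different costs:
            // ArrayList.contains() is O(n), a linear scan;
            // HashSet.contains() is O(1) on average, a hash lookup.
            long t0 = System.nanoTime();
            list.contains(n - 1);   // worst case: scans the whole list
            long t1 = System.nanoTime();
            set.contains(n - 1);    // a single hash lookup
            long t2 = System.nanoTime();

            System.out.printf("list: %d ns, set: %d ns%n", t1 - t0, t2 - t1);
        }
    }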

Measuring Velocity vs. Measuring Latency and Throughput

In Agile projects, the team members are constantly measuring their Velocity: how many units of work have been completed in a certain interval. Of course people want to be as fast as possible, assuming that velocity is an indication of their productivity, efficiency and efficacy as software developers. But velocity gives us no indication of the quality of the software being produced. On the contrary, it is common to see Agile teams working very fast in the first iterations, only to slow down considerably when they start doing massive Refactorings. Thus measuring Velocity is just a project management activity with dubious meaning.

Many software developers today are working on client/server systems such as Web sites and Smartphone Apps. These systems are based on the exchange of requests and responses between a client and a server. In such systems, the Latency is the time interval between the moment a request is sent and the moment the response is received. The Throughput is the rate at which requests are handled, i.e., how many requests are answered per unit of time. In client/server systems it is essential to constantly measure the latency and the throughput. A small code change, such as making an additional query to the database, may have a big impact on both.
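
As a minimal sketch of such measurements (using Java’s built-in HTTP client, available since Java 11; the URL is just a placeholder), latency can be measured around each request and throughput over the whole run.

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    class LatencyProbe {
        public static void main(String[] args) throws Exception {
            HttpClient client = HttpClient.newHttpClient();
            HttpRequest request = HttpRequest.newBuilder(
                    URI.create("http://localhost:8080/api/health")) // placeholder URL
                    .build();

            int requests = 100;
            long start = System.nanoTime();
            for (int i = 0; i < requests; i++) {
                // Latency: time from sending the request to receiving the response.
                long t0 = System.nanoTime();
                client.send(request, HttpResponse.BodyHandlers.ofString());
                System.out.println("latency: " + (System.nanoTime() - t0) / 1_000_000 + " ms");
            }
            double elapsedSec = (System.nanoTime() - start) / 1_000_000_000.0;
            // Throughput: how many requests are answered per unit of time.
            System.out.printf("throughput: %.1f requests/sec%n", requests / elapsedSec);
        }
    }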

However, it seems that some Agile programmers are constantly measuring their velocity, but have absolutely no idea what the latency and throughput of the systems they are developing are. They may ignore non-functional requirements and choose simplistic design alternatives that have a huge negative impact on the system’s performance. Thus if you measure your own velocity more frequently than you measure latency and throughput, you are doing more project management than software engineering.

Meetings and Retrospectives vs. Design and Code Reviews

Agile projects normally have daily meetings to discuss how the team is making progress: what has been done, what will be done, and whether there are any obstacles. These meetings do not include any technical discussion. Projects may also have retrospective meetings after each iteration, with the goal of improving the process itself. These meetings are project management activities; they do not address software quality issues.

Design and code reviews focus on software quality. In a design review, an alternative should be chosen based on non-functional quality attributes such as extensibility and maintainability. Code reviews are an opportunity to improve the quality of code that should already be correct. Code reviews should also be used to share technical knowledge and make sure that the team members are really familiar with each other’s work. These kinds of reviews are software engineering activities, with a direct effect on the deliverables of the software development process.

However, it seems that some Agile programmers focus too much on the process and not enough on the product. Successful software development is much more than having a team managing itself and constantly improving its processes. If the quality of the software is decaying, if technical debt is accumulating, if there is too much coupling among modules and if the classes are not cohesive, then the team is not achieving its goal, no matter how efficient the process seems to be. If you are spending more time in stand-up and retrospective meetings than on design and code reviews, then you are doing more project management than software engineering.

In Summary

Drawing burn-down charts, estimating user stories, measuring your velocity and participating in retrospectives are project management activities. Drawing UML diagrams, computing the complexity of algorithms, measuring latency and throughput and participating in design reviews are software engineering activities. If you are a software engineer, you should know what is more important. Don’t let your aspiration to become a Scrum Master make you forget the Art of Computer Programming. Good luck!


Software Product Line Engineering

Last week I had the opportunity to participate in a Product Line Engineering (PLE) seminar by Dr. Paul Clements, who is one of the global experts on this subject and author of the book “Software Product Lines: Practices and Patterns”, among others. For many years, Paul has been at the SEI, the Software Engineering Institute at Carnegie Mellon University.

Here is Paul’s definition of Product Line Engineering:

“The disciplined engineering of a portfolio of related products using a common set of shared assets and a common means of production.”

Paul adopts the factory paradigm, in which the shared assets are like the items in a factory’s supply chain. A set of features describes the capabilities that may vary among products. The assets are then configured according to feature profiles, generating concrete products.
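
As a toy illustration of this factory paradigm (all names are invented for the sketch), a feature profile can drive the configuration of shared assets into a concrete product:

    import java.util.ArrayList;
    import java.util.List;
    import java.util.Set;

    // A feature profile describes which optional capabilities are
    // enabled for one member of the product line.
    class FeatureProfile {
        private final Set<String> features;
        FeatureProfile(Set<String> features) { this.features = features; }
        boolean has(String feature) { return features.contains(feature); }
    }

    class Product {
        private final List<String> modules = new ArrayList<>();
        void add(String module) { modules.add(module); }
    }

    // The "common means of production": shared assets are assembled
    // according to the feature profile into a concrete product.
    class ProductFactory {
        Product build(FeatureProfile profile) {
            Product product = new Product();
            product.add("core-module"); // shared by every member of the line
            if (profile.has("reporting"))  product.add("reporting-module");
            if (profile.has("encryption")) product.add("encryption-module");
            return product;
        }
    }

For example, build(new FeatureProfile(Set.of("reporting"))) would assemble a product containing only the core and reporting modules.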

Paul’s seminar was naturally focused on Software Product Lines (SPL). Here is the definition of SPL from SEI:

“A software product line (SPL) is a set of software-intensive systems that share a common, managed set of features satisfying the specific needs of a particular market segment or mission and that are developed from a common set of core assets in a prescribed way.

Software product lines are emerging as a viable and important development paradigm allowing companies to realize order-of-magnitude improvements in time to market, cost, productivity, quality, and other business drivers. Software product line engineering can also enable rapid market entry and flexible response, and provide a capability for mass customization.”

Software Product Lines are not only about the software

For many years I’ve been developing software systems and trying to make them highly configurable, through interchangeable components, well-defined interfaces, low coupling and high cohesion. Using the right Design Patterns, it’s easy to add degrees of freedom to an application by defining relationships between abstract components. That’s the basic idea of plug-ins, which may indeed be used to extend a system and create new variations.
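
In the Java world, the standard java.util.ServiceLoader mechanism is one concrete realization of this plug-in idea: the application depends only on an abstract component, and concrete implementations are discovered at runtime. A minimal sketch (the interface is hypothetical):

    import java.util.ServiceLoader;

    // The application defines only the abstract component.
    interface SpellChecker {
        boolean check(String word);
    }

    class Application {
        public static void main(String[] args) {
            // Implementations are discovered on the classpath at runtime
            // (via META-INF/services entries), so new variations can be
            // plugged in without changing this code.
            for (SpellChecker checker : ServiceLoader.load(SpellChecker.class)) {
                System.out.println(checker.check("agile"));
            }
        }
    }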

In some of my previous posts I’ve defined my approach of Adaptable Design Up Front (ADUF), which promotes the definition of frameworks or platforms that serve as the basis for a family of products. Such a family can be formed by:

  • Several products coexisting at the same point in time.
  • Several instances (versions) of a product over a period of time.

In both cases, the several products in the family are the result of an evolutionary process, which in my opinion should be planned as much as possible. Thus, ADUF can be seen as a methodology to create a shared software architecture to support this evolution.

However, Software Product Lines are not only about the software. They are about all the artifacts generated during the software development process, including:

  • Requirements
  • Design diagrams
  • Documentation
  • Test cases
  • User guides

Clearly, each member of the family will have its own instance of each of these artifacts, and each artifact should support configuration through the same variation points as the software itself. If the software system is configurable but the other artifacts are not, we cannot really say we have a product line.

I believe every software developer has experienced the difficulty of keeping all artifacts coherent during software evolution. I hope new tools will make this task easier. What has been your personal experience? Please share with us using the comments below.


35 Agile Development Best Practices

How Agile are you? It seems that nowadays most software developers claim they are doing Agile. However, it is also a well-known fact that many teams follow only part of the practices that characterize Agile software development. In the case of Scrum, this is called “Scrum-but”: doing Scrum, but adapting it to your own taste.

Below is a list of 35 Agile Best Practices. You can go over it and check how many of these practices your team is currently following:

  1. Priorities (product backlog) maintained by a dedicated role (product owner)
  2. Development process and practices facilitated by a dedicated role (Scrum master)
  3. Sprint planning meeting to create sprint backlog
  4. Planning poker to estimate tasks during sprint planning
  5. Time-boxed sprints producing potentially shippable output
  6. Mutual commitment to sprint backlog between product owner and team
  7. Short daily meeting to resolve current issues
  8. Team members volunteer for tasks (self-organizing team)
  9. Burn down chart to monitor sprint progress
  10. Sprint review meeting to present completed work
  11. Sprint retrospective to learn from previous sprint
  12. Release planning to release product increments
  13. User stories are written
  14. Give the team a dedicated open work space
  15. Set a sustainable pace
  16. The project velocity is measured
  17. Move people around
  18. The customer is always available
  19. Code written to agreed standards
  20. Code the unit test first
  21. All production code is pair programmed
  22. Only one pair integrates code at a time
  23. Integrate often
  24. Set up a dedicated integration computer
  25. Use collective ownership
  26. Simplicity in design
  27. Choose a system metaphor
  28. Use class-responsibility-collaboration (CRC) cards for design sessions
  29. Create spike solutions to reduce risk
  30. No functionality is added early
  31. Refactor whenever and wherever possible
  32. All code must have unit tests
  33. All code must pass all unit tests before it can be released
  34. When a bug is found tests are created
  35. Acceptance tests are run often and the score is published

I’ve tried to provide links to all the important concepts in the list above, in case you would like to learn more about them. The list itself is from the paper “What Do We Know About Scientific Software Development’s Agile Practices?”.

So, how many of these practices do you follow? Please leave a comment below.


Avoiding Technical Debt: How to Accumulate Technical Savings

The metaphor of Technical Debt has been widely accepted as part of the current reality of software development. Programmers agree that they frequently need to make sacrifices in order to meet deadlines, and the consequences of these sacrifices are modules that should be redesigned in the future. In order to “pay” the technical debt, Refactoring became a routine in the development process, and Test-Driven Development helps assure that these constant refactorings do not break working code.

I don’t feel comfortable with this reality. I think that software developers should not incur technical debt so easily. And I think software developers are wasting too much effort on constant refactorings, in what has been called “Refactor Distractor”. In my opinion, the right way to fight technical debt is to accumulate Technical Savings.

Technical Savings: Well-designed code, planned to satisfy future needs. Technical Savings are a form of wealth that makes a module more maintainable, extensible or reusable.

Now the question is: How do we accumulate Technical Savings?

We can save money when we earn more than we spend. If our earnings are the result of our work, then if we are saving money we could say that we are actually working more than we need. But we all understand the importance of having savings in our bank account, because we know that we may have more needs in the future.

Exactly in the same way, Technical Savings are the result of some extra work. We must invest in the design of our modules so that they are indeed more maintainable, extensible and reusable. But we must believe that this investment will pay for itself in the future. If done right, the effort invested in accumulating Technical Savings should be smaller than the effort required by the alternative of incurring technical debt and then paying this debt through successive refactorings.

Technical Savings and Adaptable Design Up Front

In some of my previous posts, I’ve described my approach of Adaptable Design Up Front (ADUF). The key idea is that software systems must evolve, and that this evolution must be supported through well planned, adaptable designs.

In other words, there will always be changes, either in the form of new features or of modifications to the original requirements. One option is to implement these successive changes as they happen, which inevitably causes technical debt and the need for refactoring. Another option is to be aware that these changes will happen, and to plan our systems to accommodate them as easily as possible. A design that is planned to accommodate change is an adaptable design.

In one of my posts I explain how adaptable designs can be implemented through frameworks and platforms. In this same post I talk about the importance of decoupling through Design Patterns and about the need for Domain Modeling.

In another post I explain how adaptable designs should actually be an application of the Open-Closed principle. Our design will be really adaptable only if it has some parts that do not need to change, and if the parts that may change are easy to change.
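
As a small Java sketch of this principle (the names are invented for the example): the calculation below is closed for modification, while the pricing policy is open for extension through new implementations.

    // Open for extension: variation happens by adding implementations.
    interface DiscountPolicy {
        double discount(double amount);
    }

    class NoDiscount implements DiscountPolicy {
        public double discount(double amount) { return 0.0; }
    }

    class SeasonalDiscount implements DiscountPolicy {
        public double discount(double amount) { return amount * 0.10; } // 10% off
    }

    // Closed for modification: this code never changes when new
    // discount policies are introduced.
    class PriceCalculator {
        double total(double amount, DiscountPolicy policy) {
            return amount - policy.discount(amount);
        }
    }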

In Summary

It is possible to avoid the trap of constant debt and refactoring. We may accumulate Technical Savings, a form of wealth that makes our modules more maintainable, extensible and reusable. One approach to acquiring these Technical Savings is through Adaptable Design Up Front.


On Technical Debt and the Psychology of Risk Taking

I recently read the following message in a developers’ forum (rephrased here):

“I’m facing a dilemma. I must deliver the product by the end of this quarter, but there are still many tests I would like to execute. The product seems to be working, but even so I would like to run additional tests. However, if I do so I will miss the deadline. What should I do?”

Every programmer has faced this dilemma. And we all know what always happens in this case: the product is delivered on time, even at the risk of having some bugs. Meeting the deadline is considered more important than assuring 100% correctness.

There are many other situations in which a developer may make sacrifices to meet the deadline. This happens because the developer’s work is Multi-Objective. He must:

  • Deliver on time.
  • Deliver correct code.
  • Deliver code with good design.
  • Deliver well-documented code.

Frequently, there is not enough time to invest in all objectives, so some of them are assigned lower priority. This is the root of Technical Debt: A developer decides to postpone some task he should do now, because he does not have enough resources to satisfy all his objectives.

Now the question is: How do programmers decide what to postpone? How do they assign priorities to their conflicting tasks?

We would like to imagine that these decisions are taken consciously and systematically. However, there are several unconscious psychological influences. One of them is known as Loss Aversion.

Loss Aversion

Loss Aversion and its consequences are discussed by Daniel Kahneman in his book “Thinking, Fast and Slow”.

“Loss Aversion refers to people’s tendency to strongly prefer avoiding losses to acquiring gains. Some studies suggest that losses are twice as powerful, psychologically, as gains.”

In one experiment people are presented with the following options:

  • A certain loss of $100.
  • A 50% risk of losing $200, with a 50% chance of not losing anything.

The first option represents a sure loss of a smaller amount, and the second option represents the risk of losing a higher amount. Note that both options have the same expected loss: $100 for the first, and 50% of $200, which is also $100, for the second, so a risk-neutral decision maker would be indifferent between them. Nevertheless, experiments have shown that most people prefer the second choice. In other words, in this situation most people take the risk of losing a higher amount. As Kahneman writes:

“In bad choices, where a sure loss is comparable to a larger loss that is merely probable, diminishing sensitivity causes risk seeking.”

Let’s now see how this applies to software developers and their multiple objectives.

Risk Taking in Software Development

Going back to the programmer’s dilemma we presented in the beginning of this article, suppose a software developer is presented with the following choices:

  • Continue testing the system and miss the deadline.
  • Deliver the system as-is, with the risk of a serious bug being found afterwards.

The first option is a sure loss: The deadline is missed. The second option represents the risk of a higher loss: Perhaps the system has a serious bug, but perhaps not. Thus, because of Loss Aversion, the second option will be chosen by most programmers.

What makes things worse is that, for some objectives, quality is not being measured. So when a programmer decides not to invest in a task, he is not only taking a risk on the consequences of this decision. Actually, programmers gamble with the probability that their lack of investment will never be detected.

For example, it is hard to measure design quality. Very often, code reviews are not systematic and technical debt is considered an acceptable “fact of life”. Technical debt may never be “paid”, and when it is paid, it will probably be paid by someone else.

It is also hard to measure documentation quality. There are no appropriate tools, and sometimes documentation is considered “nice-to-have”. Some programmers believe that documentation is superfluous and that code should be self-explanatory.

Regarding testing, traditional development uses QA teams and it is expected that code will have bugs. Developers are not supposed to deliver 100% bug-free code.

However, in Test-Driven Development (TDD), developers are responsible for unit testing, and if the tests don’t pass, the code cannot be delivered. The programmer gets immediate feedback on the quality of his code, so there is no place left for risk taking.
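
As a minimal sketch of this feedback loop (assuming JUnit 5; the classes are invented for the example), the test is written first, and a failing test immediately blocks delivery:

    import static org.junit.jupiter.api.Assertions.assertEquals;
    import org.junit.jupiter.api.Test;

    class ShoppingCartTest {
        @Test
        void totalIsSumOfItemPrices() {
            // Written before the implementation: a failing (red) test is
            // immediate feedback; only a green bar allows delivery.
            ShoppingCart cart = new ShoppingCart();
            cart.add(10.0);
            cart.add(5.5);
            assertEquals(15.5, cart.total(), 0.001);
        }
    }

    // The simplest implementation that makes the test pass.
    class ShoppingCart {
        private double total = 0.0;
        void add(double price) { total += price; }
        double total() { return total; }
    }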

Conclusions

In order to improve the quality of software systems, we should avoid risk-taking behaviors. Because of Loss Aversion, this will only happen if quality is made visible: each software development task should have its own tools and methods that give programmers immediate feedback on the quality of the code they are writing.


The Etrog, Idealism and Concrete Choices

In the holiday of Succot, Jews make a blessing over the Four Species, including a special fruit called the Etrog. The Etrog is similar to a lemon, but it has a different shape and a delicious smell. In the past the Etrog was considered a rare fruit, and it could be very expensive. When observant Jews chose an Etrog for the blessing, they looked for a beautiful one, which is called an “Etrog Mehoudar”.

There is an old tale about an Etrog dealer who arrives in a small village in Eastern Europe. The inhabitants of this village wanted to buy beautiful Etrogs to say the blessing, but they were not sure how to choose a good one. So they would ask the Etrog dealer to give them a beautiful Etrog, and then go ask the Rav if it was really a good one.

When they arrived in the Rav’s house with the Etrog, they would ask:

“Rav, is this Etrog beautiful? Is it Mehoudar?”

Then the Rav would carefully examine the Etrog and answer:

“I’m sorry, but this Etrog is not Mehoudar.”

Then the person would return the Etrog to the dealer, claiming that the Rav said it was not Mehoudar.

After this happened several times, the dealer understood that he would not be able to sell any of his Etrogs, and then he decided to talk to the Rav. He went to his house and said:

“Rav, you must help me, no one wants to buy my Etrogs.”

The Rav answered:

“I’m sorry, but if they ask me if the Etrog is Mehoudar, I must say the truth.”

Then the dealer insisted:

“But Rav, if no one buys my Etrogs I will lose everything, and no one will have an Etrog to say the blessing.”

The Rav analyzed the situation and then proposed:

“Instead of giving a single Etrog to each person, you will give two Etrogs. Then, when they come to me, they will ask: ‘Which one among these two Etrogs is the most Mehoudar?’ And in this case I will always have a positive answer.”

Idealism and Concrete Choices

The Etrog Mehoudar represents our ideals. It is a symbol of perfection, of the satisfaction of all our desires. It is extremely important to have ideals, but we must always remember that we are constrained by the concrete choices we have at a particular moment.

It is important to have an Etrog Mehoudar, but when the holiday of Succot starts it is more important to have some Etrog to say the blessing, any Etrog, even if this Etrog is not Mehoudar.

As in the quote: “A goal is a dream with a deadline.” – Napoleon Hill

If the Etrog Mehoudar is the dream, then the beginning of Succot is the deadline. The goal is to have some Etrog to be able to say the blessing.

In the solution proposed by the Rav, the two Etrogs offered by the dealer are the only concrete choices. The two choices are a limitation, but also part of the solution.

At any specific point in time, we must be wise to understand what our concrete choices are, and then select the best one based on our ideals.

Idealism and Software Development

Software developers are proud of the products they create, and they always want them to be as good as possible. A good piece of software should have many quality attributes: it should be well designed, it should be efficient, and it should not have bugs. But in our quest for perfect software, it is easy to miss the deadlines.

One symptom of excessive perfectionism in software development is Analysis Paralysis: “The state of over-analyzing (or over-thinking) a situation so that a decision or action is never taken, in effect paralyzing the outcome.” In this case, software developers may spend too much time writing documents and drawing diagrams instead of writing code.

Another symptom is Featuritis: “The ongoing expansion or addition of new features in a product.” In this case the product will never be ready, because there will always be a new feature which may be added to make it better. Just another feature to make it perfect…

A solution to avoid these mistakes is to focus on a product that is both small and has very concrete purposes. This is the idea behind the concept of Minimum Viable Product, proposed by Eric Ries in his book “The Lean Startup”.

In summary:

There is no contradiction between idealism and pragmatism, as long as:

  • We are able to identify our concrete choices.
  • We use our ideals to select the best choice.

If you celebrate Succot, I wish you a Chag Sameach!
