The Psychology of Agile Software Development

Why is Agile so successful? Agile methods have many enthusiastic practitioners who firmly believe that the adoption of Agile processes has revolutionized the way they build software systems, making them much more productive and effective as software developers. But this is not only a question of being able to write more code in less time. Software developers are often enthusiastic about Agile practices themselves, reporting that they have much more fun doing their work.

This phenomenon certainly deserves to be investigated: how is it that people have so much fun with a set of work processes and techniques mostly related to project management?

The answer is that Agile processes and techniques have a direct influence on cognitive aspects of software development and change the dynamics of teamwork. It is therefore important to analyze and understand which psychological aspects are being affected. Below is a list of topics that illustrate the effects of Agile on the behavior of programmers in a software development team.

Social Nudges and Burndown Charts

Our behavior is greatly driven by Social Nudges, which are forces that result from our interactions with other people. Agile processes increase the frequency of social interactions among team members, for example through daily stand-up meetings and retrospectives. Agile methods also provide more visibility into the progress and effectiveness of each team member, for example through Burndown and Velocity charts. Together, these Social Nudges help avoid negative phenomena such as procrastination and enable truly self-managed teamwork.

Illusory Superiority and Pair Programming

Illusory Superiority is a cognitive bias that causes people to overestimate their positive qualities and abilities and to underestimate their negative qualities, relative to others. What is the cure for this illusory feeling of superiority? How can we make the programmers in a team respect each other, and even admire their peers through the recognition of their experience and skills? The key to respect and recognition is joint work, as close as possible, and Agile methods provide more opportunities for cooperation than traditional approaches, for example through Pair Programming and daily stand-up meetings.

The Planning Fallacy and Planning Poker

The Planning Fallacy is the tendency of people and organizations to underestimate how long they will need to complete a task, even when they have experience of similar tasks over-running. In the workplace there are additional reasons for the Planning Fallacy: if the manager is the one doing the estimates, he has an interest in making them hard to achieve; if the worker is the one doing the estimates, he will be ashamed to assign too much time to himself. Agile software development addresses the problem of poor estimates through Planning Poker, in which all members of the team participate and contribute.
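
A Planning Poker round can be sketched in a few lines. This is only an illustrative sketch, not a prescribed algorithm: the deck values, the consensus rule and the function names are my own assumptions.

```python
# One round of Planning Poker: every team member picks a card independently,
# all cards are revealed at the same time, and disagreement triggers a
# discussion led by the owners of the lowest and highest estimates.
FIBONACCI_DECK = [1, 2, 3, 5, 8, 13, 21]   # typical Planning Poker card values

def poker_round(estimates):
    """Return (consensus, needs_discussion) for one simultaneous reveal."""
    if min(estimates) == max(estimates):
        return estimates[0], False   # full agreement: accept the estimate
    # No consensus: discuss the extreme estimates and play another round.
    return None, True

print(poker_round([5, 5, 5]))    # consensus reached
print(poker_round([3, 5, 13]))   # wide spread: discussion needed
```

The simultaneous reveal is the point: because nobody sees the other cards before committing, neither the manager's pressure nor the worker's modesty can anchor the estimates.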

TDD, Flow theory and Gamification

According to Flow theory, a person can only enjoy a task if there is a match between his abilities and the complexity of the task. In traditional testing methodologies there was a mismatch between the abilities of software developers and the complexity of the testing task; as a consequence, programmers could not enjoy testing. TDD changes this picture because it transforms testing into just another type of programming task. Gamification is the use of game design techniques and mechanics to solve problems and engage audiences, and TDD uses several game mechanics, including immediate feedback and measurable progress.
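
The game loop of TDD can be made concrete with a tiny example. FizzBuzz here is just an arbitrary task chosen for illustration:

```python
# The TDD "game": write a failing test (red), write the minimal code that
# makes it pass (green), then refactor with the test as a safety net.

# Step 1 (red): the test is written first; at this moment it fails,
# because fizzbuzz() does not exist yet.
def test_fizzbuzz():
    assert fizzbuzz(3) == "Fizz"
    assert fizzbuzz(5) == "Buzz"
    assert fizzbuzz(15) == "FizzBuzz"
    assert fizzbuzz(7) == "7"

# Step 2 (green): the minimal implementation that makes the test pass.
def fizzbuzz(n):
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)

test_fizzbuzz()                # a green bar: immediate, measurable feedback
print("all tests passed")
```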

In summary, understanding the reasons for the success of Agile methods allows us to become a learning organization and adapt Agile processes and techniques to our own particular environment and culture.

Posted in Agile, Psychology of Programming | 5 Comments

On Agile Architecture, Emergent Design and Framework-Based Design

I recently read the very interesting Ph.D. thesis of Michael Waterman on Agile Software Architecture. Michael investigated how professional software engineers in industry apply Agile principles to software architecture and design. He concludes that there are five main strategies:

  1. Respond to Change
  2. Address Risk
  3. Emergent Architecture
  4. Big Design Up-Front
  5. Use Frameworks and Template Architectures

These are Michael’s definitions for these five strategies:

“Responding to change is a strategy in which a team designs an evolving architecture that is modifiable and is tolerant of change.”

“Addressing risk is a strategy in which the team does sufficient architecture design to reduce technical risk to a satisfactory level.”

“An emergent architecture is an architecture in which the team makes only the bare minimum architecture decisions up-front, such as selecting the technology stack and the high-level architectural patterns. In some instances these decisions will be implicit or will have already been made, in which case the architecture is totally emergent.”

“The big design up-front requires that the team acquires a full set of requirements and completes a full architecture design before development starts.”

“Using frameworks and template architectures is a strategy in which teams use standard architectures such as those found in development frameworks, template or reference architectures, and off-the-shelf libraries and plug-ins. Standard architectures reduce the team’s architecture design and rework effort, at the expense of additional constraints applied to the system.”

In his thesis Michael provides many details about each of these strategies. Reading his work gave me some very interesting insights, and I would like to present and discuss the most important of them below.

Emergent Design vs. Framework-Based Design

I personally do not believe in Emergent Design, and thus I have always been surprised to see so many people saying that they were doing Emergent Design very successfully. But after reading Michael's thesis, I understood that most people who claim to be doing Emergent Design are in practice doing Framework-Based Design.

Today there are many categories of software systems for which very sophisticated and rich frameworks already implement most of the hard work. For example, most people building Web sites do not need to understand anything about the underlying communication protocols; they just need to define the site structure, its content and some simple business logic. As a consequence, we have all heard stories about very nice sites built in a few days by people with no degree in Computer Science or Software Engineering.

The same is true of smartphone applications. There are frameworks that make the task of creating a new app extremely simple, to the point that developers may completely ignore the underlying operating system and use the same code for Android and iPhone apps. There is no need for much development experience or sophisticated programming skills, and as a consequence there are teenagers creating apps and games that are used by millions of people worldwide.

This same reality applies to many domains: software developers are not doing much design, but not because design has become an obsolete activity. The reason is that developers are adopting ready-made designs; they are following design decisions already made by someone else, and thus do not need to make their own. They are neither conceiving nor implementing their own architecture; they are basically "filling holes" with small pieces of relatively simple business logic.
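
The "filling holes" style can be illustrated with a minimal template-method sketch. The framework below is hypothetical and vastly simplified, but the division of labor is the same as in real frameworks:

```python
# The "framework" owns the architecture: the request lifecycle, validation
# and the response format were all decided once, by someone else.
class RequestHandler:
    def handle(self, request):
        self.validate(request)           # framework decision
        result = self.process(request)   # the "hole" left for the developer
        return {"status": "ok", "body": result}   # framework decision

    def validate(self, request):
        if "user" not in request:
            raise ValueError("missing user")

    def process(self, request):
        raise NotImplementedError("fill this hole with business logic")

# The application developer writes only a small piece of business logic.
class GreetingHandler(RequestHandler):
    def process(self, request):
        return f"Hello, {request['user']}!"

print(GreetingHandler().handle({"user": "Ada"}))
```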

Programming in the Large and Programming in the Small

The distinction between "programming in the large" and "programming in the small" was coined in the 1970s. At that time, it represented the difference between developing software systems and implementing simple programs: "programming in the small" characterized tasks that could be done by a single programmer. But at that time there were no sophisticated application frameworks that could be extended.

Today it is important to take into consideration the ratio between "new code" and "reused code". It is characteristic of modern software systems that this ratio is relatively small, meaning that most of the code is reused from existing frameworks, platforms and libraries. It is thus possible to create complex systems with small amounts of new code, sometimes called "glue code" because it just connects existing components.

For example, Apache is the most popular HTTP server, with about a 60% market share, and its implementation has more than 1.7 million lines of code. In contrast, the average Web page has only 295 kB of scripts. Assuming that each line of script averages 30 characters, the average page has roughly 10,000 lines of scripts. Of course a Web site has multiple pages, and there is also code running on the server side; but on the other hand most sites also use other platforms such as MySQL. Thus, for the average Web site the ratio between new code and existing code is very small.
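
The arithmetic behind this claim is simple enough to check. A back-of-the-envelope sketch using only the figures quoted above:

```python
# New code vs. reused code for the "average" Web site, using the numbers
# quoted in the text: 295 kB of scripts, ~30 characters per line, and
# Apache alone at more than 1.7 million lines.
script_bytes = 295 * 1024
chars_per_line = 30
new_lines = script_bytes // chars_per_line    # roughly 10,000 lines
reused_lines = 1_700_000                      # Apache only; MySQL etc. add more

ratio = new_lines / reused_lines
print(f"{new_lines:,} new lines vs {reused_lines:,} reused lines "
      f"-> ratio = {ratio:.2%}")              # well under one percent
```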

In another example, an analysis of the 5,000 most active open-source software (OSS) projects has shown that OSS projects written in Java have on average almost 6 million lines of code. In contrast, the average iPhone application has just a few hundred thousand lines of code. In this sense, OSS is "programming in the large" and iPhone apps are "programming in the small". But thanks to advanced frameworks, even small iPhone apps can have a very sophisticated and impressive user interface.

Adaptable Design Up Front

What should be done in situations in which there is no framework ready to be used for a particular application? In this case the software developers are also responsible for defining the architecture, and in my opinion this always requires a certain amount of design up front.

I personally adopt the "Respond to Change" strategy, which I call Adaptable Design Up Front (ADUF). In my opinion, in an Agile environment in which requirements are constantly changing, this is the only strategy that guarantees the successful evolution of software systems. The idea is basically: if you don't have a framework to build on top of, first create your own framework and then extend it.

In one of my previous articles I describe how the ADUF approach combines Domain Modeling and Design Patterns. In another post I explain how ADUF applies the Open/Closed principle to high-level design. I have also prepared a set of slides introducing the ideas behind ADUF.
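
A minimal illustration of ADUF's spirit, as I understand it through the Open/Closed principle; the scenario and names below are my own invention, not taken from the ADUF articles:

```python
from abc import ABC, abstractmethod

# A tiny home-grown "framework" designed up front: the abstraction is
# stable (closed for modification), while new requirements arrive as
# extensions (open for extension).
class ExportFormat(ABC):
    @abstractmethod
    def render(self, record: dict) -> str: ...

class Exporter:
    """Framework code: never touched when a new format is required."""
    def __init__(self):
        self._formats = {}

    def register(self, name, fmt: ExportFormat):
        self._formats[name] = fmt

    def export(self, name, record):
        return self._formats[name].render(record)

# A changed requirement ("we now need CSV") is handled as an extension,
# not as a modification of existing code.
class CsvFormat(ExportFormat):
    def render(self, record):
        return ",".join(str(v) for v in record.values())

exporter = Exporter()
exporter.register("csv", CsvFormat())
print(exporter.export("csv", {"id": 1, "name": "widget"}))   # 1,widget
```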

In Summary

Most people who claim to be doing Emergent Design are in practice doing Framework-Based Design, in which they don't need to make high-level design decisions because they are adopting decisions already made by someone else.

In modern software systems the ratio between new code and reused code is quite small. Thanks to advanced frameworks, platforms and libraries, software developers may build sophisticated applications with relatively little effort.

For situations in which there is no existing framework that can be easily extended, the approach of Emergent Design does not work: it is necessary to do some amount of design up front. My suggested approach is called Adaptable Design Up Front (ADUF).

What do you think? Do you know projects that claimed to do Emergent Design and were not based on existing frameworks? Please share your experience in the comments below.

Posted in Adaptable Design, Agile, Software Architecture, Software Evolution, Software Reuse | 7 Comments

Antifragility and Component-Based Software Development

In his book “Antifragile: Things That Gain From Disorder”, Nassim Taleb introduces the concept of Antifragility, the opposite of Fragility: Antifragile things are able to benefit from volatility. In a previous post, I explained how in the field of software development the main source of volatility is changes in requirements, and how it is possible to make software systems more robust through the appropriate usage of abstractions.

However, robustness is not the same as Antifragility. A robust system may survive the impact of changes, but our goal is to develop systems that actually benefit and improve as a consequence of these changes. In this article we will analyze a strategy that may be adopted to develop such Antifragile systems.

It is common knowledge that systems should be divided into components. The reaction to a change in the environment should only affect a few components, and not the entire system. Thus, component-based systems are more robust than monolithic systems.

However, to be Antifragile, a system must be able to benefit from changes in the environment. This Antifragility may be achieved when several systems share the same components. Thus, when a specific component is improved, all systems using this component can benefit from this improvement.

One concrete example is AA batteries. For many years AA batteries were the standard for millions of different devices. Then, energy-hungry devices drove the development of rechargeable AA batteries. Now all devices can benefit from this improvement, and in this sense they are Antifragile.
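
In code, the same pattern looks like this; the two "systems" and the shared component below are invented for the example:

```python
# One shared component used by two independent systems.
def normalize(text):
    """Shared component: canonicalize whitespace and case."""
    return " ".join(text.split()).lower()

def billing_report(lines):        # system A
    return [normalize(line) for line in lines]

def search_index(docs):           # system B
    return {normalize(doc) for doc in docs}

# If normalize() is later improved to satisfy a requirement of the search
# system (say, Unicode-aware case folding), billing_report() benefits from
# the very same improvement without changing a single line of its own.
print(billing_report(["  Invoice   42 "]))
print(search_index(["Annual Report", "annual  report"]))
```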

Component-Based Software Development

Since the early years of software development it has been understood that systems should be divided into components, and software engineers have tried to identify the right principles for performing this division, which was once called modularity. Metrics such as coupling and cohesion and principles such as information hiding were defined to guide the decomposition of large software systems into modules.

Over the decades, advances in programming languages and software engineering practices brought better ways to perform this decomposition. Object-oriented languages allowed software developers to create classes of objects and organize them into hierarchies. Then larger distributed systems were built on Object Request Broker (ORB) platforms. More recently, Service-Oriented Architectures (SOA) became popular, and the current trend is Microservices.

In all the examples above, the evolution was based on the same original principles: low coupling, high cohesion, and clear, abstract interfaces. All of them contributed to making software systems more robust: it is easier to change the implementation of a class than to modify a data structure and its associated procedures, and services can be deployed independently of each other, without disrupting the rest of the system.

The next step, developing Antifragile systems that go beyond robustness, will happen when many different systems share the same components. Then, if a particular component is improved to satisfy the requirements of one specific application, all the systems may enjoy the improved component at no additional cost. This is true Antifragility.

Consider, for example, a system based on a SOA architecture. At any point in time it is possible to deploy an enhanced version of one service without affecting the others, so the system is very robust. But if several systems are based on shared services, each time one of these services is improved all the systems immediately benefit from the improvement. Thus, while each system is robust, the collection of systems is Antifragile, because they benefit from the same changes at no added cost.

Another modern application of this approach is the Software Product Line (SPL), in which a series of software products are based on the same components and differ only in their configuration. In an SPL there may be several coexisting versions of each component; each time a new component version is created, all products using previous versions may benefit through simple configuration updates.
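
A configuration-driven SPL can be sketched as follows; the component names and versions are hypothetical:

```python
# A miniature Software Product Line: all products share the same component
# catalog and differ only in their configuration. Shipping "v2" of a
# component upgrades a product through a configuration change, not a code
# change.
COMPONENTS = {
    "tax_engine": {
        "v1": lambda amount: amount * 1.20,           # flat 20% tax
        "v2": lambda amount: round(amount * 1.20, 2), # v2 fixes rounding
    },
}

def build_product(config):
    """Assemble a product from shared components selected by configuration."""
    return COMPONENTS["tax_engine"][config["tax_engine"]]

legacy_product = build_product({"tax_engine": "v1"})
updated_product = build_product({"tax_engine": "v2"})
print(legacy_product(10.333), updated_product(10.333))
```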

In summary

A simple strategy for Antifragility is: build component-based systems with shared components. When one of the shared components is improved, all the systems benefit from this improvement at no additional cost.

What do you think? Have you experienced the benefits of building several systems on top of the same components? Please share your experience with us in the comments below.

Posted in Antifragility, Software Architecture, Software Evolution, Software Reuse | 10 Comments

Agile Practices and Social Nudges in the Workplace

In their best-selling book “Nudge: Improving Decisions About Health, Wealth, and Happiness”, Richard Thaler and Cass Sunstein propose the adoption of interventions that “attempt to move people in directions that will make their lives better.” A nudge “alters people’s behavior in a predictable way without forbidding any options or significantly changing their economic incentives.”

A classic example of a nudge is the “fly in the urinal”. Many public bathrooms face the problem of spillage caused by men’s lack of attention. The solution was to place a sticker with the picture of a fly inside the urinal; men then aim at the fly as a target. Trials showed that this simple idea reduced spillage by 80%!

A special kind of nudge is the “Social Nudge”, which results from the interaction with other people. According to Thaler and Sunstein:

“Social influences come in two basic categories. The first involves information. If many people do something or think something, their actions and their thoughts convey information about what might be best for you to do or think. The second involves peer pressure. If you care about what other people think about you, then you might go along with the crowd to avoid their wrath or curry their favor.”

I believe that the success of Agile practices may be partially attributed to the Social Nudges created by changes in team dynamics and by the richness of interactions among team members. Let’s consider the two categories of social influence discussed above:

Exchange of Information

Agile practices increase both the number of interactions among team members and the depth of the information exchanged. For example:

  • Daily stand-up meetings allow all team members to know what their peers are doing and to become aware of their particular contribution to the project.
  • Retrospective meetings allow the team members to think about ways to improve their own processes based on past experience.
  • Code reviews allow team members to learn from each other and develop improved coding standards and shared patterns.

In this environment rich in opportunities for information exchange, the natural tendency of team members is to adopt best practices and increase both their productivity and their satisfaction.

Peer Pressure

Agile practices increase the visibility of the work performed by each team member, and thus also increase peer pressure. For example:

  • Burndown charts clearly display whether a team member is making progress as originally estimated or is running late.
  • Velocity charts show the productivity of the team and are negatively impacted when one team member works more slowly than the others.
  • In the daily stand-up meetings each team member must say what he is going to do today, which serves as a public commitment to perform the task.
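
The burndown chart itself is just a comparison between the actual remaining work and an ideal straight line. In numbers (the sprint length and figures below are invented for illustration):

```python
# Burndown: remaining story points after each daily stand-up, compared to
# the ideal linear burn from the sprint total down to zero.
def ideal_burndown(total_points, days):
    return [total_points - total_points * d / days for d in range(days + 1)]

actual = [40, 36, 34, 34, 30, 22]        # day 3 shows a visible stall
ideal = ideal_burndown(40, 5)            # [40.0, 32.0, 24.0, 16.0, 8.0, 0.0]

for day, (a, i) in enumerate(zip(actual, ideal)):
    status = "behind" if a > i else "on track"
    print(f"day {day}: actual={a:5.1f}  ideal={i:5.1f}  -> {status}")
```

Everyone on the team sees the same numbers every morning, which is precisely what makes the chart a nudge.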

In an environment with so much visibility into the progress and commitments of each team member, a person certainly will not feel comfortable under-performing in comparison to the other members of the team.

In summary

Agile practices promote more opportunities for the exchange of information and for peer pressure, and thus create social nudges that result in improved performance and satisfaction.

What do you think? Did you experience these social nudges? Please share your opinion in the comments below.

Posted in Agile, Efficacy, Psychology of Programming | 1 Comment

IASA Israel Meeting – Dror Helper on TDD as an Approach for Software Design

The International Association of Software Architects (IASA) in Israel organized a special event with the participation of Dror Helper, who talked about TDD as an approach for software design.

Title: Designing with Tests

Abstract: Unit tests are great: they help a developer gain control over his code and catch bugs and regression issues. Test-Driven Development is a methodology that uses unit tests, but it is not about writing unit tests; in fact, the tests are only a design tool. TDD is a methodology that solves problems in an iterative way; it is about emergent design that creates a maintainable solution. In this session I’ll talk about common mistakes and misconceptions, explain how to benefit from TDD, and show how to design your code using unit tests.

Bio: Dror Helper is a senior consultant at CodeValue specializing in engineering practices. He has been writing software professionally for more than a decade, during which he has worked for industry giants such as Intel and SAP as well as small start-up companies. Dror has designed and developed software in various fields including video streaming, eCommerce, performance optimization and unit testing tools. His first encounter with Agile happened a few years ago while working for Typemock Ltd; since then he has been evangelizing Agile wherever he goes: at work, speaking at conferences, and today as a consultant. In his blog, Dror writes about programming languages, software development tools, unit testing and anything else he finds interesting.

These are the original slides of Dror’s presentation:

Here is the video of the talk (in Hebrew):

This meeting was organized with the cooperation of ILTAM and sponsored by SAP Labs Israel. Special thanks to Dima Khalatov who helped with the organization.

To participate in our future meetings, please join the IASA IL group on LinkedIn.

Feel free to share your comments below. Thanks!

Posted in IASA Israel, Software Architecture, TDD | Leave a comment

Contextual Recommendations in Multi-User Devices

My last project at Yahoo! Labs was focused on recommender systems for TV shows. In the presentation below, Ronny Lempel (who was my manager on this project) discusses the challenges of producing personalized recommendations on multi-user devices. This is joint work with my colleagues Raz Nissim, Michal Aharon, Eshcar Hillel and Amit Kagian.

Abstract: Recommendation technology is often applied to experiences consumed through a personal device such as a smartphone or a laptop, or through personal accounts such as one’s social network account. However, in other cases, recommendation technology is applied in settings where multiple users share the application or device. Examples include game consoles or high-end smart TVs in household living rooms, family accounts in VOD subscription services, and shared desktops in homes. These multi-user cases represent a challenge to recommender systems, as recommendations exposed to one user may actually be more suitable for another user.

This talk tackles the shared-device recommendation problem by applying context to implicitly disambiguate the user (or users) being recommended to. Specifically, we address the household smart-TV situation and introduce the WatchItNext problem, which, given a device, taps the currently watched show as well as the time of day as context for recommending what to watch next. Implicitly, the context serves to disambiguate the current viewers of the device and enables the algorithm to recommend significantly more relevant watching options than those output by state-of-the-art non-contextual recommenders. Our experiments, which processed 4-month-long viewing histories of over 350,000 devices, validate the importance and effectiveness of contextual recommendation in shared-device settings.
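
The core idea can be illustrated with a toy count-based model. This is only my sketch of the problem setting, not the algorithm from the work above; the viewing data and time-slot names are invented.

```python
from collections import Counter, defaultdict

# Toy contextual recommender in the spirit of WatchItNext: the context is
# the pair (currently watched show, time slot), and we recommend the show
# most frequently watched next in that context.
history = [
    ("news", "evening", "drama"),
    ("news", "evening", "drama"),
    ("news", "morning", "cartoons"),
    ("drama", "evening", "talk show"),
]

model = defaultdict(Counter)
for current, slot, nxt in history:
    model[(current, slot)][nxt] += 1

def watch_it_next(current, slot):
    """Most likely next show given the context, or None if unseen."""
    context = model.get((current, slot))
    return context.most_common(1)[0][0] if context else None

# The same current show yields different recommendations in different
# time slots, because the context implicitly identifies different viewers.
print(watch_it_next("news", "evening"))   # drama
print(watch_it_next("news", "morning"))   # cartoons
```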

The presentation slides are available for download here.

Feel free to share your comments below. Thanks!

Posted in Data Mining, Recommender Systems, Research, Yahoo! | Leave a comment

IASA Israel Meeting – Lior Israel on TDD as an Approach for Software Design

The International Association of Software Architects (IASA) in Israel organized a special event with the participation of Lior Israel, who talked about TDD as an approach for software design.

Title: Providing Business Value Through TDD

Abstract: Our goal is to provide business value to our customers. In my daily work as an architect I show how to provide business value, with quality, in a short feedback time. One of the best ways to provide business value is TDD, and this means adopting TDD for all the players, not only the developers! The presentation will be based on code examples. Outline:
– TDD overview as a way of thinking: I am going to demonstrate a simple truth: TDD is NOT unit testing.
– Let's start with a code kata: the best way to understand TDD is through a real example.
– Stop! Are we going to meet the DoD? We will see that TDD leads us to verify that we understand the business value at the beginning of development, not at the end.
– Architecture & TDD: demonstrating that TDD puts us on the right side of the architecture; we only need to do it the TDD way.

Bio: Lior Israel is an experienced software and hardware developer, designer and system architect. Lior began his studies of electronics in the early 1990s and holds a degree in Computer Science. He has over 19 years of practical experience in software and hardware development and design, and currently works as a Software Architect on large-scale IT applications with cutting-edge technologies. Lior writes about software development in his blog.

These are the original slides of Lior’s presentation:

Here is the video of the talk (in Hebrew):

This meeting was organized with the cooperation of ILTAM and sponsored by SAP Labs Israel. Special thanks to Dima Khalatov who helped with the organization.

To participate in our future meetings, please join the IASA IL group on LinkedIn.

Feel free to share your comments below. Thanks!

Posted in IASA Israel, Software Architecture, TDD | Leave a comment