IASA Israel meeting – Atzmon Hen-Tov on the Adaptive Object Model

The International Association of Software Architects (IASA) in Israel organized a special event with the participation of Atzmon Hen-Tov, who talked about the “Adaptive Object Model”.

Title: Adaptive-Object-Model – Empower your users to evolve the system

Abstract: In this talk Atzmon Hen-Tov will introduce the Adaptive-Object-Model (AOM) architecture style.

Architectures that can dynamically adapt to changing requirements are sometimes called reflective or meta-architectures. We call a particular kind of reflective architecture an Adaptive Object-Model (AOM). An Adaptive Object-Model is a system that represents classes, attributes, relationships, and behavior as metadata.

Users change the metadata (object model) to reflect changes to the domain model. AOM stores its Object-Model in XML files or in a database and interprets it on the fly. Consequently, the object model is adaptive; when the descriptive information for the object model is changed, the system immediately reflects those changes.
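To make the idea concrete, here is a minimal sketch of the AOM mechanism in Java (the class and property names are hypothetical illustrations, not the actual ModelTalk or Ink API): the type definitions are plain data that could be loaded from XML or a database, and instances are interpreted against that metadata at runtime.

```java
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.Map;

// Metadata: a type and its properties are ordinary objects, not compiled classes.
class PropertyType {
    final String name;
    final Class<?> valueType;
    PropertyType(String name, Class<?> valueType) { this.name = name; this.valueType = valueType; }
}

class EntityType {
    final String name;
    final Map<String, PropertyType> properties = new LinkedHashMap<>();
    EntityType(String name) { this.name = name; }
    void addProperty(PropertyType p) { properties.put(p.name, p); }
}

// An instance holds its values in a map and is validated against the metadata at runtime.
class Entity {
    final EntityType type;
    final Map<String, Object> values = new HashMap<>();
    Entity(EntityType type) { this.type = type; }
    void set(String property, Object value) {
        PropertyType p = type.properties.get(property);
        if (p == null) throw new IllegalArgumentException("Unknown property: " + property);
        if (!p.valueType.isInstance(value)) throw new IllegalArgumentException("Wrong type for " + property);
        values.put(property, value);
    }
    Object get(String property) { return values.get(property); }
}

public class AomDemo {
    public static void main(String[] args) {
        // The domain model is data: it could just as well be loaded from XML or a database.
        EntityType customer = new EntityType("Customer");
        customer.addProperty(new PropertyType("name", String.class));
        customer.addProperty(new PropertyType("creditLimit", Integer.class));

        Entity alice = new Entity(customer);
        alice.set("name", "Alice");
        alice.set("creditLimit", 1000);
        System.out.println(alice.get("name") + " / " + alice.get("creditLimit"));
    }
}
```

Adding a new property to the Customer type is then a metadata change only, with no recompilation, which is exactly the kind of adaptivity the abstract describes.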

“If something is going to vary in a predictable way, store the description of the variation in a database so that it is easy to change” — Ralph Johnson

Bio: Atzmon Hen-Tov is the VP R&D at Pontis, where he leads the development of the company’s Marketing Platform, iCLM™ on top of the ModelTalk AOM engine. Atzmon has led the development of the ModelTalk AOM Engine since its inception and is co-founder of the Ink AOM open-source framework. Atzmon is interested in model-driven software development and other methods for software development industrialization. Contact him at atzmon@ieee.org.

These are the original slides of Atzmon’s presentation:

Here is the video of the entire talk (in Hebrew):

And here is an animated version of the slides at Prezi.

This meeting was organized with the cooperation of ILTAM and sponsored by LivePerson. Special thanks to Dima Khalatov who helped with the organization.

To participate in our future meetings, please join the IASA IL group on LinkedIn.

Feel free to share your comments below. Thanks!


Lean Software Development: Before and After the Last Responsible Moment

The Last Responsible Moment (LRM) is a lean development principle defined as:

“A strategy of not making a premature decision but instead delaying commitment and keeping important and irreversible decisions open until the cost of not making a decision becomes greater than the cost of making a decision.”

Let ti be the specific point in time for a concrete design decision Di. The LRM principle tells us that ti should be deferred as much as possible. One of the reasons is to avoid making decisions based on assumptions: we should decide when we have the best possible information about the requirements, so that there is a reduced risk that our decisions will need to be changed.

During the incremental software development process we will implement a set of concrete design decisions D1 … Dn, where D1 … Di-1 are the decisions implemented before Di, and Di+1 … Dn are the decisions made after Di. This order of decisions is not arbitrary and will follow some design rationale.

But of course these design decisions are not independent of each other. Thus, among the decisions D1 … Di-1 there will be some that depend on Di. For example, if each design decision Dj is encapsulated in a class Cj, to implement Cj we must know at least the interfaces of the other classes it uses.

Before the Last Responsible Moment

This means that for each design decision Di there must be a point in time, before ti, at which we define an abstraction for Di. This abstraction allows the implementation of all the design decisions that depend on Di but must be made before it. If Di is represented as a class, the abstraction would be its interface, and it should be defined at a time tAi < ti.
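As a small illustration (the names are hypothetical, not taken from any specific project), the abstraction for Di can be just a Java interface published at tAi, so that classes realizing earlier decisions can be written and compiled before any implementation of Di exists:

```java
// Abstraction for design decision Di, defined at time tAi < ti.
// No implementation exists yet; dependent code can already be written against it.
public interface ExchangeRateProvider {
    /** Returns the rate for converting one unit of the 'from' currency into the 'to' currency. */
    double getRate(String from, String to);
}

// A class realizing an earlier decision Dj that depends on Di only through its interface.
class PriceConverter {
    private final ExchangeRateProvider rates;
    PriceConverter(ExchangeRateProvider rates) { this.rates = rates; }
    double convert(double amount, String from, String to) {
        return amount * rates.getRate(from, to);
    }
}
```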

This definition of abstractions before implementations is exactly the result of Test-Driven Development (TDD). When we write unit tests for a class we are defining its interface, both at the syntactic and the semantic level. By the time all the unit tests for class Ci have been written we have reached tAi, but we are still before ti, the last responsible moment at which we will decide how to implement Ci.
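For example, continuing the hypothetical ExchangeRateProvider sketch above, a JUnit 5 test written before the implementation fixes both the signature and the expected semantics of the class under test:

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

class PriceConverterTest {
    @Test
    void convertsAmountsUsingTheProvidedRate() {
        // A stub stands in for the still-undecided implementation of Di.
        ExchangeRateProvider fixedRate = (from, to) -> 3.5;
        PriceConverter converter = new PriceConverter(fixedRate);
        assertEquals(35.0, converter.convert(10.0, "USD", "ILS"), 0.001);
    }
}
```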

After the Last Responsible Moment

No matter how long we were able to defer a concrete design decision Di, there will probably come a moment at which it needs to be changed. The last responsible moment is the first time we provide an implementation for an abstraction, but this certainly does not guarantee that the implementation will never be modified.

For every design decision Di there may be a time tCi > ti at which its implementation is changed. We do not want this change to affect the modules that depend on Di, and this can only be achieved by preserving the interface. Therefore, the importance of abstractions is twofold: they allow concrete design decisions to be deferred until the LRM, and they also allow these design decisions to be changed after the LRM.
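Continuing the same hypothetical example: the implementation chosen at ti can later be replaced at tCi by a completely different one, and as long as both honor the interface, PriceConverter and its tests remain untouched.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Implementation chosen at the last responsible moment ti.
class HardcodedRateProvider implements ExchangeRateProvider {
    public double getRate(String from, String to) {
        return 3.5; // good enough for the first release
    }
}

// Changed decision at tCi > ti: a different implementation behind the same interface.
class CachedRemoteRateProvider implements ExchangeRateProvider {
    private final Map<String, Double> cache = new ConcurrentHashMap<>();

    public double getRate(String from, String to) {
        return cache.computeIfAbsent(from + "/" + to, this::fetchFromService);
    }

    private double fetchFromService(String currencyPair) {
        // Placeholder for a call to a real exchange-rate service.
        return 3.6;
    }
}
```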

Conclusions

In order to adopt the Last Responsible Moment principle it is essential to focus on the definition of abstractions representing the various design decisions. These abstractions serve as the placeholders that allow the concrete implementation details to be deferred as much as possible. Moreover, good abstractions are essential to support maintainability and allow these design decisions to be easily changed.


The End of Agile: Death by Over-Simplification

There is something basically wrong with the current adoption of Agile methods. The term Agile has been abused, becoming the biggest hype ever in the history of software development and generating a multi-million-dollar industry of self-proclaimed Agile consultants and experts selling dubious certifications. People have forgotten the original Agile values and principles, and instead follow dogmatic processes with rigid rules and rituals.

But the biggest sin of Agile consultants was to over-simplify the software development process and underestimate the real complexity of building software systems. Developers were convinced that software design would naturally emerge from the implementation of simple user stories, that it would always be possible to pay the technical debt in the future, that constant refactoring is an effective way to produce high-quality code, and that agility can be assured by strictly following an Agile process. We will discuss each of these myths below.

Myth 1: Good design will emerge from the implementation of user stories

Agile development teams follow an incremental development approach, in which a small set of user stories is implemented in each iteration. The basic assumption is that a coherent system design will naturally emerge from these independent stories, requiring at most some refactoring to sort out the commonalities.

However, in practice the code does not have this tendency to self-organize. The laws governing the evolution of software systems are those of increasing entropy: when we add new functionality, the system tends to become more complex. Thus, instead of hoping for the design to emerge, software evolution should be planned through a high-level architecture that includes extension mechanisms.
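One concrete form such an extension mechanism can take (a hedged sketch with hypothetical names, not tied to any particular framework) is an extension point: a small interface plus a registry, so that new functionality is added by plugging in new classes rather than by modifying the existing ones.

```java
import java.util.ArrayList;
import java.util.List;

// Extension point defined by the high-level architecture.
interface ReportExporter {
    boolean supports(String format);
    void export(String reportContent, String destination);
}

// The core system depends only on the extension point, never on concrete exporters.
class ReportService {
    private final List<ReportExporter> exporters = new ArrayList<>();

    void register(ReportExporter exporter) { exporters.add(exporter); }

    void export(String reportContent, String format, String destination) {
        exporters.stream()
                .filter(e -> e.supports(format))
                .findFirst()
                .orElseThrow(() -> new IllegalArgumentException("No exporter for " + format))
                .export(reportContent, destination);
    }
}

// New functionality arrives as a new plug-in, not as a change to ReportService.
class CsvExporter implements ReportExporter {
    public boolean supports(String format) { return "csv".equalsIgnoreCase(format); }
    public void export(String reportContent, String destination) {
        System.out.println("Writing CSV to " + destination);
    }
}
```

A new export format then becomes a new plug-in class; ReportService and its callers do not change.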

Myth 2: It will always be possible to pay the technical debt in the future

The metaphor of technical debt became a popular euphemism for bad code. The idea of incurring some debt appears much more reasonable than deliberately producing low-quality implementations. Developers are ready to accumulate technical debt because they believe they will be able to pay this debt in the future.

However, in practice it is not so easy to pay the technical debt. Bad code normally comes together with poor interfaces and inappropriate separation of concerns. The consequence is that other modules are built on top of the original technical debt, creating dependencies on simplistic design decisions that were supposed to be temporary. When someone eventually decides to pay the technical debt, it is already too late: the fix has become too expensive.

Myth 3: Constant refactoring is an effective way to produce high-quality code

Refactoring became a very popular activity in software development; after all it is always focused on improving the code. Techniques such as Test-Driven Development (TDD) allow refactoring to be performed at low risk, since the unit tests automatically indicate if some working logic has been broken by code changes.

However, in practice refactoring consumes an exaggerated share of the effort invested in software development. Some developers simply do not plan for change, in the belief that it will always be easy to refactor the system. The consequence is that some teams implement new features very fast in the first iterations, but at some point their progress stalls and they start spending most of their effort on endless refactoring.

Myth 4: Agility can be assured by following an Agile process

One of the main goals of Agility is to be able to cope with change. We know that nowadays we must adapt to a reality in which system requirements may be modified unexpectedly, and Agile consultants claim that we may achieve change-resilience by adhering strictly to their well-defined processes.

However, in practice the process alone is not able to provide change-resilience. A software development team will only be able to address changing system requirements if the system was designed to be flexible and adaptable. If the original design did not take into consideration the issues of maintainability and extensibility, the developers will not succeed in incorporating changes, no matter how Agile the development process is.

Agile is Dead, Now What?

If we take a look at the hype chart below, it seems clear that regarding Agile we are past the “peak of inflated expectations” and getting closer to the “trough of disillusionment”.

[Figure: the Gartner hype cycle]

Several recent articles have proclaimed the end of the Agile hype. Dave Thomas wrote that “Agile is Dead”, and was immediately followed by an “Angry Developer Version”. Tim Ottinger wrote “I Want Agile Back”, but Bob Marshall replied that “I Don’t Want Agile Back”. Finally, what was inevitable just happened: “The Anti-Agile Manifesto”.

Now the question is: what will guide Agile through the “slope of enlightenment”?

In my personal opinion, we will have to go back to the basics: to all the wonderful design fundamentals that were being discussed in the 90s: the SOLID principles of OOD, design patterns, software reuse, and component-based software development. Only when we are able to incorporate these basic principles into our development process will we reach a true state of Agility, embracing change effectively.

Another question: what will be the next step in the evolution of software design?

In my opinion: Antifragility. But this is the subject for a future post…

What about you? Did you also experience the limitations of current Agile practices? Please share with us in the comments below.


Coping with Change in Agile Software Development

Traditional software development methodologies, such as Waterfall, tried to follow a series of isolated steps: first you define all the system requirements, then you devise a detailed system design that satisfies these requirements, and then you implement the system according to this design. This approach is also called Big Design Up Front (BDUF), since the detailed design is completed before starting the implementation.

However, in practice software development teams must always face changes in requirements that invalidate parts of the original design. This fact is expressed in one of Lehman’s Laws of Software Evolution:

“Continuing Change — A system must be continually adapted or it becomes progressively less satisfactory.”

The modern approach of Agile software development understands that changes are inevitable and that investing in detailed plans is not effective. This is clearly expressed in one of the values from the Agile manifesto:

“Responding to change over following a plan”

This means that it is more important to be able to respond to change than to follow a detailed plan. For example, here is how the Scrum process increases our ability to respond to change, according to the Scrum Alliance:

“Many projects that have elaborate project plans with detailed Gantt charts have failed. We make the plans so complex and detailed that they’re difficult to modify when changes occur. Agile process replaces project plans with release schedules and burn-down charts that can accommodate change. We can still track progress and, in fact, progress is more transparent than in a typical Waterfall project plan.”

It should be noted that this description focuses entirely on the process: it explains how the Scrum development process is able to cope with changes more effectively than traditional processes.

Now the question is: Is it sufficient to have a process that deals with change? May we continue to design and implement our systems as we did before, or do we also need special design guidelines to make our systems more resilient to change?

I think that the proponents of Agile methods have done really good work in defining new processes and convincing us that BDUF is a bad idea. However, I do not think the question above has received enough attention.

Agile Design

Besides the Agile manifesto, there are also 12 principles that may help guide us in defining the characteristics of an Agile design:

“Welcome changing requirements, even late in development. Agile processes harness change for the customer’s competitive advantage.”

“Continuous attention to technical excellence and good design enhances agility.”

Therefore real agility, or the capacity to cope with changing requirements, can only be achieved through investment in high-quality designs. In other words, the design must be as change-resilient as the development process. Trying to deal with constant changes through a simplistic design is the most common cause of technical debt and continuous refactoring.

I believe that change-resilient systems must be planned to be adaptable. This means that the design should incorporate degrees of freedom, focusing on low coupling and high cohesion. These degrees of freedom do not necessarily introduce more complexity into the design. Actually, they are often the most effective way to defer decisions until the last responsible moment.

The Last Responsible Moment (LRM) is a lean development concept defined as:

“A strategy of not making a premature decision but instead delaying commitment and keeping important and irreversible decisions open until the cost of not making a decision becomes greater than the cost of making a decision.”

How do you defer a design decision until the last responsible moment? The answer is that you must introduce a placeholder: an abstraction that represents the module that will be implemented in the future. The introduction of this abstraction represents an additional degree of freedom, and it allows the rest of the system to be developed as if the specific decision had already been made.
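A minimal sketch of such a placeholder (the names are hypothetical): a notification decision we are not yet ready to make is represented by an interface plus a trivial stand-in, so the rest of the system can be developed and tested as if the decision were already taken.

```java
// Placeholder abstraction: the degree of freedom for a decision we are deferring.
interface NotificationChannel {
    void notify(String userId, String message);
}

// Temporary stand-in; it lets dependent code be written and tested today.
class LoggingNotificationChannel implements NotificationChannel {
    public void notify(String userId, String message) {
        System.out.println("[notify " + userId + "] " + message);
    }
}

// The rest of the system depends only on the abstraction.
class OrderService {
    private final NotificationChannel channel;
    OrderService(NotificationChannel channel) { this.channel = channel; }

    void confirmOrder(String userId, String orderId) {
        // ... business logic ...
        channel.notify(userId, "Order " + orderId + " confirmed");
    }
}
```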

When the design is adaptable, the changing requirements may be implemented through new versions of existing components, without the need to change the relationships among these components. A specific component is the realization of a low-level design decision, but the high-level abstractions are stable. Thus, there is no need to refactor the system: the components may evolve independently of each other.

Adaptable Design Up Front (ADUF) is an approach that promotes change-resilience through the introduction of degrees of freedom in software design. I explain and discuss the ADUF approach extensively in several of my previous posts.

In Summary

Agile software development has focused mostly on the definition of new processes to cope with continuously changing requirements. However, processes are not enough; it is also necessary to devise new software design approaches to produce systems that are indeed change-resilient. Adaptable Design Up Front is one such approach, based on the introduction of degrees of freedom in software design.

What is your personal experience? What are your design strategies to deal with changing requirements? Please share in the comments below.


IASA Israel hosts Prof. Dan Berry

In the last week of December 2013 we at the International Association of Software Architects (IASA) in Israel had the great pleasure to host Prof. Dan Berry for a talk on Requirements Engineering. Please find the details about the talk below, including links to research papers that Prof. Berry has published on this topic.

Title: The Impact of Domain Ignorance on the Effectiveness of Requirements Idea Generation during Requirements Elicitation and on Software Development in General

Abstract:

It is believed that the effectiveness of requirements engineering activities depends at least partially on the individuals involved. One of the factors that seems to influence an individual’s effectiveness in requirements engineering activities is knowledge of the problem being solved, i.e., domain knowledge. While a requirements engineer’s in-depth domain knowledge helps him or her understand the problem more easily, he or she can fall for the tacit assumptions of the domain and might overlook issues that are obvious to domain experts.

Historically, several, including the speaker, have reported observations that sometimes ignorance of the domain in a software development project is useful for promoting the elicitation of tacit assumptions and out-of-the-box ideas.

Recently there have been attempts to confirm these observations with well-designed empirical studies.

This talk describes a controlled experiment to test the hypothesis that adding requirements analysts who are ignorant of the domain to a requirements elicitation team for a computer-based system in a particular domain improves the effectiveness of the team. The results show some support for accepting the hypothesis. The results were also analyzed to determine the effect of creativity, industrial experience, and requirements engineering experience.

The controlled experiment was followed up by a confirmatory case study of an elicitation brainstorm in a software-producing company by a team with four domain experts from the company and four domain ignorants from our university. According to the company domain experts, the brainstorm produced ideas that they would not have produced themselves.

A different empirical study tested the hypothesis that putting newbies to a project to work doing tasks that benefit from domain ignorance makes their immigration to the project smoother. First, a survey was conducted among software development managers of varying experience to determine what software development activities they thought were at least helped by domain ignorance. Second, transcripts from fourteen interviews of presumably-domain-ignorant immigrants to new software development projects at one large company were examined to determine if the activities performed by those with the smoothest immigrations were activities that are at least helped by domain ignorance. The conclusions are that ignorance can play an important role in software development but there are a lot of other factors that influence immigration smoothness.

Publications:

Paper: “The impact of domain knowledge on the effectiveness of requirements idea generation during requirements elicitation”, 20th IEEE International Requirements Engineering Conference, 2012

Paper: “An industrial case study of the impact of domain ignorance on the effectiveness of requirements idea generation during requirements elicitation”, 21st IEEE International Requirements Engineering Conference, 2013

Joint work with Ali Niknafs and Gaurav Mehrotra

Prof. Berry’s Bio:

Daniel M. Berry got his B.S. in Mathematics from Rensselaer Polytechnic Institute, Troy, New York, USA in 1969 and his Ph.D. in Computer Science from Brown University, Providence, Rhode Island, USA in 1974. He was on the faculty of the Computer Science Department at the University of California, Los Angeles, California, USA from 1972 until 1987. He was in the Computer Science Faculty at the Technion, Haifa, Israel from 1987 until 1999. From 1990 until 1994, he worked for half of each year at the Software Engineering Institute at Carnegie Mellon University, Pittsburgh, Pennsylvania, USA, where he was part of a group that built CMU’s Master of Software Engineering program. During the 1998-1999 academic year, he visited the Computer Systems Group at the University of Waterloo in Waterloo, Ontario, Canada. In 1999, Berry moved to the Cheriton School of Computer Science at the University of Waterloo. Prof. Berry’s current research interests are software engineering in general, and requirements engineering and electronic publishing in particular.

To be informed about future events of IASA in Israel, please join our LinkedIn group.

This meeting was organized in cooperation with ILTAM.


Adaptable Design Up Front talk at USP (in Portuguese)

This month I had the great pleasure to speak about Adaptable Design Up Front (ADUF) at the University of Sao Paulo (USP). I was invited by my friend, Professor Fabio Kon, who shares my interests in Software Architecture and Agile Methods.

Abstract: This talk tries to answer the question: “How much Design Up Front should be done in an Agile project?” I present my approach of Adaptable Design Up Front (ADUF), describing its rationale, applications in practice and comparison to other approaches such as Emergent Design.

You can see the video and the slides.

Your comments are welcome!


Attention Agile Programmers: Project Management is not Software Engineering

I’m happy to see so many software developers who are enthusiastic about Agile methods. It’s nice when people enjoy their work. However, I think there is a problem: some teams are placing too much emphasis on Project Management activities, while at the same time they appear to have forgotten some basic Software Engineering practices. In this post I will present some examples of this lack of balance between the two.

Drawing Burn-down Charts vs. Drawing UML Diagrams

Burn-down charts help us track the progress being made. They present clearly how much work has been completed, and how much work is left until the end of an iteration. By taking a look at the chart, it’s easy to know if the project is behind or ahead of schedule. But the burn-down chart tells us nothing about the quality of the software being produced, nor does it have any value after the iteration has been completed. Burn-down charts are a project management artifact; they are not a deliverable of the software development process.

UML diagrams are an essential part of the software development process. They are required by the person doing the design to conceive and compare several alternatives. They are needed to present these alternatives and allow design reviews. They serve as part of the documentation of the code that is actually implemented. Drawing UML diagrams is a software engineering activity, and these diagrams are part of the deliverables of the software development process.

However, it seems that some Agile programmers invest very little time drawing UML diagrams. Perhaps they think that these diagrams are only needed in Big Design Up Front, and not when you do Emergent Design or TDD. In my opinion this is a mistake. If you spend more time drawing burn-down charts than UML diagrams, then you are doing more project management than software engineering.

Estimating User Stories vs. Computing Algorithms Complexity

In an Agile project the User Stories are the basic unit of work, and it is very important to estimate the effort that will be required to implement these stories. Techniques such as Planning Poker were invented to make this estimation process as accurate as possible, since iterations are very short and there is not much margin for error. However, the time spent on effort estimation does not affect the quality of the software. Effort estimation is a project management activity.

Professional software developers must be able to compute the complexity of the algorithms they are adopting in their systems. Actually, they should base their design decisions on the different time and space complexities of the available algorithms and data structures. They should understand the differences between best, worst, and average case complexities. Computing algorithm complexity is a software engineering activity, essential to assure the quality of the systems being developed.
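A small illustration of why this matters (a sketch in Java; the absolute timings will vary by machine): checking membership in an ArrayList is O(n), while the same check in a HashSet is O(1) on average, and the difference grows with the size of the data.

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class MembershipComplexity {
    public static void main(String[] args) {
        int n = 1_000_000;
        List<Integer> list = new ArrayList<>();
        Set<Integer> set = new HashSet<>();
        for (int i = 0; i < n; i++) { list.add(i); set.add(i); }

        long t0 = System.nanoTime();
        boolean inList = list.contains(n - 1);   // O(n): scans the whole list in the worst case
        long t1 = System.nanoTime();
        boolean inSet = set.contains(n - 1);     // O(1) average: a single hash lookup
        long t2 = System.nanoTime();

        System.out.printf("list.contains: %b in %d us%n", inList, (t1 - t0) / 1_000);
        System.out.printf("set.contains:  %b in %d us%n", inSet, (t2 - t1) / 1_000);
    }
}
```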

However, it seems that some Agile programmers don’t ask themselves many questions about the complexity of the algorithms they are using. They just go to the library and choose the Collection with the most convenient API. Or they always choose the simplest possible solution, because of KISS and YAGNI. But if you spend more time estimating user stories than computing time and space complexity, then you are doing more project management than software engineering.

Measuring Velocity vs. Measuring Latency and Throughput

In Agile projects, the team members are constantly measuring their Velocity: how many units of work have been completed in a certain interval. Of course people want to be as fast as possible, assuming that your velocity is an indication of your productivity, efficiency and effectiveness as a software developer. But the velocity gives us no indication of the quality of the software being produced. On the contrary, it is common to see Agile teams working very fast in the first iterations, only to slow down considerably when they start doing massive Refactorings. Thus measuring the Velocity is just a project management activity with dubious meaning.

Many software developers today are working on client/server systems such as Web sites and Smartphone Apps. These systems are based on the exchange of requests and responses between a client and a server. In such systems, the Latency is the time interval between the moment a request is sent and the moment the response is received. The Throughput is the rate at which requests are handled, i.e., how many requests are answered per unit of time. In client/server systems it is essential to constantly measure the latency and the throughput. A small code change, such as making an additional query to the database, may have a big impact on both.
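As a hedged sketch of how both can be measured (hand-rolled for illustration; a real system would more likely use a metrics library such as Micrometer or Dropwizard Metrics): latency is accumulated per request, and throughput is derived from the request count over elapsed time.

```java
import java.util.concurrent.atomic.AtomicLong;

class RequestMetrics {
    private final AtomicLong requestCount = new AtomicLong();
    private final AtomicLong totalLatencyNanos = new AtomicLong();
    private final long startNanos = System.nanoTime();

    // Wraps a single request: latency = time from request sent to response received.
    <T> T measure(java.util.function.Supplier<T> handler) {
        long t0 = System.nanoTime();
        try {
            return handler.get();
        } finally {
            totalLatencyNanos.addAndGet(System.nanoTime() - t0);
            requestCount.incrementAndGet();
        }
    }

    double averageLatencyMillis() {
        long count = requestCount.get();
        return count == 0 ? 0 : totalLatencyNanos.get() / (count * 1_000_000.0);
    }

    // Throughput = requests handled per second since the metrics were created.
    double throughputPerSecond() {
        double elapsedSeconds = (System.nanoTime() - startNanos) / 1_000_000_000.0;
        return elapsedSeconds == 0 ? 0 : requestCount.get() / elapsedSeconds;
    }
}
```

Wrapping each server call in something like metrics.measure(() -> handleRequest(request)) (handleRequest being a hypothetical server method) and comparing the averages before and after a change, such as that additional database query, makes the performance impact visible.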

However, it seems that some Agile programmers are constantly measuring their velocity, but have absolutely no idea of the latency and throughput of the systems they are developing. They may ignore non-functional requirements and choose simplistic design alternatives that have a huge negative impact on the system’s performance. Thus if you measure your own velocity more frequently than you measure latency and throughput, you are doing more project management than software engineering.

Meetings and Retrospectives vs. Design and Code Reviews

Agile projects normally have daily meetings to discuss how the team is making progress: what has been done, what will be done, and whether there are any obstacles. These meetings do not include any technical discussion. Projects may also have retrospective meetings after each iteration, with the goal of improving the process itself. These meetings are project management activities; they do not address software quality issues.

Design and code reviews focus on software quality. In a design review, an alternative should be chosen based on non-functional quality attributes such as extensibility and maintainability. Code reviews are an opportunity to improve the quality of code that should already be correct. Code reviews should also be used to share technical knowledge and make sure that the team members are really familiar with each other’s work. These kinds of reviews are software engineering activities, with a direct effect on the deliverables of the software development process.

However, it seems that some Agile programmers focus too much on the process and not enough on the product. Successful software development is much more than having a team managing itself and constantly improving its processes. If the quality of the software is decaying, if technical debt is accumulating, if there is too much coupling among modules and if the classes are not cohesive, then the team is not achieving its goal, no matter how efficient the process seems to be. If you are spending more time in stand-up and retrospective meetings than on design and code reviews, then you are doing more project management than software engineering.

In Summary

Drawing burn-down charts, estimating user stories, measuring your velocity and participating in retrospectives are project management activities. Drawing UML diagrams, computing the complexity of algorithms, measuring latency and throughput and participating in design reviews are software engineering activities. If you are a software engineer, you should know what is more important. Don’t let your aspiration to become a Scrum Master make you forget the Art of Computer Programming. Good luck!
