IASA Israel Meeting – Lior Israel on TDD as an Approach for Software Design

The International Association of Software Architects (IASA) in Israel organized a special event with the participation of Lior Israel, who talked about “TDD as an Approach for Software Design”.

Title: Providing Business Value Through TDD

Abstract: Our goal is to provide business value to our customers. In my daily work as an architect I show how to provide business value with quality and short feedback cycles. One of the best ways to provide business value is TDD, and this means adopting TDD for all the players, not only for developers! The presentation will be based on code examples. Outline:
- TDD overview as a way of thinking – I am going to demonstrate a simple truth: TDD is NOT unit testing.
- Let’s start coding a code kata – the best way to understand TDD is through a real example.
- Stop! Are we going to meet the DoD? We will see that TDD leads us to verify that we understand the business value at the beginning of development, not at the end.
- Architecture & TDD – a demonstration that TDD puts us on the right side of the sun (Architecture). We only need to do it the TDD way.

Bio: Lior Israel is an experienced software and hardware developer, designer and System Architect. Lior started his studies in Electronics in the early 1990s and has a degree in Computer Science. He has over 19 years of practical experience in software and hardware development and design. He currently works as a Software Architect on large-scale IT applications with cutting-edge technologies. Lior has a blog in which he writes about software development: http://blogs.microsoft.co.il/lior_israel/

These are the original slides of Lior’s presentation:

Here is the video of the talk (in Hebrew):

This meeting was organized with the cooperation of ILTAM and sponsored by SAP Labs Israel. Special thanks to Dima Khalatov who helped with the organization.

To participate in our future meetings, please join the IASA IL group on LinkedIn.

Feel free to share your comments below. Thanks!


IASA Israel Meeting – Hayim Makabee on the Role of the Software Architect

The International Association of Software Architects (IASA) in Israel organized a special event with the participation of Hayim Makabee, who talked about the “Role of the Software Architect”.

Title: The Role of the Software Architect

Abstract: In this talk Hayim will present the practical aspects of the role of the Software Architect, including:

  • The four areas of expertise: Design, Domain, Technology and Methodology.
  • The cooperation with stakeholders: Developers, Team Leaders, Project Managers, QA and Technical Writers.

Understanding the expected areas of expertise is essential for the architect to develop his/her professional skills.
Understanding how to cooperate with the diverse stakeholders is essential to improve the architect’s impact and effectiveness.

Bio: Hayim Makabee was born in Rio de Janeiro. He immigrated to Israel in 1992 and completed his M.Sc. studies in Computer Science at the Technion. Since then he has worked for several hi-tech companies, including some start-ups. Currently he is a Research Engineer at Yahoo! Labs Haifa. He is also a co-founder of the International Association of Software Architects (IASA) in Israel. Hayim is the author of a book about Object-Oriented Programming and has published papers in the fields of Software Engineering, Distributed Systems and Genetic Algorithms. Hayim’s current research interests are Data Mining and Recommender Systems.

These are the original slides of Hayim’s presentation:

Here is the video of the talk (in Hebrew):

This meeting was organized with the cooperation of ILTAM.

To participate in our future meetings, please join the IASA IL group on LinkedIn.

Feel free to share your comments below. Thanks!


Reducing Technical Debt

Last week I traveled to Thessaloniki to participate in the Conference on Advanced Information Systems Engineering (CAISE’14). I presented a paper at the Workshop on Cognitive Aspects of Information Systems Engineering (COGNISE). This is joint work with Yulia Shmerlin and Prof. Doron Kliger from Haifa University.

Title: Reducing Technical Debt: Using Persuasive Technology for Encouraging Software Developers to Document Code

Abstract: Technical debt is a metaphor for the gap between the current state of a software system and its hypothesized ‘ideal’ state. One of the significant and under-investigated elements of technical debt is documentation debt, which may occur when code is created without supporting internal documentation, such as code comments. Studies have shown that outdated or lacking documentation is a considerable contributor to increased costs of software systems maintenance. The importance of comments is often overlooked by software developers, resulting in a notably slower growth rate of comments compared to the growth rate of code in software projects. This research aims to explore and better understand developers’ reluctance to document code, and accordingly to propose efficient ways of using persuasive technology to encourage programmers to document their code. The results may assist software practitioners and project managers to control and reduce documentation debt.

You can freely download the paper.

Below are the slides of my presentation at the workshop:


Antifragile Software Design: Abstraction and the Barbell Strategy

In his book “Antifragile: Things That Gain From Disorder”, Nassim Taleb introduces the concept of Antifragility, which is the opposite of Fragility. The main question is how things react to volatility (such as randomness, errors, uncertainty, stressors and time). According to him, there is a Triad – Antifragility, Robustness, and Fragility:

  • Fragile things are exposed to volatility, meaning that volatility is prejudicial to them.
  • Robust things are immune to volatility, meaning that volatility does not affect them.
  • Antifragile things enjoy volatility, meaning that volatility is beneficial to them.

In Taleb’s words:

“Antifragility is beyond resilience or robustness. The resilient resists shocks and stays the same; the antifragile gets better.”

In order to reduce fragility, Taleb proposes the adoption of a Barbell Strategy:

“Barbell Strategy: A dual strategy, a combination of two extremes, one safe and one speculative, deemed more robust than a ‘monomodal’ strategy.”

In the case of software systems, volatility appears in the form of changes over time. These changes are unavoidable, as expressed in one of the Lehman laws of software evolution:

“Continuing Change — A software system must be continually adapted or it becomes progressively less satisfactory.”

There are two main types of changes that are required in a software system over time:

  • Functional changes: These are changes required to implement new requirements, or to implement modifications in the requirements. These changes have an impact on the system’s behavior and functionality.
  • Non-functional changes: These changes are required to improve the quality of the design. They are normally the result of Refactoring and focus on the reduction of Technical Debt. These changes should not affect the system’s behavior or functionality.

Now that we have identified the sources of volatility (changes) the question is: What is the software equivalent of a Barbell Strategy?

The answer is: Abstraction. Software systems should have a structure and organization based on different levels of abstraction. In this case the duality of the Barbell Strategy is expressed as the separation between high-level abstract elements and concrete implementation details.

  • Abstract elements are robust: they should not be easily affected by changes.
  • Concrete implementation details are fragile: they are directly affected by changes.

In other words, software systems should be built in such a way that volatility (changes) does not affect their structure and organization, preserving the main, high-level abstractions and requiring modifications only to the low-level, concrete implementation details.

Below are several examples of this duality between abstractions and concrete details.

Information Hiding and Encapsulation

Information Hiding was defined by David Parnas as a criterion to decompose a system into modules. According to him: “Every module … is characterized by its knowledge of a design decision which it hides from all others. Its interface or definition was chosen to reveal as little as possible about its inner workings.”

The mechanism of Encapsulation in Object Oriented Programming (OOP) is a direct application of this principle. When applying encapsulation, the interface of a class is separated from its implementation, and thus the implementation may be changed without affecting the clients of the class. In other words, thanks to encapsulation, the clients of a class become less fragile to changes in its implementation details.
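As a rough illustration, here is a minimal sketch in Java (hypothetical names, not from the original post): the clients of the class depend only on its public interface, while the hidden design decision, namely how the count is represented, may change freely.

```java
// Encapsulation: the internal representation is a hidden design
// decision; clients see only increment() and current().
public class Counter {
    // Hidden detail: the count is stored as a long. It could be
    // replaced (e.g., by an AtomicLong for thread safety) without
    // affecting any client of this class.
    private long count = 0;

    public void increment() {
        count++;
    }

    public long current() {
        return count;
    }
}
```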

Open-Closed Principle (OCP)

The Open-Closed Principle (OCP) was defined by Bertrand Meyer as: “Software entities (classes, modules, functions, etc.) should be open for extension, but closed for modification.” In practice, the implementation of the OCP requires the definition of an abstraction that allows a module to be open to extensions, so that each possible extension is a specialization of this abstraction.

For example, in the Strategy design pattern, the Strategy itself is defined as an abstract interface that may be implemented by several concrete subclasses. Each specific strategy is fragile and may need to be replaced, but the object using the strategy is not fragile, thanks to its abstraction from the implementation details.
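A minimal sketch of the pattern in Java (hypothetical names): the DiscountStrategy interface is the robust abstraction, each concrete strategy is a replaceable detail, and the client class stays closed for modification while remaining open to new strategies.

```java
// The robust abstraction: open for extension.
interface DiscountStrategy {
    double apply(double price);
}

// Fragile, replaceable details: each one is a specialization
// of the abstraction.
class NoDiscount implements DiscountStrategy {
    public double apply(double price) { return price; }
}

class HolidayDiscount implements DiscountStrategy {
    public double apply(double price) { return price * 0.9; }
}

// The client is closed for modification: new discount policies
// are added as new classes, without touching this code.
class PriceCalculator {
    private final DiscountStrategy strategy;

    PriceCalculator(DiscountStrategy strategy) {
        this.strategy = strategy;
    }

    double finalPrice(double price) {
        return strategy.apply(price);
    }
}

// Usage: new PriceCalculator(new HolidayDiscount()).finalPrice(100.0)
```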

Test-Driven Development (TDD)

In the case of Test-Driven Development (TDD) the fundamental idea is to write the tests before writing the code. This requires the definition of abstract interfaces for the classes being tested, with a well-defined behavior. Since TDD forces developers to think in terms of interfaces and behaviors before writing the code, it actually enforces an early separation between abstractions and concrete implementation details. This allows the code to be easily refactored in the future, because the unit tests will catch any mistake resulting from changes in the implementation. In other words, the unit tests are less fragile to volatility (changes) because they are based on abstractions.
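As a sketch of this idea (hypothetical names; JUnit 4 assumed), the test below would be written before the Stack class exists, and in doing so it fixes both the interface (push, pop, isEmpty) and the expected behavior.

```java
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertTrue;
import org.junit.Test;

// The Stack class does not exist yet: writing this test first
// defines its interface and its behavior, before implementation.
public class StackTest {

    @Test
    public void pushThenPopReturnsLastElement() {
        Stack stack = new Stack();     // interface decided here
        stack.push(42);
        assertEquals(42, stack.pop()); // behavior decided here
    }

    @Test
    public void newStackIsEmpty() {
        assertTrue(new Stack().isEmpty());
    }
}
```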

Inheritance Hierarchies and Frameworks

In OOP, an inheritance hierarchy organizes classes according to a generalization-specialization rationale: the classes near the root of the hierarchy should be more abstract than the classes on the leaves of the inheritance tree. Accordingly, the abstract classes should be less fragile than the concrete classes, meaning that they should be less subject to impact caused by volatility, such as changes in the requirements or changes in the implementation details.

In general, frameworks use inheritance hierarchies to provide reusable code that is not application-specific. Frameworks define abstractions and the relationships among them. It is the responsibility of the software developer to extend the framework, deriving concrete subclasses and writing the implementation details that satisfy the requirements of his or her specific application. These implementation details are obviously fragile, but the framework itself is not, and may be reused without changes in many applications.
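A minimal sketch of this division of responsibility in Java (hypothetical names), in the template-method style used by many frameworks:

```java
// Framework code: reusable, not application-specific, and robust.
abstract class ReportGenerator {
    // The framework fixes the overall flow of the algorithm.
    public final String generate() {
        return header() + body() + footer();
    }

    private String header() { return "=== Report ===\n"; }
    private String footer() { return "=== End ===\n"; }

    // Hook to be filled in by the application-specific subclass.
    protected abstract String body();
}

// Application code: only the fragile implementation detail lives here.
class SalesReport extends ReportGenerator {
    protected String body() {
        return "Total sales: 1000\n";
    }
}

// Usage: new SalesReport().generate()
```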

Conclusions

Software systems are subject to volatility in the form of changing requirements. Concrete implementation details are fragile and directly affected by these changes. In order to reduce the fragility of the system as a whole, it is important to adopt a Barbell Strategy and define these concrete details as specializations of higher-level abstractions. Proper abstractions should be robust, surviving the impact of changes. Thus, while the details will change over time, the system’s structure and organization will be preserved, because they are based on abstractions.

Please note that the examples discussed in this post (encapsulation, OCP, TDD, frameworks) mostly increase the robustness of software systems. Thus, in the original Triad we moved away from Fragility towards Robustness, but we still cannot claim to have reached true Antifragility. That would require a system that benefits from volatility, or, in our case, benefits from changes. This will be the subject of a future post.


IASA Israel Meeting – Atzmon Hen-Tov on the Adaptive Object Model

The International Association of Software Architects (IASA) in Israel organized a special event with the participation of Atzmon Hen-Tov, who talked about the “Adaptive Object Model”.

Title: Adaptive-Object-Model – Empower your users to evolve the system

Abstract: In this talk Atzmon Hen-Tov will introduce the Adaptive-Object-Model (AOM) architecture style.

Architectures that can dynamically adapt to changing requirements are sometimes called reflective or meta-architectures. We call a particular kind of reflective architecture an Adaptive Object-Model (AOM). An Adaptive Object-Model is a system that represents classes, attributes, relationships, and behavior as metadata.

Users change the metadata (object model) to reflect changes to the domain model. AOM stores its Object-Model in XML files or in a database and interprets it on the fly. Consequently, the object model is adaptive; when the descriptive information for the object model is changed, the system immediately reflects those changes.
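As a rough illustration of the core idea only (a minimal sketch in Java with hypothetical names, unrelated to the actual ModelTalk engine): types and their attributes are plain data, interpreted at runtime instead of being compiled as Java classes.

```java
import java.util.HashMap;
import java.util.Map;

// Metadata: a "class" represented as data rather than code.
class EntityType {
    final String name;
    final Map<String, String> attributeTypes = new HashMap<>();

    EntityType(String name) { this.name = name; }

    void defineAttribute(String attrName, String attrType) {
        attributeTypes.put(attrName, attrType);
    }
}

// An instance whose "schema" is interpreted from metadata on the fly.
class Entity {
    final EntityType type;
    final Map<String, Object> values = new HashMap<>();

    Entity(EntityType type) { this.type = type; }

    void set(String attrName, Object value) {
        if (!type.attributeTypes.containsKey(attrName)) {
            throw new IllegalArgumentException(
                "No attribute '" + attrName + "' in type " + type.name);
        }
        values.put(attrName, value);
    }
}

// Usage:
//   EntityType customer = new EntityType("Customer");
//   customer.defineAttribute("name", "String");
//   Entity e = new Entity(customer);
//   e.set("name", "Alice");     // accepted
//   e.set("phone", "555-1234"); // rejected: not in the metadata
```

Since the EntityType instances are plain data, they could just as well be loaded from XML or a database and modified at runtime, which is exactly what makes the object model adaptive.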

“If something is going to vary in a predictable way, store the description of the variation in a database so that it is easy to change” — Ralph Johnson

Bio: Atzmon Hen-Tov is VP of R&D at Pontis, where he leads the development of the company’s Marketing Platform, iCLM™, on top of the ModelTalk AOM engine. Atzmon has led the development of the ModelTalk AOM engine since its inception and is a co-founder of the Ink AOM open-source framework. He is interested in model-driven software development and other methods for the industrialization of software development. Contact him at atzmon@ieee.org.

These are the original slides of Atzmon’s presentation:

Here is the video of the entire talk (in Hebrew):

And here is an animated version of the slides at Prezi.

This meeting was organized with the cooperation of ILTAM and sponsored by LivePerson. Special thanks to Dima Khalatov who helped with the organization.

To participate in our future meetings, please join the IASA IL group on LinkedIn.

Feel free to share your comments below. Thanks!


Lean Software Development: Before and After the Last Responsible Moment

The Last Responsible Moment (LRM) is a lean development principle defined as:

“A strategy of not making a premature decision but instead delaying commitment and keeping important and irreversible decisions open until the cost of not making a decision becomes greater than the cost of making a decision.”

Let ti be the specific point in time for a concrete design decision Di. The LRM principle tells us that ti should be deferred as much as possible. One of the reasons is to avoid making decisions based on assumptions: we should decide when we have optimal information about the requirements, so that there is a reduced risk that our decisions will need to be changed.

During the incremental software development process we will implement a set of concrete design decisions D1 … Dn, in which D1 … Di-1 are the decisions implemented before Di, and Di+1 … Dn are the decisions made after Di. This order of decisions is not arbitrary and will follow some design rationale.

But of course these design decisions are not independent of each other. Thus, among the decisions D1 … Di-1 there will be some that depend on Di. For example, if each design decision Dj is encapsulated in a class Cj, we must know at least the interfaces of the other classes used by Cj.

Before the Last Responsible Moment

This means that for each design decision Di there must be a point in time, before ti, in which we define an abstraction for Di. This abstraction allows the implementation of all design decisions that depend on Di but must be made before it. If Di is represented as a class, the abstraction would be its interface, and it should be defined at a time tAi < ti.

This definition of abstractions before implementations is exactly the result of Test-Driven Development (TDD). When we write unit tests for a class we are defining its interface, both at the syntactic and the semantic levels. By the time all unit tests are written for class Ci we have reached tAi, but we are still before ti, the last responsible moment at which we will decide how to implement Ci.

After the Last Responsible Moment

No matter how long we were able to defer a concrete design decision Di, there will probably come a moment at which it will need to be changed. The last responsible moment is the first time we provide an implementation for an abstraction, but this certainly does not guarantee that the implementation will never be modified.

For every design decision Di there may be a time tCi > ti at which its implementation is changed. We do not want this change to affect the modules that depend on Di, and this can only be achieved by preserving the interface. Therefore, the importance of abstractions is twofold: they allow concrete design decisions to be deferred until the LRM, and they also allow these design decisions to be changed after the LRM.
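As a rough illustration (a minimal sketch in Java with hypothetical names, not from the original post), the timeline above can be read directly in code: the abstraction is written at tAi, its first implementation at the last responsible moment ti, and a replacement at tCi, without touching any module that depends on the abstraction.

```java
// Defined at time tAi: the abstraction, which allows dependent
// modules to be written before any implementation exists.
interface PricingPolicy {
    double priceOf(String productId);
}

// Written at time ti (the last responsible moment): the first
// concrete implementation of the deferred design decision Di.
class FlatPricing implements PricingPolicy {
    public double priceOf(String productId) {
        return 10.0; // every product costs the same
    }
}

// Written at time tCi > ti: the implementation is replaced, but the
// interface is preserved, so no dependent module is affected.
class TieredPricing implements PricingPolicy {
    public double priceOf(String productId) {
        // a real system would look this up; a simple rule for brevity
        return productId.startsWith("premium-") ? 25.0 : 12.5;
    }
}
```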

Conclusions

In order to adopt the Last Responsible Moment principle it is essential to focus on the definition of abstractions representing the diverse design decisions. These abstractions serve as the placeholders that will allow the concrete implementation details to be deferred as much as possible. Moreover, good abstractions are essential to support maintainability and allow these design decisions to be easily changed.


The End of Agile: Death by Over-Simplification

There is something fundamentally wrong with the current adoption of Agile methods. The term Agile has been abused, becoming the biggest hype in the history of software development and generating a multi-million-dollar industry of self-proclaimed Agile consultants and experts selling dubious certifications. People have forgotten the original Agile values and principles, and instead follow dogmatic processes with rigid rules and rituals.

But the biggest sin of Agile consultants was to over-simplify the software development process and to underestimate the real complexity of building software systems. Developers were convinced that software design may naturally emerge from the implementation of simple user stories, that it will always be possible to pay the technical debt in the future, that constant refactoring is an effective way to produce high-quality code, and that agility can be assured by strictly following an Agile process. We will discuss each of these myths below.

Myth 1: Good design will emerge from the implementation of user stories

Agile development teams follow an incremental development approach, in which a small set of user stories is implemented in each iteration. The basic assumption is that a coherent system design will naturally emerge from these independent stories, requiring at most some refactoring to sort out the commonalities.

However, in practice the code does not have this tendency to self-organize. The laws governing the evolution of software systems are those of increasing entropy: when we add new functionality, the system tends to become more complex. Thus, instead of hoping for the design to emerge, software evolution should be planned through a high-level architecture that includes extension mechanisms.

Myth 2: It will always be possible to pay the technical debt in the future

The metaphor of technical debt became a popular euphemism for bad code. The idea of incurring some debt appears much more reasonable than deliberately producing low-quality implementations. Developers are ready to accumulate technical debt because they believe they will be able to pay this debt in the future.

However, in practice it is not so easy to pay the technical debt. Bad code normally comes together with poor interfaces and inappropriate separation of concerns. The consequence is that other modules are built on top of the original technical debt, creating dependencies on simplistic design decisions that were supposed to be temporary. When eventually someone decides to pay the technical debt, it is already too late: the fix has become too expensive.

Myth 3: Constant refactoring is an effective way to produce high-quality code

Refactoring became a very popular activity in software development; after all, it is focused on improving the code. Techniques such as Test-Driven Development (TDD) allow refactoring to be performed at low risk, since the unit tests automatically indicate whether some working logic has been broken by code changes.

However, in practice refactoring consumes an exaggerated share of the effort invested in software development. Some developers simply do not plan for change, in the belief that it will always be easy to refactor the system. The consequence is that some teams implement new features very fast in the first iterations, but at some point their progress halts and they start spending most of their effort on endless refactoring.

Myth 4: Agility can be assured by following an Agile process

One of the main goals of Agility is to be able to cope with change. We know that nowadays we must adapt to a reality in which system requirements may be modified unexpectedly, and Agile consultants claim that we may achieve change-resilience by adhering strictly to their well-defined processes.

However, in practice the process alone cannot provide change-resilience. A software development team will only be able to address changing system requirements if the system was designed to be flexible and adaptable. If the original design did not take into consideration the issues of maintainability and extensibility, the developers will not succeed in incorporating changes, no matter how Agile the development process is.

Agile is Dead, Now What?

If we take a look at the hype chart below, it seems clear that Agile is already past the “peak of inflated expectations” and getting closer to the “trough of disillusionment”.

[Figure: the Gartner hype cycle]

Several recent articles have proclaimed the end of the Agile hype. Dave Thomas wrote that “Agile is Dead”, and was immediately followed by an “Angry Developer Version”. Tim Ottinger wrote “I Want Agile Back”, but Bob Marshall replied that “I Don’t Want Agile Back”. Finally, what was inevitable just happened: “The Anti-Agile Manifesto”.

Now the question is: what will guide Agile through the “slope of enlightenment”?

In my personal opinion, we will have to go back to the basics: to all the wonderful design fundamentals that were discussed in the 90s: the SOLID principles of OOD, design patterns, software reuse, and component-based software development. Only when we are able to incorporate these basic principles into our development process will we reach a true state of Agility, embracing change effectively.

Another question: what will be the next step in the evolution of software design?

In my opinion: Antifragility. But this is the subject of a future post.

What about you? Did you also experience the limitations of current Agile practices? Please share with us in the comments below.
