In his book “Antifragile: Things That Gain From Disorder”, Nassim Taleb introduces the concept of Antifragility, which is the opposite of Fragility. The main question is how things react to volatility (such as randomness, errors, uncertainty, stressors and time). According to him, there is a Triad – Antifragility, Robustness, and Fragility:
- Fragile things are harmed by volatility, meaning that exposure to it is detrimental to them.
- Robust things are immune to volatility, meaning that volatility does not affect them.
- Antifragile things enjoy volatility, meaning that volatility is beneficial to them.
In Taleb’s words:
“Antifragility is beyond resilience or robustness. The resilient resists shocks and stays the same; the antifragile gets better.”
In order to reduce fragility, Taleb proposes the adoption of a Barbell Strategy:
“Barbell Strategy: A dual strategy, a combination of two extremes, one safe and one speculative, deemed more robust than a ‘monomodal’ strategy.”
In the case of software systems, volatility appears in the form of changes over time. These changes are unavoidable, as expressed in one of Lehman’s laws of software evolution:
“Continuing Change — A software system must be continually adapted or it becomes progressively less satisfactory.”
There are two main types of changes that are required in a software system over time:
- Functional changes: These are changes required to implement new requirements, or to implement modifications in the requirements. These changes have an impact on the system’s behavior and functionality.
- Non-functional changes: These changes are required to improve the quality of the design. They are normally the result of Refactoring and focus on the reduction of Technical Debt. These changes should not affect the system’s behavior or functionality.
Now that we have identified the sources of volatility (changes), the question is: What is the software equivalent of a Barbell Strategy?
The answer is: Abstraction. Software systems should have a structure and organization based on different levels of abstraction. In this case the duality of the Barbell Strategy is expressed as the separation between high-level abstract elements and concrete implementation details.
- Abstract elements are robust: they should not be easily affected by changes.
- Concrete implementation details are fragile: they are directly affected by changes.
In other words, a software system should be built in such a way that volatility (changes) does not affect its structure and organization, preserving the main, high-level abstractions and requiring modifications only in the low-level, concrete implementation details.
Below are several examples of this duality between abstractions and concrete details.
Information Hiding and Encapsulation
Information Hiding was defined by David Parnas as a criterion to decompose a system into modules. According to him: “Every module … is characterized by its knowledge of a design decision which it hides from all others. Its interface or definition was chosen to reveal as little as possible about its inner workings.”
The mechanism of Encapsulation in Object Oriented Programming (OOP) is a direct application of this principle. When applying encapsulation, the interface of a class is separated from its implementation, and thus the implementation may be changed without affecting the clients of the class. In other words, thanks to encapsulation, the clients of a class become less fragile to changes in its implementation details.
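As a minimal sketch in Java (the Temperature class is hypothetical, invented only for this illustration), clients depend exclusively on the public methods, so the hidden representation can be changed without affecting them:

```java
// Hypothetical example: the internal representation (a value in Celsius)
// is a hidden design decision that may change without breaking clients.
public final class Temperature {
    private double celsius; // hidden implementation detail

    public Temperature(double celsius) {
        this.celsius = celsius;
    }

    public double asCelsius() {
        return celsius;
    }

    public double asFahrenheit() {
        return celsius * 9.0 / 5.0 + 32.0;
    }
}
```

If the internal representation were later changed (for example, to Kelvin), only the bodies of these methods would need to be rewritten; the clients of the class would remain untouched.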
Open-Closed Principle (OCP)
The Open-Closed Principle (OCP) was defined by Bertrand Meyer as: “Software entities (classes, modules, functions, etc.) should be open for extension, but closed for modification.” In practice, the implementation of the OCP requires the definition of an abstraction that allows a module to be open to extensions, so that each possible extension is a specialization of this abstraction.
For example, in the Strategy design pattern, the Strategy itself should be defined as an abstract interface that may be implemented by several possible concrete subclasses. Each specific strategy is fragile and may need to be replaced, but the object using the strategy is not fragile, because it depends only on the abstraction and not on the implementation details.
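A minimal Java sketch of this idea (the sorting strategies are hypothetical, chosen only for illustration): the Sorter is closed for modification, but open for extension through new implementations of the SortingStrategy abstraction.

```java
import java.util.Comparator;
import java.util.List;

// The robust abstraction: it captures the variability of the sorting policy.
interface SortingStrategy {
    void sort(List<Integer> items);
}

// Fragile, replaceable details: each concrete strategy may change or be replaced.
class AscendingSortStrategy implements SortingStrategy {
    @Override
    public void sort(List<Integer> items) {
        items.sort(Comparator.naturalOrder());
    }
}

class DescendingSortStrategy implements SortingStrategy {
    @Override
    public void sort(List<Integer> items) {
        items.sort(Comparator.reverseOrder());
    }
}

// The client depends only on the abstraction, so adding a new strategy
// does not require modifying it (open for extension, closed for modification).
class Sorter {
    private final SortingStrategy strategy;

    Sorter(SortingStrategy strategy) {
        this.strategy = strategy;
    }

    void sortItems(List<Integer> items) {
        strategy.sort(items);
    }
}
```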
Test-Driven Development (TDD)
In the case of Test-Driven Development (TDD), the fundamental idea is to write the tests before writing the code. This requires the definition of abstract interfaces for the classes being tested, with well-defined behavior. Since TDD forces developers to think in terms of interfaces and behaviors before writing the code, it actually enforces an early separation between abstractions and concrete implementation details. This allows the code to be easily refactored in the future, because the unit tests will catch any mistake resulting from changes in the implementation. In other words, the unit tests are less fragile to volatility (changes) because they are based on abstractions.
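As a small illustration, here is a hypothetical TDD sketch in Java (assuming JUnit 4): the test is written against the abstract behavior of a Counter interface, so later refactorings of the implementation do not require changes to the test.

```java
import static org.junit.Assert.assertEquals;
import org.junit.Test;

// The abstraction the test depends on: behavior, not implementation.
interface Counter {
    void increment();
    int value();
}

// The test is written first, in terms of the interface only.
public class CounterTest {
    @Test
    public void incrementIncreasesValueByOne() {
        Counter counter = new SimpleCounter();
        counter.increment();
        assertEquals(1, counter.value());
    }
}

// A minimal implementation, written after the test in a TDD workflow;
// it can later be refactored freely as long as the test still passes.
class SimpleCounter implements Counter {
    private int count = 0;

    @Override
    public void increment() {
        count++;
    }

    @Override
    public int value() {
        return count;
    }
}
```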
Inheritance Hierarchies and Frameworks
In OOP, an inheritance hierarchy organizes classes according to a generalization-specialization rationale: the classes near the root of the hierarchy should be more abstract than the classes on the leaves of the inheritance tree. Accordingly, the abstract classes should be less fragile than the concrete classes, meaning that they should be less subject to impact caused by volatility, such as changes in the requirements or changes in the implementation details.
In general, Frameworks use inheritance hierarchies to provide reusable code that is not application-specific. Frameworks define abstractions and the relationships among them. It is the responsibility of the software developer to extend the framework, deriving concrete subclasses and writing the implementation details to satisfy the requirements of a specific application. These implementation details are obviously fragile, but the framework itself is not, and may be reused without changes in many applications.
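A minimal Java sketch of this structure (a hypothetical ReportGenerator framework class, following the Template Method pattern): the abstract class defines the stable, reusable workflow, and the application developer supplies only the fragile details in a concrete subclass.

```java
// Framework side: abstract, application-independent, and stable.
abstract class ReportGenerator {

    // Reusable workflow defined once by the framework.
    public final String generate() {
        return header() + "\n" + body() + "\n" + footer();
    }

    protected String header() { return "=== REPORT ==="; }
    protected String footer() { return "=== END ==="; }

    // Hook to be filled in by the application developer.
    protected abstract String body();
}

// Application side: concrete, application-specific, and likely to change.
class SalesReportGenerator extends ReportGenerator {
    @Override
    protected String body() {
        return "Total sales: 42"; // fragile implementation detail
    }
}
```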
Conclusions
Software systems are subject to volatility in the form of changing requirements. Concrete implementation details are fragile and directly affected by these changes. In order to reduce the fragility of the system as a whole, it is important to adopt a Barbell Strategy and define these concrete details as the specialization of higher-level abstractions. Proper abstractions should be robust, surviving the impact of changes. Thus, while the details will change over time, the system structure and organization will be preserved because it is based on abstractions.
Please note that the examples discussed in this post (encapsulation, OCP, TDD, frameworks) mostly increase the robustness of software systems. Thus, in the original Triad we moved away from Fragility towards Robustness, but we still cannot claim to have reached true Antifragility. This would require a system that benefits from volatility, or, in our case, that benefits from changes. This will be the subject of a future post.
I like this approach of Anti-fragility or adding value by adding variability.
David Wheeler is often quoted as saying “All problems in computer science can be solved by another level of indirection.” On top of that, a valid comment is that indirection is abstraction, and the only problem that this approach cannot solve is: “Too Many Levels of Abstraction”. I like this quote and am using it while teaching the basics of programming and design.
But…
Quoting In the Conclusions: “In order to reduce the fragility of the system as a whole, it is important to adopt a Barbell Strategy and define these concrete details as the specialization of higher-level abstractions. Proper abstractions should be robust, surviving the impact of changes.”
Isn’t this what a good Software Architecture is supposed to achieve? As one of the myriad definitions goes: “A structure or structures of the system” — a stable structure that survives change over time and changes in implementation approaches, and is only enriched by adding new functionality and qualities?
And as usual, the real question is not “What?” but “How?” — How can we achieve the above-mentioned “proper abstractions”? What are the criteria, tools and techniques to define what is “proper” and to verify it?
And revisiting the topic of “Architecture vs. Design”, I’d rather say that Architecture is supposed to deal with abstractions and interfaces, while Design deals with data structures and algorithms. But this is probably a topic for another blog post.
Thanks for your comment, Alex. I agree completely that architecture should deal with abstractions and interfaces, and that the main quality attribute of a good architecture is resilience to change.
Regarding your question of “how” to achieve that, my personal answer is Adaptable Design Up Front:
https://effectivesoftwaredesign.com/adaptable-design-up-front/
Hayim,
Good to see someone taking the ideas of Nassim and applying them to software.
Can you please help me? I believe I understand the barbell strategy. You put 90% of your money in safe but low-return investments, leaving the last 10% to bet on high-risk/high-return ventures.
When applying this to software I can imagine adopting a traditional well understood architecture for 90% of the software to get the job done. Then take risks with new technologies on the remaining 10%.
However I’m struggling to understand your leap from barbell to abstraction and concrete implementation details. Can you explain further please?
Andy.
Andy, the idea is that abstractions are “safe” and implementation details are “speculative”. We don’t know for how long the current implementation details will survive the changing requirements, therefore they are fragile. But if we created abstractions to capture the potential variability in the details, these abstractions should be robust.
Some systems are not structured around good abstractions. There is too much coupling, not enough cohesion, direct dependencies among concrete classes. The introduction of good abstractions should reduce coupling and increase cohesion. If you study the approach of Design Patterns, this is exactly what they do: Most of them replace dependencies among concrete classes by relationships among abstractions.
I guess I see this the other way around. With any abstraction you are taking a bet that the changes you need to make are in the implementation. As long as this remains true, your cost of change is small. However, as soon as your abstraction is the thing that needs to change, you are going to incur a big cost. In the language of Taleb, you have a concave system, or more simply put, you are fragile.
So sure having a good abstraction is key, but knowing what a good abstraction is before the event is hard (I believe in some complex domains impossible).
For me the focus should shift from the cost of producing the software, outward towards the benefits the software can bring. I believe it is only then that you can create a convex system, one where the response to change delivers more value as the rate and size of change increases.
Having a “last responsible moment” attitude to architectural design helps in this respect. However I don’t believe anyone truly has the answers at the moment on how to create antifragile software.
I agree with you that sometimes the abstractions must be changed, but this does not contradict the fact that abstractions are more robust and stable than implementation details.
We must be pragmatic, not perfectionists. Building a system with good abstractions is the best we can do, even if it is not Black-Swan-proof.
The current state in the software industry is that many systems still suffer from strong coupling and weak cohesion. Structuring these systems around good abstractions is certainly a step in the right direction.
I certainly don’t disagree with that statement. A worthy cause indeed. However, it is claiming that this is anti-fragile that concerns me.
Correct. In the last paragraph of my article above I explain that the use of abstractions makes the system more robust, but not yet Antifragile.
OK that makes sense then. I’d agree 100% with that statement.
Thanks for writing this! I have been a follower of Taleb’s work, and when I read the first few chapters of AntiFragile I had a similar realization: it ain’t about agile, it’s about anti-fragile. Nice stuff!