In his book “Antifragile: Things That Gain From Disorder”, Nassim Taleb introduces the concept of Antifragility, the opposite of Fragility. Antifragile things are able to benefit from volatility. In a previous post, I explained how, in the field of software development, the main source of volatility is changes in requirements, and how it is possible to make software systems more robust through the appropriate use of abstractions.
However, robustness is not the same as Antifragility. A robust system may survive the impact of changes, but our goal is to develop systems that actually benefit and improve as a consequence of these changes. In this article we will analyze a strategy that may be adopted to develop such Antifragile systems.
It is common knowledge that systems should be divided into components. The reaction to a change in the environment should only affect a few components, and not the entire system. Thus, component-based systems are more robust than monolithic systems.
However, to be Antifragile, a system must be able to benefit from changes in the environment. This Antifragility may be achieved when several systems share the same components. Thus, when a specific component is improved, all systems using this component can benefit from this improvement.
One concrete example is AA batteries. For many years AA batteries were the standard for millions of different devices. Then, energy-hungry devices drove the development of rechargeable AA batteries. Now all devices can benefit from this improvement, and in this sense they are Antifragile.
Component-Based Software Development
Since the earliest years of software development it has been understood that systems should be divided into components, and software engineers have tried to identify the right principles for performing this division, which was once called modularity. Metrics such as coupling and cohesion, and principles such as information hiding, were defined to guide the decomposition of large software systems into modules.
Over the decades, advances in programming languages and software engineering practices were accompanied by better ways to perform this decomposition. Object-oriented programming languages allowed software developers to create classes of objects and organize them into hierarchies. Then larger distributed systems were built on Object Request Broker (ORB) platforms. More recently, Service-Oriented Architectures (SOA) became popular, and the current trend is Microservices.
In all the examples above, the evolution was based on the same original principles: low coupling, high cohesion, and clear, abstract interfaces. All of them contributed to making software systems more robust. It is easier to change the implementation of a class than to modify a data structure and its associated procedures. Services can be deployed independently of each other, without disrupting the rest of the system.
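As a minimal sketch of this principle (all names here are hypothetical, chosen only for illustration), an abstract interface lets the implementation of a component change without touching any of its callers:

```python
from abc import ABC, abstractmethod

# Hypothetical example: an abstract storage interface shared by many systems.
class Storage(ABC):
    @abstractmethod
    def save(self, key: str, value: str) -> None: ...

    @abstractmethod
    def load(self, key: str) -> str: ...

class InMemoryStorage(Storage):
    """One concrete implementation; callers depend only on Storage."""
    def __init__(self):
        self._data = {}

    def save(self, key: str, value: str) -> None:
        self._data[key] = value

    def load(self, key: str) -> str:
        return self._data[key]

def archive_report(storage: Storage, report: str) -> None:
    # Coupled only to the abstract interface, not to any implementation,
    # so swapping InMemoryStorage for a database-backed class costs nothing here.
    storage.save("report", report)
```

Any number of systems can depend on `Storage`; replacing or improving the concrete class behind it never forces a change in `archive_report`.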
The next step, going beyond robustness to develop Antifragile systems, will happen when many different systems share the same components. Then, if a particular component is improved to satisfy the requirements of one specific application, all the systems can enjoy the improved component at no additional cost. This is true Antifragility.
Consider for example a system that is based on a SOA architecture. At any point in time it is possible to deploy an enhanced version of one of the services without affecting the other ones, and thus the system is very robust. But if there are several systems based on shared services, each time one of these services is improved all the systems will be able to immediately benefit from the improvement. Thus while each system is robust, the collection of systems is Antifragile, because they benefit from the same changes at no added cost.
Another modern application of this approach is the idea of a Software Product Line (SPL), in which a series of software products are based on the same components and differ only in their configuration. In the case of SPLs there may be several coexisting versions of each component. Each time a new component version is created, all the products using previous versions may benefit through simple configuration updates.
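This SPL idea can be sketched roughly as follows (a toy example with hypothetical component names, not a real SPL framework): components are registered by name and version, and each product is nothing more than a configuration choosing versions.

```python
# Hypothetical SPL sketch: shared components keyed by (name, version).

def spell_check_v1(text: str) -> str:
    return text.replace("teh", "the")

def spell_check_v2(text: str) -> str:
    # Improved version: also fixes another common typo.
    return text.replace("teh", "the").replace("recieve", "receive")

COMPONENTS = {
    ("spell_check", "1.0"): spell_check_v1,
    ("spell_check", "2.0"): spell_check_v2,
}

# Each product differs only in configuration. Moving a product to an
# improved component version is a one-line configuration change.
PRODUCT_CONFIG = {
    "editor_basic": {"spell_check": "1.0"},
    "editor_pro":   {"spell_check": "2.0"},
}

def run_product(product: str, text: str) -> str:
    version = PRODUCT_CONFIG[product]["spell_check"]
    return COMPONENTS[("spell_check", version)](text)
```

When version 2.0 is developed for one product, every other product in the line can adopt it by editing its entry in `PRODUCT_CONFIG`, which is the "improvement at no additional cost" described above.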
A simple strategy for Antifragility is: build component-based systems with shared components. When one of the shared components is improved, all the systems benefit from the improvement at no additional cost.
What do you think? Have you experienced the benefits of building several systems on top of the same components? Please share your experience with us in the comments below.
Does component-based development make the program slow, because of the large number of classes?
Not necessarily. In systems with strict performance constraints there are architectural solutions that can be adopted without sacrificing modularity and separation of concerns, such as caches, in-memory databases and server replication.
Thanks Hayim. I am new to component-based development. Could you recommend some books on the subject?
Good to see that this truly amazing book is being discussed, and also applied. In my opinion, though, it may not be Taleb’s view that a system in which improvements in one part automatically improve another can be termed ‘Antifragile’. For something to be so termed, it should benefit from chaos, not from improvement in a sub-system. Taleb only goes as far as recognising which systems are antifragile, not showing how to make them so. Of course, if that can be done, it will surely be wonderful.
Thanks for your comment, Alok. If you look at a single system, the ease of modifying components makes it robust. But if you consider a collection of many systems with shared components, together they become Antifragile, because they can benefit from changes at no additional cost. This is in accordance with Taleb’s idea of an Antifragile population of fragile individuals.
I still don’t see how this is Antifragile. To use your own words, to “benefit from changes at no cost” is not the same as to benefit from chaos and volatility. A truly Antifragile computer system would, for example, benefit from system crashes, programming errors and random events.
In my opinion, computer systems that are compiled and not designed to change while running cannot be Antifragile.
I think we will see Antifragile behaviour in systems based on neural networks and other branches of Artificial Intelligence. Such systems, by their nature, have the ability to change their behaviour and internal structure based on external events.
You define volatility as “crashes, programming errors and random events”, but I define volatility as “changes in requirements”, so we are clearly not talking about the same thing. This is why you need adaptability at run-time, while I look for adaptability as a design attribute.
I reviewed this book before (http://bahmutov.calepin.co/review-antifragile-by-nassim-nicholas-taleb.html) and how it applies to software development. In a nutshell: you want to open the system to low levels of stress. Module reuse is one way to stress a module; another is open-sourcing the module. By improving the software in response to feedback from low-level stress, you make it more likely to withstand unpredictable, low-probability, high-stress events.
Well, this begins to sound like hand-implemented genetic algorithms. A contradiction in terms.
In the late 80s we were all taken with the modularity and robust nature of X Windows. Trouble was that any system that ran it ran like a pig. We waited a number of years for systems fast and large enough to run it well.
This is really no different: software problems decomposed to the point that they become truly generic and modular. If they are really that generic, they will run like a pig.
Sorry, we still have that nasty hardware layer that will eventually bite you. Oh well, if you never venture below the system-call interface, then I’m sure it looks like a fine idea.
Like driverless cars, I suspect this is a little further off than you might imagine.
Yo Hommie! Remember the 90s? The DLLs? The OLEs? The COMs? They were all component-based software engineering: DLLs were supposed to be updated over time to give you better functionality and better implementations too. But where did it lead? DLL Hell! Straight out of Compton! Shared components (services or local libraries) can and will create unwanted dependencies, sometimes in the worst possible ways. History is repeating now with SOA, only it is much worse this time around…
Unlike the 90s, because functionality in services cannot be ‘statically linked’ into your app and is possibly under the control of another party, you can get really fragile systems that break at the most unexpected times. The metadata of SOA can indeed define the interface syntactically, but not semantically. Perhaps we should have a hybrid of TDD and SOA enforced at a much broader level. Meaning, you publish not only your interface in the form of metadata, but also a bunch of tests that are used as invariants and specify the behaviour of your service. In this way I can be sure that a service whose behaviour was to return a list of, say, umm… friends doesn’t suddenly become a service that returns a list of friends who are using your app.
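The commenter's idea of publishing behavioural invariants alongside the interface could be sketched like this (a hypothetical toy, with made-up service and data, not a real contract-testing framework):

```python
# Hypothetical sketch: a service is published together with contract tests
# that pin down its behaviour, not just its syntactic interface.

def friends_service_v1(user: str) -> list:
    # Original behaviour: return all friends of the user.
    return ["alice", "bob"]

# Published contract: invariants that every future version must satisfy.
CONTRACT_TESTS = [
    lambda svc: isinstance(svc("carol"), list),
    # A known friend must appear, even if she does not use the app:
    lambda svc: "alice" in svc("carol"),
]

def satisfies_contract(service) -> bool:
    """Run all published invariants against a candidate service version."""
    return all(test(service) for test in CONTRACT_TESTS)
```

A new service version that silently narrowed the result (say, returning only friends who use the app) would fail the published contract before any consumer was broken by it.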
DLL Hell was not due to the use of independently upgradeable components. It was due to the inability to use multiple versions of the same component across multiple applications on one system.
When an application can specify a component by name + version, it can upgrade to a more recent version at its own pace, presumably after proper integration testing. This is what we see in Un*x-based systems such as Linux.
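A rough sketch of this point (hypothetical names throughout): if the component registry is keyed by name and version, two applications on the same system can resolve different versions side by side, which is exactly what classic DLL Hell prevented.

```python
import json

# Hypothetical registry of shared components keyed by (name, version).
REGISTRY = {
    ("codec", "1.0"): lambda d: str(d),         # old: Python repr-style output
    ("codec", "2.0"): lambda d: json.dumps(d),  # improved: real JSON output
}

# Each application pins the version it has actually integration-tested.
APP_MANIFESTS = {
    "billing": {"codec": "1.0"},  # still on the old version
    "reports": {"codec": "2.0"},  # already upgraded, at its own pace
}

def resolve(app: str, component: str):
    """Look up the component version pinned by the given application."""
    version = APP_MANIFESTS[app][component]
    return REGISTRY[(component, version)]
```

Both versions coexist; upgrading `billing` is a deliberate manifest change rather than a system-wide event that breaks every other application.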
While I agree with some of what you stated, I must say that antifragility is not always good, or to put it another way, anti-fragile systems wait for chaos to prosper. They cannot reach their best performance unless there is disruption and havoc around them. That is the definition of anti-fragile: e.g. ISIS and terrorism are anti-fragile, since they need anarchy and chaos in a region to settle down; crime is anti-fragile, since the more disorder there is in a society, the more successful the gangs will be, and the list goes on and on. So systems are better off being robust than being anti-fragile.
Yea.. What about the case when an improvement to one shared component causes bugs in many others? I remember upgrading a common component with a new version of a VFS library dependency. Later this turned out to introduce a security threat in 3 different components developed by 3 different teams. Often maintaining such libraries requires extraordinary skill and experience, and a lot of caution. Refactoring and developing new features in such a common component is very limited and has to be performed carefully. After some time it becomes impossible, and nobody is brave (or stupid) enough to touch it. So we either decide to live with its quirks, or start writing new components that solve our current problems. I’m not against abstraction and reusability, these are the basic rules of software development, but I wouldn’t call them antifragile. In terms of volatility there are many upsides but also equal downsides.
It would be antifragile if one could buy an insurance policy against bugs in the code or changing requirements.. Seems maybe only hacking is truly antifragile, because the more complex the software is, the more ways there are to break it.