As software developers, we know that our systems will evolve with time. We must understand the forces that drive this evolution, in order to design systems that are easily evolvable. Unfortunately, many programmers have misconceptions about the real drivers of software evolution and about their own ability to predict how a system will evolve.
In my many years of experience as a professional software developer, I have been lucky enough to work on many projects from the start. On other occasions I joined projects that were already mature and deployed to real users. In those projects I saw how difficult it is to maintain systems that were not planned for change. Below I condense my personal experience into four myths of software evolution.
Myth 1 – Changes are driven by improvements in implementation
Very often programmers believe that they will only have to change their code if they think of a better way to implement it, but this is not true. Most modifications are caused by new requirements originating from real users who are not satisfied with the current functionality. For this reason, it is always more important to invest time in understanding the user requirements than in optimizing your algorithms.
Reality: Changes are driven by better understanding of user requirements.
Myth 2 – If you did your best, you will not need to change it
Many programmers are perfectionists; they will try to do their best and only be happy with a piece of code if they feel they cannot improve it. However, any code can be improved when new user requirements arrive. The idea of a “best possible code that cannot be improved” is a dangerous illusion: it focuses on the quality of the implementation details, not on the satisfaction of user needs.
Reality: Even if you did your best, you can still improve it.
Myth 3 – After many improvements, the code will not be changed again
Some programmers believe that if they have already invested a considerable amount of time improving a piece of code, it has reached a “stable” state in which it will not be necessary to change it again. However, the truth is that the number of changes is always proportional to the usefulness of the code. When people use your code they discover the need for new functionalities, which is the main source of changes. Thus if your code has been changed a lot in the past, it is probable that it will need to be changed again.
Reality: The more a piece of code is used, the more it will need to be changed.
Myth 4 – You can predict which parts of the system will need to be changed
It is not possible to predict the new requirements originating from real users. The best we can do is to identify the components that play the most central role in the functionality of the system, and design them to easily accommodate future changes in requirements. Of course, the system may also need to change to improve its performance, but this should be done only after profiling real usage and identifying the bottlenecks.
Reality: Evolution cannot be predicted. The users will always be able to surprise us.
The conclusion from the myths above is that when you know that a piece of code is important and will have many users, the best you can do is to design for maintainability. In other words, if a component is likely to be changed, it is more important to design it to be easily modified than to implement it perfectly using the most efficient algorithms.
Now you can ask two questions:
1) How do I design for maintainability? I think the answer is to focus on the Separation of Concerns, keeping coupling low and cohesion high.
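A minimal sketch of what this advice can look like in practice. All the names here (ReportData, Formatter, publish, and the hypothetical report feature itself) are my own illustration, not from the post: the data structure knows nothing about formatting, each formatter does one job (high cohesion), and publish() depends only on a small interface (low coupling), so a new output format is an isolated change.

```python
# Hypothetical report feature, split so each class has one concern.
from dataclasses import dataclass
from typing import Protocol


@dataclass
class ReportData:
    """Holds report content; knows nothing about presentation."""
    title: str
    rows: list[tuple[str, int]]


class Formatter(Protocol):
    """The only thing publish() needs to know about a formatter."""
    def render(self, data: ReportData) -> str: ...


class TextFormatter:
    def render(self, data: ReportData) -> str:
        lines = [data.title] + [f"{name}: {count}" for name, count in data.rows]
        return "\n".join(lines)


class HtmlFormatter:
    def render(self, data: ReportData) -> str:
        body = "".join(f"<li>{name}: {count}</li>" for name, count in data.rows)
        return f"<h1>{data.title}</h1><ul>{body}</ul>"


def publish(data: ReportData, formatter: Formatter) -> str:
    # Adding a new output format means adding one new class;
    # neither ReportData nor publish() has to change.
    return formatter.render(data)
```

The point is not the specific classes but the shape: when a new user requirement arrives (say, a CSV export), the change is confined to one new, small unit.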
2) Which methodology makes evolution easier? I think that Iterative Software Development is the best way to handle constant software evolution.
What do you think about these four myths of software evolution? Do they fit your own experience? Please share your comments below.
I agree with most of what you said, but Myth 4 may not be very serious. Categorization and reuse of domain, boundary, control and design classes over time do provide a basis to predict, to some extent, which parts of the system will change.
Another factor that enables better management of software development and these myths is “Creating / Updating / Expanding a Repository of ALL VALIDATED artifacts and REUSING as many of them as possible”. This can be done within a company / domain and also across them in some cooperative / commercial arrangement. Write to me for details.
Hi Putcha, thanks for your comments. You are right that this post is not scientific and was intended to be a bit funny also. On myth 4 I wanted to say that the users will always be able to surprise us. I will add this to the original post.
Great post…myths 1-3 confirm something I’ve always told those working with me: perfection is unattainable, because the definition of perfect changes. Optimal for the current context and flexible are the preferred goals.
Thanks for your comment, Gene. I’ve been reading your blog as well, and I really enjoy it!
In support of myth 1 “Changes are driven by improvements in implementation”…
Have you ever witnessed the conversation where the user says “it should do xxx” and the developer says “it doesn’t work like that”? It indicates a fundamental difference in point of view: one from “what I need” and one from “how the code works”.
In support of myth 2 “If you did your best, you will not need to change it”…
Have you ever developed some code and, 6 months later when you open it up, been aghast – @#$%! Terrible variable naming, code structure, etc. Perfection is reserved for concepts only – never the code.
A slightly different spin on myth 3, “After many improvements, the code will not be changed again”…
Perhaps the developer is thinking “After many improvements, I WILL NOT change the code again”, in line with myths 1 and 2. But really this is an alarm bell that says “time to refactor”.
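That refactoring alarm bell can be made concrete with a tiny sketch (the function and its names are hypothetical, chosen only for illustration): a function that mixes parsing, validation, and formatting absorbs every new requirement in one tangled place, while the refactored version confines each future change to one small function.

```python
# Before: one function mixes parsing, validation, and formatting,
# so every new requirement forces edits to the same tangled code.
def describe_user_v1(raw: str) -> str:
    parts = raw.split(",")
    name = parts[0].strip().title()
    age = int(parts[1])
    if age < 0:
        raise ValueError("age must be non-negative")
    return f"{name} ({age})"


# After: each step has one job, so the next change (a new input
# format, a new validation rule, a new output style) touches one place.
def parse_user(raw: str) -> tuple[str, int]:
    name, age = raw.split(",")
    return name.strip().title(), int(age)


def validate_age(age: int) -> None:
    if age < 0:
        raise ValueError("age must be non-negative")


def describe_user(raw: str) -> str:
    name, age = parse_user(raw)
    validate_age(age)
    return f"{name} ({age})"
```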
In support of myth 4 “You can predict which parts of the system will need to be changed”…
Ever designed a system in layers, expecting that only the upper layers would need to change, only to find that change has a ripple effect through the architecture and even low-level routines often really need to be changed too? Sometimes we do the right thing and sometimes we make a workaround.
What I absolutely agree with is the concluding remark: design for maintainability. This carries the most value. Most developers still think expertise means “clever code”. Wrong. Unless you are never going to get sick of being asked to maintain a piece of software your whole career, you had better design for maintainability. And use comments that assist!!! Pet hate: comments that merely explain what the code is doing. It ranks up there with Help for data entry screens that explains you can scroll a list, enter a reference number in the field labelled Reference, and push the Submit button when you’re done. Pleeeeaasee. Explain the code’s mission, its purpose, the effects of change, etc., not what instructions are used – I can see that!!!
Thanks for your comment, Paul! The examples you’ve provided are very illustrative.
I agree – with the “myths” – but let me share my view of reality.
Reality: Changes are driven by better understanding of user requirements.
to an extent true – but in the long term the change comes from a CHANGE in the requirements (that’s why it’s called evolution)
The term “requirements” is often used to mean a specification of what the system will do – but this is not the same as defining what the user requires – i.e. what he needs, what is missing in his life that he is willing to pay money to solve. In other words, the needs are a “hole”, and the system that you build (i.e. the one you defined the requirements for) is the cork that will fill that hole. Think of a keg of beer with a hole in the side that needs to be stopped up by a cork.
So why does change happen? For many reasons:
1) When you build a system, even if you could fully manage to understand the requirements, you need to compromise, decide trade-offs, make assumptions, and live with the limitations of technology, schedule, manpower, company politics and other pragmatics. The cork never really fills the hole perfectly – in the best case it’s flexible and you can use a hammer to adjust it until the hole is plugged – that’s what we call a patch.
2) The world changes – as computers get faster and memory gets cheaper and larger, we can do more, so our expectations of the world change. A simple example is how airplane simulators evolved: as the graphics improved, the simulations got better. Likewise, it became cheaper to make beer kegs from metal, and screw-on caps became a better solution than corks.
3) Sometimes the system impacts the world of the user to such an extent that the user’s requirements CHANGE, and the system needs to change to remain useful. When word processors were invented they initially mimicked typewriters – but then someone decided that we could use backspace instead of “Typex” – then someone had a brainwave and discovered that with various keys we can scroll, make changes to the text in other places, and cut/copy/paste – look how far Word is from typewriters.
Following our analogy – metal Beer kegs were too heavy to lift, so the shape needed to be changed, and handles were added.
That’s partly behind your observation that the more code is used, the more it will need to be changed.
“Reality: Even if you did your best, you can still improve it”
You’re not improving the code; at the simplest level you are improving your understanding of the requirements – but what you’re really improving is the way your system meets the changing needs of its user.
Another reality: in the real world you rarely improve code (except perhaps to make it more efficient) – you can’t afford it – you fix bugs and you add new features to keep up with the competition.
(Yes you can refactor – if you have time and resources).
4) With all these requirement changes, and the need to keep up with the changes quickly and economically, the architecture breaks down – all that lovely low coupling and high cohesion degrades, bugs are introduced, and more change is required. You bring in more programmers to keep up the pace – they don’t understand the code (even with all those lovely comments) – so they patch, and the bugs keep coming.
Inevitably, it eventually becomes more economical to kill the system, create a brand new one, and start all over again.
You conclude with “the best you can do is to design for maintainability”
I’ll conclude with
– you design the system to allow “evolution”
– You “maintain” the clean structure of the system – the low coupling and high cohesion
Google “Lehman’s Laws of Software Evolution” for more on this.
Thank you for your comments, Yonatan, I appreciate your insights. The subject of software evolution certainly deserves more attention from the developers.