My first computer, 30 years ago…

So we have reached the year 2013, which reminds me of my first computer, which I got in 1983, 30 years ago. It was a Brazilian CP-500, compatible with the original TRS-80 Model III. It had 64K of RAM, two 5¼-inch floppy disk drives and a monochrome monitor. Each floppy disk could store 180K per side, which was enough to save several programs.

I was 13 when my father bought me this computer. Of course, it was a gift for my Bar-Mitzvah, and I was one of the first boys in my class to have a computer at home. I was very excited, but did not have any software to run on it. So I did what I could: I read the two manuals that came with the computer, and tried all the commands.

The first manual was for the operating system, the TRS-DOS. I learned everything about manipulating files on the disk. This was useful, but not very fun.

The second manual was for the BASIC programming language. I immediately started writing my own tiny programs, copying the examples from the book and making small modifications. This was much more fun, but still very limited.

Things got more interesting when one of my friends gave me a book with printed listings of BASIC source code for games. I tried to copy them, typing line after line, but got many errors. I soon understood that these games were written for the Sinclair computer, which was not compatible with mine.

Then I decided to fix these games so they would run on my computer. I remember spending many hours experimenting with the different instructions, especially those related to the graphics display, which was completely different on the two computers.

In the end I was able to convert all the games and had them running properly on my machine. It was a double win: now I had games to play, and I had learned to program in BASIC. And this was the very start of my career as a software developer.

I was very effective in my early years as a teenage programmer. I was extremely curious and looking for challenges. My first computer was a great source of new things to learn and problems to solve, and I had lots of free time.

Today I’m still curious and looking for challenges, but of course I no longer have the free time I had as a teenager. Thus I’m very happy to still be learning new things and solving interesting problems at work. Currently that means Recommender Systems for millions of users instead of small games in BASIC.

I feel lucky for having this passion for programming, which is already 30 years old. What can be better than your hobby becoming your profession? Is there anything nicer than being paid to do what you love?

Posted in Programming | Tagged | 4 Comments

IASA IL meeting with Prof. Rick Kazman

The International Association of Software Architects (IASA) in Israel organized a special event with the participation of Prof. Rick Kazman, who talked about “The Metropolis Model for Software Development”.

Dr. Rick Kazman is a Professor at the University of Hawaii and a Visiting Scientist at the Software Engineering Institute. His primary research interests are software architecture, design and analysis tools, software visualization, and software engineering economics. He is the author of over 100 papers, and co-author of several books, including “Software Architecture in Practice” and “Evaluating Software Architectures: Methods and Case Studies”. Kazman was one of the creators of the SAAM (Software Architecture Analysis Method) and the ATAM (Architecture Tradeoff Analysis Method). Dr. Kazman received B.A. and M.Math degrees from the University of Waterloo, an M.A. from York University, and a Ph.D. from Carnegie Mellon University.

The Metropolis Model for Software Development

Abstract

We are in the midst of a radical transformation in how we create our information environment. This change—the rise of large-scale cooperative efforts, peer production of information—is at the heart of the open-source movement, but open source is only one example of how society is restructuring around new models of production and consumption. This change is affecting not only our core software platforms, but every domain of information and cultural production. The networked information environment has dramatically transformed the marketplace, creating new modes and opportunities for how we make and exchange information. “Crowdsourcing” is now used for creation in the arts, in basic research, and in retail business. These changes have been society-transforming. So how can we prepare for, analyze, and manage projects in a crowdsourcing world? Existing software development models are of little help here. These older models all contain a “closed world” assumption: projects have dedicated finite resources, management can “manage” these resources, requirements can be known, software is developed, tested, and released in planned increments. However, these assumptions break in a crowdsourced world. In this talk we will present principles on which a new system development model must be based. We call these principles the Metropolis Model.

These are the original slides of Prof. Kazman’s presentation:

Here is the video of the entire talk and the Q&A session:

This event was hosted by LivePerson in Raanana.

The audience was composed of experienced software professionals, who clearly identified with the subject and asked many questions.

To participate in our future meetings, please join the IASA IL group on LinkedIn.

Feel free to share your comments below. Thanks!

Posted in IASA Israel, Software Architecture | Tagged , | Leave a comment

On Information Hiding and Encapsulation

This month I participated in IBM Haifa’s Programming Languages and Software Engineering (PLSE) Seminar. There I had the opportunity to have lunch with David Parnas, one of the world pioneers in the field of Software Engineering. Parnas is the father of Information Hiding, a term he coined and which became popular through his seminal paper “On the Criteria to Be Used in Decomposing Systems into Modules“, published in 1972.

My personal conversation with Parnas at lunch was mostly about places to visit in Israel, but during the seminar it was great to hear him talking about the challenges of modern software development. He expressed his view that software developers must have a solid education on the principles of software engineering, and focus less on particular technologies which will soon become obsolete. This is my motivation to write this post about Information Hiding and Encapsulation.

In the conclusion of his above-mentioned paper, Parnas writes: “… it is almost always incorrect to begin the decomposition of a system into modules on the basis of a flowchart. We propose instead that one begins with a list of difficult design decisions or design decisions which are likely to change. Each module is then designed to hide such a decision from the others. Since, in most cases, design decisions transcend time of execution, modules will not correspond to steps in the processing.”

This idea that modules should not “correspond to steps in the processing” can be considered one of the principles of object-oriented design. In OOD, each class has a responsibility, and different classes collaborate with each other, as proposed by the class-responsibility-collaboration (CRC) approach. The methods provided by a class may be used at different times, in different parts of a system. Thus it is clear today that in OOD we do not decompose the system according to the steps in the processing, but in 1972 this was a revolutionary idea.

In another section of his paper, Parnas describes the Information Hiding decomposition criterion: “Every module … is characterized by its knowledge of a design decision which it hides from all others. Its interface or definition was chosen to reveal as little as possible about its inner workings.”

The ideas expressed in these two lines are the essence of what today is called encapsulation. We know that in OOP each class should have a clear and well-defined interface that hides its implementation details. Modern object-oriented languages provide constructs which allow the programmer to define the visibility of members of a class, such as public or private.
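As a concrete illustration, here is a minimal sketch in Python (the class, its methods and the banking scenario are hypothetical, invented just for this example). The underscore-prefixed attributes are the hidden “inner workings”; callers interact only through the public interface, so the internal representation can change without affecting them:

```python
class BankAccount:
    """A class whose interface reveals as little as possible about its internals."""

    def __init__(self, initial_balance=0):
        self._balance = initial_balance  # implementation detail, hidden from callers
        self._transactions = []          # implementation detail: how history is kept

    def deposit(self, amount):
        if amount <= 0:
            raise ValueError("deposit must be positive")
        self._balance += amount
        self._transactions.append(("deposit", amount))

    def withdraw(self, amount):
        if amount > self._balance:
            raise ValueError("insufficient funds")
        self._balance -= amount
        self._transactions.append(("withdraw", amount))

    @property
    def balance(self):
        """Read-only view: callers cannot set the balance directly."""
        return self._balance


account = BankAccount(100)
account.deposit(50)
account.withdraw(30)
print(account.balance)  # → 120
```

If we later decided to store transactions in a database instead of a list, no caller would need to change, which is exactly the benefit Parnas describes.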

When a programmer uses encapsulation to hide the “inner workings” of a class, he is ensuring that this class will be:

  • Easier to maintain, because changes will not be propagated to other parts of the system.
  • Easier to extend, by adding new functionality while preserving the original interface.
  • Easier to reuse, since this class will not depend on the details of the rest of the system.

These benefits of encapsulation are well-understood by programmers today. It is natural to hide the details of a class when using object-oriented programming languages. But in the past people did not have appropriate abstractions to implement it.

Because Information Hiding is a principle, it was true back in 1972, it is true today, and it will still be true in the future. It could be applied in technologies that are now obsolete, it is an essential part of modern object-oriented technologies, and it will certainly continue to play an important role in future technologies.

Software Developers must learn about the principles of Software Engineering. This is a principle. Specific technologies are good for the implementation details.

Posted in OOD, OOP, Software Architecture | Tagged , , | 9 Comments

Three rules to keep your sanity: Avoiding trouble on Facebook

Facebook is a wonderful tool for communication. It allows you to keep in touch with people with whom you would otherwise have completely lost contact. It’s great to get news from old friends and to see pictures from relatives who live far away.

However, Facebook may also be very dangerous. It may cause you to waste your time and distract you from tasks that should have much higher priority. If you are not careful, you may also have some very unpleasant experiences on Facebook.

Summarizing my own experience, here are three simple rules to avoid trouble and keep your sanity on Facebook.

Never post anything that may be offensive to someone

When posting on Facebook, you should behave as if you were speaking to an audience. Imagine that you are at a wedding and someone has invited you to make a toast. Of course you would never say anything that might offend someone.

Don’t behave as if it’s your own party, assuming that you are the host and you can say whatever you want. It’s true that you are posting on your own wall, but your posts will appear on the news feeds of your contacts. Did your contacts ask your opinion?

The more contacts you have, the harder it will be not to offend someone. Among your friends there may be liberals and conservatives, homosexuals and homophobes, Jews and Muslims. And if you have many such “friends”, you most probably don’t know them well enough to be sure whether they will be offended.

Never talk to strangers

Do you talk to strangers in the street? Probably not. But on Facebook people frequently have discussions with strangers when commenting on a mutual friend’s post. Unfortunately, these conversations may easily become unpleasant.

First of all, when talking to a stranger you can never predict how he/she will react. You may be surprised to discover that this particular person holds very radical views on the subject. Worse yet, the stranger may be emotionally unstable and become aggressive.

But you may claim that “friends of friends” are not really strangers, right? Wrong. You have no idea what kind of friend you are talking to. He/she may be just an old acquaintance of your friend, or a co-worker. Be careful.

Never post negative comments or criticisms

If you don’t have something positive to say, it’s better not to say anything. Some people simply don’t know how to take criticism, even when you make an effort to be polite. And on Facebook this is much worse, because your negative comments and criticisms are public.

For most people, the main motivation for posting on Facebook is to collect positive feedback. They hope to get as many “likes” as possible, and to have friends tell them how smart or funny they are.

If you make a negative comment or criticism, you are spoiling everything, and may make your friend very upset. There is a reason Facebook does not have a “dislike” button. People go to social networks in search of support and reinforcement.

In summary

In summary, if you want to avoid trouble and keep your sanity on Facebook, you must make an extreme effort to always be the nice guy. As nice as possible. Good luck!

Posted in Social Networks | Tagged | 2 Comments

The Psychology of Reviews: Distinction Bias, Evaluability Hypothesis and the Framing Effect

Design Reviews are one of the most important activities in the software development process. If a bad design is approved and implemented, it is very expensive to correct that afterwards. Therefore, we want to have high confidence in our decisions when we are reviewing a proposed design.

Suppose someone is presenting us with the design for a server-side system, whose non-functional attributes are described as follows:

The server is stateless and session data is stored in a shared database. Each instance of the server will hold a cache for 10,000 active sessions, being able to handle 400 transactions per second (TPS), with average latency of 150 milliseconds. The service is expected to be up 99% of the time.

How does that sound? If you have some experience with server-side development, these numbers probably appear quite standard. If you are reviewing such a design, the two main questions are:

  • Are these performance attributes reasonable?
  • Do they satisfy the requirement of the Service Level Agreement (SLA)?

If your answer to both questions is affirmative, then you would probably approve it.

But what would be your opinion about a service described as follows:

The server is stateless and session data is stored in a shared database. Each instance of the server will hold a cache for 15,000 active sessions, being able to handle 300 transactions per second (TPS), with average latency of 100 milliseconds. The service is expected to be up 99.9% of the time.

This description sounds very similar to the previous one, right? Actually, they are so similar that if you approved the first one you would approve this one as well, and vice-versa. But when we compare the two descriptions, some very interesting questions arise. To help the comparison, I will present the two options in a table:

Attribute                 Option 1   Option 2
Cache Size (# sessions)   10,000     15,000
Throughput (TPS)          400        300
Latency (ms)              150        100
Robustness (up-time)      99%        99.9%

Here are some of the questions raised by comparing these two options:

  • What is the expected number of active sessions at any point in time? If this number is 30,000, then the first option would require 3 instances of the server, while the second option would need only two.
  • What is the expected TPS per active session? In the first option the ratio is 400/10,000 = 4/100, while in the second option it is 300/15,000 = 2/100. So the ratio in the first option is twice as large as in the second option.
  • What is more important, the latency or the throughput? There is a clear trade-off here, because Option 1 has better throughput while Option 2 has better latency.
  • How do you compare the Robustness of the two options? If you think about up-time, Option 2 offers a 0.9 percentage-point improvement: from 99% to 99.9%. But if you think about down-time, Option 1 could be down 1% of the time, while Option 2 can only be down 0.1% of the time, so you could say the second option is ten times better. In practice, over an entire year, the first option could be down for about 3 days and 15 hours, while the second could be down for less than 9 hours.
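The comparisons above are easy to verify with a few lines of Python. The figures come from the table; the expected load of 30,000 active sessions is the assumption used in the first bullet:

```python
import math

HOURS_PER_YEAR = 365 * 24  # 8760 hours

# Non-functional attributes from the comparison table above.
options = {
    "Option 1": {"cache": 10_000, "tps": 400, "uptime": 0.99},
    "Option 2": {"cache": 15_000, "tps": 300, "uptime": 0.999},
}
expected_sessions = 30_000  # assumed total load, as in the first bullet

for name, o in options.items():
    instances = math.ceil(expected_sessions / o["cache"])   # servers needed for the load
    tps_per_session = o["tps"] / o["cache"]                 # throughput per cached session
    downtime_hours = (1 - o["uptime"]) * HOURS_PER_YEAR     # yearly down-time
    print(f"{name}: {instances} instances, "
          f"{tps_per_session:.2f} TPS/session, "
          f"{downtime_hours:.1f} hours down per year")
# → Option 1: 3 instances, 0.04 TPS/session, 87.6 hours down per year
# → Option 2: 2 instances, 0.02 TPS/session, 8.8 hours down per year
```

Note how the side-by-side computation surfaces trade-offs (fewer servers versus lower per-session throughput) that neither description reveals on its own.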

I believe that it is clear in this example that comparing two options is different from reviewing each option separately. In other words, the reasoning involved in the review of a single option is not the same as in the combined analysis. This is not a special case, and actually these differences have been studied by psychologists who reached interesting conclusions and defined some general phenomena, such as:

  • The Distinction Bias
  • The Evaluability Hypothesis
  • The Framing Effect

Now let’s take a look at each one of these theories and observe how they relate to our concrete design review example.

Distinction Bias

The Distinction Bias is an important concept that was studied in the context of Decision Theory. This is a definition from Wikipedia:

“Distinction bias is the tendency to view two options as more dissimilar when evaluating them simultaneously than when evaluating them separately.”

In our example above, the two design options initially looked very similar, but when they were compared side-by-side their differences became more apparent.

There are situations in which the Distinction Bias may cause bad choices. For example, this happens when we decide to pay more for a product that has a feature which we don’t really need. But in the case of design reviews, I believe that the Distinction Bias is normally beneficial, because the people doing the reviews are professionals that should understand the real meaning and importance of the diverse attributes.

Evaluability Hypothesis

The Evaluability Hypothesis describes another phenomenon that was observed by psychologists when studying how people choose between options. The basic observation is that there are attributes that are easy to evaluate in isolation, while others are hard. These hard-to-evaluate attributes need to be compared to something else to be understood.

When people assess options independently, they tend to focus on the easy-to-evaluate attributes. But when comparing two options they can take the hard-to-evaluate attributes into consideration. This may cause an Evaluation Reversal: the option ranked as the best in isolated assessments may be considered the second-best in a joint assessment.

In our previous example, if we asked software engineers to rank the two options in isolation, it is likely that the second option would be considered the best. It was better in three of the four attributes: bigger cache, lower latency and higher up-time. But there is an attribute that may be hard to evaluate in isolation: the TPS per active session, that is, the ratio of throughput to cache size. As we saw, it is much higher in the first option than in the second, and thus in a joint evaluation the first option could be chosen.

Framing Effect

The Framing Effect is a cognitive bias in which people make different decisions when presented with equivalent options depending on the way these options are expressed. More specifically:

“People react differently to a particular choice depending on whether it is presented as a loss or as a gain.”

In our previous example, we observed this phenomenon when analyzing the robustness of the two design options, which can be expressed either as a small increase in the up-time or a big decrease in the down-time.

Possible questions that could cause this Framing Effect would be:

  • Are you willing to pay $X to increase the up-time from 99% to 99.9%?
  • Are you willing to pay $X to decrease the down-time from 1% to 0.1%?

Or also:

  • How do you rate a system that is up 99% of the time?
  • How do you rate a system that is down 1% of the time?

In both cases the questions are logically equivalent. But if you present them to groups of people, you can expect that the answers will not be consistent.

Conclusions

Understanding the psychological biases behind people’s decisions should help us improve our processes. In summary, these are the lessons we can learn:

  • Distinction Bias: For better insight, we should always compare alternatives instead of reviewing a single option.
  • Evaluability Hypothesis: To really understand the meaning of hard-to-evaluate attributes, we need diverse options.
  • Framing Effect: The answer to a question depends on how it is formulated. Different questions with the same logical meaning may get different answers.

Thus, by making sure we have diverse options and by asking the right questions it is possible to greatly increase the efficacy of our design reviews.

What do you think? What has been your personal experience with design reviews? Have you observed these psychological biases in practice? Please share your comments below.

Did you like this post?

Posted in Efficacy, Psychology of Programming, Software Architecture | Tagged , , | 10 Comments

Illusory Superiority: Are you a good programmer?

Programmers are known to be proud of their work. Some developers even feel that writing elegant code is a form of art, and thus they call themselves “software craftsmen”. I am sure that the desire to perform outstanding work is normal in any profession, but in the case of software development the programmers are constantly exposed to pieces of code written by their colleagues. If everyone can see your code, you naturally feel the need to write well.

So now I ask you: Are you a good programmer? Are you above average as a software developer? Perhaps you are in the top 20%, or even in the top 10%?

I’m almost sure that your answer is that you are above average. Most programmers feel like that. But, of course, if most programmers think they are above average, many of them are wrong…

Illusory Superiority

This general phenomenon of feeling “above average” is called Illusory Superiority, and it has been studied by social psychologists. Here is a definition from Wikipedia:

“Illusory superiority is a cognitive bias that causes people to overestimate their positive qualities and abilities and to underestimate their negative qualities, relative to others.”

Here are some concrete examples that were observed by researchers in this field:

“87% of MBA students at Stanford University rated their academic performance as above the median.”

“For driving skill, 93% of the US sample put themselves in the top 50%.”

So MBA students and drivers are just two examples, but the same phenomenon has been observed in diverse environments. If you have experience with software development, you probably agree that programmers are no exception.

But what is wrong with this Illusory Superiority? Self-esteem is certainly a good thing. A good professional should have confidence in his ability to handle his tasks. No one wants to work with a person who is insecure.

The problem with Illusory Superiority is when someone underestimates the capacity of others. This is particularly dangerous when we expect people to work as a team, but someone in this team believes he is much more skilled than his colleagues.

Respect and Recognition

What is the cure for this illusory feeling of superiority? How can we make the programmers in a team respect each other, and even admire their peers because of the recognition of their experience and skills?

I believe that the key to respect and recognition is joint work, as close as possible. And in this case the Agile methods provide more opportunities for cooperation than traditional approaches.

Perhaps the closest form of joint work is Pair Programming, in which software developers work in pairs, writing the code together. Several studies have demonstrated that Pair Programming has a positive impact on the quality of systems. But in my opinion another important benefit is its ability to strengthen the bonds among programmers working as a team.

In the case of Scrum, the daily stand-up meetings are an opportunity for each person to learn about the progress of the other members of the team. Even if each programmer is working alone on his individual tasks, he is able to follow the implementation of the system as a whole. In other words, we could say that Scrum promotes a holistic view, allowing each team member to appreciate the challenges being faced by the others.

Real Superiority

But what happens when you are really superior? How should you behave when you are the most experienced developer in a team, or when you are the only one with particular skills? What happens when in your team there are programmers writing poor code?

I believe that when you are the most experienced developer in a team, this should give you a special kind of responsibility. Or, as in the famous quote from Spider-Man: “With great power comes great responsibility”.

In a working environment in which there is real cooperation between team members, your outstanding capacity will soon be recognized and respected. Then you can assume a natural leadership role, at least in the areas in which you have special skills. Assuming this technical leadership role means:

  • Teaching: If you are the only one with a particular skill, teach others.
  • Sharing: If you are the most experienced, share your knowledge.
  • Reviewing: If other programmers are writing poorly, review their work.
  • Helping: If you can boost productivity, help people to handle their tasks.

But you must always remember that, even if you are “superior”, you can always learn from others. As it’s written in the ancient Jewish book “Ethics of the Fathers”:

“Who is wise? One who learns from every man.”

In this spirit, I will be happy to learn your opinion in the comments below. Thanks!

Posted in Agile, Efficacy, Psychology of Programming | Tagged , , | 11 Comments

Continuous Learning: Keeping up-to-date and acquiring new skills

According to the Bible, after Adam sinned and ate the forbidden fruit, God said to him: “By the sweat of your face you shall eat bread” (Genesis 3:19). Hard work has been the reality of humanity throughout recorded history, but things have recently become harder: everything we know, including our professional skills, is rapidly becoming obsolete. This means that it’s not enough to work; we must constantly update our knowledge and acquire new skills.

A recent article in the New York Times, entitled “To Stay Relevant in a Career, Workers Train Nonstop”, discusses the problem of obsolete professional skills. According to the article, “virtually everyone whose job is touched by computing are being forced to find new, more efficient ways to learn as retooling becomes increasingly important not just to change careers, but simply to stay competitive on their chosen path”.

This situation is particularly problematic for software developers. I have been working as a programmer for more than 20 years. I got my B.Sc. in Computer Science in 1992. Almost everything I learned then is now obsolete, and almost all the technologies I’m using now did not exist then (one exception is UNIX).

Now the question is: How do I keep up-to-date? How do I learn new skills?

Below are some strategies that I have been applying to keep constantly learning:

Make Sure You Are Learning at Work

The best place to learn is at work, by putting your knowledge into practice. However, even in the most dynamic workplaces it is easy to become too comfortable, remaining an expert in one field while ignoring new developments. Thus, to keep learning at work, you should either join new projects from the start or apply new technologies in existing projects.

In the worst case, if you feel you have no opportunity to learn anything relevant in your current job, you should seriously consider moving to another company. You should then investigate and select potential workplaces according to the technologies being used there. Very often it is more interesting to apply for a job in which you will have the opportunity to learn new skills than to look for a position in which you will mostly use your current knowledge.

Develop Your Own Pet Projects

If there is some technology that you really want to learn and you do not have the opportunity to apply it at work, then you should invent your own project that uses it and develop this project in your free time. Luckily, for almost any technology you may be interested in, there are open-source platforms providing it. Also, your personal computer at home is certainly powerful enough for developing interesting projects.

Learn from Online Courses

Today there is a great diversity of free online courses. Sites such as Coursera, Udacity and edX offer many interesting courses taught by well-known professors from some of the best universities in the world. These courses are completely free, and besides material such as videos and slides they may include real homework and programming assignments. They are designed to run over several weeks, and are thus able to cover the proposed subjects in relative depth.

Go to Technical Meetings

Programmers like to meet to discuss new technologies and share their experiences. If you live in a place with a big concentration of hi-tech companies, you will certainly be able to find groups of software developers organizing such professional meetings. Besides the interest in discussing technical issues, these meetings are a great opportunity for networking and allow you to learn what is being done in other companies. You can search for meetings on sites such as Meetup and Eventbrite.

Participate in Online Forums

Online forums are a great way to communicate with other developers who may be located very far from you but share exactly the same interests. These forums are the ideal place to promote discussions, exchange opinions and ask for advice. For example, there is a great diversity of interest groups on LinkedIn. In the most active groups you can find new discussions every day, and even job announcements. Another great place for debate is Quora, in which discussions are organized as questions and answers.

Read Technical Blogs

You are already reading this blog, but is reading technical blogs a regular habit for you? Ideally, you should reserve some time every day to read posts from your favorite blogs. You may ask: “Where can I find relevant blogs to read?” Of course you can always use a search engine, but it is also a good idea to follow software development gurus on Twitter, as well as enthusiastic programmers who like to share their favorite posts.

See Presentation Slides

Good presentation slides are a very concise way to transmit information. If you want to get an initial idea about a technology or platform, and you do not have much time to invest in it, then finding introductory slides is an easy and fast solution. Sites such as SlideShare host a huge collection of professional slides. After you get an introduction, you can decide whether it is worth looking for more in-depth material.

Watch Videos

For the more popular subjects, it is easy to find videos on YouTube or Vimeo. These may be recorded university lectures, conference presentations or talks at group meetings. Videos are very suitable for getting a first taste of new subjects, sometimes giving you the opportunity to get out of your comfort zone. For example, TED talks are known for their ability to inspire and make viewers think.

Use Question-and-Answer Communities

If you have a technical problem, it is very probable that someone before you has already had the same problem. Thus, you should try Q&A communities such as StackOverflow to search for a solution. If you cannot find an existing question that fits your needs, you can always ask a new question yourself. And sometimes, even if you have no questions of your own, you can learn a lot by checking what other people have been asking recently.

In Summary

Today, thanks to the Web, it is very easy to find material about any subject you may want to learn, and it is also easy to contact other professionals that share your interests. There is absolutely no reason you should become obsolete: with dedication and discipline you can continuously acquire new skills by following the strategies above. Ah, and of course you can always read an old-fashioned book as well. Good luck!

Do you have other strategies for continuous learning? Please share your experience in the comments below.


Four Myths of Software Evolution

As software developers, we know that our systems will evolve with time. We must understand the forces that drive this evolution, in order to design systems that are easily evolvable. Unfortunately, many programmers have misconceptions about the real drivers of software evolution and about their own ability to predict how a system will evolve.

In my many years of experience as a professional software developer, I have had the luck to work on many projects from the start. But on other occasions I joined projects that were already mature and deployed to real users. In these projects I observed the difficulty of maintaining systems that were not planned for change. I condense my personal experience below in the form of four myths of software evolution.

Myth 1 – Changes are driven by improvements in implementation

Very often programmers believe that they will only have to change their code if they think of a better way to implement it, but this is not true. Most modifications are caused by new requirements originating from real users, who are not satisfied with the current functionality. For this reason, it is always more important to invest time in understanding the user requirements than in optimizing your algorithms.

Reality: Changes are driven by better understanding of user requirements.

Myth 2 – If you did your best, you will not need to change it

Many programmers are perfectionists; they will try to do their best and only be happy with a piece of code if they feel they cannot improve it. However, any code can be improved when the improvements are driven by new user requirements. The idea of a “best possible code that cannot be improved” is a dangerous illusion: it focuses on the quality of the implementation details, not on the satisfaction of user needs.

Reality: Even if you did your best, you can still improve it.

Myth 3 – After many improvements, the code will not be changed again

Some programmers believe that if they have already invested a considerable amount of time improving a piece of code, it has reached a “stable” state in which it will not need to be changed again. However, the truth is that the number of changes tends to grow with the usefulness of the code. When people use your code they discover the need for new functionality, which is the main source of changes. Thus, if your code has been changed a lot in the past, it is probable that it will need to be changed again.

Reality: The more a piece of code is used, the more it will need to be changed.

Myth 4 – You can predict which parts of the system will need to be changed

It is not possible to predict the new requirements originating from real users. The best we can do is to identify the components that play the most central role in the functionality of the system, and to design them to easily accommodate future changes in requirements. Of course the system may also need to be changed to improve its performance, but this should be done only after appropriate profiling of real usage and identification of the bottlenecks.

Reality: Evolution cannot be predicted. The users will always be able to surprise us.

In Summary

The conclusion from the myths above is that when you know a piece of code is important and will have many users, the best you can do is to design for maintainability. In other words, if a component is likely to be changed, it is more important to design it to be easily modified than to implement it perfectly using the most efficient algorithms.

Now you can ask two questions:

1) How do I design for maintainability? I think the answer is to focus on Separation of Concerns, keeping low coupling and high cohesion.

2) Which methodology makes evolution easier? I think that Iterative Software Development is the best way to handle constant software evolution.
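To make the first answer more concrete, here is a small illustrative sketch (the class names are invented for this example, not taken from any real project). A component designed for change hides its implementation details behind a small interface, so the callers stay loosely coupled to it and the implementation can evolve freely:

```python
class QuoteStore:
    """Abstract interface: the rest of the system depends only on these methods."""
    def save(self, quote):
        raise NotImplementedError
    def all(self):
        raise NotImplementedError

class InMemoryQuoteStore(QuoteStore):
    """One interchangeable implementation; replacing it later with a database
    does not require changing any code that uses the QuoteStore interface."""
    def __init__(self):
        self._quotes = []
    def save(self, quote):
        self._quotes.append(quote)
    def all(self):
        return list(self._quotes)

store = InMemoryQuoteStore()
store.save("To be or not to be")
print(store.all())
```

The point of the sketch is the separation itself: the storage concern lives behind one narrow interface, so evolving that concern touches only one class.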

What do you think about these four myths of software evolution? Do they fit your own experience? Please share your comments below.


Planning Poker: Avoiding Fallacies in Effort Estimates

Many years ago I was working as a software developer in a team with three other programmers. We once had a meeting in which our Team Leader said:

“You are late again! All of you are late! Actually, you are always late! We cannot continue like that!”

Then I replied to him:

“If all of us are late, and if we are always late, then there must be a problem in the way we do effort estimates…”

In this case, the Team Leader himself was the one doing the estimates. He did it alone, based only on his own judgment, without consulting us, the developers. When he assigned new tasks to us, it was something like:

“You now must do this task, and it must be ready in X days.”

Certainly this is not a very nice way to talk to the people working with you, but the main problem is that it is not effective. It is very difficult for anyone to make precise effort estimates. Thus, we should always avoid making estimates alone, without asking for other opinions.

Planning Fallacy: Why Estimates Are Hard

People's difficulty in making good effort estimates is a well-known phenomenon, and as such it has been the subject of research. Scientists have called it the Planning Fallacy:

“The Planning Fallacy is a tendency for people and organizations to underestimate how long they will need to complete a task, even when they have experience of similar tasks over-running.”

The Planning Fallacy is discussed by Daniel Kahneman, Israeli Nobel Laureate in Economics, in his book “Thinking, Fast and Slow”. According to him, plans and forecasts suffering from this effect have two main characteristics:

  • “are unrealistically close to best-case scenarios”
  • “could be improved by consulting the statistics of similar cases”

Psychologists have tried to explain the causes of the Planning Fallacy. Some claim that the main reason is that people have a tendency to focus on the most optimistic scenario, actually believing that it is more probable than it really is. Others claim that this is just another example of wishful thinking: people think about what they would like to happen, not about what is likely to happen.

I think that in the workplace there are some other reasons for the Planning Fallacy:

  • If the manager is the one doing the estimates, he has an interest in making them hard to achieve, in order to push people to work more productively. Managers always want to see the work completed as fast as possible.
  • If the worker is the one doing the estimates, he will be ashamed to assign too much time to himself. People are afraid of making pessimistic estimates for their own tasks, because they may appear lazy or inefficient.

Thus, it is quite clear that there should be other people participating in the estimates, and not only the manager and the worker who will perform the task. People estimating from the outside, who do not have a personal interest in the task and will not be responsible for it, can be more objective.

Planning Poker

The Agile software development methodology has addressed the problem of poor estimates through the practice of Planning Poker, in which all members of the team participate and contribute. Each member of the team has a set of numbered cards, with the numbers representing the effort required to implement a task. For each feature in the backlog, each person makes an independent estimate using the cards. Then the cards are shown simultaneously, and people can compare and discuss their estimates.

The goal of Planning Poker is to reach consensus, in which all participants agree on an estimate after they have discussed it and understood each other's opinions. There are several reasons why this process provides more accurate estimates:

  • There are more people participating in the estimates, including team members that will not be directly responsible for performing the task.
  • People are not influenced by others when making their original estimate, since they do that simultaneously and independently.
  • People have to justify their estimates, especially if they are too low or too high.
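As a rough illustrative sketch (the card values and helper function below are hypothetical, not part of any official tool), one reveal step of a Planning Poker round could be modeled like this:

```python
# Hypothetical sketch of one Planning Poker reveal step.
CARDS = [1, 2, 3, 5, 8, 13, 20, 40]  # a common Fibonacci-like deck

def reveal(estimates):
    """Reveal all estimates simultaneously.
    Returns the agreed value on consensus, or None when another round
    (with discussion of the outlying estimates) is needed."""
    values = set(estimates.values())
    if len(values) == 1:
        return values.pop()
    return None

# First round: no consensus, so the outlier explains her reasoning.
print(reveal({"Ana": 5, "Ben": 5, "Carla": 13}))  # None
# Second round, after discussion: consensus reached.
print(reveal({"Ana": 8, "Ben": 8, "Carla": 8}))   # 8
```

In real teams the loop of discussion and re-voting usually repeats until the estimates converge; the sketch only captures the simultaneous, independent reveal that makes the technique work.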

I think this is a great approach for addressing the Planning Fallacy, and it also has the advantage of being fun and strengthening the bonds among team members.

Do you have experience with the Planning Poker? Please share your thoughts in the comments below.


Industrial Projects at the Technion

The Computer Science faculty at the Technion has an Industrial Projects initiative in which undergraduate students work under the supervision of software professionals. I think this is a great idea, a real win-win situation:

  • The students have the opportunity to gain experience working on real projects, under the supervision of industry veterans.
  • The companies proposing the projects gain access to talented students to develop prototypes or proofs-of-concept at no cost.
  • The CS faculty gains access each semester to many industry professionals who effectively join its teaching ranks, also at no cost.
  • The early connection between companies and students may help the students to find a job in the future, as well as help the companies to find good candidates.

Last semester I had the great pleasure of supervising a project entitled “Movie Quotes Search Engine”, together with my friend Oren Somekh. We were very lucky to “hire” two brilliant students for this project: Meytal Bialik and Zvi Cahana. The results of their work were very impressive and exceeded our expectations.

You can check the Movie Quotes Search Engine project to learn more about their work. I hope this serves as inspiration for other students looking for practical experience. It should also serve as motivation for other software companies to cooperate with the Technion in this important initiative.

What do you think about that? I will be happy to have your comments below.
