Effective Design Reviews

This is the definition of Design Review from Wikipedia:

A design review is a process whereby a design is tested against its requirements prior to implementation.

There are at least two problems with this definition:

The first problem is that it says we are going to review “a design.” In other words, a single design alternative is being presented. In this case, the designer is probably expecting to convince all the other participants that he has produced a good design. At most, he is willing to hear a few comments about how his solution could possibly be improved. It is very unlikely that his design will be rejected, because almost certainly no one will be able to suggest a different option.

The participants of a design review should not be expected to imagine alternative solutions on the spot. Only if the feature being implemented is very simple would it be possible to propose entirely new designs during the review; any interesting feature requires a considerable amount of effort to arrive at a good design. The consequence is that in this “single design” approach the solution being “reviewed” will almost always be approved with minor changes.

Therefore, effective design reviews must include the presentation of several alternatives. The discussion should not be about whether a particular design satisfies the requirements. Correctness is the most fundamental attribute of a solution, so the diverse design choices must differ in some other attributes that can be compared. In other words, each option has its advantages and disadvantages, and by choosing one of them we are actually deciding which attributes are the most important, but all options must be correct.

In this “multiple-choice” approach, the goal of the design review is to analyze several alternatives and understand their different implications. Then, if there is a solution that is clearly better than all others, according to some criteria, it should be the chosen one. Otherwise, we should try to look for additional options.

But here is the second problem with the previous definition of design review: it says that the “design is tested against its requirements.” In most cases this means only the functional requirements, the specific demands related to the feature being implemented. This definition ensures that the design must be correct, but it does not address the question of how the new feature will affect the existing system architecture. In other words, this definition does not take the non-functional requirements into consideration.

Diverse design alternatives will probably have very different implications for the non-functional requirements, and these differences should serve as the basis for comparison when selecting one of them.

For example, one design alternative could have the user’s session data stored in memory, while another would store the same data in a database. The first alternative will probably allow all operations to be performed more efficiently, thus reducing latency and increasing throughput. However, the second alternative improves resilience and fault tolerance, because if a process crashes no data will be lost, and it will be possible to restore the user’s session from the database. In this example, the participants in the design review must decide which attributes are more important, and select the best solution accordingly.
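To make this trade-off concrete, here is a minimal sketch in Python of the two alternatives behind a common interface. All class and method names (SessionStore, InMemorySessionStore, DatabaseSessionStore) are illustrative assumptions, not from the article. Both implementations are functionally correct; they differ only in the non-functional attributes discussed above:

```python
# Sketch of the two session-storage alternatives: both satisfy the same
# functional interface, but differ in latency vs. durability.
import json
import sqlite3
from abc import ABC, abstractmethod


class SessionStore(ABC):
    """Common interface: both designs are functionally correct."""

    @abstractmethod
    def save(self, session_id: str, data: dict) -> None: ...

    @abstractmethod
    def load(self, session_id: str) -> dict | None: ...


class InMemorySessionStore(SessionStore):
    """Low latency, high throughput; data is lost if the process crashes."""

    def __init__(self) -> None:
        self._sessions: dict[str, dict] = {}

    def save(self, session_id: str, data: dict) -> None:
        self._sessions[session_id] = data

    def load(self, session_id: str) -> dict | None:
        return self._sessions.get(session_id)


class DatabaseSessionStore(SessionStore):
    """Higher latency per operation, but sessions survive a process crash."""

    def __init__(self, path: str = "sessions.db") -> None:
        self._conn = sqlite3.connect(path)
        self._conn.execute(
            "CREATE TABLE IF NOT EXISTS sessions (id TEXT PRIMARY KEY, data TEXT)"
        )

    def save(self, session_id: str, data: dict) -> None:
        self._conn.execute(
            "INSERT OR REPLACE INTO sessions VALUES (?, ?)",
            (session_id, json.dumps(data)),
        )
        self._conn.commit()  # durable: survives a crash after this point

    def load(self, session_id: str) -> dict | None:
        row = self._conn.execute(
            "SELECT data FROM sessions WHERE id = ?", (session_id,)
        ).fetchone()
        return json.loads(row[0]) if row else None
```

Because either implementation satisfies the functional requirements equally well, a review that compares them can focus entirely on the non-functional trade-off between latency and fault tolerance.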

In summary, if there are no alternatives, there is no real review being done. This is best illustrated by a story about Alfred P. Sloan, then CEO of General Motors:

At one meeting of his top executives, Sloan said: “Gentlemen, I take it we all are in complete agreement on the decision we’ve just made.” Everyone nodded.

“Then,” said Sloan, “I propose we postpone further discussion until our next meeting, to give ourselves time to develop disagreement and perhaps gain some understanding of what the decision is all about.”

So let’s hope we will have more disagreement in our next design reviews!

About Hayim Makabee

Veteran software developer, enthusiastic programmer, author of a book on Object-Oriented Programming, co-founder and CEO at KashKlik, an innovative Influencer Marketing platform.

Responses to Effective Design Reviews

  1. sfridman says:

    Interesting piece with quite a few valid points. I liked the approach of multiple designs, true to the nature of the word “design” prior to computer science: industrial designers, artists, civil engineers or architects pitching models before embarking on a new chair, sketch, bridge or another white elephant like the British Millennium Dome. It also brings in the notion of brainstorming, which some formal design review methodologies do not like so much. Clearly there are cases for using a multiple-alternatives review, but I suspect not always – there are instances where this is not realistic, or effective.

    You finished your article with a hope for disagreement – so you might as well get it right now…

    Regarding your point about the non-functional requirements (NFRs) being missing, I believe that you make an assumption (a double one, in fact): that requirements do not include those, and also that they are well formed. I think neither is necessarily right. As I believe you would agree, requirements suffer from contradictions, incompleteness and poor prioritization – it is inherent. Hence a good panel in a design review commonly includes a “veteran”, “architect”, or someone else in a horizontal or broader capacity, especially for features that are easily spotted as affecting the integrity of the whole system. This person often takes the NFRs into consideration, and also uses judgement and sense to understand the requirements and interpret them justifiably. The fact that requirements need judgement is disturbing, I agree. But I haven’t been able to get away from that yet. Your mileage may vary.

    This leads to the second assumption, that all alternatives are “correct”. As mentioned above, you cannot “correctly” satisfy a set of presumably self-contradictory (to a degree) or incomplete requirements. The judgement I referred to basically does this, by interpreting the key requirements. The understanding of the system allows weighing the implications of generic vs. specific, completing missing expected requirements, and looking at capacity, scale, bandwidth, latency, or what have you.

    Lastly, sometimes a method of peer review (often applied to code, but it can be applied to design as well, as a “minimum” design review) proves more effective for some features. This distinction then calls for the option of a “please improve my almost finished, soon to be approved design” review, with a peer pointing to various errors or weaknesses in my design, vs. a more comprehensive “brainstorming design review” where an issue is debated from the outset, with many alternatives, before even embarking on details.

    Between those two there is a kind of muddy water, where alternatives are useful but hard to come by – unless the reviewers are very experienced or fast (or lucky) – and constructive criticism that tends to approve an improved version of the presented design (as you indicated) is often indeed the best way forward. Rejection should not really happen – if it does, it should hopefully have happened earlier, in a PDR (preliminary design review) and not in a CDR (critical design review), borrowing the US Army terms.

    Keep on writing these, Hayim, I enjoy them – it is all good stuff and thought provoking.

