Conference Talk: Dr. Amir Tomer on “Extracting Quality Scenarios from Functional Scenarios”

This week at the Twentieth International Conference of the Israel Society for Quality in Tel-Aviv, Dr. Amir Tomer gave a talk about “Extracting Quality Scenarios from Functional Scenarios”.

Title: Extracting Quality Scenarios from Functional Scenarios

Abstract: 

Requirements specifications usually focus on the functional requirements of the system, whereas the non-functional requirements (aka Quality Attributes) are defined only generally, vaguely or tacitly. This talk introduces an approach for revealing quality attributes within structured functional scenarios (in Use-Case specifications) and defining them as quality scenarios, which are then added to the overall system functionality. The expanded set of scenarios also provides a basis for adapting the software architecture to the entire system requirements.

Bio: Dr. Amir Tomer obtained his B.Sc. and M.Sc. in Computer Science from the Technion, and his Ph.D. in Computing from Imperial College, London. Between 1982 and 2009 Amir was employed at RAFAEL as a software developer, software manager, systems engineer and Corporate Director of Software and Systems Engineering Processes. Amir currently serves as the head of the Software Engineering department at Kinneret Academic College and as a Senior Teaching Fellow in Software and Systems Engineering at the Technion, Haifa, Israel, and at other academic institutes. Amir holds various professional certifications, including PMP, CSEP and CSQE.

These are the original slides of Amir’s presentation:

Here is the video of the talk (in Hebrew):

This session was organized with the cooperation of ILTAM.

To participate in our future meetings, please join the IASA IL group on LinkedIn.

Feel free to share your comments below. Thanks!


Conference Talk – Hayim Makabee on Software Quality Attributes

This week I participated in the Twentieth International Conference of the Israel Society for Quality in Tel-Aviv, giving a talk about “Software Quality Attributes”.

Title: Software Quality Attributes

Abstract: 

The quality of software systems may be expressed as a collection of Software Quality Attributes. When the system requirements are defined, it is essential to also define what is expected with regard to these quality attributes, since these expectations will guide the planning of the system architecture and design.

Software quality attributes may be classified into two main categories: static and dynamic. Static quality attributes are the ones that reflect the system’s structure and organization. Examples of static attributes are coupling, cohesion, complexity, maintainability and extensibility. Dynamic attributes are the ones that reflect the behavior of the system during its execution. Examples of dynamic attributes are memory usage, latency, throughput, scalability, robustness and fault-tolerance.

Following the definition of expectations regarding the quality attributes, it is essential to devise ways to measure them and to verify that the implemented system satisfies the requirements. Some static attributes may be measured with static code analysis tools, while others require effective design and code reviews. Measuring and verifying dynamic attributes requires special non-functional testing tools such as profilers and simulators.
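
To make this concrete, here is a minimal Python sketch for measuring one dynamic attribute, latency, under stated assumptions: handle_request is a hypothetical stand-in for the operation under test, and percentiles are computed with the standard library rather than a dedicated profiler.

```python
import time
import statistics

def handle_request():
    # Hypothetical stand-in for the operation whose latency we care about.
    sum(i * i for i in range(10_000))

def measure_latency(func, runs=1_000):
    """Collect per-call latencies (in milliseconds) and summarize them."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        func()
        samples.append((time.perf_counter() - start) * 1_000)
    cuts = statistics.quantiles(samples, n=100)  # 99 percentile cut points
    return {
        "p50_ms": round(statistics.median(samples), 3),
        "p95_ms": round(cuts[94], 3),
        "p99_ms": round(cuts[98], 3),
    }

if __name__ == "__main__":
    print(measure_latency(handle_request))
```

A requirement such as “p99 latency under 50 ms” can then be verified automatically as part of a non-functional test suite.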

In this talk I will discuss the main software quality attributes, both static and dynamic, give examples of requirements, and offer practical guidelines on how to measure and verify these attributes.

Bio: Hayim Makabee was born in Rio de Janeiro. He immigrated to Israel in 1992 and completed his M.Sc. in Computer Science at the Technion. Since then he has worked for several hi-tech companies, including some start-ups. Currently he is a Research Engineer at Yahoo! Labs Haifa. He is also a co-founder of the International Association of Software Architects (IASA) in Israel. Hayim is the author of a book about Object-Oriented Programming and has published papers in the fields of Software Engineering, Distributed Systems and Genetic Algorithms. Hayim’s current research interests are Data Mining and Recommender Systems.

These are the original slides of Hayim’s presentation:

Here is the video of the talk (in Hebrew):

This session was organized with the cooperation of ILTAM.

To participate in our future meetings, please join the IASA IL group on LinkedIn.

Feel free to share your comments below. Thanks!


The Minimum Viable Product and Incremental Software Development

The concept of the Minimum Viable Product (MVP) was popularized by Eric Ries in his bestselling book “The Lean Startup: How Today’s Entrepreneurs Use Continuous Innovation to Create Radically Successful Businesses”. According to Ries, the MVP is the “version of a new product which allows a team to collect the maximum amount of validated learning about customers with the least effort.” The MVP should have the core features that allow it to be deployed to some real customers in order to get initial feedback, but no more.

The MVP is used in the context of the “build-measure-learn” feedback loop. From the Lean Startup site: “The first step is figuring out the problem that needs to be solved and then developing a minimum viable product (MVP) to begin the process of learning as quickly as possible. Once the MVP is established, a startup can work on tuning the engine. This will involve measurement and learning and must include actionable metrics that can demonstrate cause and effect.”
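
As an illustration of what an actionable metric might look like, here is a minimal sketch that computes conversion rates per signup cohort; the event records and field names are hypothetical. Unlike cumulative totals, cohort-based metrics let a team tie a product change to the cohorts that experienced it, which is what makes cause and effect visible.

```python
from collections import defaultdict

# Hypothetical event records: (user_id, signup_cohort, converted)
events = [
    ("u1", "2015-W01", True),
    ("u2", "2015-W01", False),
    ("u3", "2015-W02", True),
    ("u4", "2015-W02", True),
]

def conversion_by_cohort(records):
    """Conversion rate per signup cohort, so a product change can be
    compared across the cohorts that did and did not experience it."""
    counts = defaultdict(lambda: [0, 0])  # cohort -> [converted, total]
    for _user, cohort, converted in records:
        counts[cohort][1] += 1
        if converted:
            counts[cohort][0] += 1
    return {cohort: conv / total for cohort, (conv, total) in counts.items()}

print(conversion_by_cohort(events))
```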

Now the question is: what is the right way to build the MVP? We should clearly adopt some form of incremental development, in which the MVP is completed after a few initial iterations. The subsequent iterations are then used to add new features based on the conclusions of the “build-measure-learn” feedback loop. It is therefore very important to plan the incremental development process so that it fits the progress of the feedback loop.

Incremental Software Development

Any approach that divides the software development process into small iterations can be called “incremental.” However, there are several possible ways to build a product incrementally. This was very well illustrated by John Mayo-Smith in his article “Two Ways to Build a Pyramid”.

In this metaphor, the traditional way to build a pyramid is to start with the base and then, with each increment, add a new level on top of the previous one. This is certainly an incremental process, but only after the last iteration do we obtain a real pyramid. During all other steps of the process we have at most an “under-construction” pyramid.

[Figure: a pyramid built from the base up, one level per iteration, complete only at the last step]

An alternative approach is to start with a small pyramid and then use each additional iteration to increase its size. In this case after each step we have a full pyramid, which keeps growing in size incrementally.

[Figure: a small but complete pyramid that grows larger with each iteration]

Of course, if we want to build an MVP as part of the “build-measure-learn” feedback loop, the second approach is the right one. It allows us to have the MVP (a small pyramid) very quickly, and then a deployable product with new features (a bigger pyramid) after each iteration.

In summary

The Minimum Viable Product should have enough features to collect feedback that will drive the implementation of additional functionality through the “build-measure-learn” loop. The MVP should be built using an incremental approach, which should be planned to have a deployable version after each iteration.

What do you think? Have you used similar approaches at your company? Please share your experience using the comments below.


The Psychology of Agile Software Development

Why is Agile so successful? It is a fact that Agile methods have many enthusiastic practitioners, who are firm believers that the adoption of Agile processes has revolutionized the way they build software systems, making them much more productive and effective as software developers. But this is not only a question of being able to write more code in less time: software developers are very often enthusiastic about Agile practices, reporting that they have much more fun doing their work.

This is a phenomenon that certainly deserves to be investigated: how do people get so much fun out of a set of work processes and techniques mostly related to project management?

The answer is that Agile processes and techniques have a direct influence on cognitive aspects and change the dynamics of teamwork. Thus, it is important to analyze and understand which psychological aspects are being affected. Below is a list of topics that illustrate the effects of Agile on the behavior of programmers in a software development team.

Social Nudges and Burndown Charts

Our behavior is greatly driven by Social Nudges, which are forces that result from our interactions with other people. Agile processes increase the frequency of social interactions among team members, for example through daily stand-up meetings and retrospectives. Agile methods also provide more visibility on the progress and effectiveness of each team member, for example through Burndown and Velocity charts. These Social Nudges together help avoid negative phenomena such as procrastination and enable real self-managed teamwork. Read more.

Illusory Superiority and Pair Programming

Illusory Superiority is a cognitive bias that causes people to overestimate their positive qualities and abilities and to underestimate their negative qualities, relative to others. What is the cure for this illusory feeling of superiority? How can we make the programmers in a team respect each other, and even admire their peers in recognition of their experience and skills? The key to respect and recognition is joint work, as close as possible. Here Agile methods provide more opportunities for cooperation than traditional approaches, for example through Pair Programming and daily stand-up meetings. Read more.

The Planning Fallacy and Planning Poker

The Planning Fallacy is a tendency for people and organizations to underestimate how long they will need to complete a task, even when they have experience of similar tasks over-running. In the workplace there are additional reasons for the Planning Fallacy: if the manager is the one doing the estimates, he has an interest in making them hard to achieve; if the worker is the one doing the estimates, he will be ashamed to assign too much time to himself. Agile software development addresses the problem of poor estimates through Planning Poker, in which all members of the team participate and contribute. Read more.
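
To illustrate the mechanics, here is a minimal sketch of one Planning Poker reveal round. The divergence threshold and the take-the-highest consensus rule are assumptions; teams use different conventions, and the essential point is that divergent estimates trigger discussion rather than silent averaging.

```python
STORY_POINTS = (1, 2, 3, 5, 8, 13, 21)  # a typical Fibonacci-like deck

def planning_poker_round(estimates):
    """One simultaneous reveal. If estimates diverge widely, the lowest
    and highest estimators explain their reasoning and the team re-votes;
    otherwise a consensus value is returned."""
    low, high = min(estimates.values()), max(estimates.values())
    if high > 2 * low:  # the divergence threshold is an assumption
        return None, (low, high)  # no consensus yet: discuss and re-vote
    return high, None  # one possible convention: take the highest card

votes = {"alice": 5, "bob": 8, "carol": 5}
consensus, spread = planning_poker_round(votes)
print(consensus, spread)  # 8 None: close enough to agree on 8
```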

TDD, Flow theory and Gamification

According to Flow theory, a person can only enjoy a task if there is a match between his abilities and the complexity of the task. In traditional testing methodologies there was a mismatch between the abilities of software developers and the complexity of the testing task; the consequence was that programmers could not enjoy testing. TDD changes this picture because it transforms testing into just another type of programming task. Gamification is the use of game design techniques and mechanics to solve problems and engage audiences, and TDD uses several game mechanisms, including immediate feedback and measurable progress. Read more.
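
Here is a minimal sketch of the TDD loop using Python’s unittest; fizzbuzz is just a stand-in task. The tests are written first and fail (the “red” phase, immediate feedback), and each new passing test is a unit of measurable progress (the “green” phase).

```python
import unittest

def fizzbuzz(n: int) -> str:
    """The implementation, written only after the tests below failed."""
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)

class TestFizzBuzz(unittest.TestCase):
    # "Red" phase: these tests are written first and fail; the "green"
    # phase is the smallest implementation that makes them pass.
    def test_multiples_of_three(self):
        self.assertEqual(fizzbuzz(9), "Fizz")

    def test_multiples_of_five(self):
        self.assertEqual(fizzbuzz(10), "Buzz")

    def test_multiples_of_both(self):
        self.assertEqual(fizzbuzz(30), "FizzBuzz")

if __name__ == "__main__":
    unittest.main()
```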

In summary, understanding the reasons for the success of Agile methods allows us to become a learning organization and to adapt Agile processes and techniques to our own particular environment and culture.


On Agile Architecture, Emergent Design and Framework-Based Design

I recently read the very interesting Ph.D. thesis of Michael Waterman on the topic of Agile software architecture. Michael investigated how professional software engineers in industry are applying Agile principles to software architecture and design. He concludes that there are five main strategies:

  1. Respond to Change
  2. Address Risk
  3. Emergent Architecture
  4. Big Design Up-Front
  5. Use Frameworks and Template Architectures

These are Michael’s definitions for these five strategies:

“Responding to change is a strategy in which a team designs an evolving architecture that is modifiable and is tolerant of change.”

“Addressing risk is a strategy in which the team does sufficient architecture design to reduce technical risk to a satisfactory level.”

“An emergent architecture is an architecture in which the team makes only the bare minimum architecture decisions up-front, such as selecting the technology stack and the high-level architectural patterns. In some instances these decisions will be implicit or will have already been made, in which case the architecture is totally emergent.”

“The big design up-front requires that the team acquires a full set of requirements and completes a full architecture design before development starts.”

“Using frameworks and template architectures is a strategy in which teams use standard architectures such as those found in development frameworks, template or reference architectures, and off-the-shelf libraries and plug-ins. Standard architectures reduce the team’s architecture design and rework effort, at the expense of additional constraints applied to the system.”

In his thesis Michael provides many details about each of the strategies above. Reading his work gave me some very interesting insights, and I would like to present and discuss the most important of them below.

Emergent Design vs. Framework-Based Design

I personally do not believe in Emergent Design, and thus I have always been surprised to see so many people saying that they were very successfully doing Emergent Design. But after reading Michael’s thesis, I understood that most people who claim to be doing Emergent Design are in practice doing Framework-Based Design.

Today there are many categories of software systems for which there are very sophisticated and rich frameworks that already implement most of the hard work. For example, most people building Web sites do not need to understand anything about the underlying communication protocols; they just need to define the site structure, its content and some simple business logic. As a consequence, we all have heard stories about some very nice sites that were built in a few days by people with no degree in Computer Science or Software Engineering.
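
As a hedged illustration of this point, here is a minimal Flask sketch; the routes and business logic are hypothetical. The developer writes only the site structure and a small piece of logic, while the framework handles sockets, HTTP parsing and threading.

```python
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/")
def home():
    # The only truly "new" code: the site's structure and content.
    return "<h1>Welcome</h1><p>The protocol work happens in the framework.</p>"

@app.route("/price/<int:quantity>")
def price(quantity):
    # A small piece of business logic; no sockets, parsing or threading.
    return jsonify(quantity=quantity, total=round(quantity * 9.99, 2))

if __name__ == "__main__":
    app.run()  # development server; HTTP handling is entirely Flask's job
```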

The same is true about applications for smartphones. There are frameworks that make the task of creating a new app extremely simple, to the point that the developers may completely ignore the underlying operating system and use the same code for Android or iPhone apps. There is no need for much development experience or sophisticated programming skills, and as a consequence there are teenagers creating apps and games that are used by millions of people worldwide.

This same reality applies to many domains: software developers are not doing much design, but not because design has become an obsolete activity. The reason is that developers are adopting ready-made designs; they follow design decisions that were already made by someone else and thus do not need to make their own. They are neither conceiving nor implementing their own architecture; they are basically “filling holes” with small pieces of relatively simple business logic.

Programming in the Large and Programming in the Small

The distinction between “programming in the large” and “programming in the small” was coined in the 1970s. At that time, it represented the difference between developing software systems and implementing simple programs. The term “programming in the small” characterized tasks that could be done by a single programmer. But at that time there were no sophisticated application frameworks that could be extended.

Today it is important to take into consideration the ratio between “new code” and “reused code”. It is a characteristic of modern software systems that this ratio is relatively small, meaning that most of the code is being reused from existing frameworks, platforms and libraries. In this case, it is possible to create complex systems with small amounts of new code. This is sometimes called “glue code” because it just connects existing components.
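
Here is a rough sketch of how one might estimate that ratio for a Python project; the directory paths are assumptions, and the count is deliberately crude (blank lines and comments included).

```python
from pathlib import Path

def count_lines(root, suffixes=(".py",)):
    """Crude line count over a directory tree (blank lines included)."""
    return sum(
        len(path.read_text(errors="ignore").splitlines())
        for path in Path(root).rglob("*")
        if path.is_file() and path.suffix in suffixes
    )

# Hypothetical layout: our own sources vs. the installed dependencies.
new_code = count_lines("src")
reused_code = count_lines("venv/lib")  # frameworks, platforms, libraries

print(f"new/reused ratio: {new_code / reused_code:.4f}")
```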

For example, Apache is the most popular HTTP server, with a 60% market share, and its implementation has more than 1.7 million lines of code. In contrast, the average Web page has only 295 kB of scripts. Assuming that each line of script has on average 30 characters, the average page has roughly 10,000 lines of script. Of course a Web site has multiple pages, and there is also code running on the server side; but on the other hand most sites also build on other platforms such as MySQL. Thus, for the average Web site the ratio between new code and existing code is very small.

In another example, an analysis of the top 5,000 most active open-source software (OSS) projects has shown that OSS written in Java has on average almost 6 million lines of code. In contrast, the average iPhone application has just a few hundred thousand lines of code. In this sense, OSS is “programming in the large” and iPhone apps are “programming in the small”. But thanks to advanced frameworks, even small iPhone apps can have a very sophisticated and impressive user interface.

Adaptable Design Up Front

What should be done in situations in which there is no ready framework for a particular application? In this case the software developers must also be responsible for defining the architecture, and in my opinion this always requires a certain amount of design up front.

I personally adopt the “Respond to Change” strategy, which I call Adaptable Design Up Front (ADUF). In my opinion, in an Agile environment in which the requirements are constantly changing, this is the only strategy that guarantees the successful evolution of software systems. The idea is basically: if you don’t have a framework to build on top of, first create your own framework and then extend it.
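
As a minimal sketch of what “create your own framework and then extend it” can look like, here is an Open/Closed-style extension point; the report/format domain is hypothetical. The core is closed for modification, while new behavior is added by plugging in new classes.

```python
from abc import ABC, abstractmethod

class ExportFormat(ABC):
    """Extension point: the part of the design expected to change."""

    @abstractmethod
    def render(self, record: dict) -> str:
        ...

class ReportEngine:
    """Closed core: iterates over records and delegates to a pluggable
    format. New formats are added without modifying this class."""

    def __init__(self, fmt: ExportFormat):
        self._fmt = fmt

    def export(self, records) -> str:
        return "\n".join(self._fmt.render(r) for r in records)

class CsvFormat(ExportFormat):
    def render(self, record: dict) -> str:
        return ",".join(str(v) for v in record.values())

# Extending the "framework" later means adding JsonFormat, XmlFormat, etc.
engine = ReportEngine(CsvFormat())
print(engine.export([{"id": 1, "name": "widget"}]))
```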

In one of my previous articles I describe how the ADUF approach combines Domain Modeling and Design Patterns. In another post I explain how ADUF is an application of the Open/Closed principle to high-level design. I’ve also prepared a set of slides to introduce the ideas behind ADUF.

In summary

Most people claiming to do Emergent Design are in practice doing Framework-Based Design, in which they don’t need to make high-level design decisions because they adopt decisions that were already made by someone else.

In modern software systems the ratio between new code and reused code is quite small. Thanks to advanced frameworks, platforms and libraries, software developers may build sophisticated applications with relatively little effort.

For situations in which there is no existing framework that can be easily extended, Emergent Design does not work. It is necessary to do some amount of design up front. My suggested approach is called Adaptable Design Up Front (ADUF).

What do you think? Do you know projects that claimed to do Emergent Design and were not based on existing frameworks? Please share your experience in the comments below.


Antifragility and Component-Based Software Development

In his book “Antifragile: Things That Gain From Disorder”, Nassim Taleb introduces the concept of Antifragility, the opposite of fragility: antifragile things are able to benefit from volatility. In a previous post, I explained how in software development the main source of volatility is changes in requirements, and how the appropriate use of abstractions can make software systems more robust.

However, robustness is not the same as Antifragility. A robust system may survive the impact of changes, but our goal is to develop systems that actually benefit and improve as a consequence of these changes. In this article we will analyze a strategy that may be adopted to develop such Antifragile systems.

It is common knowledge that systems should be divided into components, so that the reaction to a change in the environment affects only a few components and not the entire system. Thus, component-based systems are more robust than monolithic systems.

However, to be Antifragile, a system must be able to benefit from changes in the environment. This Antifragility may be achieved when several systems share the same components. Thus, when a specific component is improved, all systems using this component can benefit from this improvement.

One concrete example is AA batteries. For many years AA batteries were the standard for millions of different devices. Then, energy-hungry devices drove the development of rechargeable AA batteries; now all devices can benefit from this improvement, and in this sense they are antifragile.

Component-Based Software Development

Since the early years of software development it has been understood that systems should be divided into components, and software engineers have tried to identify the right principles for performing this division, once called modularity. Metrics such as coupling and cohesion, and principles such as information hiding, were defined to guide the decomposition of large software systems into modules.

Over the decades the advances in programming languages and software engineering practices were followed by better ways to perform this decomposition. Object-oriented programming languages allowed software developers to create classes of objects and organize them into hierarchies. Then, larger distributed systems were built based on Object-Request Broker (ORB) platforms. More recently Service Oriented Architectures (SOA) became popular and now the current trend is Microservices.

In all the examples above, the evolution was based on the same original principles: low coupling, high cohesion, and clear, abstract interfaces. All of them helped make software systems more robust. It is easier to change the implementation of a class than to modify a data structure and its associated procedures; services can be deployed independently of each other, without disrupting the rest of the system.

The next step, going beyond robustness to develop antifragile systems, will happen when many different systems share the same components. Then, if a particular component is improved to satisfy the requirements of one specific application, all the systems may enjoy the improved component at no additional cost. This is true Antifragility.

Consider for example a system based on SOA. At any point in time it is possible to deploy an enhanced version of one of the services without affecting the others, so the system is very robust. But if several systems are based on shared services, each time one of these services is improved all the systems benefit immediately. Thus while each system is robust, the collection of systems is antifragile, because they all benefit from the same changes at no added cost.
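
Here is a minimal sketch of the shared-component effect; in reality the component would be a separately deployed service or library, and the e-mail validation domain is hypothetical. A fix or improvement shipped in the shared component reaches every consuming system at no added cost.

```python
import re

# The shared component (in reality a separately deployed service or library).
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def is_valid_email(address: str) -> bool:
    """Improving this single function (a better pattern, new rules)
    immediately benefits every system that depends on it."""
    return bool(EMAIL_RE.match(address))

# Two independent "systems" built on top of the shared component:
def billing_signup(email: str) -> str:
    return "ok" if is_valid_email(email) else "rejected"

def crm_import(emails):
    return [e for e in emails if is_valid_email(e)]

print(billing_signup("a@b.com"), crm_import(["not-an-email", "x@y.org"]))
```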

Another modern application of this approach is the idea of a Software Product Line (SPL), in which a series of software products are based on the same components and differ only in their configuration. In the case of SPLs there may be several coexisting versions of each component. Each time a new component version is created, all the products using previous versions may benefit through simple configuration updates.
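
As a sketch of the SPL idea, consider products defined purely as configurations over versioned components; all names and versions below are hypothetical. Adopting an improved component version is a configuration update, not a development effort.

```python
# Each product is only a configuration selecting component versions;
# no product-specific code is written.
COMPONENTS = {
    ("auth", "1.0"): "basic login",
    ("auth", "2.0"): "login + two-factor",
    ("report", "1.0"): "PDF export",
}

PRODUCTS = {
    "lite": {"auth": "1.0", "report": "1.0"},
    "pro": {"auth": "2.0", "report": "1.0"},
}

def build(product: str) -> dict:
    config = PRODUCTS[product]
    return {name: COMPONENTS[(name, ver)] for name, ver in config.items()}

# Adopting the improved auth component in "lite" is a configuration
# update, not a development effort:
PRODUCTS["lite"]["auth"] = "2.0"
print(build("lite"))
```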

In summary

A simple strategy for Antifragility is: Build component-based systems with shared components. When one of the shared components is improved all the systems will be able to benefit from this improvement at no additional cost.

What do you think? Have you experienced the benefits of building several systems on top of the same components? Please share your experience with us in the comments below.


Agile Practices and Social Nudges in the Workplace

In their best-selling book “Nudge: Improving Decisions About Health, Wealth, and Happiness”, Richard Thaler and Cass Sunstein propose the adoption of interventions to “attempt to move people in directions that will make their lives better.” A nudge “alters people’s behavior in a predictable way without forbidding any options or significantly changing their economic incentives.”

One classic example of a nudge is the “fly in the urinal”. Many public bathrooms face the problem of men’s lack of attention. To solve this problem, a sticker with the picture of a fly is placed inside the urinal, and men aim at the fly as a target. Trials showed that this simple idea reduced spillage by 80%!

A special kind of nudge is the “Social Nudge”, which results from the interaction with other people. According to Thaler and Sunstein:

“Social influences come in two basic categories. The first involves information. If many people do something or think something, their actions and their thoughts convey information about what might be best for you to do or think. The second involves peer pressure. If you care about what other people think about you, then you might go along with the crowd to avoid their wrath or curry their favor.”

I believe that the success of Agile practices may be partially attributed to the Social Nudges created by changes in team dynamics and by the richness of interactions among team members. Let’s consider the two categories of social influence discussed above:

Exchange of Information

Agile practices increase the amount of interactions among team members and also the depth of information exchanged among them. For example:

  • Daily stand-up meetings allow all team members to know what their peers are doing and to become aware of their particular contribution to the project.
  • Retrospective meetings allow the team members to think about ways to improve their own processes based on past experience.
  • Code reviews allow team members to learn from each other and develop improved coding standards and shared patterns.

In this environment, rich in opportunities for information exchange, the natural tendency of team members is to adopt best practices and to increase both their productivity and their satisfaction.

Peer Pressure

Agile practices increase the visibility of the work performed by each team member, and thus also increase the peer pressure. For example:

  • Burn-down charts display clearly whether a team member is making progress as originally estimated or is falling behind (see the sketch after this list).
  • Velocity charts show the productivity of the team and will be negatively impacted if there is a team member who is working slower than others.
  • In the daily stand-up meetings each team member must say what he is going to do today, which serves as a public commitment to perform his task.
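
Here is the minimal burndown computation sketch referenced above; the sprint data is hypothetical. Comparing the actual remaining work against an ideal linear burn is what makes progress, or the lack of it, visible to the whole team.

```python
# Hypothetical sprint: total committed scope and points completed per day.
TOTAL_POINTS = 40
completed_per_day = [5, 3, 0, 6, 4, 2, 5, 4, 3, 2]  # a 10-day sprint

def burndown(total, done_per_day):
    """Remaining work after each day, next to the ideal linear burn."""
    days = len(done_per_day)
    remaining, rows = total, []
    for day, done in enumerate(done_per_day, start=1):
        remaining -= done
        ideal = total * (1 - day / days)
        rows.append((day, remaining, round(ideal, 1)))
    return rows

for day, actual, ideal in burndown(TOTAL_POINTS, completed_per_day):
    status = "behind" if actual > ideal else "on track"
    print(f"day {day}: remaining={actual}, ideal={ideal} ({status})")
```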

In an environment with so much visibility into each team member’s progress and commitments, a person will certainly not feel comfortable under-performing in comparison to the other members of the team.

In summary

Agile practices promote more opportunities for the exchange of information and for peer pressure, and thus create social nudges that result in improved performance and satisfaction.

What do you think? Did you experience these social nudges? Please share your opinion in the comments below.
