Over the last week we have followed the story of the Costa Concordia disaster, the cruise ship that partially sank after hitting a reef off the Italian coast. When I first heard about it, I could not understand how this was possible: we know that ships like this have very well-planned routes, which should leave no room for such a disaster. And we also know that with current technology it is very easy to raise alerts when a ship deviates from its planned route.
But then we learned that the captain had intentionally modified the ship’s route, apparently just to please some of his friends. And he certainly also ignored the alarms that followed this change in the route:
“Costa Cruises CEO, Pier Luigi Foschi, stated that the company’s ships have computer-programmed routes and alarms, both visual and sound, if the ship deviates by any reason from the stated route as stored in the computer and as controlled by the GPS, but that these alarms can be manually overridden.”
Finally, it was also reported that the evacuation of the passengers was delayed for no clear reason, which is another violation of the defined procedures. In fact, at first the passengers were not even informed that there was a problem, probably to avoid panic.
This sad story makes me think about failures in software projects. As experienced software developers, we know that there are many risks involved when implementing a new system. We try to avoid failures by having:
- A well-defined execution plan.
- Mechanisms to measure progress with alerts for potential problems.
- Recovery plans in the case something goes wrong.
However, what the Costa Concordia disaster teaches us is that these plans and alarms are only effective if the people executing them have discipline. A successful execution cannot be guaranteed by good plans alone; it depends on the ability and skills of the people implementing it. And for this reason, when choosing the right software developers for your team, you should always look for the best.
Good observation, but the conclusion asks for highly disciplined, alert, best-of-breed professionals. That shifts the focus from systems and processes to people… who CANNOT always be the BEST. Fail-safe, robust engineering systems and processes are actually NOT CRITICALLY dependent on people. If a responsible captain deliberately overrides the safe procedures and alarms… the fault is strictly his, not of the systems and processes. Unfortunately, we do not have anything remotely approaching robust software engineering systems and processes, though we do have great software products and systems.
I took the time to read the details of the event thoroughly. The most striking thing is the lack of professionalism, which in this case extends as far as criminal actions and negligence.
Luckily, most software projects are not as dramatic as this. Rarely are our projects directly responsible for saving or losing lives.
But ignoring warning signs and avoiding early action is a lack of professionalism just the same.
In my view, making detailed plans early is a waste; responding to evolving conditions is a better practice. Yet I agree with having the right tools to alert us when things go wrong, and with reacting promptly.
Burndown charts, EVM, SPC, and CFD are excellent tools for exactly this. A professional software craftsperson uses them regularly for decision making.
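For readers unfamiliar with the acronyms: EVM is Earned Value Management, SPC is Statistical Process Control, and CFD is a Cumulative Flow Diagram. As one concrete illustration, here is a minimal sketch of the kind of automated alert EVM supports, using the standard indices SPI = EV/PV and CPI = EV/AC. The figures, function names, and the 0.9 threshold are purely hypothetical choices of mine, not something prescribed by any of these methods.

```python
# Minimal EVM-style progress check (hypothetical figures and thresholds).
# SPI = EV / PV (schedule), CPI = EV / AC (cost); values below 1.0 mean
# behind schedule or over budget.

def evm_indices(planned_value: float, earned_value: float, actual_cost: float):
    """Return (SPI, CPI) for the current project snapshot."""
    spi = earned_value / planned_value
    cpi = earned_value / actual_cost
    return spi, cpi

def check_project(planned_value, earned_value, actual_cost, threshold=0.9):
    """Collect alert messages when either index drops below the threshold."""
    spi, cpi = evm_indices(planned_value, earned_value, actual_cost)
    alerts = []
    if spi < threshold:
        alerts.append(f"Schedule alert: SPI={spi:.2f} (behind plan)")
    if cpi < threshold:
        alerts.append(f"Cost alert: CPI={cpi:.2f} (over budget)")
    return alerts

if __name__ == "__main__":
    # Hypothetical mid-project snapshot: 100 units of work planned so far,
    # 80 actually delivered, at a cost equivalent to 95 units.
    for alert in check_project(planned_value=100, earned_value=80, actual_cost=95):
        print(alert)
```

The point is not the arithmetic, which is trivial, but that the alert fires automatically instead of depending on someone remembering to look at the chart.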
In cases like these (and many other cases as well), I think the best method is to add another layer of supervision that is not in direct contact with the captain or anyone else on the ship.
The ship’s computers should automatically notify HQ about any major change of course, and a qualified person there can contact the ship to ask why and how.
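To make this concrete, here is a minimal sketch of such a check in Python. The waypoints, tolerance, and the notify_hq stand-in are hypothetical illustrations of mine; a real system would use proper cross-track distance along the route legs and a shore-side channel the bridge cannot silence.

```python
import math

# Illustrative sketch: flag large deviations from the planned route and notify
# shore-side HQ. Deviation is approximated as the distance to the nearest planned
# waypoint; the route, tolerance, and notify_hq() are hypothetical stand-ins.

PLANNED_ROUTE = [(42.40, 10.90), (42.37, 10.92), (42.36, 10.93)]  # (lat, lon) waypoints
TOLERANCE_NM = 1.0  # allowed deviation in nautical miles

def distance_nm(a, b):
    """Rough equirectangular distance in nautical miles, fine for short ranges."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    x = (lon2 - lon1) * math.cos((lat1 + lat2) / 2)
    y = lat2 - lat1
    return math.hypot(x, y) * 3440.065  # mean Earth radius in nautical miles

def notify_hq(position, deviation_nm):
    # Stand-in for a message to headquarters that the bridge cannot suppress.
    print(f"HQ ALERT: ship at {position}, {deviation_nm:.2f} NM off planned route")

def check_position(position):
    """Compare the current GPS fix against the planned route and escalate if off track."""
    deviation = min(distance_nm(position, wp) for wp in PLANNED_ROUTE)
    if deviation > TOLERANCE_NM:
        notify_hq(position, deviation)

if __name__ == "__main__":
    check_position((42.39, 10.95))  # hypothetical GPS fix well off the planned track
```

The design point is simply that the notification goes out on its own, so the decision to escalate never rests with the person being monitored.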
If anyone on the ship, of any rank, even the captain, knows his actions are being monitored, he will think twice about taking irresponsible actions.
You will find that businesses – particularly startups and smaller companies or divisions – fail for the same reason: The captain couldn’t care less what the rest of the crew have to say, do or think.
After suffering from shipwrecks for years, I’ve adopted the rule of always speaking up – and never assuming that the captain is wiser than myself.
At the very least I get the pleasure of being able to say “I told you so”. Sometimes I may even be part of the reason another shipwreck was averted.
The best products (and trips) are made when a manager (or a captain) deviates from the beaten route for whatever unreasonable reason. Yes, he should not be a selfish moron; yes, he should know his way around the ocean; and yes, he must listen to his men (from time to time). BUT he should not be micro-managed by HQ, he should not make decisions on a democratic basis, and he should not allow a heap of documentation to obstruct his vision. Any mention-worthy project requires an element of risk; otherwise you are destined to end up with a mediocre product (or a boring voyage).
We can’t learn much about software development from examining shipwrecks. Software is very complex, and the process of development is very different from sailing. If we want to understand software disasters, we should look at the specifics. By detailed examination of the evidence one can glean certain trends. That’s science. Trying to import ideas from other realms (such as sailing, or the history of disasters in general) is attractive to our minds because we always think metaphorically. Metaphorical thinking, attractive as it is, is no substitute for scientific thinking.
In short, this is a deluded post (one of very few here).