
Resilience Engineering has emerged from an approach to accident modeling and prevention based on answering the question, “What makes for sustained safety and success?” The idea is that focusing only on what went wrong, while important, may give only half the picture.

This all sounds well and good, and makes so much sense that one might wonder what is so special about this approach. How else would one look at accidents? It turns out that in the history of accident modeling, one of the first models took a very different approach, a very personal one. It is an approach that leans toward shame and blame should failure occur, and it is alive and well in certain areas of the management world.

Don’t Hit That First Domino!

Herbert Heinrich developed the Domino Theory of accidents in 1929 while working at Travelers Insurance. The model has four components, or dominos, and the idea was that the damage resulting from an accident could be analyzed as a series of events cascading like a row of dominos. The dominos are:

  • Genetics
  • Individual personality and/or character flaws
  • The hazard
  • The accident

“Bad seed” might be the best way to describe the genetic or ancestral contribution to accidents. The individual is doomed, in a way, before they get started, so it would be best simply not to hire them.

“Bad habits” would sum up the second domino, where the individual is innately predisposed, in day-to-day behavior, to risk-taking. Think, “strong urges.” An example would be the urge to surf the web indiscriminately.

“Risky behavior” is the hazardous activity itself, e.g., surfing the web without antivirus protection and downloading a Trojan horse that takes over one’s computer.

A site crashing because one’s computer unwittingly participated in a denial-of-service attack would be an example of the damage caused by all four dominos falling.

The idea behind using this model would be to remove the dominos, or space them far enough apart that they cannot set off a chain reaction. For example, pre-screen and avoid hiring anyone with the bad genetics or predisposition; this avoids the problem completely. If you do hire a person with the undesirable character traits, then don’t let them work in areas where their flaws could play out in the workplace and cause an accident, e.g., deny them the ability to surf the web at work.
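To make the chain-reaction logic concrete, here is a minimal sketch in Python. The domino names and the `chain_reaction` helper are illustrative assumptions of mine, not anything from Heinrich; the point is just that the model treats an accident as the conjunction of every domino in a fixed linear sequence, so removing any single one breaks the chain.

```python
# A toy model of the domino chain. The names and the purely Boolean
# treatment are illustrative assumptions, not Heinrich's formulation.

DOMINOS = ["genetics", "character_flaws", "hazard", "accident"]

def chain_reaction(standing):
    """Damage results only if every domino in the chain is standing,
    so each can knock over the next."""
    return all(d in standing for d in DOMINOS)

# All four dominos in place: the chain completes and damage results.
print(chain_reaction(set(DOMINOS)))               # True

# "Remove a domino": take away the hazard (e.g., block unprotected
# web surfing at work) and the chain is broken.
print(chain_reaction(set(DOMINOS) - {"hazard"}))  # False
```

Notice what even this toy version makes plain: the model is strictly linear and conjunctive. There is no representation of interactions between conditions, feedback, or context, which is exactly the shortcoming discussed next.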

You can see this takes a rather dim view of human nature. It also has another major shortcoming: it fails to take into account the dynamics of the situation and the broader view required to actually determine what causes failure. Is it really that simple? Is my computer participating in a denial-of-service attack simply my responsibility alone? Should I be punished because I must be defective?

Unfortunately, this approach can be very tempting for a manager to use when the heat is on to find out what went wrong in a situation, especially a complex one. On April 14, 2011, Hank Krakowski resigned as head of the FAA’s Air Traffic Organization after a series of incidents in which controllers were found asleep in their towers.

“Heads must roll!” would be the simple way to sum up the so-called solution. I have to believe most people know that Krakowski’s resignation had little effect, but human nature intensely wants to play into the domino model: find the bad guy and punish him, or root him out! And if you can’t get to the bad guy soon enough, then punish the guy who hired him!

The thing to keep in mind in all this is that resilience engineering is about determining what it takes to work safely in a complex environment. The air traffic control situation is a very complex socio-technical problem, and simplistic solutions just don’t cut it. Aaaaah! But they feel so good, and for a brief moment they deliver a narcotic shot that lets one drift away and pretend to be in control.

In the next post we’ll look at the next evolutionary step in accident modeling and see what it has to offer.