Human error: models and management
Created on 2025-02-09T21:57:32-06:00
- By James Reason, BMJ, 2000
Glossary
Highly Reliable Org: One with fewer than its "fair share" of adverse events.
Error: An inevitable, undesired outcome
Person focus: Errors as the outcome of character; stupidity, fucking around and finding out
System focus: Analyzing which defenses prevent or mitigate error and why those mechanisms failed
Who fails
- High performers still fail
- Some jobs are more prone to failure
- Failures are inevitable
- Most failures are small or can be mitigated to be small
- Need clear lines between at-fault and faultless failures
- Accept failure and mitigate
- Generalize the problem
Types of failure
Active failure: someone commits a stupid; immediate damage, typically known as an incident
Latent conditions: Create or exacerbate opportunities for error; poor training, unclear roles, a low-trust environment, lack of maintenance
Organization Modes
Routine Mode
- Traditional hierarchical environment
- Sets unambiguous goals
Dynamic Mode
- Defers to the judgement of field experts
Highly reliable orgs
- Often have a large proportion of ex-military staff
- Set clear goals and roles that everyone understands
- Designate experts as owners of a crisis and grant them autonomy
- Actively concerned with the possibility of failures
- Speculate on future failures to train against
Quotations
Person approach
focuses on the unsafe acts—errors and procedural violations—of people at the sharp end: nurses, physicians, surgeons, anaesthetists, pharmacists, and the like. It views these unsafe acts as arising primarily from aberrant mental processes such as forgetfulness, inattention, poor motivation, carelessness, negligence, and recklessness
Followers of this approach tend to treat errors as moral issues, assuming that bad things happen to bad people—what psychologists have called the just world hypothesis.
System approach
humans are fallible and errors are to be expected
Errors are seen as consequences rather than causes, having their origins not so much in the perversity of human nature as in “upstream” systemic factors. These include recurrent error traps in the workplace and the organisational processes that give rise to them. Countermeasures are based on the assumption that though we cannot change the human condition, we can change the conditions under which humans work
When an adverse event occurs, the important issue is not who blundered, but how and why the defences failed.
Blaming individuals is emotionally more satisfying than targeting institutions.
In aviation maintenance—a hands-on activity similar to medical practice in many respects—some 90% of quality lapses were judged as blameless.
Effective risk management depends crucially on establishing a reporting culture
Trust is a key element of a reporting culture
requires the existence of a just culture—one possessing a collective understanding of where the line should be drawn between blameless and blameworthy actions.
focusing on the individual origins of error, it isolates unsafe acts from their system context
Firstly, it is often the best people who make the worst mistakes—error is not the monopoly of an unfortunate few.
Secondly, far from being random, mishaps tend to fall into recurrent patterns. The same set of circumstances can provoke similar errors, regardless of the people involved.
The Swiss cheese model of system accidents
Their function is to protect potential victims and assets from local hazards.
In an ideal world each defensive layer would be intact. In reality, however, they are more like slices of Swiss cheese, having many holes—though unlike in the cheese, these holes are continually opening, shutting, and shifting their location.
Active failures are the unsafe acts committed by people who are in direct contact with the patient or system.
virtually all such acts have a causal history that extends back in time and up through the levels of the system.
Latent conditions are the inevitable “resident pathogens” within the system.
Latent conditions—as the term suggests—may lie dormant within the system for many years before they combine with active failures and local triggers to create an accident opportunity.
We cannot change the human condition, but we can change the conditions under which humans work
active failures are like mosquitoes. They can be swatted one by one, but they still keep coming. The best remedies are to create more effective defences and to drain the swamps in which they breed.
Error management has two components: limiting the incidence of dangerous errors and—since this will never be wholly effective—creating systems that are better able to tolerate the occurrence of errors and contain their damaging effects.
comprehensive management programme aimed at several different targets: the person, the team, the task, the workplace, and the institution as a whole
High reliability organisations—systems operating in hazardous conditions that have fewer than their fair share of adverse events—
Such a system has intrinsic “safety health”; it is able to withstand its operational dangers and yet still achieve its objectives.
safety sciences know more about what causes adverse events than about how they can best be avoided
a group of social scientists based mainly at Berkeley and the University of Michigan has sought to redress this imbalance by studying safety successes in organisations rather than their infrequent but more conspicuous failures
Most managers of traditional systems attribute human unreliability to unwanted variability and strive to eliminate it as far as possible.
In high reliability organisations, on the other hand, it is recognised that human variability in the shape of compensations and adaptations to changing events represents one of the system's most important safeguards
Paradoxically, this flexibility arises in part from a military tradition—even civilian high reliability organisations have a large proportion of ex-military staff.
Military organisations tend to define their goals in an unambiguous way and, for these bursts of semiautonomous activity to be successful, it is essential that all the participants clearly understand and share these aspirations.
Perhaps the most important distinguishing feature of high reliability organisations is their collective preoccupation with the possibility of failure. They expect to make errors and train their workforce to recognise and recover them.
They continually rehearse familiar scenarios of failure and strive hard to imagine novel ones.
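Sketch: Swiss cheese model as probabilities
Not from the paper; a minimal Python sketch under assumptions of my own: each defensive layer has an independent per-moment "hole" probability, and a hypothetical latent condition (the latent_p and widened parameters are invented for illustration) widens every layer's hole at the same moment. It only illustrates why layered defences multiply safety and why latent conditions ("resident pathogens") quietly erode it.

import random

rng = random.Random(42)

def holes_align(hole_probs):
    # An accident trajectory exists only when a hole is open in every layer.
    return all(rng.random() < p for p in hole_probs)

def accident_rate(hole_probs, trials=200_000, latent_p=0.0, widened=0.5):
    # Monte Carlo estimate. With probability latent_p a latent condition
    # (e.g. chronic understaffing) is active and widens every layer's hole
    # to `widened` at once: a common cause that correlates the layers.
    hits = 0
    for _ in range(trials):
        probs = [widened] * len(hole_probs) if rng.random() < latent_p else hole_probs
        hits += holes_align(probs)
    return hits / trials

layers = [0.1, 0.1, 0.1]                     # per-layer hole probability
print(accident_rate(layers))                 # ~0.001 = 0.1**3: independent layers multiply
print(accident_rate(layers, latent_p=0.05))  # ~0.007: one resident pathogen, roughly 7x the risk

Swatting individual errors shrinks one layer's holes; "draining the swamp" lowers latent_p, which pays off across every layer at once.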