Best Practice & Research Clinical Anaesthesiology
Vol. 19, No. 4, pp. 539–557, 2005
doi:10.1016/j.bpa.2005.07.005
available online at http://www.sciencedirect.com
Safety culture and crisis resource management
in airway management: General principles
to enhance patient safety in critical
airway situations
Marcus Rall*
Director, Anaesthesiologist
Peter Dieckmann
Psychologist, Instructor Trainer
Centre for Patient Safety and Simulation (TuPASS), Department of Anaesthesiology and Intensive Care Medicine,
University Hospital Tuebingen, D-72076 Tuebingen, Germany
Airway management is a cornerstone of patient safety in anaesthesiology and in emergency
and critical care medicine. Deficiencies in airway management could have catastrophic results
for the patient. In anaesthesia patients, in particular, a high level of safety should be expected.
It has been proven in other high-risk and complex industrial fields that obtaining very high
levels of safety requires special strategies and safety philosophies in order to guarantee
long-term low-risk production. The concept of safety culture has spread across many
industries and, more recently, into medicine. Concepts of high reliability organizations
(HROs) are now ready to be adapted to medicine and offer promising improvements in health care. This paper
applies some of the HRO principles to airway management and illustrates how to transform
more general strategies to practical application in the clinical world. This includes the use of
key elements of crisis resource management (CRM) and the development of a checklist for
safety in airway management.
Key words: system safety; safety culture; high reliability organization (HRO); root cause
analysis (RCA); failure mode effects analysis (FMEA); incident reporting system (IRS); airway
management.
* Corresponding author
E-mail address: marcus.rall@med.uni-tuebingen.de (M. Rall).
1521-6896/$ - see front matter © 2005 Elsevier Ltd. All rights reserved.
540 M. Rall and P. Dieckmann
The safety margin in airway management can be greatly improved by a first and very
important rule: always be prepared for the expected and unexpected difficult airway. In
the future we need to investigate tools that help health-care professionals to act safely
while securing the airway, including the best use of simulation training.
AIRWAY MANAGEMENT AS A CRITICAL MARKER FOR SAFETY
CULTURE
Airway management (AM) is a cornerstone of safe patient treatment in anaesthesia, but
also in intensive care and emergency medicine.1–12 Airway management is the ‘A’ in the
ABC chain of patient management, from resuscitation to trauma algorithms.11,13–15
Hypoxia, the common final consequence of failures in airway management, may cause
irreversible brain damage within 5 minutes. Securing the airway is thus, next to early
defibrillation in ventricular fibrillation, extremely time-dependent; failure is potentially
life-threatening, which makes it a very stressful and critical task. AM is therefore a typical
high-risk, high-reliability procedure in medicine. Consequently, the safety culture in AM can
be considered as a prominent marker for the overall safety culture of an organization.
Safety culture is defined by NASA as ‘a collection of characteristics and attitudes in
an organization, promoted by its leaders and internalized by its members, that makes
safety an overriding priority’.16 AM is therefore a good opportunity for starting to
implement modern safety-culture strategies in a health-care department.
AM is a prototypical example of a predictable critical situation, although the precise
occurrence of difficult airway management is not known. Everyone who takes care of
the airway in patients knows that there can always be an unpredicted (or overlooked)
difficult airway. From the patient’s perspective (and his lawyer’s), too, difficult AM is a
typical, quantifiable and almost predictable complication. Patients expect health-care
providers to be sufficiently prepared to deal with an expected or unexpected
difficult airway situation. And precisely because difficult airway situations do not occur
frequently in routine care, patients must be able to assume not only that such situations
are generally anticipated, but also that the team is trained to handle the acute
complications once they arise (e.g. through simulation training17,21 and proper technical equipment).
How can modern safety culture concepts be applied to airway management? Which
principles of the high reliability organization’s (HROs) theory are of special importance
for AM, and how can the concept of crisis resource management (CRM)18,19 be
adopted to prevent and manage difficult airway situations?
Admittedly, there is not yet much of an evidence base for the benefits of these concepts for
airway management. Nevertheless, many of the concepts of CRM or HRO have been
proved to be successful safety strategies in other high-risk domains (e.g. aviation,
nuclear power industry, etc.). They should be carefully adapted to the field of medicine
in general and to airway management in particular.20
THE HIGH RELIABILITY ORGANIZATION (HRO): APPLIED SAFETY
CULTURE AT ITS BEST, AND MORE
HRO theory was described by Rochlin, Weick, Roberts and others.22–31 Several
high-risk but high-reliability domains—such as commercial aviation, the nuclear power
industry, and especially military aircraft carriers—can be seen as examples of HROs.
The theory of HRO offers interesting perspectives for modern risk management and is
already being tackled by the anaesthesia community.32
Anaesthesiology has traditionally been strongly interested in systematic safety
strategies, particularly in redundancy and technical safety measures. More recently,
research in anaesthesiology has focused on patient safety questions.33 Indeed, there is
now growing interest among anaesthesiologists in the importance of the so-called soft
skills (‘human factors,’ communication, CRM)34–38 and in the implementation of HRO
philosophies and techniques in anaesthesiology practice. In 2003 the Anaesthesia
Patient Safety Foundation got involved in high reliability peri-operative health care.39
Table 1 gives a summary of the key points of an HRO in medicine.
In HROs safety is a predominant issue, with the highest priority for everybody in the
organization at any time. The safety culture philosophy is one of its cornerstones. In a
safety culture environment the first question after an accident is not ‘who is to blame?’
but rather ‘why was this possible and what can be done to prevent it the next time?’.
The key elements of a safety culture in health care are shown in Table 1.
In safety culture, ‘culture’ is made up of values (what is important), beliefs (how
things should work) and norms (the way things work).32 Figure 1 gives an overview of
the critical elements of a safety culture.
As can be seen in Table 1, safety culture is a very important part of HRO, but there
are other important issues in an HRO:
† optimal organizational structures and procedures;
† initial and recurrent training and practice in routine and emergency simulations;
† thorough organizational learning.
The safety culture can be seen as the framework and prerequisite for the
implementation of the other aspects of HRO theory.
Some tools that are part of HRO deserve to be highlighted in view of patient safety:
† incident-reporting systems (IRS)40–59;
† root cause analysis (RCA)60–65; and
† failure mode effects analysis (FMEA).66–68
Some of them have been extensively used in other industries for decades, but have
only recently found their way into medicine. Because of their (future) importance,
these tools are briefly described in Tables 2–4.
Safety culture and HRO in contrast to the ‘old medical view’
In a safety culture (and in an HRO), safety is not seen as something that simply exists in the
organization until an accident happens. Safety is seen as a dynamic non-event69–72: it is
not automatically given, but has to be actively achieved every day. In HROs, in
contrast to the traditional view of health-care personnel, it is assumed that something
unwanted will happen unless something is done to prevent it. HROs actively look for
error possibilities and then try to find ways to reduce error frequency. Error detection
is enhanced, and everybody is prepared to manage the unwanted negative
consequences of errors. A central message is that it will never be possible to avoid all
errors; it is therefore necessary to design the work system to be as error-friendly
as possible, reducing an error’s negative consequences as much as possible.
Table 1. Key elements of a high reliability organization (HRO) in medicine.
Part 1: Proactive Safety Culture
Values
† Safety as THE most important goal, overriding production or efficiency
† Preoccupation with possible ‘failures’ rather than past ‘successes’96
† Necessary resources, incentives, and rewards are provided for optimal safety, not only for optimal
production
Beliefs
† Safety must be actively managed
† Processes and routines of care are as important to safety as (or more important than) individual
dedication, skill, or effort (CRM principles are implemented in routine work)
† Openness about safety and errors is essential; learning from normal and adverse events should be
through e.g. incident reporting (IRS), root cause analysis (RCA), and failure mode effects analysis (FMEA)
Norms
† Low-ranking personnel raise safety issues and challenge ambiguity regardless of hierarchy or rank
† Calling for help is encouraged and occurs frequently, even (or especially) by experienced personnel
† Explicit communication is frequent
† The hierarchy is flat: leaders listen to juniors, juniors are encouraged to speak up, calling for help is
routine regardless of rank
† People are rewarded for rationally erring on the side of safety, even when their credible concerns turn
out to be wrong
Part 2: Optimal Structures and Procedures:
† Decision-making rests with those with greatest knowledge or experience about specific issues,
regardless of rank or job type
† The unit integrates crews from different departments (e.g. cardiac surgery, cardiac anaesthesia, OR
nursing, perfusion, ICU) into a coherent clinical team. Teamwork and resilience is emphasized
† There are formal procedures to maximize information transfer to all team members before a case (e.g.
briefings or time-out procedures); CRM elements are enforced to reduce human factor-related errors
† Schedules are designed to keep work hours and duty periods to reasonable levels, avoiding undue
fatigue. Personnel under excessive stress are supported or replaced as needed
† Standardized procedures, techniques, and equipment are adopted wherever possible so that similar
tasks or operations are performed similarly regardless of personnel involved; conversely, when
necessary (in an emergency or adverse event) the team is resilient and responds as needed to the
situation without slavish dependence on standard routines
† The use of pre-planned algorithms, checklists and cognitive aids is actively encouraged
† Easy access to current information systems is available at all times and all locations
Part 3: Training and Practice in Routine Cases and Simulations
† Debriefings are conducted after each case
† Non-punitive assessment instruments are used on a regular basis to provide current feedback and
identify elements requiring special training
† There is initial and recurrent simulation-based single-discipline and multidisciplinary training in crew
resource management (CRM)
† Actual clinical crews and teams conduct periodic drills or simulations of critical situations in the real
OR, PACU, and ICU
† Resident training uses guided curriculum; training goals and level of responsibility assigned to resident
matches current proficiency level of trainee with complexity of case
Part 4: Organizational Learning
† Robust mechanisms are in regular use for organizational learning, both prospectively (considering in
advance how to optimize protocols and procedures, e.g. FMEA) and retrospectively (from analysing
reports of adverse events, near misses, or problems, e.g. Incident Reporting and RCA)
† Problems are analysed primarily to determine what can be improved rather than who to blame. Altered
procedures are assessed and adopted as appropriate. Process changes reflect appropriate analysis
From Rall and Gaba (2005, Miller’s Anesthesia 6th edition; Philadelphia: Elsevier Churchill Livingstone, pp
3021–3072) with permission.
[Figure 1 shows a circle of safety-culture elements: HRO principles; ‘errors happen frequently’; team training/simulation/CRM; prospective FMEA + RCA; stop the culture of blame; learn from errors (positive aspect); risk management; human factors education; and incident reporting, all resting on commitment.]
Figure 1. The circle of safety culture. The common prerequisite is commitment to safety. HRO, high
reliability organization; CRM, crisis resource management; FMEA, failure mode effects analysis; RCA, root
cause analysis.
In medicine, however, health-care providers tend to close their eyes to the possibility of
errors, hope for the best, and do not even want to think about the worst case
occurring. Practitioners are assumed to be error-free in performing tasks and making
decisions, and they overestimate their own performance under (physical and
psychological) stress.73–77 In HROs it is accepted that everybody makes errors78, that even the
best make errors (sometimes the worst ones), and that performance levels decrease with all kinds
of stress. Countermeasures and strategies to deal with these issues are implemented.79–83
SAFETY CULTURE AS SEEN FROM THE 4-P MODEL
Many ideas and concepts concerning safety and the analysis of problems in health-care
delivery may be found in the aviation industry. An interesting model is the 4-P model
from Degani and Wiener84, in an adapted version from Wiegmann and Shappell85,
where the four Ps represent philosophy, policies, procedures and practices. The first three
Ps are normative in nature, and only the last one, practices, is descriptive. That is to say
while practice describes what is actually done in daily business, all the other Ps stand for
norms for how the daily business should function. In Table 5, examples are given for
every P, both for a ‘safe’ HRO-like hospital and for a ‘traditional’ error-neglecting
hospital. The reader is invited to look at safety from a different perspective by
analysing similarities and differences between their own organization and the
characteristics of an HRO (Table 1).
The 4-P model can provide a general framework for the analysis of incidents and
accidents in an organization. It promotes the idea that many issues which may lead to an
error at the sharp end of patient care may be rooted in latent organizational factors.
Table 2. Prerequisites and key elements for an effective incident reporting system (IRS).
†Promotes a positive safety culture
†The system is non-punitive for the reporting individual or institution
†Reporting is confidential, with the possibility of anonymous reporting (with active further anonymization by experts)
†The reporting system is independent from regulatory authorities
†Both near-misses and actual accidents can be reported
†Reporting is simple but provides adequate information
†Reports are analysed by experts; analysis is oriented to systems and human factors
†Direct reactions of the organization following reports
†Warnings, safety precautions and system changes follow from the analysed reports
†Relevance of the reports (triggering RCA, FMEA)
†Information documented about all reports
†Organizational changes due to reports
†Education of personnel in human factors and error development
†Team members are provided with incentives for reporting
The implementation of effective risk management strategies makes an active and effective IRS
indispensable. The literature on IRS is extensive, and there are many systems in place.30–37 The Swiss
critical incident reporting system (CIRS; https://www.cirsmedical.ch) started early and is available as an
‘open source’. A new large-scale system is the patient safety reporting system (PSRS) of the Veterans
Affairs Hospitals run by the National Aeronautics and Space Administration (NASA) (http://psrs.arc.nasa.
gov) and the system of the NPSA in the UK (www.npsa.nhs.uk). The first author chairs a committee to
implement a national IRS in Germany. Both authors run a closed-loop system for different institutions in
Germany (preliminary information at www.pasis.de).
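As an illustration of the non-punitive, anonymizing reporting flow described above, here is a minimal sketch; the record fields and the `anonymize` step are hypothetical illustrations, not the schema of any real IRS.

```python
from dataclasses import dataclass, replace
from typing import Optional

@dataclass(frozen=True)
class IncidentReport:
    """One report in a hypothetical incident reporting system (IRS)."""
    description: str                 # free-text account of the event
    near_miss: bool                  # both near-misses and accidents are reportable
    reporter: Optional[str] = None   # optional; stripped before analysis

def anonymize(report: IncidentReport) -> IncidentReport:
    """Active anonymization: identifying data is removed so that
    reporting stays non-punitive for the individual."""
    return replace(report, reporter=None)

report = IncidentReport(
    description="Unexpected difficult mask ventilation; help called late",
    near_miss=True,
    reporter="Dr X",
)
safe_report = anonymize(report)
print(safe_report.reporter)      # analysts never see who reported
print(safe_report.near_miss)     # near-misses count too
```

The frozen dataclass makes reports immutable, so anonymization produces a new record rather than altering the original submission.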
When analysing incidents and accidents it is important to distinguish precisely between
errors and violations.
Failures, errors and violations and their impact on systematic patient safety
Failure78: ‘I did the right thing, but failed to reach the intended outcome’
A failure happens if somebody planned the right actions and used the right tools and
procedures, but was not successful in the outcome.
Table 3. Key elements of root cause analysis (RCA).
†Retrospective analysis of factors and causes that lead to error/failure
†Repeated ‘why’ questions, which go deeper and deeper along the chain of ‘accident’ causation
†It is important to not only describe (‘what?’) but to analyse (‘why?’)
†There might be many root causes
†Causal connection of latent conditions and factors with the incident
†Every real root cause has to be a conditio sine qua non (an essential precondition)
†The search for factors must not be stopped too early (on the surface of the event), because otherwise it
will not find the real cause
†To be performed by a team of error experts and involved personnel
†Ideally guided along a set of possible root causes (e.g. Veterans Affairs RCA booklet)
†Root causes, in particular, have to be acted upon
The RCA is a tool to find underlying reasons, causes and contributing factors of an incident or accident. It
is a retrospective way to learn as much as possible from events that have already occurred. It is done by
posing repeated ‘why’ questions to the case.
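The repeated-‘why’ drill-down at the heart of an RCA can be sketched as a simple chain; the incident and its causes below are invented illustrations, not cases from the article.

```python
# Each step pairs a 'why' question with its answer; the next question
# digs into the previous answer, going deeper along the causal chain.
why_chain = [
    ("Why was intubation delayed?",
     "The fibre-optic scope was not available"),
    ("Why was the scope not available?",
     "It was not returned after the previous case"),
    ("Why was it not returned?",
     "No procedure assigns responsibility for returning it"),
    ("Why is there no such procedure?",
     "Equipment handover was never systematically analysed"),
]

def candidate_root_cause(chain):
    """The deepest answer is the candidate root cause; it must still be
    checked as a conditio sine qua non (essential precondition)."""
    return chain[-1][1]

for question, answer in why_chain:
    print(f"{question} -> {answer}")
print("Candidate root cause:", candidate_root_cause(why_chain))
```

Note that stopping after the first or second ‘why’ would have blamed the individual at the sharp end; only the deeper questions reach the latent organizational condition.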
Table 4. Key elements of failure mode effects analysis (FMEA).
†Proactive, prospective analysis of potential error sources (critical areas)
†Priorities chosen according to a risk matrix (≈ frequency × risk)
†Teamwork (interdisciplinary)
†Involvement of high-level support
†Inclusion of root cause analysis of the prospectively identified causal factors
†Report to the executive level and feedback to the personnel
†Identified problems and insufficiencies have to be improved. If no changes follow an FMEA, the activities
and motivations will dry up
The FMEA is a proactive way of identifying problems and error-prone conditions in systems or devices
before they occur.25–29 Good human factors engineering design includes FMEA in the development
process of devices. As our health-care systems have been more ‘grown’ than ‘designed,’ FMEA offers a
huge potential for improving processes on a systematic basis and becomes a true primary prevention of
errors and harm to patients.
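The risk-matrix prioritization in Table 4 can be sketched as follows, assuming the common formulation in which priority is the product of frequency and severity scores; the failure modes and scores listed are illustrative assumptions, not data from the article.

```python
def risk_priority(frequency: int, severity: int) -> int:
    """Risk matrix score (~ frequency x risk): higher means the
    failure mode should be analysed and improved first."""
    return frequency * severity

# Prospectively identified potential failure modes (illustrative),
# each scored 1-5 for how often it occurs and how severe it is.
failure_modes = [
    ("fibre-optic scope not ready at induction", 3, 5),
    ("suction device unchecked before induction", 2, 4),
    ("no second anaesthetist reachable at night", 2, 5),
]

# Rank by priority so the interdisciplinary team works top-down.
ranked = sorted(failure_modes,
                key=lambda m: risk_priority(m[1], m[2]),
                reverse=True)

for name, freq, sev in ranked:
    print(f"priority {risk_priority(freq, sev):2d}: {name}")
```

The ranking is only the entry point: each top-ranked failure mode then gets a root cause analysis of its prospectively identified causal factors, as the table states.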
For AM, a failure could mean, for example: ‘I knew the patient with the neck carcinoma could be
a difficult airway patient, I prepared the fibre-optic, but despite this I was not able to place the
tube in the trachea.’
Error78,86,87: ‘I didn’t do the right thing’
There are different types of errors: e.g. rule-based, skill-based and knowledge-based.
Errors can happen in the planning phase or in the execution phase of an action. An
example would be if somebody planned the right action and even prepared the right
tools, but then did not apply them correctly.
For AM this could mean, for example, ‘I prepared the fibre-optic and alternative
airway devices, but then in the stress situation I forgot to apply and use them.’
Violation69,88: ‘I knew it, but didn’t want to do the right thing’
A violation happens if someone knows what should be done and how it should be done,
but deliberately chooses not to do it or not do it in the right way; the reasons for this
decision might be manifold.
For AM this could mean, for example, ‘I know that I should prepare the fibre-optic
for patients with a history of pharyngeal carcinoma, but I think in this case I won’t need
it and will start with a conventional intubation without any back-up.’
Regarding the 4-P model, most failures and errors are related to procedures and
practice, whereas violations are more likely to be related to policies and philosophy. This
differentiation is important when investigating the real causes of a problem. It is not
very likely that someone who experienced a failure or an error has a problem
concerning his general attitude towards safety (philosophy and policies). However,
someone who violates an existing rule might not lack knowledge of the related
procedures or practices, but may have a problem with the safety-culture principles.
It is, however, important to note that many violations are ‘committed’ with
the best intentions: people try to be faster, better, or even safer.
Nature of violations
Especially regarding violations, it might be very difficult to find out why an individual did
something which s(he) knew was not the correct thing to do. It might be helpful to look
Table 5. The 4-P model applied to medicine, with positive/negative culture examples contrasting ‘the safe hospital’ (HRO/CRM applied) with ‘the production-pressure hospital’ (short-term vision).

† Philosophy: the general ideas, aims and culture within the organization of how to conduct its business
Safe hospital: Safety has an overriding priority; the ‘safety first’ principle applies. Patient safety is the most important factor for the long-term success of the institution. ‘If you think safety is expensive, try accidents.’ There is a blame-free error culture in order to learn from mistakes and find underlying systems failures.
Production-pressure hospital: If nothing happens, it is safe enough. We can only provide the safety level which is paid for by the insurance. We earn our money with patient turnover; if we get less money, the safety level will go down, too. You have to take chances. Money counts in a business.

† Policies: the specific aims and the way the organizational operations are to be done in general
Safe hospital: Safety is part of all policies and plays an outstanding role throughout patient care. Safety is rewarded; risky behaviours are discouraged even if nothing happened. Violations are taken seriously; the reasons for their occurrence are carefully sought and attempts made to eliminate them. Non-punitive incident reporting systems (IRSs) are in place and highly valued. Feedback is fast and open to all members of a team. Effective countermeasures are taken in time.
Production-pressure hospital: Patient turnover is key for all policies. Safety is not a special feature; it is assumed that everything is safe if people take care. Violations are tolerated as long as nothing bad happens; as long as no patient is harmed, it is speed which counts on the market. IRSs are not in place, because they would suggest to the customer (patient) that the institution is not error-free. Critical incidents are hidden as much as legally possible.

† Procedures: the way and environment in which certain actions have to be performed
Safe hospital: Safety is built into all procedures. Failure mode effects analysis (FMEA) has been performed proactively to adapt procedures to all safety risks to be expected. Safety should not depend on individuals alone; there should be double layers and safety margins for the prevention of harm, the early detection of safety-critical conditions, and the management (amelioration) of crisis situations (CRM).
Production-pressure hospital: Procedures are designed to provide the fastest patient turnover. Safety is solely the responsibility of the health-care provider at the sharp end; they ‘have to take care’. If errors happen, the individual is blamed and sanctioned; errors should not happen, therefore nothing is planned for the case that they do happen.

† Practices: the way in which actions are actually done at the sharp end of clinical reality
Safe hospital: It is assumed that practices will deviate from the theoretical philosophy, policies and defined procedures; the deviation is handled systematically in order to adapt the policies to real-world requirements and to improve practice towards the planned ideal. Organizational learning is in place during routine situations. Typical and rare emergencies are trained for using simulations wherever possible. Calling for help is routine, even for experienced people. Critical incidents are managed with non-punitive IRSs and root cause analysis (RCA). Executives are very well aware of the real working conditions and problems at the sharp end. It is known that safety is the goal, but that it can never be 100% reached.
Production-pressure hospital: It is assumed that the motivated health-care provider adheres 100% to the policies and procedures defined (by the executives at the table top). Deviations are attributed to lack of compliance and sanctioned (even if the individual wanted to do their best). If it seems hard for the workforce at the sharp end to comply with the procedures, more pressure is applied; only rarely are the procedures adapted. IRSs are not in place. The gap between what executives think is going on at the sharp end and what is really going on is large; this repeatedly leads to misconceptions and erroneous policies.
for the pros and cons of committing the violation. One needs to keep in mind that
violations are done because the apparent benefits for the individual are greater than the
potential costs of the violation. Nevertheless, this analysis will be difficult because the
reasons may be related to many dimensions of patient care, e.g. organization
and personality of the health-care provider. Many violations are committed with the
best will for the patient or the organization. Thus it would be wrong to assume that all
health-care providers who commit violations in practice are demotivated, lazy or
careless people. The opposite may be true!
Furthermore, the motivation for the violation of rules may result from other direct
requirements of the company. The following example may illustrate this. A train driver
in Japan who was responsible for the deaths of 100 people committed the violation of
disregarding the speed limit of his train in order to be on schedule and thus to meet the
train company’s goals. The penalties for being delayed were tremendously high for this
company.
In general (non-commercial) aviation it has been shown that fatal accidents are about three
times as likely as non-fatal accidents to be related to violations of safety-relevant rules.85
How violations become errors
The development of violations is an important issue. First, a health-care provider
commits a violation (for whatever reason). Following the mechanism of the
‘normalization of deviance’,89 this violation can then be transformed into a new
(unwanted) routine operation. If another practitioner subsequently performs this ‘violation’, it is
no longer a real violation but an error (because the deviation from the correct
protocol is no longer intended). Again, these aspects are very important and have to be
understood in detail when investigating incidents in order to improve the safety culture
systematically.
It should be pointed out that intentional, wilful patient harm and other criminal acts
are not addressed in this article. Such acts are obviously not tolerated by any safety
culture and have to be dealt with by other means.
CRISIS RESOURCE MANAGEMENT APPLIED TO DIFFICULT AIRWAY
MANAGEMENT
Many errors in medicine are due not to insufficient medical knowledge but to the
difficulties in transforming the theoretical knowledge into meaningful clinical actions
under the real-world conditions of patient care. In complex working systems such as
anaesthesia, decisions have to be made under uncertainty and ‘production’ pressure.
Working in multiprofessional teams requires coordination and communication skills,
which are not taught at all in medical school. Up to 70% of all errors can be attributed to
problems with so-called human factors. One possibility for reducing human-factor-induced
errors is the CRM concept.
What is CRM?
Crisis resource management (CRM) was introduced in aviation91 and proved successful
in many other industries; it was transferred and adapted to medicine by Gaba and
colleagues as Anaesthesia Crisis Resource Management.18 CRM means coordinating,
using and applying all available resources to protect and help the patient in the best way
possible. Resources include all humans involved, with all their skills, abilities and
attitudes as well as their limitations. Resources also include machines and devices. The
CRM key points — a selection of the most important of which have been published in a
Table 6. Key points of crisis resource management (CRM).32
1. Know the environment 9. Prevent and manage fixation errors
2. Anticipate and plan 10. Cross (double) check
3. Call for help early 11. Use cognitive aids
4. Exercise leadership and followership 12. Re-evaluate repeatedly
5. Distribute the workload 13. Use good teamwork
6. Mobilize all available resources 14. Allocate attention wisely
7. Communicate effectively 15. Set priorities dynamically
8. Use all available information
From Rall and Gaba (2005, Miller’s Anesthesia 6th edition; Philadelphia: Elsevier Churchill Livingstone, pp
3021–3072) with permission.
revised version recently32— are shown in Table 6. The key points are derived from
cognitive science, human factors and error research. Ideally, the professional
application of all these key points would have prevented most of the errors that have
occurred to date.
How does CRM work?
CRM begins before the crisis. All the principles that help to deal with an acute crisis also
help to avoid the crisis. CRM aims to detect errors as soon as possible and to
minimize, by effective crisis management, the negative consequences of errors that have
already occurred. The CRM concept acknowledges that humans are error-prone in
some aspects and have limitations in others. The aim is to best use the strengths of
human beings and have strategies at hand to deal with their limitations and situational
‘error traps’. The CRM concept is larger than team and communication skills, and it
covers individual cognitive aspects such as fixation errors, allocation of attention and
anticipation.
Traditionally, the CRM principles are taught and trained in realistic simulations
using video debriefing for self-reflection. During the simulations the complex and
delicate balance between the three factors involved (humans, technology and
organization) becomes visible and can be discussed. Variants of the
anaesthesia crisis resource management (ACRM) courses developed by Gaba et al
are now a gold standard for simulation training worldwide.93–95
CRM examples for airway management
Several CRM key points in the context of airway management are shown in Table 7,
illustrating that CRM principles have to be activated and applied in the specific
clinical setting. The main goals of good CRM applied to airway management
should be: (1) prevention of the ‘unexpected difficult airway,’ and (2) management of the
unexpected difficult airway and its complications.
In the following, a short checklist for safe airway management is presented. This
checklist does not claim to be complete and has to be adapted to the specific institutional
setting, the available skills, and the clinical requirements. It is intended to give anaesthesia
care providers something to start working with; it may help them to check the
institutional safety standard and improve it systematically, step by step.
Table 7. Examples of Crisis Resource Management (CRM) Key Points applied to Airway Management.
Point 1: know the environment. You should know how to operate the difficult airway
equipment, including back-up devices. You have to know which equipment is available
and how long it takes to get it. How do you get help at the different times of duty, and
within what time frame?

Point 2: anticipate and plan. Be prepared for the worst. Assess your patient carefully.
Make plans with your team for emergency situations. Always have a powerful suction
device ready and checked. Organize your back-up.

Point 3: call for help early. Calling for help does not indicate a lack of qualification. The
best practitioners call for help, get a second opinion and ensure that enough hands are
available. In the middle of an emergency, it is unlikely that you will think about calling
for help because you will be too involved. Make it a habit to call for help early, before
you are too deeply absorbed.

Point 6: mobilize all available resources. Use what you have (personnel and
equipment). Have the fibrescope prepared and on standby. Have the CO2 monitor
calibrated and connected. If you are in doubt, call the ENT surgeons to stand by for a
possible emergency tracheotomy. Have a second anaesthetist in the room.

Point 8: use all available information. Evaluate the airway with all available means (e.g.
common scores, fibrescope) and perform a risk assessment. Check the patient records
for information about the upper and lower (!) airway. Try to get first-hand information
about previous intubations. Look at the X-rays and CT scans.

Point 9: prevent and manage fixation errors. Humans are very susceptible to mental
model errors. Once we have found a first plausible explanation (diagnosis), we tend to
be satisfied and to hold on to this mental model of the situation. In airway management,
two common fixation errors are: (a) the assumption that we have intubated the trachea
('I saw the tube passing the cords'); and (b) the diagnosis of 'bronchospasm'. For (a), we
should rather assume that we have intubated the oesophagus and then prove to
ourselves that the tube is in fact in the trachea (CO2). For (b), we should be aware that
many oesophageal intubations and other mechanical airway problems are
(unsuccessfully) treated under the diagnosis of 'severe bronchospasm', so
bronchospasm should be a very carefully checked diagnosis.
Point 10: cross (double) check. 'Fast checks' (glances) risk amounting to no check at
all. Real checks of monitoring devices require some time to read out the numbers and
correlate them. Use your finger to read the monitor step by step. Do you have a CO2
curve after intubation? Did you set the ventilator/vaporizer correctly? Is the propofol
running at the desired rate? Check it 'hands-on'. Don't assume anything; double-check
everything.

Point 12: re-evaluate repeatedly. Many decisions in medicine are made on a weak
basis, with uncertain or missing information. Tell your team that you might have to
change the plan again. Do not stick to your opinion just because you are the leader! Stay
in doubt. Re-check your mental model, re-examine the patient using all information
channels and ask your team for their opinion. What is the greatest risk for the patient
now?

Point 15: set priorities dynamically. Feel free to change your mind whenever it seems
appropriate and as often as it appears necessary. In dynamic environments, decisions
and priorities have to be set dynamically. The patient does not need any particular
airway device inserted; he needs oxygen. Don't get absorbed by technical problems and
complications. Ensure adequate ventilation/oxygenation at all times. Don't wait too
long with life-saving measures: cricothyrotomy should be performed before asystole.
THE SAFETY CHECKLIST FOR AIRWAY MANAGEMENT
Key elements of safety culture and other general conditions for the prevention and
management of the difficult airway are:
† Accept that errors will happen to everybody and that they are frequent.
† Prevention is more important than management.
† Call for help early (this is a prominent element of safety culture).
† Analyse the needs and tasks of airway management in your institution.
† Perform a proactive failure mode and effects analysis.
† Remember that the impact of airway management errors on patient safety and risk is very high.
† Establish an expert team (AM task force) which defines equipment and training
needs.
† Have commonly accepted guidelines and algorithms readily available at all locations
and distributed to all personnel (not only doctors).
† Define emergency procedures (including how to get help).
† Ensure that all the needed equipment (checked repeatedly) is available at all
locations at all times and is ready to use.
† Select the right personnel for the right clinical areas.
† Establish routine training requirements adapted to the clinical needs and divided
into:
— initial education and training;
— recurrent training;
— specialist training;
— train-the-trainers programmes.
† Cover theoretical, technical and skills training (skills stations) as well as human
factors (CRM) training.
† Use realistic simulation training as much as possible for the whole team for all
emergency situations and rarely used techniques.
† Use the original equipment for training, train on the sharp end.
† Adapt your procedures and equipment to the findings and problems discovered
during realistic simulation training.
† Establish a learning atmosphere in routine work (every case performed is also
training).
† Emphasize human factors, CRM and team success.
† Enforce critical situation management principles (CRM) in daily routine.
† Establish a positive error climate:
— make sure everybody knows that safety has the highest priority;
— establish the principle of ‘safety first’;
— if something goes wrong, look for the error, not for the ‘guilty’ person;
— emphasize that everybody makes errors.
† Implement an anonymous incident reporting system.
† If problems do occur, perform a root cause analysis.
† React immediately to detected safety-critical events and sustain long-term efforts
to improve safety conditions.
† Perform detailed case discussions (prospective and retrospective).
† Adapt your guidelines and algorithms to lessons learned in practice.
† Try to change the organizational system, not the individual.
† Look over your own horizon—exchange ideas—invite ‘experts’—get expertise by
learning from others.
We are only at the beginning of understanding the connections and interplay of safety
culture, crisis resource management (CRM) and HRO theory. Huge efforts in research
and practice are needed to further improve patient safety. Incident reporting systems
with professional analysis may help to target these efforts at the most critical areas. We
hope that our article will help.
LINKS
http://www.apsf.org
http://www.patientsafety.gov (VA)
http://www.npsa.nhs.uk
http://www.jcaho.org/
http://anesthesia.stanford.edu/VASimulator
http://www.sesam.ws
http://www.medizin.uni-tuebingen.de/psz/english
http://www.npsf.org/
http://www.pasis.de
http://www.apsf.net.au
http://www.gasnet.org/societies/apsf
Practice points
† airway management is a prominent feature in patient safety and can serve as a
marker of how good the safety culture is in a clinical environment and whether
health-care providers are working in a high reliability organization or not
† the risk of errors in airway management is very high, so it is worth starting to
implement safety culture concepts in this field
† when analysing a high-risk system for potential errors it is important to
consider the intense interplay between factors on all levels of the organization
and within every department
† errors are common and have many causes. It is not possible to eliminate errors
completely. Learning from errors using incident reporting systems (IRS) and root
cause analysis (RCA) is essential to reduce their incidence. Moreover, it is important
to try to reduce the root causes of errors and to minimize their consequences
† the use of realistic simulation training as much as possible for difficult airway
management is of utmost importance. No one can be expected to perform
routine actions at a high professional level in emergency situations without
proper and recurrent team training
† safety is a dynamic non-event; safety does not automatically exist in an
organization, it has to be actively achieved on a daily basis
Research agenda
† up to 70% of errors in patient care are due to ‘human factors,’ including systems
safety and safety culture; until now almost no research on safety questions has
been done in the critical field of airway management
† besides errors (which cannot be eliminated completely), violations of safe
procedures are a threat to patient safety in airway management. However, violations
are often committed because health-care providers believe that they are acting for
the best. Further investigation of the costs and benefits of violations versus safe
procedures in airway management is needed
REFERENCES
1. Berkow LC. Strategies for airway management. Best Pract Res Clin Anaesthesiol 2004; 18: 531–548.
2. Brambrink AM, Meyer RR & Kretz FJ. [Management of pediatric airway—anatomy, physiology and new
developments in clinical practice]. Anaesthesiol Reanim 2003; 28: 144–151.
3. Brambrink AM & Koerner IP. Advanced trauma life support: how should we manage the airway, and who
should do it? Crit Care 2004; 8: 3–5.
4. Clancy M & Nolan J. Airway management in the emergency department. Emerg Med J 2002; 19: 2–3.
5. Heidegger T, Gerig HJ & Keller C. Comparison of algorithms for management of the difficult airway.
Anaesthesist 2003; 52: 381–392.
6. Hillman PR & Platt PR. Upper airway during anaesthesia. Br J Anaesth 2003; 91: 31–39.
7. Kuczkowski KM, Reisner LS & Benumof JL. Airway problems and new solutions for the obstetric patient.
J Clin Anesth 2003; 15: 552–563.
8. Mayhew JF. Airway management for oral and maxillofacial surgery. Int Anesthesiol Clin 2003; 41: 57–65.
9. McNiece WL & Dierdorf SF. The pediatric airway. Semin Pediatr Surg 2004; 13: 152–165.
10. Munnur U & Suresh MS. Airway problems in pregnancy. Crit Care Clin 2004; 20: 617–642.
11. Rosenblatt WH. Preoperative planning of airway management in critical care patients. Crit Care Med 2004;
32: S186–S192.
12. Wheeler M. Management strategies for the difficult pediatric airway. Middle East J Anesthesiol 2004; 17:
845–873.
13. Ezri T, Szmuk P, Warters RD, et al. Difficult airway management practice patterns among
anesthesiologists practicing in the United States: have we made any progress? J Clin Anesth 2003; 15:
418–422.
14. Mort TC. Emergency tracheal intubation: complications associated with repeated laryngoscopic
attempts. Anesth Analg 2004; 99: 607–613.
15. Rosenblatt WH. The Airway Approach Algorithm: a decision tree for organizing preoperative airway
information. J Clin Anesth 2004; 16: 312–316.
16. Columbia Accident Investigation Board. Columbia Accident Investigation Board Report Vol 1. 2003.
http://www.nasa.gov/columbia/home/CAIB_Vol1.html.
17. Rall M & Gaba D. In Miller R (ed.) Patient Simulators, Miller’s Anesthesia, 6th edn. Philadelphia: Elsevier
Churchill Livingstone, 2005, pp 3073–3104.
18. Gaba DM, Fish KJ & Howard SK. Crisis Management in Anesthesiology. New York: Churchill Livingstone
1994.
19. Howard SK, Gaba DM, Fish KJ et al. Anesthesia crisis resource management training: teaching
anesthesiologists to handle critical incidents. Aviat Space Environ Med 1992; 63: 763–770.
20. Kohn LT, Corrigan JM & Donaldson MS. To Err is Human—Building a Safer Health System. Washington:
National Academy Press; 1999.
21. Gaba DM. Improving anesthesiologists’ performance by simulating reality (editorial). Anesthesiology 1992;
76: 491–494.
22. Weick KE, Sutcliffe KM & Obstfeld D. Organizing for high reliability: processes of collective mindfulness.
Res Organ Behav 1999; 21: 81–123.
23. Klein RL, Bigley GA & Roberts KH. Organizational culture in high reliability organizations: an extension.
Hum Relat 1995; 48: 1–23.
24. Weick KE. South canyon revisited: lessons from high reliability organizations. Wildfire 1995; 4: 54–68.
25. Roberts KH, Stout SK & Halpern JJ. Decision dynamics in two high reliability military organizations. Manag
Sci 1994; 40: 614–624.
26. Roberts KH, Rousseau DM & La Porte TR. The culture of high reliability: quantitative and qualitative
assessment aboard nuclear powered aircraft carriers. J High Technol Manag Res 1994; 5: 141–161.
27. Roberts KH. Some characteristics of high reliability organizations. Organ Sci 1990; 1: 160–177.
28. Roberts KH. Managing high reliability organizations. Calif Manag Rev 1990; 32: 101–114.
29. Roberts KH. New challenges in organizational research: high reliability organizations. Ind Crisis Quart
1989; 3: 111–125.
30. Rochlin GI, La Porte TR & Roberts KH. Self-designing high reliability organization: aircraft carrier flight operations at
sea. Nav War Coll Rev 1987; 42(Autumn): 76–90.
31. Weick KE. Organizational culture as a source of high reliability. Calif Manag Rev 1987; XXIX: 112–127.
32. Rall M & Gaba D. In Miller R (ed.) Human Performance and Patient Safety, Miller’s Anesthesia, 6th edn.
Philadelphia: Elsevier Churchill Livingstone, 2005, pp 3021–3072.
33. Gaba DM. Anaesthesiology as a model for patient safety in health care. Br Med J 2000; 320: 785–788.
34. Flin R, Fletcher G, McGeorge P, et al. Anaesthetists’ attitudes to teamwork and safety. Anaesthesia 2003;
58: 233–242.
35. Singer SJ, Gaba DM, Geppert JJ, et al. The culture of safety: results of an organization-wide survey in 15
California hospitals. Qual Saf Health Care 2003; 12: 112–118.
36. Berwick DM. Patient safety: lessons from a novice. Adv Neonatal Care 2002; 2: 121–122.
37. Shojania KG, Wald H & Gross R. Understanding medical error and improving patient safety in the
inpatient setting. Med Clin North Am 2002; 86: 847–867.
38. Rall M, Manser T, Guggenberger H, et al. Patient safety and errors in medicine: development,
prevention and analyses of incidents. Anasthesiol Intensivmed Notfallmed Schmerzther 2001; 36:
321–330.
39. Gaba DM. Gaba: Safety first: ensuring quality care in the intensely productive environment—the HRO
model. APSF Newsletter 2003; 18 [www.apsf.org].
40. Dunn D. Incident reports—correcting processes and reducing errors. AORN J 2003; 78: 212 [see also
214–20, passim].
41. Takeda H, Matsumura Y, Nakajima K et al. Health care quality management by means
of an incident report system and an electronic patient record system. Int J Med Inf 2003; 69:
285–293.
42. Thomas AN, Pilkington CE & Greer R. Critical incident reporting in UK intensive care units: a postal
survey. J Eval Clin Pract 2003; 9: 59–68.
43. Dunn D. Incident reports—their purpose and scope. AORN J 2003; 46: 46–49.
44. Yong H & Kluger MT. Incident reporting in anaesthesia: a survey of practice in New Zealand. Anaesth
Intensive Care 2003; 31: 555–559.
45. Wu AW, Pronovost P & Morlock L. ICU incident reporting systems. J Crit Care 2002; 17: 86–94.
46. Kluger MT & Bullock MF. Recovery room incidents: a review of 419 reports from the Anaesthetic
Incident Monitoring Study (AIMS). Anaesthesia 2002; 57: 1060–1066.
47. Webster CS & Anderson DJ. A practical guide to the implementation of an effective incident reporting
scheme to reduce medication error on the hospital ward. Int J Nurs Pract 2002; 8: 176–183.
48. Wu AW, Pronovost P & Morlock L. ICU incident reporting systems. J Crit Care 2002; 17: 86–94.
49. Lawton R & Parker D. Barriers to incident reporting in a healthcare system. Qual Saf Health Care 2002; 11:
15–18.
50. Staender S. Incident reporting as a tool for error analysis in medicine. Z Arztl Fortbild Qualitatssich 2001;
95: 479–484.
51. Secker-Walker J & Taylor-Adams S. In Vincent C (ed.) Clinical Incident Reporting, Clinical Risk Management-
Enhancing Patient Safety, 2nd edn. London: BMJ Books, 2001, pp 419–438.
52. Sinclair M, Simmons S & Cyna A. Incidents in obstetric anaesthesia and analgesia: an analysis of 5000 AIMS
reports. Anaesth Intensive Care 1999; 27: 275–281.
53. Billings CE. Appendix B. Incident Reporting Systems in Medicine, and Experience with the Aviation Safety
Reporting System. Chicago: National Patient Safety Foundation; 1998.
54. Buckley TA, Short TG, Rowbottom YM & Oh TE. Critical incident reporting in the intensive care unit.
Anaesthesia 1997; 52: 403–409.
55. Beckmann U, Baldwin I, Hart GK & Runciman WB. The Australian incident monitoring study in intensive
care: AIMS-ICU. An analysis of the first year of reporting [see comments]. Anaesth Intensive Care 1996; 24:
320–329.
56. Williamson JA, Webb RK, Sellen A, et al. The Australian incident monitoring study. Human failure: an
analysis of 2000 incident reports. Anaesth Intensive Care 2000; 21: 678–683.
57. Runciman WB, Webb RK, Lee R & Holland R. System failure: an analysis of incident reports. Anaesth Intens
Care 2000; 21: 684–695.
58. Runciman WB, Webb RK, Lee R & Holland R. The Australian incident monitoring study. System failure: an
analysis of 2000 incident reports. Anaesth Intensive Care 2000; 21: 684–695.
59. Billings CE & Reynard WD. Human factors in aircraft incidents: results of a 7-year study. Aviat Space
Environ Med 1984; 55: 960–965.
60. Bagian JP, Gosbee J, Lee CZ, et al. The veterans affairs root cause analysis system in action. Jt Comm J Qual
Improv 2002; 28: 531–545.
61. Carroll JS, Rudolph JW & Hatakenaka S. Lessons learned from non-medical industries: root cause analysis
as culture change at a chemical plant. Qual Saf Health Care 2002; 11: 266–269.
62. Shinn JA. Root cause analysis: a method of addressing errors and patient risk. Prog Cardiovasc Nurs 2000;
15: 24–25.
63. Amo MF. Root cause analysis. A tool for understanding why accidents occur. Balance 1998; 2: 12–15.
64. VA National Center for Patient Safety (NCPS). Root cause analysis (RCA). http://www.va.gov/ncps/rca.html; 2005.
65. National Patient Safety Agency (NPSA). Root cause analysis (RCA) toolkit. http://www.npsa.nhs.uk/health/resources/root_cause_analysis/
conditions; 2005.
66. An introduction to FMEA. Using failure mode and effects analysis to meet JCAHO’s proactive risk
assessment requirement. Failure modes and effect analysis. Health Dev 2002; 31: 223–226.
67. Passey RD. Foresight begins with FMEA. Delivering accurate risk assessments. Med Device Technol 1999;
10: 88–92.
68. VA NCPS. Healthcare Failure Mode and Effect Analysis Course Materials (HFMEAe) 2005 [http://www.va.gov/
ncps/HFMEA.html].
69. Maurino DE, Reason J, Johnston N & Lee RB. Beyond Aviation Human Factors. Aldershot, England: Ashgate
Publishing Limited 1995.
70. Reason JT, Carthey J & de Leval MR. Diagnosing ‘vulnerable system syndrome’: an essential prerequisite
to effective risk management. Qual Health Care 2001; 10(supplement 2): ii21–ii25.
71. Reason J. Human error: models and management. Br Med J 2000; 320: 768–770.
72. Reason J. Managing the Risks of Organizational Accidents. Aldershot, England: Ashgate Publishing Limited
1997.
73. Sexton JB, Thomas EJ & Helmreich RL. Error, stress, and teamwork in medicine and aviation: cross
sectional surveys. Br Med J 2000; 320: 745–749.
74. Howard SK, Rosekind MR, Katz JD & Berry AJ. Fatigue in anesthesia: implications and strategies for
patient and provider safety. Anesthesiology 2002; 97: 1281–1294.
75. Gaba DM & Howard SK. Patient safety: fatigue among clinicians and the safety of patients. N Engl J Med
2002; 347: 1249–1255.
76. Gaba DM, Singer SJ, Sinaiko AD, et al. Differences in safety climate between hospital personnel and naval
aviators. Hum Factors 2003; 45: 173–185.
77. Grissinger M & Rich D. JCAHO: Meeting the standards for patient safety. Joint Commission on
Accreditation of Healthcare Organizations. J Am Pharm Assoc (Wash.) 2002; 42: S54–S55.
78. Reason JT. Human Error. Cambridge: Cambridge University Press 1990.
79. Rosekind MR, Boyd JN, Gregory KB, et al. Alertness management in 24/7 settings: lessons from aviation.
Occup Med 2002; 17: 247–259.
80. Dismukes K. Cockpit interruptions and distractions. ASRS Directline 1998; 10(4–9): 4–9.
81. Helmreich RL. On error management: lessons from aviation. Br Med J 2000; 320: 781–785.
82. Helmreich RL & Merritt AC. Culture at Work in Aviation and Medicine. Aldershot, UK: Ashgate Publishing
Limited, 1998, pp 172–174.
83. Helmreich R. Managing threat and error (syllabus/powerpoint).
84. Degani A & Wiener EL. Procedures in complex systems: the airline cockpit. IEEE Trans Syst Man Cybern A
Syst Hum 1997; 27: 302–312.
85. Wiegmann DA & Shappell SA. A Human Error Approach to Aviation Accident Analysis. Burlington: Ashgate,
2003.
86. Reason J. Human error: models and management. Western J Med 2000; 172: 393–396.
87. Rasmussen J. Skills, rules, knowledge: signals, signs and symbols and other distinctions in human
performance models. IEEE Trans Sys Man Cyber 1983; SMC-13: 257–267.
88. Reason J. Managing the Risks of Organizational Accidents. Aldershot: Ashgate, 1997.
89. Perrow C. Normal Accidents. Princeton: Princeton University Press, 1999.
91. Wiener E, Kanki B & Helmreich R. Cockpit Resource Management. San Diego: Academic Press; 1993.
93. Reznek M, Smith-Coggins R, Howard S, et al. Emergency Medicine Crisis Resource Management
(EMCRM): Pilot Study of a Simulation-based Crisis Management Course for Emergency Medicine. Acad
Emerg Med 2003; 10: 386–389.
94. Gaba DM, Howard SK, Fish KJ, et al. Simulation-based training in anesthesia crisis resource management
(ACRM): a decade of experience. Simulat Gaming 2001; 32: 175–193.
95. Kurrek MM & Fish KJ. Anaesthesia crisis resource management training: an intimidating concept, a
rewarding experience. Can J Anaesth 1996; 43: 430–434.
96. Weick KE. Organizational culture as a source of high reliability. Calif Manag Rev 1987; XXIX:
112–127.