Tuesday 16 December 2014

ASPiH 2014 annual conference: The plus and the delta (by M Moneypenny)

The 2014 Association for Simulated Practice in Healthcare (ASPiH) annual conference took place in Nottingham from the 11th to the 13th of November. The following is not a definitive overview of the conference, so if you feel strongly that something has been missed out, then please do comment at the bottom!

The plus

The keynote lectures were very good, with lots of food for thought and a few lightbulb moments.

ASPiH Laerdal keynote lecture by Hege Ersdal (SAFER, Norway): Low-dose high-frequency simulation: saving lives of babies on a global scale

Hege gave a moving keynote looking at work carried out in Tanzania. A 1-day course was delivered which tried to teach the importance of the "golden minute" after birth. She showed promising results: a 50% decrease in neonatal deaths in the region. Further analysis, however, showed that the 50% figure hid a significant disparity between centres: some had a much greater decrease, some had none. Hege and her team found that local implementation was hampered at some sites and that frequent onsite training was the major determinant of improvement (rather than having attended a 1-day course). The team therefore implemented mandatory low-dose high-frequency training in the labour ward, supported by local leaders and the hospital management. This led on to the argument that patient outcome depends on three factors:

  1. Science
  2. Educational Efficiency
  3. Local implementation

Hege also explained that their simulation was carried out using a technically low-fidelity mannequin (see Al May's great tweet), making the point that you don't need a SimNewB (or equivalent) for every labour ward.

In terms of "take home messages" for high-income countries, Hege felt that frequent in-situ simulation may be under-utilised.


Day 1 Closing keynote by Mark Gilman (Public Health England): Changing the bigger picture - the real driver to support new behaviours and lifestyles: lessons from addiction


Mark, a criminologist, talked about a smorgasbord of topics surrounding addiction. Unfortunately no amount of text can replace the experience of hearing Mark talk passionately about his work. He discussed how PHE was managing to keep people with addictions alive, keep them out of prison, control blood-borne viruses and, at times, rehabilitate them. Mark discussed how the "big picture" has a tremendous effect on rehabilitation. If you have no friends, no job and no money then what is the incentive to stay sober or drug-free? He finished by telling us about the "5 ways to wellbeing", which apply to everybody.



Keynote lecture by Justin Moseley (National Air Traffic Services): Towing the Iceberg: Can education and training change the culture of professional practice?

Justin started his lecture with an impressive video showing the air traffic over Europe in a 24-hour period. He then told us about an equally impressive simulated training programme, with a mandatory annual component as well as ad-hoc training for new equipment and procedures.

NATS has a dedicated team of expert investigators of "incidents", of which there are >2000/year in the UK. The team can release an immediate safety notice if necessary, interviews all involved parties and creates a report (full or abbreviated) for every incident. Reporting incidents at NATS is now natural and commonplace; it used to be "I'm going to file a report on YOU!"

According to Justin, despite air traffic control being part of the aviation industry, team resource management (TRM) is relatively new there, but an annual TRM assessment will be a European Aviation Safety Agency requirement in the near future.

In terms of advice for others, Justin recommends sharing incidents, near misses and experience. He also encourages a "Safety II" approach, trying to do less "Don't do that!" and more "Do more of this!" Lastly he talked about how a "Just Culture" (which is not a "No Blame" culture) underpins Safety II and echoes Sidney Dekker by calling for "the line" (which must not be crossed) to be drawn by those who are actively involved in the job.


Day 2 Closing Keynote by Sebastian Yuen: Engaging your community: Being the change you wish to see

Sebastian talked about the rise (or arrival) of the #SocialEra, with personalised healthcare and a fall in the power of big business. He talked about the power of communities and connections, and the need to engage with patients and people. He also talked about the effect of behaviour and showed a slide which referred to a statement from Public Health England. It said:
"Our effectiveness depends on how we behave so we will:

  • consistently spend our time on what we say we care about
  • work together, not undermine each other
  • speak well of each other, in public and in private
  • behave well, especially when things go wrong
  • keep our promises, small and large
  • speak with candour and courage"
This seems not a bad set of principles for simulation centres and hospitals.


The delta
Other than my lost suitcase there was very little I would change. One area of concern is the SimHeroes concept… With pre-conference heats, then semi-finals and finals, it perhaps brings some excitement to the conference. However, the notion of rating performance (in particular non-technical skills) makes one wonder about rater training, reliability, validity, etc. In addition, the focus on the performance in the simulation rather than the debrief perhaps emphasises the wrong aspect of simulation-based education. On the plus side, if the concept continues then we will quickly get a glimpse of what the future of gaming/beating the simulator will look like, as teams attempt to win first place.


See you next year

The next ASPiH annual conference is in Brighton, 3rd-5th November 2015. See you there?!



Friday 28 November 2014

Book of the month: Managing Maintenance Error (A Practical Guide) by Reason and Hobbs

About the authors

James Reason is Emeritus Professor of Psychology at the University of Manchester and one of the best-known names in the field of human factors (he came up with the Swiss cheese model of system failure).
Alan Hobbs is a Senior Research Associate at NASA's Human Systems Integration Division with a background as a human performance investigator at the Australian Bureau of Air Safety Investigation.

 

Who should read this book?

The authors' target readership is "people who manage, supervise or carry out maintenance activities in a wide range of industries." Simulation technicians and managers may therefore find some useful advice about maintenance of sophisticated equipment such as mannequins and audiovisual systems. The book will also appeal to human factors enthusiasts as it explores the unfamiliar world of the routine (maintenance) rather than the more familiar world (to simulation educators) of the crisis. The book will also be of interest to anybody who is involved in creating a "safety culture" or in analysing errors. Lastly, surgeons and other healthcare professionals who carry out maintenance-style tasks may enjoy this book. The authors talk of performing tasks in "poorly lit spaces with less-than-adequate tools, and usually under severe time pressure"; this may strike a chord with some...


In summary

The authors' main argument is that maintenance error is a major, but under-investigated, cause of system failure. As automation increases, the maintenance of the automated systems, which is still primarily carried out by human beings, can lead to failure by omission (not stopping a fault) or commission (introducing a fault).

The book consists of 12 chapters:

  1. Human Performance Problems in Maintenance
  2. The Human Risks
  3. The Fundamentals of Human Performance
  4. The Varieties of Error
  5. Local Error-provoking Factors
  6. Three System Failures and a Model of Organizational Accidents
  7. Principles of Error Management
  8. Person and Team Measures
  9. Workplace and Task Measures
  10. Organizational Measures
  11. Safety Culture
  12. Making it Happen: The Management of Error Management


In a bit more detail

Chapter 1: Human Performance Problems in Maintenance
Reason and Hobbs make their case about the importance of maintenance error and its effects (including Apollo 13, Three Mile Island, Bhopal, Clapham Junction and Piper Alpha). They give us the "good news" that errors are generally not random. Instead, if one looks, one can find "systematic and recurrent patterns" and error traps.

Chapter 2: The Human Risks
The authors inform us that errors are "universal… unintended… (and) merely the downside of having a brain". They argue against trying to change the human condition (human nature) and ask organisations to focus their efforts on changing the conditions in which people work. They also tell us that errors are and should be expected and, in many cases, foreseeable.

Chapter 3: The Fundamentals of Human Performance
Using the activity space to define the 3 performance levels (p.29)
This chapter details human performance and its limitations in terms of attention, vigilance, fatigue and stress. The authors also mention the "paradox of expertise", where highly skilled people can no longer describe what they are doing. (Teaching your teenage son or daughter how to drive the car might come to mind for some.) The authors explain automatic and conscious control modes (Kahneman's system 1 and system 2), Rasmussen's knowledge-/rule-/skill-based performance taxonomy and then combine them into a useful diagram. Reason and Hobbs also lay out the principal stages in skill acquisition and show how fatigue and stress cause us to revert to more effortful ways of performing. The Yerkes-Dodson "inverted U-curve" is also referred to and explained.

Chapter 4: The Varieties of Error
Reason and Hobbs' definition of error is:
"An error is a failure of planned actions to achieve their desired goal, where this occurs without some unforeseeable or chance intervention."
They divide error into 3 types:

  1. Skill (failure at action stage): may be a recognition failure, memory failure or slip
  2. Mistake (failure at planning stage): may be rule-based (involving incorrect assumption or bad habit) or knowledge-based (involving failed problem-solving or lack of system knowledge)
  3. Violation (may be routine, optimising (thrill-seeking) or situational)
They then go on to look at the major types of unsafe acts that occur in maintenance. 

Chapter 5: Local Error-provoking Factors
Reason and Hobbs argue that, although there are many possible local factors, only a few are implicated in the majority of maintenance errors:

  • Documentation
  • Housekeeping and Tool Control
  • Coordination and Communication
  • Tools and Equipment
  • Fatigue
  • Knowledge and Experience
  • Bad Procedures
  • Procedure Usage
  • Personal Beliefs
The authors also provide us with a useful diagram showing the link between errors (Ch4) and contributing factors (Ch5). Thus, slips are linked with equipment deficiencies (too many similar-looking dials), while knowledge errors are linked with inadequate training.

Chapter 6: Three System Failures and a Model of Organizational Accidents
In this chapter Reason and Hobbs introduce us to latent conditions (which they compare to dormant pathogens in the human body) and active failures. They then go on to analyse three maintenance-involved incidents: The crash of an Embraer 120 aircraft in 1991, the Clapham Junction railway collision in 1988 and the Piper Alpha oil and gas platform explosion, also in 1988. They end the chapter by talking about the system defences, either to detect errors or to increase the system's resilience.

Chapter 7: Principles of Error Management
Reason and Hobbs provide us with a set of guiding principles of error management, including:
  • Human error is both universal and inevitable
  • Errors are not intrinsically bad
  • You cannot change the human condition, but you can change the conditions in which humans work
  • The best people can make the worst mistakes
  • Errors are Consequences rather than Causes
They complete the chapter by explaining that error management has 3 components: 1) error reduction, 2) error containment and 3) managing the first 2 so that they continue to work.

Chapter 8: Person and Team Measures
Red flag signals
In this chapter Reason and Hobbs discuss error management strategies directed at the person and the team. This includes providing people with knowledge about human performance and "red flags" which should alert the individual to the potential for error. These could perhaps be considered with reference to Reason's 3 bucket model of person, task and context: if you are tired, carrying out an unfamiliar task and constantly being interrupted, the risk of making a mistake is high.
The authors also stress the importance of the unseen mental changes required before behavioural changes (e.g. breaking a bad habit) are evident, and that these take time.

Chapter 9: Workplace and Task Measures
This chapter looks at environmental and task factors implicated in errors, including fatigue, task frequency, and equipment and environment design.

Chapter 10: Organizational Measures
In this chapter, the authors look at how reactive outcome measures and proactive process measures can be used to look for systemic and defensive weaknesses. They also explain how trust and convenience of reporting are essential in order to develop a safety culture. The Maintenance Error Decision Aid (MEDA) is used to show how information regarding events can be gathered and used to identify failed defences as well as potential solutions, while Managing Engineering Safety Health (MESH) is provided as an example of a proactive process measure.


Chapter 11: Safety Culture
Reason and Hobbs call this the "most important chapter in the book. Without a supportive safety culture, any attempts at error management are likely to have only very limited success." They subdivide the safety culture into 3 sub-components:

  1. Reporting culture (the most important prerequisite for a learning culture)
  2. Just culture
  3. Learning culture
They discuss how it is very difficult or impossible to change people's values but much easier to change practices. They use smoking as an example of a practice which has changed because of a change in controls. Exhortations to stop smoking on national TV made little difference, but banning smoking in public places has had a much greater effect. This chapter also introduces us to some tests for determining culpability: the foresight test (would an average person have predicted that the behaviour was likely to lead to harm?) and the substitution test (could an average person have made the same mistake?)
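For illustration only, the two tests could be strung together as a crude screening aid. The sketch below is the reviewer's own; the function name, inputs, ordering and outputs are assumptions, not a procedure from Reason and Hobbs:

```python
# A crude sketch of the two culpability tests as a screening aid.
# Illustrative only: the ordering and wording are the blog author's
# assumptions, not a procedure from the book.

def culpability_screen(foresight_test_failed: bool,
                       substitution_test_passed: bool) -> str:
    """foresight_test_failed: would an average person have predicted harm?
    substitution_test_passed: could an average person have made the same mistake?"""
    if foresight_test_failed:
        return "Potentially culpable: the behaviour was predictably harmful"
    if substitution_test_passed:
        return "Blameless error: look for system factors, not individuals"
    return "Not a peer-typical error: consider training or selection issues"

# Example: a tired engineer refits a similar-looking part under time pressure.
print(culpability_screen(foresight_test_failed=False,
                         substitution_test_passed=True))
```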

Chapter 12: Making it Happen: The Management of Error Management
The authors discuss Safety and Quality Management systems and the difference between quality assurance and quality control. (Quality assurance ensures quality is engineered into the product at every stage; quality control is about testing the end product, when it's often too late to rectify mistakes.) They also discuss organisational resilience which, they say, is a result of three Cs: commitment, competence and cognisance.


I haven't got time to read 175 pages!

The paragraph entitled "Looking ahead" on page 17 provides an overview of the book. In addition, reading through the useful summary at the end of each chapter will tell you if that chapter is worth reading in detail. Personally I found Chapter 7: Principles of Error Management particularly informative as it covered or put into words some concepts I had not yet seen elsewhere, such as "Errors are Consequences rather than Causes."


What's good about this book?

In the Preface the authors state their intention to "avoid psychobabble" and they are true to their word. Also, some useful concepts (e.g. vigilance decrement (p.24), error cascade (p.43), latent conditions (p.77), 5 stages in breaking a bad habit (p.109)) are explained and placed within the wider context of error.

The summaries at the end of every chapter are quick to read but sufficiently detailed to act as an aide-mémoire. 

Although this is a book about "human" error, Reason and Hobbs underline the fact that people are often the solution to problems and that if we had evolved to be "super-safe" and risk averse we probably would not exist as a species. ("What do you mean, you want to leave the cave?")

Lastly, the authors use real-world examples to illustrate the theory. They also provide practical techniques for tackling error, e.g. "ten criteria for a good reminder" (p.131), while stressing that there is no one best way and that people are capable of devising their own solutions.

What's bad about this book?

Nothing… I honestly cannot find fault with this book. It may not be relevant to everyone, but otherwise it is worth the time spent with it.

Final thoughts

One would hope that a book co-authored by Reason would be a good read and this book does not disappoint. For the human factors expert perhaps there is nothing new here, but for the rest of us it is worth reading.



Monday 24 November 2014

Human factors and the missing suitcase (by M Moneypenny)

The 2014 ASPiH conference took place at the East Midlands Conference Centre in Nottingham. The conference hotel was located a stone's throw away. The free Wi-fi, clean rooms and provision to print out your boarding cards made staying at this award-winning establishment a nice experience. Until the missing suitcase, that is…

A timeline of events

Like many hotels, the Orchard Hotel offered a luggage storage facility. I handed in my suitcase and was given a small paper tab, the number on this matched the tag placed on my luggage. For additional security my name was written on the luggage tag. (Fig. 1)

Fig 1: Ironic luggage tag

The suitcase was then taken to a storage area, to be collected at the end of the conference. So far, so normal…

At the end of the conference I wandered over to the hotel reception, luggage tab in hand, and was slightly dismayed to find that all the suitcases had been placed in the hotel lobby. "Not great security", I thought. My dismay turned into slight panic when I couldn't find my suitcase amongst the twenty or so that were left. Where was my "Very Important Package"? I asked the front of house manager, who was standing at reception, and she went off to look for it. After about ten minutes she returned to tell me that those were all the suitcases from the conference and was I sure it wasn't there? I was sure… At this stage there was only one suitcase left (which bore only a fleeting resemblance to mine) and (by looking inside it) the front of house manager was able to identify the owner.

Fig 2: @TheRealAlMay springs into action
With the power of social media (Fig 2; thanks for the RTs) and Google, we were able to obtain contact details of the supposed lapse-maker. By the time I touched down in Scotland there was an apologetic email in my inbox. The other person had a similar suitcase at home and had been distracted looking for their coat. They hadn't realised they had the wrong suitcase until they opened it up to do the washing… (No comment.)


After a couple of unreturned phone calls I managed to speak to the general manager (GM) of the hotel the next day, to find out how they would endeavour to return the suitcase to me. To my surprise the GM told me that had this been their "fault" they would've made sure a courier had picked it up and returned it to me, but because it wasn't their responsibility they would be willing to pay 50% of the cost. I did my best to explain that if the suitcase had not been placed in the foyer (and what was the point of the luggage tag system anyway?) then it wouldn't have been taken in error. After a polite discussion the GM asked me to leave it with him.

Thankfully my suitcase (and the laptop inside) arrived the next day and I could get back to writing my MD, blog, etc.


Human factors

  1. The luggage tag system I: This is a relatively robust system if the "rules" are followed. You get your tab, you go back with your tab, hand it to the receptionist and tell them your name (as an additional check) and he/she gets your suitcase, having checked the tab and your name with the tag.
  2. The luggage tag system II: This is a very slow system. Especially when over 200 delegates want to pick up their luggage at the same time, which is why the luggage was placed in the foyer for people to "pick your own".
  3. The lapse: It's the end of a long (but engaging) day, you want to catch the train and get back to your family. There is a bit of a problem with finding your coat but you've got your suitcase and you're rushing out to the taxi. (Would the error pass Reason's substitution test? It sure would.)
  4. Blaming the sharp end: The hotel general manager was very keen to point out that this person had walked off with my suitcase and that they (the hotel) were not at fault. Blaming the person at the sharp end is a symptom of poor organisational culture.

Lessons learned

My suitcase now has a very distinctive red and white ribbon to make it look more "unique". Unfortunately this probably also makes it stand out more for opportunistic thieves…

Friday 3 October 2014

Sound the alarm!

US Secret Service under scrutiny

On 19th September 2014, an intruder armed with a knife scaled a fence surrounding the White House in Washington, DC. He managed to run across the North Lawn, past a guard posted in the entrance hall and past the stairs to the living quarters, before being tackled in the East Room. There were a number of factors involved in his success in accessing one of the most iconic buildings in the world, as detailed in this Washington Post article.

One suggested contributing factor was the muting of an intruder alarm which would have alerted the guard in the entrance hall that the perimeter had been breached. According to the Washington Post, "White House usher staff, whose office is near the front door, complained that they were noisy" and "were frequently malfunctioning and unnecessarily sounding off."

Alarms, or their equivalents, have a number of everyday uses such as waking us up in the morning, telling us we've left the car headlights on, or informing us that we've burned the toast again. In healthcare, alarms are meant to draw our attention to occurrences which need to be acknowledged or require action. However, just as happened with the Secret Service, alarms (and their misuse) can cause their own problems.

The 4 most common abuses of alarms

(This is a list derived from personal experience and observations in the simulation centre; it is not definitive.)

1) Alarm not switched on
Some healthcare devices allow alarms to be set, but these are not the default option. For example, the "low anaesthetic gas" alarm is often switched off by default by the manufacturer of the anaesthetic machine. Their reasoning may be that the alarm will sound inappropriately during the wash-in phase of anaesthesia as the anaesthetic gas increases from 0% to the set concentration. The downside is that inattention by the anaesthetist may lead to patient awareness under anaesthesia if the anaesthetic gas falls below an appropriate level. A contributing factor to the lack of attention paid to the anaesthetic gas level may be a (not unreasonable) assumption that the machine would warn the anaesthetist of a low anaesthetic level, as the machine does alarm for most other variables if they are below a safe level.

2) Inappropriate alarm limits
Most alarms have default limits set by the manufacturer of the device. A pump may alarm if a given pressure is exceeded or an ECG machine may alarm if a given heart rate is not achieved. Some default limits, however, are outside safe levels. For example, some oxygen saturation monitors will not alarm until the saturation falls below 90%. With normal saturations of 99-100%, many healthcare personnel would prefer to be alerted at a higher level in order to begin countermeasures.
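To make the problem concrete, here is a minimal sketch in Python (not any real monitor's software; the names, defaults and limits are illustrative only) of how a factory default that alarms only below 90% silently swallows an earlier warning:

```python
# Minimal sketch of default vs user-set alarm limits. Illustrative only:
# not based on any real device's firmware or factory settings.

DEFAULT_LIMITS = {"spo2_low": 90, "hr_low": 40, "hr_high": 140}

def check_alarms(readings, limits=DEFAULT_LIMITS):
    """Return alarm messages for any reading outside its limits."""
    alarms = []
    if readings["spo2"] < limits["spo2_low"]:
        alarms.append(f"SpO2 {readings['spo2']}% below limit of {limits['spo2_low']}%")
    if not limits["hr_low"] <= readings["hr"] <= limits["hr_high"]:
        alarms.append(f"Heart rate {readings['hr']} outside limits")
    return alarms

# Saturation of 92% and falling: the factory default stays silent...
print(check_alarms({"spo2": 92, "hr": 75}))              # -> []
# ...whereas a unit whose users raised the threshold alarms earlier.
print(check_alarms({"spo2": 92, "hr": 75},
                   {**DEFAULT_LIMITS, "spo2_low": 94}))  # -> SpO2 alarm
```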

3) Alarm not muted
One consequence of not muting an alarm is that the noise may be "tuned out" and therefore ignored. Another consequence is that some healthcare devices do not change tone as alarms stack up: if a second variable, such as heart rate, triggers another alarm while the first alarm is still sounding, the original alarm masks the new one.

4) Alarm muted inappropriately
This was the case with the White House intruder, and the ushers did not feel it was inappropriate at the time. The decision as to whether an alarm was muted inappropriately is often one taken in hindsight. The consequences are obvious: an alarm does not sound when it is supposed to. In addition, a false sense of security may occur, especially if not everyone is aware that the alarm is muted. In the White House example, the guards on the North Lawn pursuing the intruder may have believed that he would not gain access to the entrance hall, as the guard inside is meant to lock the door if the alarm sounds.


Solutions

Intruder alarms tend to have low specificity and high sensitivity which may lead to repeated activation. In the White House case, with the wonderful retrospectoscope, the muting of the intruder alarm should have triggered an investigation and a search for alternative solutions. Perhaps the alarm could be a visual rather than auditory one, or perhaps the alarm could be relayed to an earpiece carried by the guard.

As always there is not one but many "solutions". Device users should be trained to know what the alarm settings are, how to alter them and the possible consequences of alarm (mis)use. Organisations should be aware of how their devices are being used, should set standards for critical alarm defaults and examine near-misses and critical events where alarms were contributory factors. Device manufacturers should involve end-users from the design stage of the equipment, should test their devices under realistic conditions (e.g. in a simulator) and should act on feedback from end-users to modify their devices.

Wednesday 1 October 2014

Book of the month: The Blame Machine: Why Human Error Causes Accidents by Barry Whittingham

About the author

According to the back cover, R.B. (Barry) Whittingham is "a safety consultant specialising in the human factors aspects of accident causation. He is a member of the Human Factors in Reliability Group, and a Fellow of the Safety and Reliability Society." He is also the author of "Preventing Corporate Accidents: The Ethical Approach".

 

Who should read this book?


Whittingham wrote this book for non-specialists, avoiding discussion of complex psychological causes of human error and concentrating instead on system faults.

In summary


The book is split into 2 Parts. The first part looks at the theory and taxonomy of human error as well as the methods for calculating and displaying the probability of human error. The second part is a series of case studies of mishaps and disasters in a variety of industries, organised by error type.
  • Part I: Understanding human error
    • Chapter 1: To err is human
      • Whittingham looks at definitions of human error. He explains that it is impossible to eliminate human error, but that with system improvements errors can be reduced to a minimum acceptable level.
    • Chapter 2: Errors in practice
      • In this chapter, Whittingham details two error classification systems: Rasmussen’s Skill, Rule and Knowledge (SRK) and Reason’s Generic Error Modelling System (GEMS) taxonomies.
    • Chapter 3: Latent errors and violations
      • Whittingham has placed these two subjects together for convenience rather than relation. He explains the preponderance of latent errors in maintenance and management, as well as the difficulty in discovering latent errors. He looks at ways of classifying violations, their causes and control.
    • Chapter 4: Human reliability analysis
      • Whittingham argues for a user-centred (rather than system-centred) approach to equipment design and, in this chapter, examines methods for determining human error probability (HEP). The two main methods are database methods and expert judgment methods.
    • Chapter 5: Human error modelling
      • In the most mathematics-intensive chapter, Whittingham looks at probability theory including how to combine probabilities and how to create event trees. This chapter also looks at error recovery (how errors are caught and why some are not).
    • Chapter 6: Human error in event sequences
      • Following on from chapter 5, Whittingham provides a detailed example of a human reliability analysis (HRA) event tree: a plant operator who has to switch on a pump to prevent the release of toxic gas in an industrial process.
  • Part II: Accident case studies
    • Chapter 7: Organizational and management errors
      • Flixborough chemical plant disaster, capsize of the Herald of Free Enterprise, privatisation of the railways
    • Chapter 8: Design errors
      • Fire and explosion at BP Grangemouth, sinking of the ferry "Estonia", Abbeystead water pumping station explosion
    • Chapter 9: Maintenance errors
      • Engine failure on the Royal Flight, Hatfield railway accident
    • Chapter 10: Active errors in railway operations
      • Clapham Junction, Purley, Southall, Ladbroke Grove
    • Chapter 11: Active errors in aviation
      • KAL007, Kegworth
    • Chapter 12: Violations
      • Chernobyl, Mulhouse Airbus A320 crash
    • Chapter 13: Incident response errors
      • Fire on Swissair flight SR111, Channel Tunnel fire
  • Conclusions
    • Whittingham concludes by drawing together his thoughts on human error and blame.

I haven't got time to read 265 pages!


This is a very easy-to-read book (a stark contrast with last month's book) and you may be surprised at how quickly you can get through it. However, those who are pressed for time should probably focus on Chapters 1 to 3 and then skip on to the accident case studies that they are most interested in.

What's good about this book?

Whittingham's style is eminently readable and makes this book into a real page-turner. He also simplifies concepts such as human reliability analysis. For example, having realised the health benefits of soya milk, one can create a human reliability analysis event tree for the coffee shop barista not using soya milk in one's coffee (all errors are the blog author's, not Whittingham's).
 
The error probabilities are the blog author's own (with some reference to the HEART methodology data on p.54) and would suggest that about 1 in 200 coffees will result in the author walking away with dairy milk in his coffee.
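For the curious, here is one way a 1-in-200 figure could fall out of such a tree. The branch probabilities below are the blog author's illustrative guesses, not Whittingham's numbers or HEART data:

```python
# Worked version of the soya milk event tree. Probabilities are illustrative
# guesses, not data from the book.

p_wrong_milk    = 0.02  # barista mishears or forgets the soya request
p_not_recovered = 0.25  # neither barista nor customer catches the error

# Both failures must occur for dairy milk to reach the author:
p_dairy = p_wrong_milk * p_not_recovered
print(f"P(dairy in coffee) = {p_dairy}  (~1 in {round(1 / p_dairy)})")
# -> P(dairy in coffee) = 0.005  (~1 in 200)
```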

Whittingham does not shy away from pointing out the corporate, judicial and political factors which create the environment in which simple errors become disasters. The corporate blame culture which results in the cover-up of near misses, and political short-termism such as that seen in the privatisation of the UK railways, are particular targets of opprobrium.

Whittingham also delivers a fresh look at a number of events which have been covered in detail in other books such as the Chernobyl disaster and the sinking of the Herald of Free Enterprise.

What's bad about this book?


Very little. The mathematics required to calculate error probabilities may be complicated, but this should not prevent an understanding of the concepts. One small gripe is the sub-title of this book (Why Human Error Causes Accidents) which is (perhaps unwittingly) ironic. Whittingham does a fine job of explaining how the term "human error" can be abused to quash an investigation. He also argues that the individual at the sharp end does not "cause" the event, but that the causative factors may lie in the distant past. Lastly, in healthcare at least, we are moving away from the term "accident" (the British Medical Journal banned the word in 2001) as it implies that there was nothing that could be done to prevent the event from happening. Perhaps the subtitle could be rephrased: "Why 'human error' 'causes' 'accidents'"

Final thoughts


This book deserves a place on the bookshelf of simulation centres which are interested in human factors and human error. The concepts of human reliability analysis and human error probability should be welcome in the healthcare environment.


Further Reading


"It's all human error" Blogpost

Tuesday 26 August 2014

Book of the Month: Using Simulations for Education, Training and Research (Dieckmann (ed))

About the editor

Peter Dieckmann is a work and organisational psychologist and head of research at the Danish Institute for Medical Simulation (DIMS). Dieckmann is a former Vice-President and President of the Society in Europe for Simulation Applied to Medicine (SESAM), as well as the former Chair of the Society for Simulation in Healthcare (SSH) Research Committee. His publication history is extensive and he is rightly considered an expert in the field of medical simulation.


About the contributors

While June's book of the month (Stress and Human Performance) was US-dominated, this book is Europe-centric. The contributors are based in Belgium (2), Denmark (1), Germany (2), Norway (1), Sweden (1), Switzerland (2) and The Netherlands (1). The contributors are psychologists (Sven De Weerdt, Art Dewulf, Johan Hovelynck, Tanja Manser, Klaus Mehl, Theo Wehner), a social scientist (Ericka Johnson), a medical doctor (Marcus Rall) and a physicist/clinical engineer (Arne Rettedal).


Who should read this book?

Simulation-based educators should read this book, particularly those who are involved in designing programmes of training or who are responsible for designing the simulator environment and purchasing equipment. 


In summary

This book is part of a publication series entitled "Work Research Multidisciplinary" by Pabst Science Publishers, edited by Theo Wehner and Tanja Manser. The aim of the series is to show that complex research questions require a multidisciplinary approach and "help to show different perspectives of different disciplines on a specific topic."

The book is divided into 3 parts:
  • Part I: This consists of two sections which provide a context for the study detailed in Part II.
    • The use of simulations from different perspectives: a preface (Dieckmann)
    • On the ecological validity of simulation settings for training and research in the medical domain (Dieckmann, Manser, Rall & Wehner)
  • Part II: This, the core of the book, consists of a condensed version of Dieckmann's PhD dissertation, translated from German: "Simulation settings for learning in acute medical care"
  • Part III: This part is meant to broaden the perspective detailed in Part II and presents aspects of simulation settings that the contributors believe to be important.
    • A closer look at learning in and around simulations: a perspective of experiential learning (De Weerdt, Hovelynck, Dewulf)
    • Simulation as a tool for training and analysis (Mehl)
    • Extending the simulator: Good practice for instructors using medical simulators (Johnson)
    • Illusion and technology in medical simulation: If you cannot build it make them believe (Rettedal)


I haven't got time to read 214 pages!

Read pages 13-16 to get an overview of the book and help you decide what to focus on. Pages 93-104 are worth looking at, as they are a distillation of the views of simulation educators as to the goals, success factors and barriers of various stages of a simulation course. Unfortunately, as with panning for gold, the best arguments and ideas in the remainder of the book are only revealed after much hard work (see "What's bad about this book?" below).


What's good about this book?

The book has a number of thought-provoking sections.

Part I:
Dieckmann tells us we must move beyond the "scenario/debrief" model and view the simulation as a social setting, in which participants are trying to avoid some things and hoping to achieve others, and ask how we can best achieve a realistic environment for them. In addition, we must not become obsessed with "making it real" but instead concentrate on the elements that make it sufficiently realistic for the participants to achieve their goals.

The contributors also caution about using an improvement in scores within the same session to show a real improvement, as this may merely reflect familiarisation with the simulator.

Part II:
Dieckmann talks about the "irony of safety" where healthcare workers are rarely exposed to events which require their best performance. He also provides us with a model for the components of a simulation course:
  1. Setting introduction
  2. Simulator briefing (Mannequin familiarisation)
  3. Theory input (Micro teaching, Algorithms…)
  4. Scenario briefing
  5. Simulation scenario
  6. Debriefing
  7. Ending
In this section, Dieckmann explains some of the ways that a simulator/simulation can be used. For example, we can use some of the "unrealistic" aspects of the simulator to emphasise characteristics more clearly than we can in clinical practice. We can, for example, remove all the "noise" that a patient presents with in terms of social history, co-morbidities etc. and focus on the "signal" of the salient pathology. The simulator can also be used to try things out in a safe environment, e.g. "Do everything you can to NOT work well as a team", and use them as a springboard for discussion. Dieckmann also suggests that we can use the simulator to train people in "control actions": generic actions which people can use in complex situations in order to help establish control, e.g. consciously stepping back and scanning the environment.

The final part of this section includes Dieckmann's recommendations for simulation practice, which include:
  • Consider the simulation setting as a whole with different parts
  • Distinguish verification from validation and check both
  • Distinguish teaching processes from learning outcomes
  • Keep it simple
  • Make it a co-operation, not a fight
  • Go beyond the clinical reality by using the full potential of the simulation reality
Part III:
In this part, we are introduced to the idea of simulation as a grown-up game, where participants can play and learn through playing. In his chapter, Mehl shows us how poor simulations can be in terms of showing participants their level of performance, the progress they have made and the errors they need to avoid. Running a simulation without an effective debrief can leave participants none the wiser as to where they are doing well and where they need to improve. Johnson underlines the importance of making the mannequin human by, for example, having a conversation with the mannequin during the familiarisation stage.  


What's bad about this book?

The main problem with this book is the language and vocabulary. Although Dieckmann may be forgiven for using scientific language in the condensed version of his dissertation, the other contributors have no such excuse. It may be that English is not the first language of many of the contributors, but at times the book becomes almost unreadable:
"A better understanding of potential alterations of experience and behaviour in simulation settings will allow for critically reflecting these effects in the interpretation of results from simulation-based studies."
 "However, the materialisation of these elements of medicine is only one half of the process of reifying medical practice."
"We propose that simulations represent a meeting point between blueprint and experiential approaches, expert and participative stances, and positivist and constructionist epistemologies."
Einstein said: "If you can't explain it simply, you don't understand it well enough." Although the contributors may understand the concepts very well, they have some way to go to help the rest of us.


Final thoughts

One of the main take home messages from this book was the unfortunate obsession of some educators in the simulation setting with the physical. "Simulators must be made to look more real", they cry, "and then the simulation will be perfect." These materialists ignore the social, inter-personal aspects of simulation: the scene-setting of the brief at the start of the day, the importance of the confederate in performing the magic act of turning the mannequin into a person, the subtle cues provided by the participants in the debrief. Materialists think that if you buy the right bit of kit, preferably costing a lot of money, then the rest is just fluff. Instead it is the other way around, the "fluff" makes the kit work.

Unfortunately, and somewhat ironically, because of the dryness of the language and the obtuse vocabulary this book falls into a similar trap. Although it rewards the determined reader, the book is not an easy read, its readership will be small and those who do read it are most likely experienced educators who already practice the magic. A more digestible book carrying the same messages would be welcome.

Friday 22 August 2014

The ice bucket challenge and other buckets

Bill Gates has a fancy ice bucket
This time next year no one will remember what the ice bucket challenge was about, so a brief description is called for:

In the summer of 2014 people in the United States started pouring ice water over their heads in support of the Amyotrophic Lateral Sclerosis (ALS) Association. Spread via social media, the challenge was adopted by the Motor Neurone Disease Association in the UK and then Macmillan Cancer Support. Various versions exist; one of the more popular requires a small donation if you accept the challenge and a larger one if you are not willing to be drenched in ice-cold water. (Billionaires tend to accept the challenge.)

The tenuous link to simulation and human factors is that buckets are also found in our areas of interest…

The s**t buckets (James Reason's three bucket model)

James Reason asks us to consider three buckets, each of which has things that will fill it up:
  1. Self
    1. Knowledge
    2. Skill
    3. Expertise
    4. Current capacity (stressed, tired, ill…)
  2. Context
    1. Equipment and devices (poorly maintained, broken, poorly designed…)
    2. Physical environment (too hot, noisy, unlit…)
    3. Workspace (novel, poorly laid out, interruptions…)
    4. Team and support (unfamiliar, poorly led, unclear roles…)
    5. Organisation and management (poor safety culture, steep authority gradient…)
  3. Task
    1. Errors (omission, commission, fixation…)
    2. Complexity
    3. Novelty
    4. Process (overlaps, multi-tasking…)
The fuller the buckets, the greater the risk of poor performance or error. Reason suggests that as the buckets fill we need to focus more attention on the task and, beyond a certain fill level, not start the task at all.
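As a rough illustration, the self-check might look like the sketch below. The 1-3 scores and the thresholds are assumptions for clarity; Reason (2004, see Further reading) proposes subjective self-ratings rather than a precise algorithm:

```python
# Sketch of the three bucket self-check. Scores and thresholds are
# illustrative assumptions, not a validated tool.

def bucket_check(self_score, context_score, task_score):
    """Rate each bucket from 1 (nearly empty) to 3 (nearly full)."""
    total = self_score + context_score + task_score
    if total >= 7:
        return "High risk: seek help or defer the task if possible"
    if total >= 5:
        return "Moderate risk: proceed with heightened attention"
    return "Lower risk: proceed"

# Tired (3), unfamiliar team with interruptions (2), complex novel task (3):
print(bucket_check(3, 2, 3))  # -> High risk: seek help or defer the task if possible
```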


The mental workload bucket

This bucket was referred to in a previous blogpost. The workload bucket fills up as the number of tasks increases. The bucket's capacity, and the volume a given task occupies, are also affected by stress and expertise. When the workload bucket is full, it overflows and "something" has to make way for the new task.

The IV fluids bucket

The last bucket is for the simtechs. The IV fluids bucket sits under the bed/trolley/gurney and collects the fluids and drugs that participants on a sim course give the mannequin. This means that instead of pretending to give 6 litres of 0.9% NaCl, the participants can really give it and watch for hyperchloraemic acidosis...


Further reading

Reason J (2004). Beyond the organisational accident: the need for 'error wisdom' on the frontline. Quality and Safety in Health Care 13(Suppl 2): ii28–ii33.

Saturday 26 July 2014

Michael Jamieson and the Yerkes-Dodson rollercoaster


Michael Jamieson is a Scottish swimmer who was tipped for gold (by Adrian Moorhouse and Rebecca Adlington) in the 200m breaststroke at the 20th Commonwealth Games in Glasgow. Jamieson also declared that he was aiming for a new world record. On the day, however, he was out-swum by fellow Scot, Ross Murdoch. Could the high expectations and amount of stress Jamieson was under have prevented him from achieving his full potential?

Background

In 1908 Robert Yerkes and John Dodson wrote a paper entitled "The relation of strength of stimulus to rapidity of habit formation". Using 40 "dancing" mice, a choice of two chambers and electric shocks of varying intensity, Yerkes and Dodson made an interesting discovery. As they increased the shock intensity the mice would learn faster which chamber to avoid. But only up to a point. Past this point the increase in shock intensity had a detrimental effect on the retention of this information.


The Yerkes-Dodson theory has been applied to human performance under stress. This means that people have an optimum point of stress for learning and retention. The optimum point is the plateau of the curve, but each person will have a different curve based on their personality, experience and expertise.

The Yerkes-Dodson curve has also been shown to apply when we compare stress (or arousal) with performance (rather than habit-formation). Once the optimum point has been passed, the greater the stress, the poorer the performance.
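There is no canonical Yerkes-Dodson equation, but for a difficult task the inverted U is often sketched as a bell-shaped function of arousal. The form below is purely illustrative:

\[
P(a) = P_{\max}\,\exp\!\left(-\frac{(a - a^{*})^{2}}{2\sigma^{2}}\right)
\]

Here \(a\) is arousal, \(a^{*}\) the optimum and \(\sigma\) the width of the plateau; personality, experience and expertise shift \(a^{*}\) and stretch or squeeze \(\sigma\). (For easy tasks, as discussed below, the curve is better drawn as a monotonic, sigmoidal rise.)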

The alternative route

The "original" U-shapes (top 2)
and the "easy task" bottom line
There is a slight addition to the Yerkes-Dodson rollercoaster: the little-discussed, alternative sigmoidal route, which is based on the difficulty of the task. If the chambers were designed so that it was difficult for the mice to distinguish between them, then the above findings held true. However, if it was very easy to distinguish between the two chambers then the higher the shock intensity, the faster the habit formation. This is shown in the graph as the bottom-most line. This means that, for simple tasks, the more stressed/aroused you are the faster you will learn/perform.


Relevance to simulation

If the Yerkes-Dodson theory is true then the amount of stress we expose our participants to will affect their learning and performance. Either too little or too much stress will have a negative impact. Likewise, the more complex the task, the higher the likelihood that we will push the participant onto the downward slope. We must therefore design our courses with this knowledge in mind.
In addition, we must appreciate that inter-professional courses and courses which simultaneously feature both junior and senior staff are more likely to have a wider spread of curves. This means that the scenarios must be designed to stress the different healthcare personnel appropriately and not aim for the lowest common denominator.

Lastly, the Yerkes-Dodson theory can be applied to your facilitators as well. Too little challenge and they'll fall asleep, too much and they'll perform poorly. We need to make sure that we match the facilitators to the participants and have back-up available if needed.

Wednesday 16 July 2014

Book of the Month: Modeling and Simulation in Biomedical Engineering by Willem van Meurs

About the author

Willem van Meurs received his doctorate in control engineering from Paul Sabatier University, Toulouse, France in 1991. His claim to fame in the field of simulation is that he is the co-inventor of the Human Patient Simulator (HPS) which was commercialised by Medical Education Technologies, Inc. (METI), now part of CAE Healthcare. In addition he was president of the Society in Europe for Simulation Applied to Medicine (SESAM) from 2005 to 2007.


Who should read this book?

According to van Meurs, the target audience is "those studying or working in biomedical engineering: engineers, physicists, applied mathematicians, but also biologists, physiologists and clinicians", as well as "clinical educators and simulator technicians" using the HPS.
In reality, the number of people world-wide who would read the entire book (other than book reviewers) is probably in the hundreds.


I haven't got time to read 185 pages…

Thankfully, unless you are involved in designing or building an actual simulator, you can get away with only reading the following chapters.
  • Chapter 1: Introduction (10 pages providing an overview of the concepts and vocabulary)
  • Chapter 2: Model Requirements (9 pages discussing the initial stages of simulator design)
  • Chapter 12: Design of Model-Driven Acute Care Simulators (8 pages discussing training needs, training programme design and, as a result, simulator design)

What's good about this book?

The book makes one appreciate the complexity behind the HPS, which aims to realistically model a human being's physiology. Even if we consider just one variable, for example the partial pressure of oxygen in arterial blood (PaO2), the HPS tries to consider a number of inputs (the partial pressure of oxygen in the inhaled gas, the degree of shunt in the lungs, the metabolic rate, the concentration of haemoglobin, etc.) and a number of outputs (respiratory rate, heart rate, cardiac contractility, ECG morphology, etc.). A number of these inputs depend on other variables and the outputs have a number of effects, some of them in a positive or negative feedback loop on the original variable, PaO2.

The book also introduces some useful terms in model development such as the concept of black, grey and white boxes. The familiar black box is used in modelling where there is no need to know what the internal workings of the model are: a given input results in a given output. A white box is where one needs to know the exact mechanics of how input becomes output, usually because these mechanics are influenced by other processes and therefore the output will not just depend on the given input but also on the state of the system. Lastly, a grey box is where some of the mechanics are known (and modelled) and others are not.
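A toy example may help. The numbers and linear forms below are the blog reviewer's own illustration of the distinction, not the HPS model:

```python
# Toy illustration of black box vs white box modelling of heart rate (HR)
# response to arterial oxygen (PaO2, kPa). Numbers are invented for clarity.

def hr_black_box(pao2):
    """Black box: a fitted input-output curve; internal workings hidden."""
    return 70.0 + max(0.0, (10.0 - pao2) * 8.0)

def hr_white_box(pao2, sympathetic_tone):
    """White box: the mechanism is explicit, so the output also depends on
    the state of the system (here, baseline sympathetic tone)."""
    chemoreceptor_drive = max(0.0, 10.0 - pao2)
    return 70.0 + 8.0 * chemoreceptor_drive * (1.0 + sympathetic_tone)

print(hr_black_box(8.0))       # 86.0, whatever state the patient is in
print(hr_white_box(8.0, 0.0))  # 86.0 in a calm patient
print(hr_white_box(8.0, 0.5))  # 94.0 in a stressed one: same input,
                               # different output
```

A grey box would sit between the two, modelling the chemoreceptor explicitly but fitting the remaining relationships to data.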

van Meurs also covers the development of a full-scale simulator, which has 4 steps:

  1. Conceptual model (with consideration of the qualitative aspects)
  2. Mathematical model (with consideration of the quantitative aspects)
  3. Software implementation (with consideration of interfacing)
  4. Simulation results and validation (with consideration of the output data)

Lastly, for the biomedical engineer, there is a review section at the end of some of the chapters which asks you to use some of the material covered to work through problems. (Somewhat criminally, however, van Meurs does not provide the answers (or at least a guide to obtaining the correct answer) in the book.)


What's bad about this book?

And so it is proven...
In terms of the target audience, this must have been a difficult book to write. Trying to make the material easy enough for clinicians to grasp but in-depth enough for biomedical engineers, van Meurs probably satisfies neither. It may have been better to write two books for the two audiences. Of the two audiences, the biomedical engineers probably got the better deal from this book. They will have no problem with the equations such as that shown here, which are liberally sprinkled throughout, and they are also given a reasonable introduction to the basic variables in cardiorespiratory physiology.

There's a forgivable typo on p.58 where we're told that 1 hour = 1 hour = 3600 seconds. On p.148 and 149 there is some missing information around chest wall and lung compliance. We are told that lung compliance (CL) is 200ml/cmH2O and chest wall compliance (CCW) is 244ml/cmH2O. We are then told that the value of CCW is derived from a total lung and chest wall compliance (CT) of 110ml/cmH2O and the value of CL. The missing information is that the reciprocal of the total compliance is the sum of the reciprocals, i.e. 1/CT = 1/CL + 1/CCW, because pressure (at a given volume) is inversely proportional to compliance.
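Working through the book's own numbers confirms the quoted chest wall value:

\[
\frac{1}{C_{CW}} = \frac{1}{C_T} - \frac{1}{C_L} = \frac{1}{110} - \frac{1}{200} = \frac{200 - 110}{22000} = \frac{90}{22000}
\]
\[
C_{CW} = \frac{22000}{90} \approx 244\ \text{ml/cmH}_2\text{O}
\]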


Final thoughts

This book has a small target audience and does not require a spot on every simulation centre's bookshelf. It is probably very useful and interesting as a beginner's text for biomedical engineers and simulator designers. It will help them to understand the linkages between the final output (a clinician feeling the mannequin's pulse) and initial input (a conceptual model of the cardiac system). 

The book provides insight into the thinking and philosophy of the designers and manufacturers of mannequins which attempt to have a realistic physiological model running in the background. Compared to the mannequin manufacturers who devolve this responsibility onto the end-user, van Meurs and colleagues have perhaps a more noble calling. However, the drawbacks are that the finished product is more complicated and more expensive. It is also, like the human beings it simulates, less predictable and, if your centre owns an HPS, this book may help you understand why.


Thursday 26 June 2014

SESAM 2014: The good, the bad… and the missing

Poznan town hall
The 20th Anniversary meeting of the Society in Europe for Simulation Applied to Medicine (SESAM) took place in Poznan, Poland from the 12th to the 14th of June 2014.

The good

Keynotes
The keynote speakers were excellent. Roger Kneebone started us off with a talk about engaging the public, setting up a 2-way conversation between clinicians, scientists and the laypeople. Roger provided lots of food for thought around breaking down the boundaries of the simulation centre, as well as the similarities between experts in other fields (stone masons, lute-makers, tailors) and experts in medicine.
Terry Poulton talked about virtual patients and how they are using the findings from Kahneman's studies to inform the decision-making aspect of the virtual patient programmes. Terry also asked for people who were willing to collaborate with him on a project combining virtual patients and simulation.
Lastly, Walter Eppich discussed feedback and debriefing, how they can be applied to clinical practice and pitfalls to watch out for.

Technology
The SESAM app was great. Simple and easy to use, it allowed you to see the programme including the abstracts of the workshops or lectures. It also allowed you to "favourite" individual sessions, so you could quickly figure out where you needed to go next. Also, if you registered with the app, you could be messaged by other conference attendees, which made meeting up with people very easy. A great addition and "must-have" for future conferences.
The wi-fi was free, fast and easily able to cope with the number of people connected. Unlike some conferences the wi-fi did not drop off intermittently or tell you that the maximum number of people had been connected.
There were charging stations for your iPhone/iPad (other mobile devices are available) which meant you didn't have to go looking in corners of rooms for plug sockets.
Twitter is being used more and more and #sesam2014 allowed you to keep up with developments in other sessions.

Workshops & SimOlympics
SimOlympics (with a scary mannequin)
The workshops were interactive (thank goodness!) and informative. Ross Scalese ran a very good workshop on simulation assessment, covering checklists and rating scales, how to train raters, reliability and validity. The small number of participants (see below "…and the missing") meant that this was almost a one-to-one opportunity to talk about problems and solutions. 
SimOlympics was good fun. Seeing group after group of medical students being put through their paces (with a range of performances) was inspirational.

Range of participants
It was a pleasure to meet people from all over Europe, including Ukraine and the Czech Republic, from a range of healthcare backgrounds (paramedics, surgeons, GPs, paediatricians, nurses, etc.), all at different stages of simulation development. The conference was a real melting pot of people which allowed you to learn from some and help others.

The bad

Time-keeping was poor. In particular, the introductions to the keynotes started late and then ran over, which meant that the keynotes themselves were curtailed and/or rushed. Sticking to time is basic "good housekeeping" and, after 20 years, should not still be a problem.

Some sessions were cancelled or the facilitator failed to show up at the last minute. A pre-conference course for simulation technicians was cancelled the week before (although the organisers were happy to refund the money) and a workshop on ROI, to be led by Russell Metcalfe-Smith, resulted in about 20 participants milling around waiting for him, only to be told (after about 15 minutes) that the workshop had been cancelled.

…and the missing

There were far fewer participants than at SESAM 2013 in Paris. When you can barely walk around a hospital now without tripping over a mannequin of some sort, the lack of participants was surprising. Explanations include:
  • AMEE is in Milan this year. A number of people have said that they can only go to one conference per year and would prefer to go to "beautiful" Milan rather than Poznan. However, choosing a conference based on the city in which it is held rather than on its content seems somewhat strange…
  • HPSN Europe is in Istanbul and similar arguments about "beautiful" Istanbul have been aired. In addition, the conference is free. It is unclear whether having a free industry-sponsored conference is of benefit to the advancement of simulation across Europe.
  • Budgets in a time of austerity. Having had to make a strong case for SCSCHF staff to attend SESAM, it is probable that a "holiday" in Poland would not be supported by many simulation centres. Unfortunately it is a recurring theme that budget holders are happy to pay thousands of pounds/dollars/euros for pieces of equipment but are not willing to pay hundreds of pounds/dollars/euros for staff to be trained or to attend conferences. This short-sightedness needs to be tackled head-on.

Final thoughts

The SESAM2014 conference was well worth attending. If you were unable to attend because of monetary constraints, you need to make a stronger case. If you feel that you aren't part of a network, are unsure how to integrate simulation into your curriculum, or need advice about inter-professional education, then SESAM is the forum for you. If you want the chance to listen to and speak with some of the trailblazers in simulation (Kneebone, Eppich, Scalese, Dieckmann and more), then SESAM is where you need to be. SESAM2015 is in Belfast, Northern Ireland, June 24th-27th. Hope to see you there...

Wednesday 25 June 2014

Book of the Month: Stress and Human Performance (Driskell & Salas (eds))

About the editors

Eduardo Salas is currently Professor of Psychology and Program Director for the Human Systems Integration Research Department at the Institute for Simulation & Training at the University of Central Florida. When this book was published in 1996, Salas was a senior research psychologist and Head of the Training Systems Division at the Naval Air Warfare Center, Orlando, Florida.

James E. Driskell is President and Senior Scientist, Florida Maxima Corporation and Adjunct Professor, Rollins College, Winter Park, Florida. The Florida Maxima Corporation is, according to its website "a small business that conducts basic and applied research in the social and behavioral sciences in government, academia, and industry."

Salas and Driskell continue to collaborate on topics such as deception, team performance and stress.

About the contributors

There are 17 contributors to this book, including the 2 editors. The foreword states: "this book brings together a set of authors who are not only prominent researchers within this field, but are also actively involved in the application of this research to real-world settings." Unfortunately, only 2 of the authors are from outside the US, and 8 of them work in Florida. It is possible that the rest of the world had nothing to add to this book, but it is more likely that the strong tendency to collaborate with people you know means that the book is rather US-focused.

Who should read this book?

This book was written for:
"...researchers in applied psychology, human factors, training and industrial/organizational psychology. (As well as) practitioners in industry, the military, aviation, medicine, law enforcement, and other areas in which effective performance under stress is required"(p. viii)
The editors are clear that this book deals with acute stress, and not with chronic stressors, stress-related disorders or "coping". By acute stress the editors mean "emergency conditions" where the stress is novel, intense and time-limited.

Parts of the book are relevant to the simulation-based medical educator (see below).

In summary

The book is split into 3 main sections:
  1. Introduction. A chapter looking at definitions of stress and its effect on performance.
  2. Stress Effects. 4 chapters which look at how stress affects performance.
    1. The effect of acute stressors on decision making
      • Looks at decision-making strategies (non-analytical and analytical; similar to Kahneman's System 1 and 2; recognition-primed, naturalistic etc.)
    2. Stress and military performance
      • Looks at stressors of military personnel and methods for improving performance under stress (including CRM). Importance of team training.
    3. Stress and aircrew performance: A team-level perspective
      • Importance of teamworking and teamwork training, for dividing up tasks, for monitoring one another's behaviour and for providing support
    4. Moderating the performance effects of stressors
  3. Interventions. 3 chapters which look at how to minimise the effects of stress.
    1. Selection of personnel for hazardous performance
    2. Training for stress exposure
      • Fidelity requirements; sequencing and training content
    3. Training effective performance under stress: queries, dilemmas, and possible solutions

I haven't got time to read 295 pages!

Read the following bits (depending on your area of interest):

Chapter 1 for a good introduction and overview of stress and its impact on performance.
Chapter 2 p.69-83 for a very good description of the USS Vincennes' shooting down of the Iranian airliner.
Chapter 3 p.105-116 to understand why team training is important (not just because it looks good).
Chapter 4 p.143-149 for an overview of how an organisation can help or hinder team performance.
Chapter 6 p.203-206 and 213-217 for an overview of personality types and stress.
Chapter 7 p.247-253 for stress exposure training (SET) guidelines.
Chapter 8 p.272 for concluding remarks on training effective performance.
Chapter 9 if you're interested in human-system interface issues.

What's good about this book?

The book is generally well-written and, on the whole, the arguments made are easy to follow. We are told how stress may be defined by orientation to the environment (i.e. the environment is a stressful one), by orientation to the individual or by orientation to the relationship between the environment and the individual. The editors prefer the latter and provide a nice working definition of stress (quoting Lazarus and Folkman (1984)):
"Psychological stress is a particular relationship between the person and the environment that is appraised by the person as taxing or exceeding his or her resources and endangering his or her well-being" (p.6)
The editors therefore distinguish between a threat, where the capacity to respond is exceeded, and a challenge, where the person has sufficient capacity to respond and the expected gain exceeds potential harm. By the same token, "stress" is in the eye of the beholder and what is stressful to one person will be a minor challenge to someone else.

There is a good explanation of two different theories of task load. The first is the bucket (or capacity) theory, in which a limited pool of attentional resources is available and, once the bucket is "full", performance deteriorates. The second is the structural theory, which envisages a parallel processing system that must pass through a serial attentional bottleneck; it is this bottleneck that slows performance.
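To make the contrast concrete, here is a minimal sketch in Python (my own toy illustration, not from the book; the capacity of 100 units and bottleneck rate of 50 units/second are invented purely for demonstration). The capacity model predicts degraded quality once total demand overflows the pool, whereas the bottleneck model predicts that quality is preserved but responses get slower as more tasks queue for the serial stage.

    # Toy contrast of the two task-load theories (all numbers invented).
    def capacity_model(demands, capacity=100):
        """Bucket theory: a fixed pool of attentional resources;
        performance quality drops once the bucket overflows."""
        total = sum(demands)
        return 1.0 if total <= capacity else capacity / total

    def bottleneck_model(demands, rate=50):
        """Structural theory: tasks are processed in parallel but must
        pass through a serial bottleneck, so extra tasks add latency."""
        return sum(demands) / rate  # seconds to clear the bottleneck

    tasks = [40, 35, 45]  # three concurrent task demands, arbitrary units
    print(f"Capacity model quality:   {capacity_model(tasks):.2f}")    # 0.83 (quality degrades)
    print(f"Bottleneck model latency: {bottleneck_model(tasks):.1f} s")  # 2.4 s (responses slow)

In other words, under overload the bucket theory predicts errors, while the structural theory predicts delays.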


There is a good discussion of the impact of the organisation on the team, including how the organisation forms teams, how it supports them and how it helps teams to interface with one another. The recommendations in terms of support include:

  1. A reward system that provides positive reinforcement for excellent team work
  2. An education system that provides the relevant training and resources required by team members
  3. An information system that provides (in a timely fashion) the data and resources necessary to assess, evaluate and formulate effective crew coordination strategies

What's bad about this book? 

The book specifically does not look at chronic stressors, such as life stress, fatigue or sleep loss. Nor does it examine the effects of boredom on performance. Unfortunately we don't live or work in that utopian world where those influences are irrelevant. It may in fact be the case that the chronic stressors play a significant, even pivotal, part in turning a challenge into a threat, overwhelming our coping mechanisms. A book which refers to both chronic and acute stressors and their (synergistic?) role in failures would be very welcome.

Klein's chapter (Chapter 2: The Effect of Acute Stressors on Decision Making) is a confusing addition. In a book on stress and performance, Klein states:
"(Stressors) can degrade the quality of judgments, prevent the use of rational decision strategies, and severely compromise performance; at least that is a popular appraisal of stressors. The thesis of this chapter is that each of these assertions is either incorrect or misleading"(p.49)
Klein does try to mitigate this statement by then arguing that time stress, for example, leads to poor performance not because of the increase in stress but because of the decrease in time. He also argues that naturalistic decision-making is not "rational" anyway, is resistant to stress, and that stress can improve performance. By p. 55 he is back-tracking slightly: "At the beginning of this chapter we claimed that stressors do not necessarily degrade decision making. (my italics)" The chapter reads poorly and Klein constructs straw men using semantic arguments about what "stress" really means.

In addition, although the book is presented as starting off with a section on stress and its effects followed by a section on dealing with stress, some of the earlier chapters (e.g. chapter 3) have "dealing with stress" sub-sections within them.

Final thoughts

There are repeated instances where, although simulation is not referred to, the benefits of using simulation to deal with or train for stress are made evident. For example, on page 12: "The development of positive performance expectations is a crucial factor in preparing personnel to operate under high-demand conditions." On page 15: "…performers were less distracted (by noise) when the task was well-practiced". On page 83: "To help decision makers avoid potential disruptions due to stressors, it may be useful to train them to better manage time pressure, distracting levels of noise, and high workload."

This book also provides the SBMEducator with some ammunition for courses with advanced-level participants. Stress can be induced using noise, group pressure, task load, threat or time pressure. In addition, stress can be induced by task similarity: if you want to distract participants, use a visual distraction when they need to focus on a visual task and an auditory distraction when the task relies on auditory cues.

This book has emphasised the need to look at how we, as simulation providers, can ensure that the environment is stressful enough (to promote learning) but not so stressful that participants are overwhelmed. In addition, we could do more to help participants recognise their stress reactions for what they are, and explore with them how they can continue to perform optimally under stressful conditions. Lastly, SBME can increase participants' skill levels and, at the same time, make them more aware of where their personal capabilities lie.

Borrow this book and read the sections which are relevant to your work; it will increase your understanding of stress, its effects, and preventive/mitigating actions.