Friday, 29 April 2016

Book of the month: The Invisible Gorilla: And Other Ways Our Intuitions Deceive Us by Christopher Chabris and Daniel Simons

About the authors
Christopher Chabris (@cfchabris) is an associate professor of psychology and co-director of the neuroscience programme at Union College, New York. Daniel Simons (@profsimons) is a professor in the department of psychology and the Beckman Institute for Advanced Science and Technology at the University of Illinois. Chabris and Simons ran one of the most famous experiments in psychology, the "invisible gorilla" (video). A blogpost discussing the conclusions to be drawn from their experiment and related ones is available here: Inattentional blindness or "What's that gorilla doing there?".

Who should read this book?

Anybody with an interest in human performance limitations will find this book an interesting read. In addition, many of the concepts provide insight into how people perform both within a simulated environment and in clinical practice.

In summary

The book is divided into an Introduction, six chapters and a Conclusion. The six chapters are:
  1. "I Think I Would Have Seen That"
  2. The Coach Who Choked
  3. What Smart Chess Players and Stupid Criminals Have in Common
  4. Should You Be More Like a Weather Forecaster or a Hedge Fund Manager?
  5. Jumping to Conclusions
  6. Get Smart Quick!

Chabris and Simons explore and explain a number of misconceptions we have about our own abilities. Each chapter focuses on a specific "illusion": attention, memory, confidence, knowledge, cause, and potential. Chabris and Simons are interested in the fact that not only do we suffer from these illusions, but we are also unaware of them and are surprised when they are pointed out.

What's good about this book?

This book is well-written and very easy to read. Each chapter focuses on one topic and is peppered with everyday examples to illustrate concepts. These include motorcycle collisions, film continuity errors, a sense of humour, and lifeguards in swimming pools.

Not an effective way to change behaviour
In Chapter 1 the authors discuss why cars hit motorcycles (at times due to inattentional blindness) and explain why "Watch out for motorcycles" posters and adverts are not effective. They suggest that making motorcycles look more like cars, by giving them two widely separated headlights, would make them more visible to car drivers. The same concept of "attention" also explains why the risk of collision with a bicycle or motorcycle decreases as the number of these forms of transport increases: the more often people see a bicycle on the road, the more likely they are to expect to see one and to look for one.

The authors also provide additional details about the various illusions. For example, eye-tracking experiments have shown that those who do not see the "invisible" gorilla spend as much time looking directly at it as those who do.

Chapter 2 looks at memory and uses persuasive experimental evidence to convince the reader that memory is fallible. In particular, contrary to popular belief, people do not have crystal clear memories of what they were doing during exceptional events such as 9/11 or Princess Diana's death. People think they do, because they think they should, and therefore are confident about these (unclear) memories.

Chapter 3 explores confidence. The first example used is a doctor who looks up a diagnosis and treatment, which makes his patient feel very uneasy. Isn't a doctor supposed to know this stuff? We encounter similar situations in simulation, where the tension between appearing confident and being able to admit ignorance often results in a less than ideal outcome. The notion of moving from unconscious incompetence to unconscious competence is also covered here, by reference to an article ("Unskilled and Unaware of It") which begins with a description of an inept bank robber.

Would you ride this bike?
Chapter 4 explains why we often think we know more than we do. The authors make this point by asking the reader to draw a bicycle and then to compare the drawing against the real thing. (Italian designer Gianluca Gimini has created some interesting 3-D renderings of people's concepts of what a bike looks like.) This illusion of knowledge, they argue, played a part in the 2008 banking crisis, as bankers thought they understood both the banking system and extremely complex collateralised debt obligations (CDOs).

In Chapter 5 Chabris and Simons explore causation and correlation. While many people with arthritis think they can tell when the weather is about to change, researchers have found no such correlation. It is likely that pain levels fluctuate anyway; when the weather happens to change, sufferers ascribe their pain to the change in atmospheric pressure.

In Chapter 6 the authors debunk the Mozart Effect, which led parents to play Mozart to babies in the belief that it would make them smarter. Similar claims by Lumosity, a company which alleged that playing its games would delay age-related cognitive impairment, resulted in a $2 million settlement with the US Federal Trade Commission.

What's bad about this book?

There is very little to fault in this book. Chabris and Simons call these limitations in human performance "illusions" because, like M. C. Escher's prints, they persist even when you know what they are. The authors do a great job of explaining the illusions but do not spend enough time addressing how we might avoid succumbing to them.


Final thoughts

In terms of simulation, this book explains a number of behaviours that we witness in the simulated environment. For example, it is not unusual for participants to "lie" about something that happened. They may be adamant that they called for help, but the debriefer knows (and the video shows) that this was not the case. The participant is falsely remembering a call for help because they think that they would always call for help.

Again, in terms of the illusion of confidence, we find that those who are least able are often most confident because they lack the insight required to know how poor their performance is.

In terms of human factors, this book will provide a number of examples of human fallibility for workshops or other courses. It also reinforces the need for systems which help humans. As an example, changes in a patient's end-tidal CO2 (ETCO2) trace can suggest physiological impairment, but most machines do not make the clinician aware of these. A smarter monitor would alert the clinician to these changes instead of relying on his or her continued awareness. 
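As a thought experiment, such a trend alert is simple to prototype. Below is a minimal Python sketch, not from the book or from any real monitor: the class name, the kPa thresholds and the alert rule are all hypothetical, and real alarm logic would need clinical validation. It simply illustrates the principle of shifting the burden of vigilance from the human to the system, by flagging a sustained fall in the ETCO2 trace rather than waiting for the clinician to notice it.

from collections import deque

class ETCO2TrendAlert:
    """Flags a sustained fall in end-tidal CO2 over recent breaths (hypothetical)."""

    def __init__(self, window=6, drop_kpa=1.0):
        # Hypothetical defaults: look at the last 6 breaths and alert on
        # a steady fall of 1.0 kPa or more across that window.
        self.readings = deque(maxlen=window)
        self.drop_kpa = drop_kpa

    def add_reading(self, etco2_kpa):
        """Record one breath's ETCO2 value; return True if an alert should fire."""
        self.readings.append(etco2_kpa)
        if len(self.readings) < self.readings.maxlen:
            return False  # not enough data yet to judge a trend
        values = list(self.readings)
        # Alert only if every breath is lower than (or equal to) the one before
        # it AND the total fall across the window exceeds the threshold.
        falling = all(b <= a for a, b in zip(values, values[1:]))
        return falling and (values[0] - values[-1]) >= self.drop_kpa

monitor = ETCO2TrendAlert()
for value in [5.1, 5.0, 4.7, 4.4, 4.2, 3.9]:  # simulated downward trend (kPa)
    if monitor.add_reading(value):
        print("ALERT: sustained fall in ETCO2 - check circuit and circulation")

The specific numbers matter less than the design choice: the machine, rather than the clinician's continuous attention, carries the monitoring load.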


Wednesday, 30 March 2016

Sharpening the saw: everyday debriefing practice

Participants on our 2-day introductory faculty development course are given all the tools they need to plan, run and debrief a simulated experience aligned to learning objectives. However, on returning to their own workplaces, they often do not have the opportunity to run simulations regularly. This lack of practice means that their debriefing skills do not improve as quickly as they would like. Participants also often mention that they don't have the time to carry out a 40-minute debrief. The good news is that they don't have to.

In Stephen Covey's book "The 7 Habits of Highly Effective People", the seventh habit is "Sharpen the Saw". This habit, which includes social, emotional and physical well-being, also focuses on learning. This blogpost will explain how you can "sharpen the saw" every day with respect to debriefing in a few straightforward steps:

1) Find a learner
Anybody will do (a trainee, a student, a colleague...)


2) Rustle up some learning objectives
The learning objectives can come from your learner (e.g. "What do you want to focus on today?" "What do you want to get out of today?" "What have you been struggling with?") Or they can come from you.


3) Have an experience together
This can be pretty much anything: inserting a nasogastric tube, carrying out a laparoscopic cholecystectomy, doing the drug round on a ward, going on a home visit, etc. The proviso is that you must have enough mental workspace available to observe the learner. This does not mean that you must be "hands off". However, if you are too involved in the experience yourself, perhaps because it is complicated or time-critical, you are unlikely to be able to have a conversation with the learner about their performance.


4) Practise your debriefing skills (as per the SCSCHF method)

a) Reactions
Ask them how that felt. What are their emotions about the experience?

b) Agenda
Ask them what they thought went well and what the challenges were.

c) Analysis
The assumption is that you don't have the time to spend 30 minutes in this phase of the debrief, so focus on just one thing. Use good questioning technique (taught on the faculty development course) to delve into the mental frames, heuristics, assumptions etc. which led to this being a challenge or a good performance.

d) Take Home Messages
What is your learner going to do differently or the same next time, based on your facilitated discussion?


5) Get feedback
Practice does not make perfect; practice makes permanent. Deliberate practice with feedback propels you up the slope towards perfection. So get feedback from the learner: what was good about the way you helped them learn? What didn't work? If you can, now and again get a colleague who has also been on the faculty development course to sit in on the above and give you feedback as well.


6) Reflect on your performance
This does not have to take long or to be done then and there. At some stage reflect on your performance with the benefit of the feedback you have obtained. What are you going to do differently next time?


7) Repeat
Do steps 1-6 again. Tell us how you get on...

Wednesday, 23 March 2016

Simulation and Learner Safety

When we talk about safety in simulation we are primarily referring to patient safety, in two senses. The first is that one of the main reasons for carrying out simulation is to improve patient safety: by looking for latent errors, improving teamwork, testing equipment, etc. The second is that "no patient is harmed" during simulation exercises.

In the brief before the simulation event, safety is also often mentioned in the establishment of a "safe learning environment (SLE)" and, in this context, it refers to Learner Safety. A recent clinical experience reinforced my appreciation of the SLE.

It was 10pm and I was the resident on-call when my phone went off to tell me that a polytrauma was on its way in: two adults and three children had life-threatening injuries after a collision on the motorway. Although I have been an anaesthetist for 13 years, a consultant for 5 of those, my clinical experience of polytrauma in adults is minimal and in children is essentially nil. I have looked after a man who had major injuries and 95% burns after an industrial explosion, another man who suffered severe injuries after he ran his car underneath a flatbed truck, and the occasional stabbing and shooting victim. In children, I have intubated a 2-week-old "shaken baby" and anaesthetised a large number of children on the trauma list for broken wrists, arms, ankles, etc.

When faced with infrequent events it is not unusual to carry out a memory scan to draw on previously obtained knowledge relevant to the situation at hand. I remembered the above patients and I also remembered a simulation course I had been on at the SCSCHF: Managing Emergencies in Paediatric Anaesthesia for Consultants (MEPA-FC). My scenario involved a boy who had been run down by a car and had a number of injuries, including a closed intracranial bleed. My first thought when I remembered this scenario was "I did okay". Then I mentally went through the scenario again, thought about what had gone well and what, with input from the debrief, I should have done better. This, then, was the knowledge I had front-loaded and the emotional state I was in when the patients arrived in the ED.

When I talked through the above with David Rowney, the facilitator on the MEPA-FC course, he expressed surprise that my first thought was "I did okay" rather than remembering the Take Home Messages for my scenario. But there it is. It may be that I am very different from other people but I think it is not unusual to have an emotive reaction to a memory before a logical one.

This then made me think about the simulation participant who might not have had the SLE I had. The participant who, after their paediatric trauma scenario, had been hauled over the coals and made to feel incompetent. What would the emotional state of that doctor be as they walked down to the ED? And how would that affect their performance?

This blogpost is not a plea to "take it easy" or "be gentle" with participants. Poor performance must be addressed, but it must be addressed in a constructive manner. Help the participant understand their performance gaps and how to bridge them, while at the same time remembering "I'm okay. You're okay." Very few of us come to work (or to the simulation centre) to perform poorly. In fact, most people in a simulation are trying to perform at the peak of their ability. When they fall short it is important to help them figure out why that is, while reassuring them that they are not "bad".

Wednesday, 9 March 2016

Book of the month: Resilient Health Care (Hollnagel, Braithwaite and Wears (eds))


About the editors
Erik Hollnagel has a PhD in Psychology and is a Professor at the University of Southern Denmark and Chief Consultant at the Centre for Quality Improvement, Region of Southern Denmark. He is the chief proponent of the Safety-II paradigm and helped to coin the term "resilience engineering".
Jeffrey Braithwaite, PhD, is a professor and the director of the Australian Institute of Health Innovation and the Centre for Health Care Resilience and Implementation Science, both based in the Faculty of Medicine and Health Sciences at Macquarie University, Australia. He is also an Adjunct Professor at the University of Southern Denmark.
Robert Wears, MD, PhD, is an emergency physician and professor of emergency medicine at the University of Florida and visiting professor at the Clinical Safety Research Unit, Imperial College London.

About the contributors

There are 27 other contributors, including well-known names such as Charles Vincent and Terry Fairbanks. The contributors are a worldwide selection, encompassing the US, Europe and Australasia. The majority come from a sociological or psychological research background rather than a front-line clinical one.

Who should read this book?

This book will be of interest to those who are tasked with improving patient safety within their organisation, whether this is by collecting and analysing incident reports or "teaching" healthcare workers. It would be useful reading for board members, healthcare leaders and politicians involved in healthcare.

In summary

The book is divided into 3 parts (18 chapters), as well as a preface and an epilogue by the editors:

  1. Health care as a multiple stakeholder, multiple systems enterprise
    1. Making Health Care Resilient: From Safety-I to Safety-II
    2. Resilience, the Second Story, and Progress on Patient Safety
    3. Resilience and Safety in Health Care: Marriage or Divorce?
    4. What Safety-II Might Learn from the Socio-Cultural Critique of Safety-I
    5. Looking at Success versus Looking at Failure: Is Quality Safety? Is Safety Quality?
    6. Health Care as a Complex Adaptive System
  2. The locus of resilience - individuals, groups, systems
    1. Resilience in Intensive Care Units: The HUG Case
    2. Investigating Expertise, Flexibility and Resilience in Socio-technical Environments: A Case Study in Robotic Surgery
    3. Reconciling Regulation and Resilience in Health Care
    4. Re-structuring and the Resilient Organisation: Implications for Health Care
    5. Relying on Resilience: Too Much of a Good Thing?
    6. Mindful Organising and Resilient Health Care
  3. The nature and practice of resilient health care
    1. Separating Resilience from Success
    2. Adaptation versus Standardisation in Patient Safety
    3. The Use of PROMs to Promote Patient Empowerment and Improve Resilience in Health Care Systems
    4. Resilient Health Care
    5. Safety-II Thinking in Action: 'Just in Time' Information to Support Everyday Activities
    6. Mrs Jones Can't Breathe: Can a Resilience Framework Help?

I haven't got the time to read 238 pages...

For the time-poor, the preface and epilogue are worth reading. Chapter 3 on the challenges resilience poses to safety, Chapter 5 on quality versus safety and Chapter 11, co-authored by Charles Vincent, on the downsides of resilience, are also worth reading.

What's good about this book?

This book makes it clear that "resilience" can mean different things to different people. The authors identify resilience as part of the defining core of a system, something a system does rather than something that it has (p.73, p.146, p.230). This is in contrast to some who call for more resilient healthcare workers, with the implication that if they were "tougher" then they would make fewer mistakes. Resilience is also not just about an ability to continue to function but an ability to minimise losses and maximise recovery (p.128).

The authors also make it clear that resilience is not a self-evidently positive attribute. More resilience in a system does not come without cost: a resilient system may, for example, resist "positive" change, such as some of the changes that the patient safety movement is trying to embed. Safety may focus on standardisation and supervision while resilience focuses on innovation, personalisation and autonomy (p.29). In Chapter 3, René Amalberti argues that "it is not a priority to increase resilience in health care. The ultimate priority is probably to maintain natural resilience for difficult situations, and abandon some for the standard" (p.35).

The book helps to explain the lack of rapid advance in patient safety because of the "economic, social, organisational, professional, and political forces that surround healthcare" (p.21). Healthcare may be unique in the diversity and strength of these influences. In addition the authors argue that there is a gap between the front-line and those who manage "safety" (p.42), a finding echoed by Reason and Hobbs in their book on maintenance error.

The book makes a good critique of the "measure and manage" approach of Safety-I (p.41) which:
  • is retrospective
  • focuses on the 10% of occasions when things go wrong, rather than the 90% when they go well
  • misses learning to be found in safe practice
  • focuses on the clinical microsystem rather than the wider socio-cultural, organisational, political system 
Lastly, much work is currently focused on standardisation; however, the authors argue that we should acknowledge the inevitability of performance variability and the need to monitor and control it, by dampening it when it is going in the wrong direction and amplifying it when it is going in the right direction (p.13). The standardisation that does improve resilience is the type that decreases the requirements for effortful attention or the need to memorise (e.g. checklists, layout of workplaces).


What's bad about this book?

Throughout this book, resilience is linked with the Safety-II concept (e.g. Chapter 1: "Making Health Care Resilient: From Safety-I to Safety-II"). The argument for Safety-II can be a nuanced one, so a good book on resilience would use simple language and provide specific examples. This book fails on the former and performs poorly on the latter. In particular, how Safety-II can be put into practice now is only vaguely referred to; even the chapters which purport to show resilience in action do not make this very clear. Exceptions include Chapter 12, "Mindful Organising and Resilient Health Care", which suggests that people should be shown their inter-relations, i.e. how their actions affect those who interact with a patient upstream and downstream of them.


At times the championing of Safety-II gives its proponents the appearance of a cult, e.g. "Enlightened thinkers in both industry and academia began to appreciate..." (p.xxiv), while one must imagine that unenlightened thinkers continued to live in their caves. There are also attacks on the PDCA/PDSA cycle (p.177) and the use of barriers (p.131) as Safety-I thinking. In addition, Safety-I, as a term and paradigm, was created by Safety-II advocates, and "pure" Safety-I probably does not exist. For example: "In contrast to Safety-I, Safety-II acknowledges that systems are incompletely understood..."; however, very few people working in healthcare, even within a Safety-I system, would argue that they fully understand the system.


One of the examples in the book of proactive safety management is the stockpiling of H1N1 drugs and vaccines in 2009. This was later deplored by a number of sources as the mild epidemic killed fewer people than seasonal flu and millions of pounds of stockpiles had to be destroyed. 

Lastly, one of the arguments the authors use against Safety-I thinking is that focusing on the small number of adverse events means we miss the opportunity to look at all the times things went well. However, with around 10% of patients admitted to UK hospitals subjected to iatrogenic harm (Vincent et al., 2008), the number of times things go wrong is still a large chunk of the total work.

Final thoughts

This book makes a strong argument that we must stop looking purely at what has gone wrong in order to find out how to prevent mistakes. It also makes it clear that healthcare, as a complex adaptive system, will not be "fixed" by silver bullets, and that all solutions to problems create their own problems.

The concepts underpinning Safety-II, which include an urge to focus less on incidents and accidents and more on things that go well, are antithetical to much current thinking within healthcare. In addition patients and their families would not accept "I'm sorry you were harmed but we're focusing on things that go right" as an apology. This means that rather than pushing Safety-II, it may be more effective to advocate Safety-III. In Chapter 12 this is defined as: 
"... enactive safety - embodies the reactive [Safety-I] and proactive [Safety-II] and therefore both bridges the past and future, and synthesises their lessons and prospects into current action." (p.155)
Hollnagel himself says "...the way ahead does not lie in a wholesale replacement of Safety-I by Safety-II, but rather in a combination of the two ways of thinking" (p.16). Safety-III may turn out to be a quixotic Theory of Everything. Or it may mature into an accepted, practical and applied paradigm, with "a degree of autonomy at the interface with the patient, yet predictability and effectiveness at the level of the organisation" (p.132). Its adherents still have much work to do.

Further reading:


Vincent, C., et al. (2008). Is health care getting safer? British Medical Journal, 337:a2426.