Monday, 30 April 2018

Book of the month: Human Factors & Ergonomics in Practice (edited by Steven Shorrock & Claire Williams)


(Conflict of interest: I have had a number of chats with Steven Shorrock, as well as email and Twitter correspondence, and ran a 1-day Human Factors for Surgeons course with him. I have tried to give an objective review.)

About the editors
Steven Shorrock BSc, MSc, PhD (@StevenShorrock) is a chartered ergonomist and human factors specialist and a chartered psychologist. He is the European safety culture programme leader at EUROCONTROL and adjunct senior lecturer at the School of Aviation, University of New South Wales, Sydney, Australia.

Claire Williams BSc, MSc, PhD (@claire_dr) is a chartered ergonomist and human factors specialist. She is a senior HF/E consultant at Human Applications and visiting research fellow in HF and behaviour change at Derby University, Derby, UK.

About the contributors

There are 45 other contributors, including internationally recognised names such as Ken Catchpole, Sidney Dekker, Erik Hollnagel and Martin Bromiley. The contributors are based in North America, Europe and Australia. As befits the title, they are mainly involved in applied, practical human factors and ergonomics.

Who should read this book?

This book should be on the bookshelf (actual or digital) of all those who are involved in HF/E work. This includes the "HF/E curious" with no formal qualifications in HF/E, experienced chartered ergonomists, as well as those who are purchasing the skills of HF/E practitioners. The book will also resonate with simulation-based educators, touching on themes such as safety culture and HF/E in healthcare.

In summary

The book is divided into 4 parts (31 chapters), as well as a foreword and afterword.
Part 1, "Reflections on the Profession", considers the definition of HF/E, as well as its history and current practice. 
Part 2, "Fundamental Issues for Practitioners", looks at some of the roles HF/E specialists have to adopt and the challenges they face. These challenges include carrying out research when the employer is looking for a practical solution, or when the information gathered could be sensitive or embarrassing.
Part 3, "Domain-specific issues", details the musings of specialists currently engaged in a number of different domains. These include "obvious" sectors such as aviation, oil and gas exploration, and the nuclear industry, as well as less well-known sectors such as web engineering, agriculture, and the construction and demolition industry.
Part 4, "Communicating about Human Factors and Ergonomics", explains how to engage with executives as well as those at the sharp end.

I haven't got the time to read 413 pages...

Each part has its own summary to give you an idea of what is going to be discussed and, as should be the norm with edited books, every chapter also starts with a single-paragraph practitioner summary. You can therefore decide which chapters are most likely to be of benefit to you (although see "What's good about this book?" below).


What's good about this book?

There is a degree of soul-searching here not often seen in textbooks. For example, in Chapter 5, "Human Factors and the Ethics of Explaining Failure", van Winsen and Dekker refer to the case of Karl Lilgert. Karl was jailed after the ferry he was in charge of, the "Queen of the North", sank in 2006. In its verdict, the Supreme Court of British Columbia stated: "Maintaining situational awareness at all times and in all circumstances is key to proper navigation." Situational awareness (SA) is a construct which HF/E practitioners (and others) use to explain human behaviour. When terms such as SA are (mis)used by the legal profession, how much responsibility do HF/E practitioners bear? Similar arguments can be made regarding the use of "human error". Although HF/E practitioners might know what they mean when they say "x% of accidents are due to human error", the media and public often do not (p. 87).

The book also reflects on the tension between the HF/E practitioners who work in research/academia and those who work in industry, as well as the place for those who are not formally qualified in HF/E. As with all professions, each group has different priorities, ways of working and cultures. In Chapter 1, Shorrock and Williams argue for a middle ground in which there is “collaboration among those with expertise in theory, method, and aspects of context… and those with deep expertise in their jobs, working environments, and industry” (p.14).

Hollnagel's chapter on "The Nitty-Gritty of Human Factors" (pp. 45-62) is a good read. He talks about a pragmatic approach to human factors and counsels caution when using constructs such as "short-term memory". He advises us to remember that these constructs were created to explain certain observations, but that they remain constructs, with limitations.

Although healthcare workers might not be immediately drawn to a chapter entitled "Becoming a Human Factors/Ergonomics Practitioner" (Chapter 12), it is worth a read for those of us involved in simulation and education. It explores a number of challenges faced by those who want to certify as HF/E practitioners, as well as by those who run the courses which lead to certification. In particular, there is a sense that the courses provide graduates with knowledge but perhaps not the skills required to enter the workplace. A similar problem is seen in healthcare, where nurses graduate with the skills required to do the job from day one, whereas doctors often have a significant amount of "on the job" learning to do. This inability to perform Miller's "shows how" and "does" is something that we can use simulation-based education to address.

Healthcare practitioners, and HF/E workers involved in healthcare, must read Chapter 13, "Human Factors and Ergonomics Practice in Healthcare". It details some of the issues that affect HF/E work in healthcare, including the proliferation of checklists and the "try harder" mentality. Shelly Jeffcott (@drjeffcott) and Ken Catchpole (@KenCatchpole) are rightly optimistic about the future of HF/E in the healthcare setting. Simulation-based educators will also be pleased to see reference to simulation in design and procurement (p. 189).

What's bad about this book?

The authors explain why we should adopt a Safety-II approach, spend more time looking at the system than at the person, and appreciate that the system is complex and intractable. However, there is a dearth of information about what to do in practice. When an avoidable death occurs in healthcare (or any other industry), there is little chance that bereaved families will be satisfied with explanations about complex systems. It would be useful for the reader of "HF/E in Practice" to be given an introduction to current HF/E methods and their uses.


Final thoughts

This is the best book I have read on Human Factors/Ergonomics. Its focus on the applied, practical aspects of HF/E makes it relevant to front-line workers as well as managers and researchers. If the General Medical Council is serious about wanting to involve HF/E professionals in its work, then council members would do well to read this book.



Sunday, 22 April 2018

Making Care Better: Lessons from Space

Healthcare Improvement Scotland supports continuous improvement in health and social care practice, and this event is part of their QI Connect WebEx series, connecting health and care professionals with improvement expertise from all over the world.
This event took place on 8 November at the Planetarium within the Glasgow Science Centre, with more than 120 health and social care colleagues attending in person and many more joining virtually by WebEx.


 “This is always a difficult presentation for me, but it is one of hope. The hope is that the people who hear it will tell the story and spread the word. The similarities in what we did, in terms of understanding, mitigating and minimising risk, is as much a part of your everyday job in caring for your patients as it is mine. To me, I owe it to the next generation of people who climb into the next space craft. I don’t want them to end up in the same situation as my friends, the crew of Space Shuttle Columbia.”

Dr Nigel Packham
Born in London and now living in Houston, Texas, Dr Nigel Packham is no stranger to the world of healthcare. Both his parents were clinicians: his father a urologist and his mother an ophthalmologist. His brother is a recently retired general physician. Nigel himself works at NASA's Johnson Space Center as lead for flight safety, and he managed the review which led to the public release of the Columbia Crew Survival Investigation Report in 2008.

On 16 January 2003, Space Shuttle Columbia (STS-107) embarked on her 28th orbital flight, which was to be a 16-day science mission. At 81.7 seconds into the flight, a piece of foam detached from the external fuel tank and struck the left wing of Columbia, causing significant damage. Whilst in space, Columbia appeared to perform normally, and the crew of seven completed their scheduled experiments successfully and without any cause for concern.

On 1 February 2003, Columbia deorbited and reached entry interface with the Earth's atmosphere (around 400,000 feet in altitude), travelling at 24.5 times the speed of sound. The planned touchdown at the Kennedy Space Center, Florida, was at 14:15 GMT. At 13:58 GMT, Mission Control reported an issue with the inboard tyre pressure on the left side of the Shuttle, and by 13:59 GMT they had lost communication with the crew.


“In the space of 10 seconds we went from being in control to being out of control.”

The tragedy of the last moments prior to the disintegration of Space Shuttle Columbia was graphically shown from different perspectives: through the eyes of those watching on the ground; through a video simulation depicting a vehicle out of control; and through the eyes of the crew, who bravely battled to regain control.

Each of these perspectives shows the same tragic events unfolding, but from a different viewpoint. The story of NASA's learning from the Columbia disaster holds lessons for health and social care.



Is it safe?

The simple fact, as Nigel explains, is that space travel is not without risk and, as in health and social care, we need instead to ask the question: ‘is it safe enough?’ How we manage risk is key. We must identify and understand the likelihood of any risk and mitigate it to minimise the potential impact.

But, who ultimately accepts these risks? In space travel, this would, of course, be the astronauts themselves. Within health and social care, we have a responsibility to ensure that people are supported to make an informed decision about their own care and understand the risks they are ultimately facing. The principles of ‘Realistic Medicine’ now apply, not only globally, but also universally.

The consideration and interpretation of risk changes with the accumulation of knowledge. At the outset of the Space Shuttle Programme in 1981, the risk of a disaster was estimated at between 1:1,000 and 1:10,000. By the completion of the programme in 2011, 135 flights later, modern tools had revised the estimated risk for that first flight to 1:12. New data and accumulated learning made NASA radically reassess their quantification of the risks of space travel.

So what about health and social care? How should we systematically interpret our perception of risk based on our experience of incidents, both locally and nationally? How do we share our knowledge and learning so that we can prevent further tragedies? 


“These were our friends”

Following the Columbia disaster, NASA has carefully considered its culture and leadership model. The decision to publicly share the final investigation report would, no doubt, have been a difficult one, due to the sensitivities for the families and loved ones of the crew, but also for the NASA staff who were responsible for guiding the Shuttle safely back to Earth.

As part of its commitment to continuous improvement, NASA now routinely collects and shares examples of real and potential adverse events at different stages - from blast-off, to orbit, re-entry and landing. Each stage is described, as well as the implications for improved and ultimately safer systems. Sharing this internally to improve its own safety procedures is one thing, but NASA goes a step further by proactively sharing with other spacefaring nations so that they too can learn and avoid making similar mistakes.

Ensuring that we too create a culture within health and social care which supports openness and learning is essential, so that we can continue to make care better. The events in Mid Staffordshire NHS Trust highlighted the fundamental difference in perspectives between the Trust Board, the regulators who oversaw that Trust, the staff, and the families caring for their loved ones. The voices of the weakest - the junior doctors and the families - were not heard until it was too late. From the bedside to the boardroom, there was a deep and fundamental failure to listen and to act.


Lessons for Healthcare

There is much that we can learn from NASA, as well as from other high-risk, high-reliability organisations: specifically, how they have continued to develop processes to support learning and improvement following close calls or poor outcomes. Though there are inherent differences between healthcare and space flight, it is evident that success in both fields ultimately depends on the interaction between systems, people and environment.

The key is to have a better understanding of these interactions within a complex system, and of their relevance when things do go wrong. Often, when reviews of ‘incidents’ or adverse events are performed in healthcare, there is a disconnect between the reviewers and the individuals or teams involved in the care of the patient. This includes differences in understanding of the challenges faced at the various levels of the overall system that is meant to support the provision of good care. Feedback from reviews may be delayed or not shared at all. This highlights the significance of Work-As-Imagined versus Work-As-Done in healthcare, a gap which often constrains the conduct of effective reviews. The result is a lost opportunity to understand weaknesses within the system, a possible misdirection of improvement effort, and difficulty in capturing and sharing learning.

We believe a significant opportunity already exists in healthcare to address these challenges, and we are working on optimising this process for NHS Scotland. Mortality and Morbidity (M&M) reviews, and similar peer review meetings, describe a systematic approach that provides members of a healthcare team with the opportunity for timely peer review of complaints, adverse events, complications or mortality. This facilitates reflection, learning and improvement in patient care. Importantly, such peer review processes also provide the opportunity to explore and learn from the good practice that makes up the significant majority of daily patient care.

When carried out well, structured M&Ms have advantages over other review processes: they take place as near to the event or patient experience as possible, and they help promote a culture which supports openness and learning in organisations. They provide an opportunity for teams to seek multiple perspectives and to describe and discuss the complex systems issues and interactions which may have contributed to the event; these factors can be missed when carrying out case note reviews or audits of care. M&Ms also facilitate sharing of learning and immediate feedback, ensuring concerns are addressed promptly and helping to mitigate against errors recurring whilst a relatively lengthy review process is undertaken. This process brings Work-As-Imagined and Work-As-Done closer together and provides an opportunity for a better understanding of risks and sharing of learning from frontline to board, to improve care.



Authors:
Jennifer, Nigel and Manoj
Manoj Kumar, National Clinical Lead, Scottish Mortality & Morbidity Programme, Healthcare Improvement Scotland / Consultant General Surgeon, NHS Grampian @Manoj_K_Kumar

Robbie Pearson, Chief Executive, Healthcare Improvement Scotland @rpearson1969

Jennifer Graham, Clinical Programme Manager, Healthcare Improvement Scotland @jennigraham8



Thursday, 7 September 2017

What's in your attic?

This blogpost weaves together 4 threads:
1) In Oscar Wilde's only novel, "The Picture of Dorian Gray", Dorian sells his soul to ensure that a portrait of him ages while he remains young. 

2) In his must-read book "Safe Patients, Smart Hospitals" Peter Pronovost argues that healthcare professionals are very good at hiding mistakes from themselves. They compartmentalise mistakes and explain them away because of a belief that "doctors don't make mistakes."

3) In a very good lecture, Scott Weingart argues that:

"The difference between bad doctors and good doctors is not that the bad ones make a bad decision every single shift or even every single week. The difference between a bad doctor and a good doctor may be one bad decision a month. And that's really hard to get self-realised feedback on. There are not enough occurrences of real, objective badness to learn from one's mistakes."

4) Lastly, if we take the numbers from the Institute of Medicine's "To Err is Human" as correct, then we can postulate the following: 
  • There are approximately 100,000 deaths due to medical error per year in the US (1)
  • There are approximately 500,000 doctors in the US (2)
  • Therefore a given doctor will be involved in a death due to medical error once every 5 years
  • Let us assume that only 50% of these deaths are recognised as having been caused by medical error. Then a given doctor will be aware of a patient who died in part due to his/her medical error once every 10 years, or 4 deaths over a 40-year career (the arithmetic is sketched below).
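
If you want to check the arithmetic, here it is as a short Python sketch. The figures are those listed above; the 40-year career length is my own assumption, implied by "4 deaths per career".

# A minimal sketch of the arithmetic behind the bullet points above.
deaths_per_year = 100_000      # approximate deaths due to medical error per year in the US
doctors = 500_000              # approximate number of doctors in the US
recognised_fraction = 0.5      # assumption: only half of these deaths are recognised as error
career_years = 40              # assumed 40-year career (my assumption)

deaths_per_doctor_per_year = deaths_per_year / doctors                  # 0.2, i.e. one every 5 years
recognised_per_year = deaths_per_doctor_per_year * recognised_fraction  # 0.1, i.e. one every 10 years
recognised_per_career = recognised_per_year * career_years              # 4 per career

print(f"Involved in a death due to error every {1 / deaths_per_doctor_per_year:.0f} years")
print(f"Aware of such a death every {1 / recognised_per_year:.0f} years")
print(f"About {recognised_per_career:.0f} recognised deaths over a {career_years}-year career")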

Now for the weaving. 

For a number of reasons, healthcare professionals are unable to have a good understanding of their actual performance. This is partly because our involvement in errors leading to death is (thankfully) rare, and partly because the feedback loop in healthcare is often very long or non-existent.

We are also, because we are human, very good at rationalising our poor performance. Lastly many of our jobs require confidence, or at least an outward confidence, in order to believe that we can do the job and to put patients at ease. 

This means that, like Dorian Gray, we have a public persona which is confident, capable and error-free. But we also have our "true" selves hidden away, perhaps not as pretty as we might like to think.

If this is a problem then what are the solutions? 

Unsurprisingly perhaps, given that this is a simulation & HF blog, one solution is immersive simulation. The simulation has to be realistic enough to trigger "natural" behaviour and actions. Realisation of the differences between one's imagined and actual performance often emerges as the simulation progresses. The simulation can also create the conditions under which poor decisions are more likely to be made. This means that, rather than waiting a month for a sufficiently stressful real-life event to occur, twelve stressful scenarios can be created in a day.

However it is during the debrief that the two personas, the portrait and the person, can be compared. The use of video-based debriefing means that the participant can see their own performance from an outsider's perspective. The facilitator helps the participant see the metaphorical wrinkles and scars that have accumulated over time. The skilled facilitator provides help in taking ownership of the blemishes and advice on how to work on reducing them.

Simulation remains overwhelmingly the domain of the healthcare professional "in training". Consultants, staff grades, registered nurses, midwives and other fully qualified professionals rarely cross the threshold. Perhaps this is because in training the portrait of one's true self is constantly being exposed. It hangs, as it were, above the fireplace or in a prominent position where many people can and do comment on it. Upon completion of training, it is with a sense of relief that the portrait is relocated to the attic. And the longer it stays up there, the greater the fear of the horror we will be faced with if we take it back down.

Face your fears, attend a simulation session and let's clear out that attic together. 

References:
1) DONALDSON, M. S., CORRIGAN, J. M. & KOHN, L. T. 2000. To err is human: building a safer health system, National Academies Press.
2) Number of active physicians in the U.S. in 2017, by specialty area (Accessed 7/9/17) https://www.statista.com/statistics/209424/us-number-of-active-physicians-by-specialty-area/

Monday, 17 April 2017

Translating simulation into clinical practice: Psychological safety

At our sim centre, safety is a key concern. When people mention safety in the context of simulation, the first thought is often the safety of the patient. Simulation is safe for patients because, in the majority of cases, no patient is involved and so no patient can be harmed. Perhaps the second thought is that patient safety is one of the reasons we carry out simulation in the first place.

Safety is not just about the patient however, but also about the simulation participant. In terms of physical safety, at our sim centre we have had sharps injuries, slips and trips, as well as a defibrillation of a mannequin while CPR was in progress. So, physical safety is important.

However, we think that the psychological safety of the participants is as important as their physical safety. Psychological safety “describes perceptions of the consequences of taking interpersonal risks in a particular context such as a workplace” (Edmondson & Lei 2014).  When people feel psychologically safe they will be more willing to speak up, to share their thoughts, and to admit personal limitations. This means that psychological safety is important not just in simulations but also in clinical practice.

The psychologically safe simulation environment is not self-generating; it must be created and sustained by the facilitator and participants. Creating this environment is not a cryptic, mystical feat achieved only by an expert few, but rather a set of behaviours and actions which can be learned. This means that the lessons learnt from creating psychological safety in simulation can be translated into clinical practice. Key concepts are:
  • Flatten the hierarchy
  • Prime people that mistakes will be made
  • Set an expectation of challenging observable behaviours/actions
  • Stress confidentiality

Flatten the hierarchy


A hierarchy is evidenced by a power distance or authority gradient in which certain people are placed "above" others, usually as a result of additional training or skills. A hierarchy, with defined leadership, is essential for safe care. However, when the authority gradient is very steep, those lower down are less likely to challenge behaviour. In aviation this has contributed to a number of well-publicised crashes, including the Tenerife disaster. In healthcare it results in leaders making fatal (for the patient) mistakes without members of their team speaking up to correct them.

Flatten the hierarchy:

In the daily brief, by:
  • Ensuring everyone introduces themselves by their first name
  • Admitting to personal fallibility
  • Setting the tone of expected respect

During the day, by:
  • Gently correcting colleagues who use your title, asking them to refer to you by your first name
  • Protecting those at the bottom of the authority gradient from bullying, harassment or other demeaning behaviour by others.


Prime people that mistakes will be made

In the simulated environment, mistakes are almost guaranteed due to the planned crisis nature of the experience. In clinical practice mistakes cannot be guaranteed, but it is unlikely that no mistakes will happen during a typical day. Where research has been carried out in paediatric cardiac surgery, there were approximately 2 major and 9 minor compensated events per operation (Galvan et al., 2005). It is therefore essential to prime people at the beginning of the day that mistakes are likely, that this is "normal" and that they should be looking out for them.


Set an expectation of respectful challenge to observable behaviours/actions

You have made it clear that people will make mistakes. You can therefore set an expectation that others will challenge any behaviour or action which they are unsure about, which they think is a mistake, or which they think threatens patient safety. Warn people that when their behaviour or action is under scrutiny they will feel uncomfortable and perhaps threatened. Reassure people that these feelings of discomfort signal a "learning moment": either the person raising the concern is correct and a mistake is being averted, or the original action was correct and the person raising the concern can be thanked and the action clarified.

Stress confidentiality

In the simulation environment, with very few exceptions, we can guarantee that the experience will remain confidential with respect to the facilitator (i.e. we will not talk about participant performance after the simulated event is over), and we expect the same from the participants. In clinical practice a similar promise can be made. Of course, errors, particularly those which may recur in other situations, must be reported using the appropriate system in order to help the system learn. However, this does not have to be done on a naming-and-shaming basis, but rather as a collective effort to explain how an error happened, how it was dealt with and how it may be prevented in the future. In addition, this is an opportunity to stress that you will not talk about any mistakes behind people's backs or use the reporting system as a weapon to punish people.

Psychological safety and mistakes

Staff psychological safety will improve patient health
One of the concerns that people may have is that the “psychologically safe” unit/team/department will be more tolerant of error and therefore make more mistakes.  In 1996, Amy Edmondson looked at eight hospital units and, with the help of a survey instrument and a blinded observer, rated their psychological safety with respect to medication errors. She found that the more psychologically safe the unit was, the greater the number of errors reported. However, she also found that the more psychologically safe the unit, the fewer medication errors the staff actually made. Units which were not psychologically safe not only reported fewer errors but made more.


Final thoughts

Words shape our thinking and we struggle to discuss a concept if we don't have a name for it. It is time that the term "psychological safety" escapes the confines of the simulation centre and enters clinical practice. We all deserve psychological safety at work and you can help make this a reality by using some of the above tips.


References

1)   EDMONDSON, A. C. & LEI, Z. 2014. Psychological safety: The history, renaissance, and future of an interpersonal construct. Annual Review of Organizational Psychology and Organizational Behavior, 1, 23-43.

2)   GALVAN, C., BACHA, E. A., MOHR, J. & BARACH, P. 2005. A human factors approach to understanding patient safety during pediatric cardiac surgery. Progress in Pediatric Cardiology, 20, 13-20.

3) EDMONDSON, A. C. 1996. Learning from mistakes is easier said than done: Group and organizational influences on the detection and correction of human error. The Journal of Applied Behavioral Science, 32, 5-28.