Wednesday 19 December 2012

When things go wrong

Many of us who are involved in simulation-based medical education (SBME) continue to carry out clinical duties. There are arguments for and against maintaining a clinical role, which I will not go into in this post. However, for those of us who are still clinically active, there will come a time when, despite the human factors training we have received and deliver, we are involved in a serious adverse event. I thought I would share some thoughts on what to do (and not to do) and how we can use these experiences to enrich SBME.

Step 1: Take ownership of your omissions and commissions
The best way not to learn from an adverse event is to deny that it had anything to do with you, or to insist that it was not your "fault". If you were in the room when the event was happening, there will have been steps you could have taken to prevent or mitigate it. Ensure that the patient and/or family know that you are sorry for what has happened and that the event will be investigated.

Step 2: Write down a full timeline of the events from your perspective (and ask others to do the same)
Doing this as soon as possible after the event gives you the best chance of remembering the details accurately.

Step 3: Analyse the timeline and add human factors commentary
At each stage, and from as wide a perspective as possible, consider the circumstances that led to the event. Were there gaps in knowledge? Did fatigue play a role? Was communication an issue? Were there error traps at work, such as confirmation bias, loss aversion, or recency bias?

Step 4: Debrief
Use the timelines from as many people as possible to create a "master timeline" (which may contain contradictory accounts) and assign a non-involved person versed in human factors to lead the debrief. Remember to list all the things that went well. Identify changes in practice which may attenuate or prevent a similar adverse event.

Step 5: Initiate and sustain changes in practice
As a person who was involved in the adverse event, you have a duty to initiate and sustain changes in your workplace (e.g. use of the WHO checklist, time-outs, encouraging people to speak up).

Step 6: Use the increased understanding of this adverse event in your delivery of SBME
Generally speaking, I would discourage an exact "re-run" of a particular adverse event in the simulator; however, many of the circumstances identified in step 3 will be applicable (perhaps with minor changes) to the courses you currently run.

Step 7: Inform the patient/family
Let the patient and/or family know about all of the above and how the lessons learnt are being applied.


I appreciate that the above is not a perfect sequence, but it is a good starting point. Lastly, if you are involved in a serious adverse event, remember that you too are human, that you too will make mistakes, and that the best possible outcome of a mistake is that you learn from it.

Monday 3 December 2012

Please tell us how we did.

As I mentioned in my previous post, we collect feedback from the participants on all our courses. We don't (yet) have a generic feedback form, so scenario-based courses ask how each scenario went, faculty-development courses ask what we did well and what we could do to improve, and so on.

The more feedback forms I see, and the more I fill out myself, the more I realise that our feedback forms are (generally) not fit for purpose. (For a similar point of view, see this article from the BBC.) In particular, the tick boxes for "pre-course administration", "administration during the course", "catering", etc. actually give us very little information. The majority of participants tick "very good" or "good", while the occasional "poor" or "very poor" remains unexplained.

Even specific questions such as "Was the duration of the course: a) Too short, b) About right, or c) Too long?" can be difficult to interpret. When people ticked "Too short", I wasn't sure whether they meant that the actual course on the day was too short or that they would have preferred a 2-day course. (When I asked anybody who ticked "Too short" to explain what they meant in the comments section, it turned out that they wanted a 2-day course, not that they wanted to finish at 5:30pm or minded if the course finished "early" at 4:45pm.)

Currently we also ask people to fill out the feedback on a piece of paper, which our administrator then transcribes into an Excel spreadsheet, quite a time-consuming task.
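If the responses were collected electronically instead (or the spreadsheet were exported as a CSV file), the tallying at least could be automated. Here is a minimal sketch in Python, assuming a hypothetical feedback.csv with one column per question and one row per participant; the file name and column layout are illustrative, not our actual form:

```python
import csv
from collections import Counter, defaultdict

# Tally responses per question from a hypothetical CSV export,
# e.g. columns "Pre-course administration", "Catering", ...
# with cell values such as "Very good", "Good", "Poor".
tallies = defaultdict(Counter)

with open("feedback.csv", newline="") as f:
    for row in csv.DictReader(f):
        for question, answer in row.items():
            answer = (answer or "").strip()
            if answer:  # skip unanswered questions
                tallies[question][answer] += 1

for question, counts in tallies.items():
    print(question)
    for answer, n in counts.most_common():
        print(f"  {answer}: {n}")
```

Of course, automating the counting does nothing to make the counts themselves any more informative.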

The temptation, then, is to ask just two questions at the end of the course:
1) What did we do well?
2) What could we do better?
Might these two questions get to the heart of the matter? Would the lack of numerical data make it difficult to show funders that the courses are value-for-money?

I would be interested to hear from anybody who has cracked the feedback puzzle.