Einstein is credited with saying “The important thing is not to stop questioning”, while Euripides apparently said “Question everything. Learn something. Answer nothing.” I’m not sure of the origins of either quote – but I think I must have inhaled both of them when I was a toddler, because I have never stopped asking ‘why’!
In clinical reasoning, there is a real risk of having our very human cognitive biases kick in before we can even draw a breath. This isn’t surprising – but it is very, very subtle, and we can fail to identify our biases until well after we’ve made decisions based on hunches, intuition or other incomplete information – if we pick them up at all.
And if we think that years of experience make us immune to these errors – think again! Not to mention that if we do have this problem highlighted, we’re also inclined to think that ‘it doesn’t apply to me because I’m so much more thorough, better informed, careful and aware of the problem’. Yeah right, as an infamous advert in NZ says!
Intuition is a term that has been used and abused in health care. A strict definition from Princeton Uni, no less, states: “intuition n. The act or faculty of knowing or sensing without the use of rational processes; immediate cognition.” In other words, intuition is knowing without thinking. Some people use it to mean their ‘knack’ of knowing what is ‘really’ wrong with a patient, or their sense of what someone is feeling, or why something has happened. Other people use it as a way to describe their way of working – coming up with ‘the right thing’ without systematically going through an assessment, hypothesis development and testing process. And still others use it to excuse failing to read the literature or to keep up with ongoing learning.
The truth is, because of our human cognitive biases, intuition could be one of a couple of things:
- It could be simply guessing – putting two and two together and occasionally coming up with four. While this happens on the odd occasion, we’re likely to think it happens more often than it really does, and this can confirm that we have ‘the knack’.
- It could be over-learned patterns of observation that we are now no longer aware of. Remember the trip from home to work this morning? I’ll bet that most of us can’t really recall every kilometre of that journey, and in one sense I guess we could call the trip ‘intuitive’. Over time, we can very quickly make observations of a patient and arrive at an impression before we’ve carried out any deliberate assessment process. Some people think that psychics have this ability to very quickly ‘read’ the nonverbal behaviour of people and detect minute changes in facial and body expression that most of us wouldn’t notice.
- Worse: it could be that we were actually incorrect, but the patient hasn’t said anything (demand characteristics), or the intervention by happenstance does ‘something’ – and we are none the wiser!
I think the problem of biases in clinical reasoning is only exacerbated by ‘self-reflection’ as a major approach to examining our own clinical practice. This is for many reasons, but one is that we consistently overestimate our own abilities (the Dunning-Kruger effect). After all, most of us would estimate our driving ability to be ‘above average’! So we are likely to look at our therapy and think we’re doing OK, especially if we don’t have another referent to use as a benchmark. It’s one reason I think a useful strategy is to record a session and review it with a systematic checklist to see how well we’ve followed optimal treatment protocols. Another strategy is to ask another clinician to sit in on a treatment session.
A further problem with cognitive biases that particularly plagues clinicians is that we are very good at reasoning back – after the fact! So we may be ‘intuitive’ in-session, then after the session we identify the purpose and processes we used – and a final bias to really knock the reasoning stuffing away is that we’re inclined to convince ourselves of our own ‘rightness’ as part of justifying our treatment. The more we discuss why we used a particular approach, the more convinced we become that this was the right strategy.
What to do, what to do
- The first step is to become aware of the probability that any clinical reasoning we do will be subject to these cognitive biases. No-one is immune: from novice to highly experienced clinician, we’re all inherently vulnerable to the thinking errors that have given us humans such a head start in dominating the world.
- The next step is to put some strategies in place to counter the most common biases. For me, this means systematically collecting a lot of clinical information across many domains, and delaying making a decision on ‘what is going on’ until after I have done this. It means investing a good deal of time in assessment before beginning treatment. It also means generating several competing hypotheses about what ‘might’ be going on.
- It means looking at outcomes dispassionately – using outcome measures that are less subject to demand biases than asking ‘How do you feel now?’ Taking at least three outcome measures: one before treatment, one after treatment and one at follow-up (actually, I’d make the one after treatment happen several weeks after treatment, and the follow-up several months – but this takes buy-in from the funder).
- It means questioning everything carried out as part of treatment. Questioning and challenging and holding up our processes to someone else’s scrutiny – preferably someone who is prepared to challenge and question treatment choices just as strongly. This can take the form of a file review by the whole team, or maybe a random sample of patient treatments reviewed against written protocols. Preferably reviews by someone other than you!
While self-reflection is a great approach to use every day, it simply is not enough to ensure good therapy. We all need to be able to hold our work up to others for review, because our biases are like blind spots.
I’ve linked to Croskerry’s 2002 paper on heuristics and clinical reasoning. While this paper refers to emergency medicine, just because we may work in chronic pain management does not mean we make decisions any differently! In fact, we often deal with highly complex interacting variables, with outcomes that are often quite subtle and develop over a long period, perhaps over months – and this makes our reasoning even more challenging. I liked this paper because it describes each bias, lists several of its common names, describes its effect and provides strategies to counter it. Read – and weep! We have to take note of all these biases in some way, or we may be trying to use ‘intuition’ when it doesn’t help one bit.
Croskerry, P. (2002). Achieving quality in clinical decision making: Cognitive strategies and detection of bias. Academic Emergency Medicine, 9(11), 1184–1204. DOI: 10.1197/aemj.9.11.1184