From the particular to the general –
Clinical reasoning in the real world
I make no secret of my adherence to evidence-based healthcare. I think using research-based treatments, chosen from those known to be effective in a particular group of people in a specific context, helps provide better healthcare. But I also recognise problems with this approach: people in clinical practice do not look like the “average” patient. That means using a cookie-cutter approach, or an algorithm, as a way to reduce uncertainty in practice doesn’t, in my humble opinion, do much for the unique person in front of me.
I’ve been reading Trisha Greenhalgh’s recent paper “Of lamp posts, keys, and fabled drunkards: A perspectival tale of 4 guidelines”, where she describes her experience of receiving treatment based on the original description given for her “fall”. The “fall” was a high-impact cycle accident with subsequent limb fractures, and at age 55 years, she was offered a “falls prevention” treatment because she’d been considered “an older person with a fall”. Great guidelines practice – wrong application!
Greenhalgh goes on to say “we should avoid using evidence-based guidelines in the manner of the fabled drunkard who searched under the lamp post for his keys because that was where the light was – even though he knew he’d lost his key somewhere else”.
– Greenhalgh (2018), quoting Sir John Grimley Evans
When someone comes to see us in the clinic, our first step is to ask “what can I do for you?” or words to that effect. What we’re looking for is the person’s “presenting symptoms”, with some indication of the problem we’re dealing with. Depending on our clinical model, we may be looking for a diagnostic label such as “rheumatoid arthritis”, or a problem such as “not sleeping until three hours after I go to bed”.
What we do next is crucial: We begin by asking more questions… but when we do, what questions do we ask?
Do we follow a linear pattern recognition path, where we hypothesise that “rheumatoid arthritis” is the problem and work to confirm our hypothesis?
Our questions might therefore be: “tell me about your hands, where do they hurt?” – and we’ll be looking for bilateral swelling, and perhaps fatigue, family history, and any previous episodes.
Or do we expand the range of questions, and try to understand the path this person took to seek help: How did you decide to come and see me now? Why me? Why now?
Our questions might then be: “what do you think is going on?” and “what’s bothering you so much?”
Different narratives for different purposes
Greenhalgh reminds us of Lonergan (a Canadian philosopher), as described by Engebretsen and colleagues (2015), who characterise clinical enquiry as a complicated process (it sure is!) of four overlapping, intertwined phases: (a) data collection – of self-reported sensations and observations, otherwise known as “something is wrong and needs explaining”; (b) data interpreting – “what might this mean?” – by synthesising the data and working to recognise possible answers, or understanding; (c) weighing up alternative interpretations, or judging; and (d) deciding what to do next – “what is the right thing to do” – or deliberation.
Engebretsen and colleagues emphasise the need to work from the individual’s information towards general models or diagnoses (I’d call this abductive reasoning), and argue that this process in the clinic should be “reflexive” and “informed by scientific evidence” – while warning that scientific evidence can’t simply be replaced by reflexive approaches.
The reason for conceptualising clinical reasoning in this way is that a narrative aimed primarily at confirming a suspicion will likely narrow the range of options considered, and if it’s focused on diagnosis, may well override the person’s main concern. A person may seek help not because he or she wants a name, or even treatment, but because of worries about work, the impact on family, or fears it could be something awful. And without directly addressing those main concerns, all the evidence-based treatments in the world will not help.
Guidelines and algorithms
Guidelines, as many people know, are an amalgamation of RCTs (randomised controlled trials), usually assembled by an esteemed group of experts in an attempt to reduce the unintended consequences of poorly reasoned treatment. They’re supposed to guide treatment, supporting clinical reasoning with options that, within a particular population, should optimise outcomes.
Algorithms are also assembled by experts and aim to provide a clinical decision-making process where, by following the decision tree, clinicians end up providing appropriate and effective treatment.
I suppose, as a rather idiosyncratic and nonconformist individual, I’ve bitterly complained that algorithms fail to acknowledge the individual; they simplify the clinical reasoning process to the point where the clinician may not have to think critically about why they’re suggesting what they’re suggesting. At the same time I’ve been an advocate of guidelines – can I be this contrary?!
Here’s the thing: if we put guidelines in their rightful place, as a support or guide to help clinicians choose useful treatment options, they’re helpful. They’re not intended to be applied without first carefully assessing the person – listening to their story, following the four-step process of data collection, data interpretation, judging alternatives, and deciding on what to do.
Algorithms are also intended to support clinical decision-making, not replace it! I think, however, that algorithms are more readily followed… it’s temptingly easy to answer “yes” or “no” and make a choice by following the algorithm, rather than going back to the complex and messy business of obtaining, synthesising, judging and deciding.
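To make the contrast concrete, here’s a minimal sketch in Python – purely illustrative, with every question, label and “option” invented for the example rather than taken from any real guideline or algorithm. The structural point is that the decision tree prunes options at each yes/no, while the four-phase path carries the person’s main concern through to the final decision.

```python
# All questions, labels and "options" below are hypothetical, invented
# purely to illustrate the shape of the two reasoning styles.

def algorithm_path(bilateral_swelling: bool, morning_stiffness: bool) -> str:
    """An algorithm as a decision tree: each yes/no answer prunes the
    options, and nothing in the tree asks what the person is worried about."""
    if bilateral_swelling:
        return "option A" if morning_stiffness else "option B"
    return "reassure and discharge"


def reasoning_path(story: dict) -> str:
    """Lonergan's four phases, caricatured in code: collect, interpret,
    judge, deliberate. The main concern survives to the final decision."""
    data = dict(story)                                    # (a) data collection
    interpretations = [                                   # (b) what might this mean?
        label
        for label, fits in {
            "an inflammatory picture": data.get("bilateral_swelling", False),
            "a sleep-related problem": data.get("poor_sleep", False),
        }.items()
        if fits
    ]
    judged = interpretations or ["no clear pattern yet"]  # (c) weigh alternatives
    concern = data.get("main_concern", "unknown")         # (d) decide what to do
    return f"discuss {judged[0]}, and address the concern about {concern}"


if __name__ == "__main__":
    print(algorithm_path(bilateral_swelling=True, morning_stiffness=False))
    print(reasoning_path({
        "bilateral_swelling": True,
        "main_concern": "whether I can keep working",
    }))
```

The sketch is a caricature, of course – real reasoning doesn’t reduce to either function – but it shows why the first path is so much easier to follow on a busy clinic day.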
Perhaps it’s time to replace the term “subjective” in our assessment process. “Subjective” carries connotations of “biased”, “emotional”, “irrational”, while “objective” implies “impartial”, “neutral”, “dispassionate”, “rational”. Perhaps if we replaced these terms with the more neutral “data collection” or “interview and clinical testing”, we might treat what the person says as the specific – and only then move to the general, to see if the general fits the specific, not the other way around.
Engebretsen, E., Vøllestad, N. K., Wahl, A. K., Robinson, H. S., & Heggen, K. (2015). Unpacking the process of interpretation in evidence‐based decision making. Journal of Evaluation in Clinical Practice, 21(3), 529-531.
Greenhalgh, T. (2018). Of lamp posts, keys, and fabled drunkards: A perspectival tale of 4 guidelines. Journal of Evaluation in Clinical Practice, 24(5), 1132-1138. doi:10.1111/jep.12925