From the particular to the general – Clinical reasoning in the real world



I make no secret of my adherence to evidence-based healthcare. I think using research-based treatments, chosen from those known to be effective for a particular group of people in a specific context, helps provide better healthcare. But I also recognise problems with this approach: people in clinical practice do not look like the “average” patient. That means using a cookie cutter, or algorithm, as a way to reduce uncertainty in practice doesn’t, in my humble opinion, do much for the unique person in front of me.

I’ve been reading Trisha Greenhalgh’s recent paper, “Of lamp posts, keys, and fabled drunkards: A perspectival tale of 4 guidelines”, in which she describes her experience of receiving treatment based on the original description given for her “fall”. The “fall” was a high-impact cycle accident with subsequent limb fractures, yet at age 55 she was offered a “falls prevention” treatment because she’d been classified as “an older person with a fall”. Great guidelines practice – wrong application!

Greenhalgh goes on to say that “we should avoid using evidence-based guidelines in the manner of the fabled drunkard who searched under the lamp post for his keys because that was where the light was – even though he knew he’d lost his key somewhere else”.

Greenhalgh (2018), quoting Sir John Grimley Evans

When someone comes to see us in the clinic, our first step is to ask “what can I do for you?”, or words to that effect. What we’re looking for is the person’s “presenting symptoms”, with some indication of the problem we’re dealing with. Depending on our clinical model, we may be looking for a diagnostic label such as “rheumatoid arthritis”, or a problem such as “not sleeping until three hours after I go to bed”.

What we do next is crucial: We begin by asking more questions… but when we do, what questions do we ask?

Do we follow a linear pattern recognition path, where we hypothesise that “rheumatoid arthritis” is the problem and work to confirm our hypothesis?

Our questions might therefore be: “tell me about your hands – where do they hurt?”, and we’ll be looking for bilateral swelling, and perhaps fatigue, family history, and any previous episodes.

Or do we expand the range of questions, and try to understand the path this person took to seek help: How did you decide to come and see me now? Why me? Why now?

Our questions might then be: “what do you think is going on? what’s bothering you so much?”

Different narratives for different purposes

Greenhalgh reminds us of Lonergan (a Canadian philosopher), as described by Engebretsen and colleagues (2015), where clinical enquiry is described as a complicated process (sure is!) of four overlapping, intertwined phases: (a) data collection – of self-reported sensations and observations, otherwise known as “something is wrong and needs explaining”; (b) data interpreting – “what might this mean?” – by synthesising the data and working to recognise possible answers, or understanding; (c) weighing up alternative interpretations, or judging; and (d) deciding what to do next – “what is the right thing to do” – or deliberation.

Engebretsen and colleagues emphasise the need to work from the individual’s information towards general models or diagnoses (I’d call this abductive reasoning), and argue that this process in the clinic should be “reflexive” and “informed by scientific evidence” – but they warn that reflexive approaches cannot simply replace scientific evidence.
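To make those four phases concrete, here’s a minimal sketch in Python. It’s purely illustrative – the hypothesis labels, findings and “fit” score are all invented, and this is not a clinical tool – but it shows the shape of deliberating over several competing interpretations rather than confirming a single hypothesis:

```python
from dataclasses import dataclass, field


@dataclass
class Hypothesis:
    """One candidate interpretation of the person's presentation."""
    label: str
    expected_findings: set[str] = field(default_factory=set)

    def fit(self, findings: set[str]) -> float:
        # Phase (c), judging: how much of the collected data does this
        # interpretation actually explain? (Toy score, for illustration only.)
        if not self.expected_findings:
            return 0.0
        return len(self.expected_findings & findings) / len(self.expected_findings)


def deliberate(findings: set[str], candidates: list[Hypothesis]) -> list[tuple[str, float]]:
    # Phase (d), deliberation: rank the competing interpretations
    # instead of confirming the first one that springs to mind.
    return sorted(((h.label, h.fit(findings)) for h in candidates),
                  key=lambda pair: pair[1], reverse=True)


# Phase (a), data collection: the person's own report comes first.
findings = {"bilateral hand pain", "morning stiffness", "worried about keeping job"}

# Phase (b), interpretation: generate more than one candidate explanation.
candidates = [
    Hypothesis("rheumatoid arthritis",
               {"bilateral hand pain", "morning stiffness", "family history"}),
    Hypothesis("work-related overuse",
               {"bilateral hand pain", "worried about keeping job"}),
]

print(deliberate(findings, candidates))
```

The detail of the scoring doesn’t matter; what matters is the direction of travel – from the particular (this person’s report) to the general (candidate models), with the ranking held open rather than collapsed to a single label at the first question.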

The reason for conceptualising clinical reasoning in this way is that a narrative primarily aimed at confirming a suspicion will likely narrow the range of options considered, and, if it’s focused on diagnosis, may well override the person’s main concern. A person may seek help not because he or she wants a name, or even treatment, but because of worries about work, the impact on family, or fears it could be something awful. And without directly addressing those main concerns, all the evidence-based treatments in the world will not help.

Guidelines and algorithms

Guidelines, as many people know, are an amalgamation of RCTs, usually assembled by an esteemed group of experts in an attempt to reduce the unintended consequences of poorly reasoned treatment. They’re supposed to be used to guide treatment, supporting clinical reasoning with options that, within a particular population, should optimise outcomes.

Algorithms are also assembled by experts and aim to provide a clinical decision-making process where, by following the decision tree, clinicians end up providing appropriate and effective treatment.

I suppose, as a rather idiosyncratic and nonconformist individual, I’ve bitterly complained that algorithms fail to acknowledge the individual; they simplify the clinical reasoning process to the point where the clinician may not have to think critically about why they’re suggesting what they’re suggesting. At the same time I’ve been an advocate of guidelines – can I be this contrary?!

Here’s the thing: if we put guidelines in their rightful place, as a support or guide to help clinicians choose useful treatment options, they’re helpful. They’re not intended to be applied without first carefully assessing the person – listening to their story, following the four-step process of data collection, data interpretation, judging alternatives, and deciding on what to do.

Algorithms are also intended to support clinical decision-making, not to replace it! I think, however, that algorithms are more readily followed… it’s temptingly easy to go “yes”, “no” and make a choice by following the algorithm, rather than going back to the complex and messy business of obtaining, synthesising, judging and deciding.
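To see just how seductive that yes/no path is, here’s a deliberately naive sketch in Python of a falls-pathway decision tree, loosely modelled on Greenhalgh’s example above. The age threshold and field names are my own inventions for illustration – no real guideline is being quoted:

```python
def falls_pathway(age: int, presented_with_fall: bool) -> str:
    """A toy decision tree, followed branch by branch, with no judging step."""
    # Nothing in these branches asks *how* the fall happened, or what
    # the person is actually worried about -- only age and "fall" exist here.
    if presented_with_fall and age >= 50:
        return "refer to falls-prevention programme"
    return "routine follow-up"


# A 55-year-old with limb fractures from a high-impact cycle accident
# comes out exactly the same as a frail older person who tripped at home:
print(falls_pathway(age=55, presented_with_fall=True))
# -> refer to falls-prevention programme
```

The algorithm isn’t “wrong” on its own terms; it simply never collects the data that would make the distinction matter – which is exactly why it can support, but never substitute for, the messy business above.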

Perhaps it’s time to replace the term “subjective” in our assessment process. “Subjective” carries notions of “biased”, “emotional”, “irrational”, while “objective” implies “impartial”, “neutral”, “dispassionate”, “rational”. Perhaps if we replaced these terms with the more neutral “data collection” or “interview and clinical testing”, we might treat what the person says as the specific – and only then move to the general to see whether the general fits the specific, not the other way around.

 

Engebretsen, E., Vøllestad, N. K., Wahl, A. K., Robinson, H. S., & Heggen, K. (2015). Unpacking the process of interpretation in evidence‐based decision making. Journal of Evaluation in Clinical Practice, 21(3), 529-531.

Greenhalgh, T. (2018). Of lamp posts, keys, and fabled drunkards: A perspectival tale of 4 guidelines. Journal of Evaluation in Clinical Practice, 24(5), 1132-1138. doi:10.1111/jep.12925

Comments

  1. Bronnie, thank you for raising these important issues for discussion.

    However, for the sake of completeness, I would point out that a number of different “styles” of medical diagnosis have been identified [Card & Good, 1971; Scadding, 1972; Stanley & Campos, 2013].

    The thinking involved can include recognition of patterns from the clinical description, estimation of probability of a specific disease being present, the use of a diagnostic algorithm, and employment of mechanism-based physiological reasoning.

    In complex cases, deeper investigation for discriminating features can also be undertaken.

    References:

    Card WI, Good IJ (1971). Logical foundation of medicine. Brit Med J, 1: 718-720.

    Scadding JG (1972). Viewpoint: the semantics of medical diagnosis. Ann Thoracic Surg, 3: 83-90.

    Stanley DE, Campos DG (2013). The logic of medical diagnosis. Perspect Biol Med, 56(2): 300-315.

    1. Hi John, I agree there are more than one or two models of clinical reasoning, though what I see in practice (thinking not of medical practice but of physiotherapy clinical reasoning) is more often than not this linear process…
      My preference is for abductive reasoning, as you’ve probably seen, and of generating multiple competing hypotheses that are collaboratively tested by me and the person I’m seeing. I’m focused on problems rather than on classifications – problems allow more room for me to explore possible contributors which may be a disease process, but more often than not are complex learned associations. Diagnoses assume we know ‘the problem’ and that it will be ‘solved’ by treating the underlying disease process. I don’t think people are as simple as that!

  2. Thanks for this, Bronnie. I appreciate this post and your wise perspectives.
    I have also tried to re-conceptualize the ‘subjective’. I have renamed the ‘subjective assessment’ to the ‘narrative review’ on my forms and when I teach/speak. I hope others can appreciate and see the subtle yet valuable difference.

    Thanks again for all you do,

    Shelly

    1. Thank you so much Shelly! It’s funny: some people think words stand alone and have few implications, while others like me think words are important and we need to be aware of the implications people draw from what and how we describe things. If we can value the narrative review (love that term!) we’ll find ourselves in a much stronger position to be patient/person-centred.

  3. Thank you Bronnie! I, too, was referred to The Falls Prevention Programme, after I fell and fractured my shoulder while I was out running. What totally “cracked me up” was that at the time I was employed to deliver the same community based programme! I find I am using the narrative review as the foundation of my interventions in palliative care. Nothing makes better sense!
