Musings on theory and clinical work
This muse won’t be quite as lengthy as my last rant on occupational therapy and science, I promise! At the same time, it’s on a similar theme – and touches also on my post about ‘back to the basics’ where I discussed the recent review of pain contracts by ACC in New Zealand.  This review criticised the number of pain intervention services (eg injection therapies) and functional reactivation programmes that are provided without due regard to integrating the psychosocial along with the biomedical.  I suggested that perhaps providers need to be ‘risk profiled’ as well as claimants, because some of the behaviour seen in providers is likely to disregard high risk psychosocial factors and reinforce disability.

If clinicians are to be something other than ‘technicians’ applying a process to their patients, they (we) need to understand the concepts lying behind an intervention.  At the same time, we also need to be able to recognise when an intervention isn’t likely to succeed, and when a variation on an intervention might suit better.  To do this requires effective clinical reasoning – aha! a theme!  I’ve hammered on a bit about clinical reasoning because it underpins WHY we might choose to use one intervention over another.  Clinical reasoning implies working backwards from what is evident in the patient’s presentation to hypothesise about how those features might occur.  In other words, developing a clinical theory to explain how and why the person is showing this behaviour.

Clinical reasoning can be quite straightforward in many settings.  After all, building on the knowledge of centuries, clinicians know enough about bone healing to align a broken bone, stabilise it, and wait for it to heal without doing too much thinking about it.  (Pssst! Don’t tell the orthopaedic surgeons this!)  Similarly, in an acute hospital setting, some fairly simple reasoning is needed to establish whether a person can get on and off the toilet with a raised toilet seat and then issue one if need be!  However, clinical reasoning can be (and usually is) much more complex than this.  Depending on their professional orientation, clinicians working with an acute fracture might ask why the person broke the bone, what the implications of that immobilised limb might be on occupations, might be considering the need for supports at home, might be monitoring for signs of shock – and the point is, these further interventions depend entirely on the theory-base of the clinicians working with that person.

Even in the case of someone needing a raised toilet seat in order to be discharged safely home, the clinical reasoning behind that simple intervention is not just about the biomechanics of getting on and off the toilet!  It could also be asking why the person has trouble standing up and keeping balanced, how the person might cope at night without lighting, whether the person can (or does) ask for help and so on.  Without having a good theoretical framework on which to base information collection, and a similarly effective way to organise that information, the clinician might as well simply issue a raised toilet seat and be done with it!

I’ve deliberately used simple examples to illustrate so-called simple clinical reasoning.  Now let’s consider more complex examples.

Firstly, an analogy.  If I wear a set of glasses that occlude my vision on the left-hand side of each visual field, I can still see. What I can see is limited, and I need to move my head around to scan the whole of my environment, but I can see.  After wearing these glasses for a while, perhaps a week or so, finding my way around becomes easier – and in fact, after just another week of wearing them, I’d have trouble adjusting back to ‘normal’ vision.  The world would look ‘normal’ to me even though part of my visual field is blocked.  New items appearing on the left side of my field of vision could suddenly ‘pop’ out of nowhere, and unless I know I’ve got those glasses on, I could be quite unaware of the amount of visual information I’m missing.

This is exactly what happens when a professional dons a single theoretical perspective.  I’m guessing we can all recall the first years of becoming a professional, and how strange adopting that new ‘persona’ felt.  After a while, though, it becomes familiar and we hardly notice it.  Then along comes new research, new theory, new models and new interventions.  The world gets a little shaken up!  We either integrate this new information, or we work hard to ignore it. ‘High risk’ clinicians are, IMHO, those who fail to recognise the contribution of information from outside their existing frame of reference. It’s my opinion that these clinicians can and should be identified, and either helped to integrate the new knowledge – or not allowed to practice in a field like pain management where the contribution of information obtained from so many fields is critical.

Bringing this back to clinical reality, if we are unaware of the theoretical models or even the professional models we use, we can be completely stumped when a new situation arises, or when a new piece of information is brought to light – a bit like that object coming into view on the left field of vision when I’m wearing those glasses!  By taking the glasses off, opening up the whole visual field, we can be much more aware of the fact that we do have constraints on what we can see, and if we look more broadly we can identify areas we want to look at in more detail.

OK, enough with the analogy.  Some clinicians scoff when I talk about my interest in science, theory, models and the process of clinical reasoning.  I think it’s vital.  Without articulating why a certain intervention is recommended, I think it’s impossible to distinguish between following a protocol as an assistant and being a versatile and adaptive clinician.  An assistant may not know how a process works, just that if (a) and (b) are followed in a certain order, (c) will ensue.  If (g) or (h) are present, an assistant won’t know how to respond.  A good clinician knows that people may present with the same behaviour, but the underlying factors influencing that behaviour could be very different.  For example, someone reporting that he or she can’t sleep and wakes often might be doing so because of pain, a natural wakening during normal stages of sleep, chronic sinus problems, low mood, anxiety – or even the effects of having a new baby in the house!  The work of clinical reasoning doesn’t start with simply ‘identifying the problem’ and then solving it: it begins with the way in which the clinician views the situation and the contributing factors.  If we’re not careful, even as experienced clinicians, we can jump to conclusions or simply ‘assume’ that the clinical problem is the one with which we’re most familiar, or the one that springs to mind most easily.

It takes a lot of effort to avoid prematurely deciding ‘what the problem is’ during a clinical intervention.  Being aware of our cognitive limitations, noticing our assumptions, and broadening our view to search for as many different pieces of information as we can all help to prevent us from working from a recipe – but it’s also hard work.

I’m referring back to Vertue and Haig’s paper on Abductive Theory of Method in clinical reasoning as the basis for today’s post. Read it if you’re keen on science, models, theory and clinical reasoning, and let me know what you think.

Vertue, F. M., & Haig, B. D. (2008). An abductive perspective on clinical reasoning and case formulation. Journal of Clinical Psychology, 64(9), 1046-1068. DOI: 10.1002/jclp.20504

Finally – truth and opinion

This is the last post in this mini-series on why I use science when deciding what interventions to use as a therapist.  As I did yesterday and the day before, I refer to William Palya’s book on research methods – it’s easy to read, available on the internet for free, and although it gives only one view of scientific method, it’s a good start.

Having discussed the first onus – to be ethical – and the second – to be pragmatic – the third is to use a method to help achieve one or both aims. There are two basic things you need to do:
(1) Demand truth and
(2) Have good understanding

So, what is truth?
Well, this can get us into murky waters – especially if you listen to the philosophers! But for practical purposes, we can assume that ‘truth is an accurate description of something that is real’. It’s a process of building up evidence from many sources, at different times and in different places, that describes the same thing, using the fewest assumptions or appeals to special factors that can’t be tested, and describing the majority of the thing under examination. We can use the word ‘phenomenon’ instead of ‘thing’, or ‘event’ or ‘factor’.

Empiricism is one way that is used to determine ‘truth’. Something that is empirical is observed – perhaps through technology, to be sure – and in some way corresponds with something that exists in the real world. As Palya puts it, ‘If we wish to claim that something we cannot experience is real then the burden is on us to prove it to a skeptical audience; that is only fair.’

The evidence needs to be reliable – that is, if you look at it more than once, it should be the same. It should also be the same if anyone looks at it.

There should be more than one source of evidence for the ‘thing’. Palya’s example may help – ‘The more evidence from the wider a variety of sources, the more believable. If the police find a finger print the same as yours at a murder scene, maybe it means you are guilty, maybe it doesn’t. However, if the police also find your wallet there, and the murder weapon in your house, and the tire tracks of your car at the murder scene, and the victim’s jewelry at your house, and your teeth marks on the victim’s throat, and a VCR tape of the murder with you in the starring role – well, then you’re in trouble.’

You can’t be the only person to say it’s so – and the others that agree with you also need to hold to the same ideas about what is ‘true’ and ‘real’. ‘If several observers who abide by the “rules” of science all agree concerning an event then it is probably true. It is reliable, it is objective. If only one person observes something and others do not observe the same thing then it is subjective.’
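Palya’s murder-scene example can even be put in rough numbers. The sketch below is my own illustration, not Palya’s, and the likelihood ratios are invented purely for the sake of the example; it simply shows how, if each clue is treated as independent, every additional source of evidence multiplies the odds in favour of the hypothesis:

```python
# A toy illustration (mine, not Palya's) of why converging evidence from
# independent sources is so much more believable than any single clue.
# Each piece of evidence multiplies the odds that the hypothesis is true
# by its likelihood ratio: P(evidence | guilty) / P(evidence | innocent).
# The likelihood ratios below are invented for illustration only.

def posterior_odds(prior_odds, likelihood_ratios):
    """Combine independent pieces of evidence by multiplying the odds."""
    odds = prior_odds
    for lr in likelihood_ratios:
        odds *= lr
    return odds

def probability(odds):
    """Convert odds back into a probability."""
    return odds / (1 + odds)

prior = 0.001  # the suspect starts out as one of roughly a thousand people

one_clue = probability(posterior_odds(prior, [50]))            # fingerprint only
many_clues = probability(posterior_odds(prior, [50, 20, 30, 10, 40]))

print(f"fingerprint alone: {one_clue:.3f}")    # still quite unconvincing
print(f"converging clues:  {many_clues:.6f}")  # 'well, then you're in trouble'
```

With only the fingerprint, the probability of guilt stays under five percent; with five converging clues it climbs past 99.9% – which is exactly the intuition behind ‘well, then you’re in trouble’.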

The phenomenon needs to be carefully defined so we all agree on what it is and that it describes the essence of the phenomenon. ‘The concept of a horse is false if it includes the saddle or fails to include four legs; it is false if it includes speaking English or fails to include galloping.’
The implication of this aspect of ‘truth’ is that the words we use to describe a phenomenon need to include the critical or essential elements, while excluding those that are not essential. A good definition is unambiguous with respect to what is included and what is not.

The definition you provide must actually have an impact on something that can be measured – because if you can’t confirm that it affects something, it might as well not exist.
‘Things cannot be said to exist outside the impact they have on sensation (resulting measures) or the impact on other things (functional definition). If your idea of the correct concept of a thing exceeds its operational/functional definition, the burden of proof or burden of communication is on you to prove, explain, and communicate the difference.’

I leave the best summary again to Palya: ‘we start with the notion of empirical, reliable evidence with multiple converging support which is operationally/functionally defined and has consensual validation and ask what is beyond. If someone wants to offer something else as a “truth” it must be proven. Truth does not mean anything anybody wants it to mean. Anyone wanting to extend the meaning of truth to something beyond what science has already substantiated must explain to us what they are talking about.’

Some people say that, because various ideas once strongly supported by scientists have been rebutted in recent years, there is no such thing as ‘truth’ and science is nothing more than a set of opinions that change all the time. (eg disc prolapses on MRI were once thought to indicate the source of back pain and therefore to need surgery; now they are thought to be incidental and possibly a ‘normal’ variant in many cases)

Some things do change over time – not because the ‘truth’ part changes, but because more information comes to hand that explains more, or explains more with fewer special assumptions, or has more robust support than the previous ‘truth’. This is, in part, why we describe ‘truths’ as theories – theories can be and should be continually tested and as a result, refined. If a theory cannot be tested – then it’s really a model and needs to be evaluated in terms of how useful it is. If it doesn’t help with making decisions that can be tested, then it’s not useful at all.

Whew!! That’s a lot of theory and philosophy of science!
I think, though, that it’s really important that we, as therapists, work out why we use the interventions we do, that we can point to a method that gives us grounds to rely on those interventions – and that we really do understand what we mean by evidence and science. Otherwise we are only reciting by rote, or working by habit and convention rather than seeking to understand.

What’s understanding?
It means you can describe, predict, know how to influence, synthesise and explain what you are actually doing.

This is from Palya’s chapter – it summarises things quite neatly, I think!

Have a great weekend – it’s Friday here, and I’m about to look for a Friday Funny. Be back soon!

Bad, bad science and why learning about real science is important

I had to chuckle a lot to myself this morning when I went over to Ben Goldacre’s site Bad Science and read through the article on the fad of Brain Gym. Thankfully my kids have mainly managed to avoid this – but oh! what a lot of twaddle dressed up in pseudoscience!

Basically for those who haven’t been exposed to Brain Gym, it’s a series of exercises intended to integrate neural circuitry so that kids learn more easily. A lot of the exercises are fun, they certainly make you think about coordination and they make people laugh – great stuff! Where they fall over is in the use of pseudoscientific claims about the mechanisms involved…
Now Ben makes some really good points about how easy it is for both lay people and people with a degree of sophistication and knowledge to be bluffed by statements that throw in a few polysyllabic words…

He reports on some experiments discussed in the “March 2008 edition of the Journal of Cognitive Neuroscience, which elegantly show that people will buy into bogus explanations much more readily when they are dressed up with a few technical words from the world of neuroscience.”

Here is one of their scenarios. Experiments have shown that people are quite bad at estimating the knowledge of others: if we know the answer to a piece of trivia, we overestimate the extent to which other people will know that answer too. A “without neuroscience” explanation for this phenomenon was: “The researchers claim that this [overestimation] happens because subjects have trouble switching their point of view to consider what someone else might know, mistakenly projecting their own knowledge on to others.” (This happened to be a “good” explanation.)

A “with neuroscience” explanation – and a cruddy one too – was this: “Brain scans indicate that this [overestimation] happens because of the frontal lobe brain circuitry known to be involved in self-knowledge. Subjects make more mistakes when they have to judge the knowledge of others. People are much better at judging what they themselves know.” The neuroscience information is irrelevant to the logic of the explanation.

The subjects were from three groups: everyday people, neuroscience students, and neuroscience academics. All three groups judged good explanations as more satisfying than bad ones, but the subjects in the two non-expert groups judged that the explanations with logically irrelevant neurosciencey information were more satisfying than the explanations without. What’s more, the bogus neuroscience information had a particularly strong effect on people’s judgments of bad explanations. As quacks are well aware, adding scientific-sounding but conceptually uninformative information makes it harder to spot a dodgy explanation.

Go on over to the post, and see for yourself – and then think about some of the pseudoscience involved in chronic pain management… I think many of the explanations for ‘believing’ in Brain Gym apply to therapists adhering to ‘NEW’ ‘IMPROVED’ treatments for things like CRPS or back pain. Let’s hear what you think…