Last week I discussed an interview with F Sommer Anderson and also discussed aspects of central sensitisation syndromes, and Will Baum from Where the Client Is kindly forwarded me a response by Dr Anderson. I am going to muse on one or two aspects of her response because they raise issues that I think are relevant to anyone working in health – and more importantly, anyone working in pain management.
Pain, like many other conditions, is complicated by the fact that it’s invisible – we don’t have any objective measures of pain itself, and we have to rely on behaviours (including verbal self-report and movements) to determine whether treatment has done any good. Behaviours are strongly influenced by external factors such as other people’s responses, along with internal factors such as beliefs and expectations.
We therefore have a ‘proxy’ for the experience of pain that doesn’t correlate all that well with anything else – not even tissue damage, nociception, or disability. Pain behaviour does, however, have a relationship with psychological distress, and consequently, when distress is reduced, pain behaviours often also reduce.
What this means for anyone working in the field of pain management (or pain reduction) is that it’s difficult to establish whether what we’re doing is changing the pain experience itself, or simply the behaviours around having pain, such as how readily the person reports their pain. And it means we have to rely on what the person DOES as a proxy for working out whether our treatment has worked.
I’ve made it very clear that I’m in the business of helping people live well despite having pain; I don’t aim to change the pain intensity much (although sometimes that does happen). This is because the research seems to show that it’s not inevitably the pain intensity that leads to distress and disability – it’s the beliefs and attitudes, and the way in which pain interferes with living life, that contribute to difficulties. Distress associated with pain leads to more searching for treatments, which often temporarily reduce this negative emotion, but distress can increase once the initial response to treatment settles down. We see this time and again in pain management programmes.
How do we work out what is a good treatment? Do we look at how many people have attended for this particular type of treatment? Do we look at how many people return for the same treatment? Do we judge a treatment by how many people say they’re happy with their treatment? Do we judge it by how many books have been sold by the author or originator of a treatment?
All of these things are basically a popularity vote. How personable, charismatic, or even solicitous the treatment provider was; how the treatment was conveyed to the person; how much advertising was carried out; how well-written the books were; how many people studied under the originator of a treatment – all of these can influence how often a treatment is tried.
If you’ve picked nothing else up from my blog posts, you should have seen a bias toward evidence-informed practice. What I mean by this is that I draw from the empirical peer-reviewed literature (with all the publication bias and other faults that it has) to inform my clinical practice. I do this because, although there are faults with published studies, many more of the usual biases that afflict anecdotes and case studies are minimised.
There are loads of criticisms of ‘evidence-based practice’ – the main one being that innovation and novel therapies can’t be introduced until the ‘gold standard’ randomised controlled trials have been carried out. The rigour of having to conduct one of these RCTs means that often the inclusion and exclusion criteria (as well as the treatment itself), and the outcomes achieved, can differ from what we might see in everyday practice.
Another criticism of evidence-based practice is that studies are carried out on groups of people, and results analysed in terms of groups of people – whereas we see individuals, one at a time, all with their unique presentation. How, it is argued, could a ‘generic’ treatment be relevant to the unique person I am seeing?
I’m not a very technical person when it comes to arguing this, but here’s my take on judging treatments.
- Each individual I see has a unique presentation, and my ‘working hypotheses’ for why they have presented this way at this time are just that: hypotheses that need to be tested for this person in his or her situation
- The good thing about science is that all hypotheses are testable – if they can’t be tested, then it’s not really science, it’s faith or belief
- Over time, evidence accumulates, giving more or less support to any particular hypothesis
- Hypotheses are drawn from theories – and theories are judged on how well they explain what we see in the real world, with the least number of special assumptions, and the ability to explain the most
- Theories are modified as more evidence from testing various hypotheses is gathered – this is why there are often so many conflicting views in the media about what risk we face from eating chocolate, drinking wine, enjoying the sun or exercising
- Over time we should see an accumulation of evidence to support a theory – or not. And we certainly see this with respect to the views that ‘humours’ cause disease, that ‘bloodletting’ cures anything – and (dare I be contentious?) that homeopathy does nothing.
Why do I raise this when thinking about Dr Sarno’s theory? Well, it relies on some untestable hypotheses, it was developed many years ago but hasn’t been subjected to RCTs, and instead of scientific evidence for it ‘working’, it draws on numerous case studies, anecdotes and books being sold as evidence. This isn’t my way of practising.
I’ve included this paper on patient autonomy for managing chronic conditions, because it illustrates the dilemma ‘consumers’ and health providers face when they look for treatment. While I really support patient autonomy (after all, I AM a patient – aren’t you? at least sometimes?), at the same time I want to provide something that I know is effective on the basis of systematically controlled studies. If a person comes to me wanting rolfing, deep tissue massage, crystals, or ‘healing the inner child’ as a way to help them manage their persistent pain, I can’t, in all honesty, go ahead and do it (or send them off to find someone else to do it). I believe I have an ethical responsibility to let them know the state of play in the peer-reviewed, scientific literature. I also have to make a judgement about the health literacy of the individual I’m seeing. How much they weigh up the various options, how much they know, the ‘power’ relationship between me and them, and even how I phrase the various options can all influence whether someone will choose to adopt a treatment or not.
This doesn’t make me all that popular at times – either with patients who really want something that they feel comfortable with, or with providers who like to provide a certain treatment irrespective of the status of the scientific literature. But it is about honesty, truthfulness and using a systematic process to discover what has the greatest probability of working for that person.
The dilemma is whether to be patient-directed, or patient-centred. I hope I’m patient-centred. That is, I hope I listen well, share openly, and measure what I do transparently. If a treatment approach isn’t working for this person at this time, using a hypothesis-testing approach to treatment at least means I can evaluate the elements I was working with, and begin again – whereas if the treatment was chosen on the basis of a ‘preference’, ‘belief’, ‘assumption’, or a sales pitch, it’s not so easy to work out why it hasn’t worked. And I can draw on the experiences of many clinicians, researchers and patients to inform my choice of intervention.
Naik, A., Dyer, C., Kunik, M., & McCullough, L. (2009). Patient autonomy for the management of chronic conditions: A two-component re-conceptualization. The American Journal of Bioethics, 9(2), 23–30. DOI: 10.1080/15265160802654111