Well, maybe that’s a misnomer for today’s post, but it strikes at the very heart of some of the more heated debates I see when I browse the interweb. With all the conflicting research reports into all the various interventions for chronic pain (well, for anything really), how does a clinician decide when the time is right to start incorporating a new practice (such as working with acceptance, mirror therapy or laterality), or to begin phasing out an old practice (like distraction, core stability or muscle imbalance)?
This paper, one of a series of excellent papers in Best Practice & Research Clinical Rheumatology on the management of low back pain, discusses, in a really accessible way, the various problems that face an earnest clinician who wants to ‘do the right thing’ by his or her patients.
What are the issues?
Most clinicians would be well aware that the way research is conducted can be a whole lot different from the daily reality of clinical work. Researchers must carefully select participants, control extraneous variables, ensure the intervention is rigorously applied, and follow the participants up for a good period of time.
Researchers need to spend a long time setting up a trial – but don’t necessarily have to consider the pragmatics of how to do this with a mixed bag of participants, in a clinical setting that can’t control for so many things (like case managers insisting on a certain timeframe, a GP who won’t support the approach, team members who are keen to use their own approach, and managers who wonder why so much time is spent on measuring outcomes for months after the person has been discharged).
Some clinicians use systematic reviews, best practice guidelines or evidence-based summaries to guide practice.
These synthesise the findings of many studies and assemble recommendations on the basis of the strength of evidence for each component. Ostelo and colleagues note that for low back pain, 51 reviews have been completed on spinal manipulation, with 17 concluding the evidence is neutral and 34 giving a positive review. The methodology of the reviews, assessed according to GRADE (the Cochrane Collaboration’s grading system), was generally poor, although the reviews on spinal manipulation that followed the best methodology tended to arrive at positive conclusions. Other factors were also associated with a positive review – assessing just one type of manipulation, including a clinician who used spinal manipulation on the review team, and completing a comprehensive literature search – but strong conclusions couldn’t be drawn because the review methodology was poor.
Well, if clinical guidelines summarise the evidence (to a certain degree anyway) to reduce some of the demands on busy clinicians, then another question can be asked – how well are guidelines actually implemented? And the answer is – well… even with specific training and support, one study cited in Ostelo and colleagues’ paper found “only modestly improved implementation for certain portions of the recommendations in the Dutch LBP guideline by general practitioners and produced only small concomitant changes in patient management.”
And most guidelines are not accompanied by such systematic and thorough implementation training. In fact, many are simply delivered through the mail – possibly read by the enthusiastic – but often just ‘filed’.
These are the factors that seem to influence adoption of guidelines:
– lack of knowledge (of the guidelines, or of how to integrate new changes),
– a shortage of time to read and consider new approaches,
– disagreement with the guideline content, or reluctance from colleagues to adhere to the guideline,
– ‘getting lost’ in the large number of different guidelines available.
Not to mention managing patients’ expectations – what? No x-ray for my sore back? What kind of a doctor are you?!
The conclusion this paper comes to is that ‘more coordination’ is needed to both develop and then integrate practice guidelines. Guidelines can’t simply synthesise the evidence; guideline developers also need to work alongside patients and practitioners (and funders and health managers!) to discuss and advise on how to use the new evidence in the real world.
What do I think?
I am an advocate of using clinical guidelines, with all their flaws. I’m also an advocate of clinicians reading original research papers to understand the ways in which the research is carried out and to make a judgement on how closely the ‘research’ approach fits with the clinical world of the practitioner.
I waited for about 12 months, and about that many research papers to be published, before I started to think about how mirror therapy and motor imagery could be implemented in my practice. I’m a slow adopter of new interventions. This is partly because I’ve seen so many waves of ‘new’, ‘improved’ treatments that I’m just a little wary of rushing in with enthusiasm. I’ve seen leg length discrepancy touted as ‘the answer’, core stability, muscle imbalance, maintaining lumbar lordosis, various types of lifting practice, swiss balls, exercising with weights on the ankles, pulleys, hydrotherapy – oh, the list goes on.
And in the end? A review of exercise in the same journal as this paper finds yet again that there is little evidence that any specific type of exercise is better than any other. This is the same finding I’ve seen since the mid-1990s.
For me? Watch, wait, be critical, think carefully, and don’t be blown away by anything that promises magic. The people I work with have complex needs and have already been through a health system that has often failed them. I don’t want to promise more than I can deliver. Go gently, be kind, be flexible, be a little conservative about what can be achieved.
Ostelo, R., Croft, P., van der Weijden, T., & van Tulder, M. (2010). Challenges in using evidence to inform your clinical practice in low back pain. Best Practice & Research Clinical Rheumatology, 24(2), 281-289. DOI: 10.1016/j.berh.2009.12.006