When and how should new therapies become routine clinical practice?


Following on from my last post about when to adopt new therapies – a wonderful colleague of mine (who shall remain nameless, but You Know I Know Who You Are) sent me a copy of this paper from a physiotherapy journal. Bø and Herbert argue that the current way that new therapies become integrated into our daily clinical work is 'far from optimal because innovative therapies still become accepted practice on the basis of laboratory research alone.' I agree. Worse still, old therapies that have little evidence to support them continue to be used – even in the face of clinical studies demonstrating that they have no greater effect than placebo.

Bø and Herbert suggest there are several ways that new therapies are adopted within physiotherapy practice. I suggest there is little difference between the situation in physiotherapy and that in other health professions!

Clinical experience – this is the traditional way practice in health care has evolved. Experienced therapists hand down 'what works' on the basis of observations within their own practice. Sometimes this is formalised within workshops or conferences, and sometimes case reports are published. Adoption rests mainly on the reputation (or charisma) of the founder of the method. This situation continues today within most health professions, but it fails to account for the biases present in clinical practice. Patient expectation, the natural course of the disorder, placebo, reduction of distress, and failure to control for other external sources of influence all mean the 'expert' can be mistaken. There are ways to ameliorate these biases, such as using good outcome measures and making time for long-term follow-up, but clinical experience alone is insufficient for good generalisations about 'what works'.
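
To make that concrete, here's a minimal sketch in Python (all numbers invented, and nothing here comes from Bø and Herbert's paper) of how the natural fluctuation of symptoms alone can make an inert treatment look effective: people tend to consult when their pain is at its worst, so the next measurement usually looks better no matter what was done.

```python
# A minimal sketch of one bias named above: the natural fluctuation of
# symptoms. Patients tend to consult when their pain is at its worst, so
# the next measurement usually improves even though the 'treatment' here
# does nothing at all. All numbers are invented for illustration.
import random
import statistics

random.seed(0)

apparent_improvements = []
for _ in range(10_000):
    before = random.gauss(5, 2)      # pain score on the day they consult
    if before > 7:                   # they only seek help when pain is bad
        after = random.gauss(5, 2)   # inert 'treatment': pain just fluctuates
        apparent_improvements.append(before - after)

# Prints roughly 3: an entirely inert treatment appears to cut pain scores
# by about three points, simply because of when patients sought help.
print(statistics.mean(apparent_improvements))
```

This is one reason an honest 'before and after' comparison in everyday practice can systematically flatter whatever the therapist happens to be doing.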

Theories based on basic science or ‘preclinical’ research (laboratory findings)
In this paper, Bø and Herbert suggest that as physiotherapy began to develop an academic arm, researchers based their experiments on their knowledge of the basic sciences underlying the profession. Often these laboratory-based experiments led almost immediately to alterations in clinical practice – alternatively, laboratory findings were used to justify existing practice when the results were consistent with it.

The problem is that while laboratory findings can inspire further study, perhaps leading to the testing of new hypotheses, they don't directly translate to what occurs in a real clinical patient. Patients don't conform to the very tight parameters required for experiments: there are multiple variables in their clinical presentation that may be quite important but are too complex to incorporate into basic science experiments. Bø and Herbert point out that laboratory studies measure impairment-level outcomes, while treatments need to be judged on disease-specific or disability and quality-of-life outcomes. Clinical practice is very different from a laboratory!

The 'gold standard' for identifying whether an intervention has an effect in real people is the randomised controlled trial. When well-designed, an RCT can clearly demonstrate that the outcome is attributable to treatment X rather than to extraneous factors. I won't detail today why the RCT is such an important method, except to say that it remains the best way to ensure extraneous variables are controlled for – but RCTs are expensive, time-consuming and difficult to conduct in a clinical setting. It's not surprising that I don't think I've ever read about RCTs for raised toilet seats!
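
For readers who like to see the logic rather than take my word for it, here's a hedged little simulation (hypothetical patients and an invented effect size – again, not from the paper) of why random assignment matters: it balances unmeasured prognostic factors across the arms, so the difference in group means estimates the treatment effect alone.

```python
# A minimal sketch (hypothetical patients, an invented effect size of 0.5)
# of the logic of randomisation: random assignment balances unmeasured
# prognostic factors across arms, so the difference in group means
# estimates the treatment effect alone.
import random
import statistics

random.seed(1)

TRUE_EFFECT = 0.5  # the effect we hope the trial can detect

# 200 patients whose outcomes are partly driven by an unmeasured prognostic
# factor (e.g. the natural course of their disorder).
patients = [random.gauss(0, 1) for _ in range(200)]

# Random assignment to two equal arms.
random.shuffle(patients)
treated, control = patients[:100], patients[100:]

def outcome(prognosis, got_treatment):
    """Outcome = prognosis + treatment effect (if treated) + noise."""
    effect = TRUE_EFFECT if got_treatment else 0.0
    return prognosis + effect + random.gauss(0, 1)

treated_scores = [outcome(p, True) for p in treated]
control_scores = [outcome(p, False) for p in control]

# Prognosis is balanced across arms on average, so this difference in means
# is an (approximately) unbiased estimate of TRUE_EFFECT.
print(statistics.mean(treated_scores) - statistics.mean(control_scores))
```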

Bø and Herbert go on to describe a six-stage protocol to be used when a new therapy is being considered.
Stage 1 – clinical observation, laboratory studies – development phase
A clinician or researcher observes something interesting. Preliminary studies demonstrate that this 'interesting finding' can be obtained consistently. Some hypotheses are developed and tested in a controlled environment.
Stage 2 – clinical exploration – development phase
The hypotheses are explored in a clinical setting; perhaps a prototypical treatment is carried out amongst volunteers, and trial and error (or ongoing hypothesis testing based on a theory or model) shapes the treatment. This is a strongly exploratory phase – and quite exciting!
Stage 3 – pilot studies – development phase
Once one or two specific interventions have been settled on, pilot studies in a controlled but clinical environment are conducted. These might be small-scale group studies, case series, or small RCTs. Further refinement of the practicalities of the treatment is made.
Stage 4 – randomised clinical trials – testing phase
If pilot studies are 'promising', RCTs are carried out. A single swallow does not a summer make, however, so one study is rarely a sufficient basis for wholesale adoption of a new treatment. Replications in different settings, with different clinicians and different patients, are needed.
Stage 5 – refinement – refinement and dissemination phase
This stage may involve 'head-to-head' comparison of the new approach with other, more established treatments. Larger RCTs are needed to identify how subgroups respond to the intervention.
Stage 6 – active dissemination
Now the word can be spread using guidelines, teaching curricula, continuing education – other professionals will learn about it, patients can be informed of the option and even the general population can be advised.

Now, I have absolutely no argument with this staged approach to developing therapy. I will, however, suggest that very few physiotherapy, occupational therapy, or even psychology interventions reach Stage 5 before the treatment is adopted and described as 'evidence-based'!

Lack of research expertise notwithstanding, obtaining funding for these studies is difficult. The hoops that need to be gone through, at least in New Zealand, to prepare a research proposal for even a Stage 1 or 2 study make it challenging for therapists in full-time clinical work to contemplate conducting even basic observational studies. There is an inherent lack of interest from managers of health services in allowing clinicians to spend time on non-treatment-related activities – it simply doesn't pay. And allied health clinicians rarely have the expertise to carry out methodologically strong studies without support from within the profession, from clinical administration and management, and from an academic institution.

I’ll jump off that soapbox quickly now!

Where does that leave clinicians when thinking about adopting a new therapy? More tomorrow on what to say to patients, but in the meantime I want to leave you with this thought: until an RCT is able to demonstrate that treatment X has an effect attributable only to itself, and is applicable to the kind of patients within the kind of setting the therapist is working in, all treatments are really ‘experimental’.

Bø, K., & Herbert, R. (2009). When and how should new therapies become routine clinical practice? Physiotherapy, 95(1), 51-57. DOI: 10.1016/j.physio.2008.12.001

Doing, being, and creating a myth


A couple of things have drawn my thoughts to this topic: the first is a post on the Salford University Occupational Therapy Blog called 'Create your own destiny', in which they ask how educational institutions should prepare new graduate occupational therapists for the Brave New World of health care in which we work. They suggest we look for opportunities to promote 'thinking out of the box' and working from a nonmedical model – but this poses the question of how, in doing so, occupational therapists will still manage to meet the needs of those within a medically-based health care system.

One of the respondents to this post made the point that "I believe that occupational therapy is about occupation, health and well being". I replied with the thought that "occupational therapists identify 'doing' (occupational performance) as their core domain, and all their clinical efforts are focused on helping ensure that people can 'do'". (more…)

On evidence and practice


An opinion piece to restart my blogging after my lovely holiday…

I’ve been reading ABC Therapeutics blog where Chris Alterio writes in response to a long comment by Michele Karnes suggesting that occupational therapists (and by inference all health care providers) ‘should be made aware of treatments that are offered to clients/patients, whether it is traditional or non-traditional, a long existing treatment or new one. This enables our OT profession and professionals to better educate the people they treat and interact with.’

I don’t have any particular concerns about this part of Michele’s comment – but I do have a problem with this part ‘while Evidence Based Practice is on all of our minds, and ultimately the best to utilize with our patients, if we only used treatments for all of these years we would have missed out on the many treatments that OT’s have historically (and still) use.’ (my emphasis)

It raises some concerns for me – and while I don't have answers to all of them, I hope to stimulate some debate at least.

Chris writes in his blog: 'Just because people seek out alternative energy healing interventions doesn't mean that it constitutes appropriate or ethical practice. In an article published in the Journal of the American Medical Association on this topic an author writes: "Given the extensive use of CAM services and the relative paucity of data concerning safety, patients may be putting themselves at risk by their use of these treatments. Only fully competent and licensed practitioners can help patients avoid such inappropriate use... Physicians can also ensure that patients do not abandon effective care and alert them to signs of possible fraud or danger."'

I’d add that licensing in itself does not inevitably lead to patients being helped to avoid inappropriate treatments. I also add this:

I think it also takes a critical and educated mind, a systematic approach to reviewing evidence, and considerable determination not to be swayed by forceful opinion. (more…)

Why am I doing this?


This is not a whining post – I just thought it was time I mentioned why I write this blog.

I looked on the internet for ages to find a resource that gave me good information about nonmedical approaches to managing chronic pain and other chronic disorders.

If you use a search engine to look for 'chronic pain' or 'back pain' you'll find endless listings for organisations (I used Google just now and found 8,320,000 results in 0.34 seconds!) and many of them are designed for patients, but not a lot for the nonmedical treatment providers who work with them! And we need to remember that the majority of health care providers working with people with chronic pain are nonmedical. We don't prescribe!

You'll see I also wrote 'good information'. The problem isn't so much the amount of information available as its quality. When I searched using Google, the advertisements in the right-hand column of the search results included: ACC's page, Ehlers-Danlos, Quantum Touch, biomag, natural health, craniosacral therapy, herniated disc relief, electrotherapy… a bit of a mix.

I also found the same lack of good quality information for nonmedical health providers when I searched using Yahoo.

I enjoy working in the field of pain management, but I'm worried that with so many nonmedical health providers, and so little nonmedical health information based on science (and what there is being relatively inaccessible), the field is wide open for well-meaning but misguided people to tout treatments that simply don't work. OK, I'm being charitable: the field is wide open for quacks, 'alternative' therapists, and lazy health providers who don't have the time or skills to delve into the scientific literature.

In my own field of occupational therapy (all right, I'm heavily warped by psychology), I find therapists gladly prescribing adaptive equipment, including vehicle modifications and 'ergonomic solutions' for office settings, for people with chronic pain with not a scrap of evidence that this is effective in the long term. Therapists suggesting 'pacing' is all about working within your pain limits (thereby progressively reducing activity tolerance). Therapists unwilling or afraid to ask people with pain to develop skills to tolerate pain while they carry out activity, and as a result unwittingly supporting pain-related anxiety and avoidance. Teaching people that there is one 'correct' way to lift items or they risk 'injury'. (more…)

Science and therapy


Yesterday I blogged about why I am so keen to use science to help me work ethically with clients. I talked about the basic onuses that we accept when we decide to become therapists, and showed how these are no more than what I would hope to receive if I saw a therapist or plumber or accountant.

I refer often to William Palya's Research Methods pages, not because they're the last word on scientific methodology, but because they're a starting point, and he writes in a very readable way.

Today I want to move on to being pragmatic.
This is the second onus that we usually accept – the skills we need to be secure and successful in our clinical practice – and it leads to the reason for using the scientific method as the way to meet both obligations. Once again, I'm quoting mainly from Palya's work, but paraphrasing and applying it to health practice across all disciplines.

To be pragmatic, you need to:
a. Be a Good Consumer / Separate Illusion from Reality
All theories claim to be correct and all therapies claim to be right. If you are to become a good consumer or practitioner of health care knowledge you must be able to separate truth from fiction even when appearances are deceiving.
b. Ability to Implement Complex Information
You must be able to understand the advanced and sophisticated knowledge of health care in order to properly function as a therapist. Knowledge of people, health and therapy has exploded in the past 20 years and to sift through it all requires skill.
c. Solve Unique Problems by Applying Concepts
Technicians can cope with problems once they are trained to step through that particular solution. A professional, on the other hand, can solve problems that have never occurred before, because they are trained to identify underlying patterns and apply principles to novel situations. In general, a professional must have the analytical skills to unravel complex behaviours into understood functional relationships, and the competency to design procedures that clarify causal factors or alter behaviour.
d. Make Consistent Progress
If you are to succeed at what you are doing you must be right more often than you are wrong. If you are to make consistent progress then you must know when things are getting better and when they are getting worse. With accurate feedback, errors can be eliminated and correct solutions obtained. 'Common sense' moves you back and forth in no consistent direction because there are so many competing and opposing 'common beliefs' (e.g., 'it's never too late' vs 'you can't teach an old dog new tricks'; 'he who hesitates is lost' vs 'look before you leap').
e. Prove Effectiveness
You will be required to demonstrate the efficacy of what you do, because when the people supplying your income become good consumers, they will demand it of you. These include funding agencies and the courts – and demonstrating efficacy is what ensures ongoing employment.

To be both pragmatic and ethical, you’ll need to use a scientific perspective as the only perspective. Why?
Because you need good evidence that things are true before you believe in them. Think of the coin toss result hidden in my pocket – if I gain from your choice, why would you trust my word? You’d really want someone else (if not yourself) to check it out.

Finding ‘truth’ or what approximates it given the current state of knowledge is not as simple as it sounds.
1. Unfortunately, truth is not necessarily obvious, nor what you like, nor what is easiest.
2. Neither is common sense an acceptable arbiter of reality. Common sense can be as dangerous as it is helpful. Common sense is often 'true' only in the sense that 'home truths' predict everything – for example, 'opportunity knocks once' and 'it's never too late'. One or the other is certainly true on any one occasion. The need is to know in advance, not after the fact when it is too late.
3. Just because your mother, teacher, or best friend believes something does not make it true either. That your friends support your view is no help. Everyone, including a psychopathic murderer, has a mother, a best friend and a dog that believes in them.
4. The fact that something is popularly known is also no reason to believe in it. Everything that is now known to be wrong was once thought to be true by people in the street.
5. Knowing or feeling that you’re right is of no help. Even though most people do believe that they can be wrong, few people ever believe that they are wrong “this” time. Most people (including you) can be talked into believing a nonsensical theory especially if it’s full of jargon, and the person talking to you has power, seems charismatic and you’ve paid for their advice.

You need to accept that any special 'inner ability to understand people and recognise the truth' could be the problem rather than the solution. The only way to move past guesswork or habit is to determine which procedures have, in the past, been shown to produce truth, as opposed to procedures that produce only strong emotional commitment and little lasting change.

What's truth? Let's leave the Great Debate to philosophers; simply put, there must be rules to screen out 'knowing-that-you're-right', opinion, bias and conjecture from truth. Truth is an as-accurate-as-possible description of something that is real, or works, or explains the most with the fewest 'special' assumptions. If three people tell you three different combinations to a safe, the one that works is the truth. It means that the information has passed a reality test.

There are some tried and true ways to determine the truth of a claim: more on this next week.
In the meantime, let me know if this is interesting, challenging or just off the wall!  I know I never learned this when I trained as an occupational therapist years ago – I wish I had, because it has confirmed to me that in order to be honest and authentic in what I offer to people, I need to learn how to check the veracity of what I do.

Am I right or just dogmatic?


Even in health care, the loudest voice with the largest opinion can be the most persuasive – even with limited (or selective) use of scientific evidence. Sadly, fads exist in pain management too.

To counter our human biases we need to be critical of all research, and to ask some serious questions about accepted practice as well. In most areas of allied health (as in medicine) there are ways of working based far more on 'what we've always done', 'it seems to work', or even 'but it works for this person' and 'I've seen it work for people like this' than on evidence from well-controlled trials.

Some people argue with me about this point, saying 'but if we only did what there was evidence for, we'd have nothing to offer'! Ummmm. That doesn't mean that what you're offering is doing any good!

So… what is critical appraisal? This link leads to a great PDF summary by Alison Hill and Claire Spittlehouse of just what questions you should consider when you're reading a research article. I thought I'd summarise it briefly today, but I strongly encourage you to read the full article yourself.

Their definition of critical appraisal reads “Critical appraisal is the process of systematically examining research evidence to assess its validity, results and relevance before using it to inform a decision.”
They go on to say “Critical appraisal is an essential part of evidence-based clinical practice that includes the process of systematically finding, appraising and acting on evidence of effectiveness.”

They acknowledge that carrying out a critical appraisal can sometimes be disheartening – research on clinical interventions can be flawed or methodologically weak, and appraisal can highlight just how little confidence we can have that what we do is helpful. We are sometimes working in the dark while putting on a good show to suggest that what we learned in our training 'works'.

And I'm going to cut and paste the complete set of questions they recommend using when appraising a piece of research. These questions were developed by Guyatt et al.
A. Are the results of the study valid?
Screening questions
1. Did the trial address a clearly focused research question?
Tip: a research question should be ‘focused’ in terms of:
- The population studied
- The intervention given
- The outcomes considered.
2. Did the authors use the right type of study?
Tip: the right type of study would:
- Address the research question
- Have an appropriate study design.

Is it worth continuing?
Detailed questions
3. Was the assignment of patients to treatments randomised?
Tip: consider if this was done appropriately.
4. Were all of the patients who entered the trial properly accounted for at its conclusion?
Tip: look for:
- The completion of follow-up
- Whether patients were analysed in the groups to which they were randomised.
5. Were patients, health workers and study personnel ‘blind’ to treatment?
Tip: this is not always possible, but consider if it was possible – was every effort made to ensure ‘blinding’?
6. Were the groups similar at the start of the study?
Tip: think about other factors that might affect the outcome, such as age, sex, or social class.
7. Aside from the experimental intervention, were the groups treated equally?
Tip: for example, were they reviewed at the same time intervals?
B. What are the results?
8. How large was the treatment effect?
9. How precise was the estimate of the treatment effect?
Tip: look for the confidence limits (a worked sketch follows this checklist).
C. Will the results help locally?
10. Can the results be applied to the local population?
Tip: consider whether the patients covered by the trial are likely to be very different from your population.
11. Were all clinically important outcomes considered?
12. Are the benefits worth the harms and costs?
© Critical Appraisal Skills Programme
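
To make questions 8 and 9 concrete, here's a minimal sketch in Python (the scores are invented, purely for illustration – not from any trial) of estimating a treatment effect and its precision as a confidence interval:

```python
# A hedged illustration of questions 8 and 9: how large is the treatment
# effect, and how precise is the estimate? The scores below are invented.
import math
import statistics

treated = [6.1, 5.4, 7.0, 5.8, 6.5, 6.9, 5.2, 6.3]  # e.g. function scores
control = [5.0, 4.8, 5.5, 4.2, 5.1, 4.9, 5.6, 4.7]

# Question 8: the size of the effect - here, a difference in means.
effect = statistics.mean(treated) - statistics.mean(control)

# Question 9: the precision - the standard error of that difference.
se = math.sqrt(statistics.variance(treated) / len(treated)
               + statistics.variance(control) / len(control))

# 95% confidence limits via the normal approximation (a real appraisal of a
# sample this small would use the t-distribution, but the idea is the same).
lower, upper = effect - 1.96 * se, effect + 1.96 * se
print(f"effect = {effect:.2f}, 95% CI ({lower:.2f}, {upper:.2f})")
# A wide interval means an imprecise estimate, however large the effect looks.
```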

The three questions to ask yourself when you read research:
Three broad issues need to be considered when appraising research:
A Are the results of the study valid?
B What are the results?
C Will the results help locally?

I leave you with that today – I think it’s quite enough for a Tuesday. Stop and think about the treatment you are using today. Have you read a systematic review of the treatment you’re using? If you have, do the patients you see look anything like those included in the studies? And were the results robust enough for you to justify the treatment of your patients?

Hard questions but fair: let’s not just treat people on the basis that we ‘know we’re right’, or because we are dogmatic.
More tomorrow! Don’t forget to make comments, to subscribe using the RSS feed or to bookmark this blog. And it’s always great to know you’ve visited! You can email me too – go to the ‘About’ page for my email address.

Guyatt GH, Sackett DL, Cook DJ. Users' guides to the medical literature. II: How to use an article about therapy or prevention. JAMA 1993; 270: 2598–2601 and 271: 59–63.

Fads, fiction and fact


In pain management over the past 15 or so years I have seen a number of treatments come and go – and I guess now I’m a wee bit hesitant when a NEW! Improved! treatment is put forward. Not that I’m not keen to innovate, or get excited over progress – I just feel a teeny bit cautious at times…

So… what have I seen? Let's take back pain, for example. When I was a baby OT in the mid-1980s, back belts – in particular a 'lifting' belt promoted by a weight-lifter called Precious McKenzie – were 'the greatest thing' to prevent back pain. He still promotes his version of 'safe lifting' on his website, and regularly receives excellent ratings for his presentation skills. The Cochrane Collaboration has identified that back belts (and indeed any training in safe handling) have no evidence for reducing back pain – but in the mid-1980s?!

Another example in back pain was the fashion for 'muscle imbalance', often provided alongside posture-change advice… I don't see evidence for (or against, for that matter) this approach in the Cochrane reviews.

And another was the McKenzie (no relation to Precious) method for treating low back pain with lumbar extension…

Now we see ‘core stability’ and Pilates…

What seems common to each of the approaches I've identified is a particularly charismatic person who is very good at enthusing the people who provide the treatments. Another is the tendency to confuse short-term response (possibly relief of distress?) with long-term outcome. And a third is a somewhat dogmatic approach to critique.

Although it's really difficult to taihoa ('whoa' in Māori) when a new and groovy treatment seems to have some remarkable results, I think there are some good reasons to wait a while (and some good ways to test when) before introducing it to your clinical population.

1. Often published results refer to a carefully selected group of patients who meet the research criteria
2. Follow-up times are usually short in order to provide a 'result' (and a publication!) for the researchers
3. One or two studies by one or two researchers from the same institute are indicative rather than confirmatory – it's only through repeated results, from different areas and on different populations, that we can be sure the results can be counted on (the sketch below shows the arithmetic of why)
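
On that third point, here's a minimal sketch (all effect estimates invented) of the standard inverse-variance way of pooling replications – it shows, in a few lines, why several consistent studies give a more precise answer than any single small trial:

```python
# A minimal sketch (all numbers invented) of why replication matters
# statistically: pooling independent studies by inverse-variance weighting
# (the standard fixed-effect meta-analysis) shrinks the standard error,
# so the pooled estimate is more precise than any single small trial.
import math

# (effect estimate, standard error) from three hypothetical replications.
studies = [(0.45, 0.20), (0.30, 0.25), (0.55, 0.15)]

weights = [1 / se ** 2 for _, se in studies]  # inverse-variance weights
pooled = sum(w * eff for (eff, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))  # smaller than any single study's SE

print(f"pooled effect = {pooled:.2f}, SE = {pooled_se:.2f}")
# The pooled SE (~0.11) beats the best single study (0.15): three consistent
# replications tell you more than any one of them alone.
```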

Something I have observed is that when there is hope that a treatment will reduce pain, it will be accepted much more readily than well-researched methods that offer a way to manage pain.

So, for a while I'm going to take a wander through the scientific method, and how to apply evidence-based approaches in the 'real' world we live in…

T-riffic site you must visit!


Something that is known to get my blood boiling – and boiling very quickly – is unverified, unsubstantiated claims for 'treatments' that promise to give 'quick pain relief', such as homeopathic remedies – or colour therapies – or magnetic underlays – or the therapeutic application of hands to 'treat energies' – or… well, you get my drift.

Someone else who feels the same way, but does it much more eloquently than I can hope to is Ben Goldacre, medical doctor, author, broadcaster – and someone who can communicate scientific information in an incredibly satisfying way.

Go to his website Bad Science and read on – but not if you too are convinced that infinitesimal droplets of something diluted in gargantuan quantities of plain old water can 'heal' you…

On a more serious note, he gives some excellent commentary on unsubstantiated science, particularly health-related claims, and has been the recipient of some really vociferous and even ugly threats because he dares to make public many of the limitations of so-called ‘alternative’ therapies.

As Prof Denis Dutton has said, 'Would you fly in an airplane based on alternative physics?' – No way, sez me!! I want to travel in an airplane that has been designed on tried and tested facts in this world, not the next! In the same way, I'm not going to have my body used as an experiment to see whether 'alternative' medicine will work; I want to know that the therapies I try have some sound science to back up their use. Visit Prof Dutton's websites for some lively and articulate debate – Arts and Letters Daily, and his philosophy site

So, two sites that are really thoughtful, challenging – and humorous! T-riffic, I think!