
Evidence into practice: but wait, there’s more!


I pondered a bit about writing this post today. Yesterday I discussed some of the challenges of transferring research into daily practice, and maybe I’ve done enough on the topic – then again, some issues can take a long time to explore. One of them, for me, is how to integrate client-centred practice with research evidence: how do I use data gathered under strict research conditions, where grouped results are the outcome, when I come face-to-face with a person who is unique in presentation, outlook, values, hopes and dreams? Do I have answers that will help this person make changes so that he or she can be and do what they hope to, despite their persistent pain?

My take on this problem is to spend time working with the person to develop a set of clinical hypotheses or explanations that can be tested – and that, in being tested, become the intervention.

For example, if a person I see is having trouble sleeping, I’ll explore their habits around sleep, add in information about other daily habits such as medication use, caffeine and exercise, assess mood and anxiety, and arrive at a set of possible explanations for the sleep problem. Treatment is then based on collaboratively testing each of these explanations to determine whether they hold true for that person – and in doing this, the sleep problem is directly influenced. I’ve identified a number of outcomes that I aim to influence, and if they don’t change in the direction that I hypothesise they should, I need to go back to my original hypotheses and revise them.
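For readers who like to see the shape of that loop made explicit, here is a minimal sketch in Python. It is purely illustrative – the hypotheses, the baseline score and the outcome measure are all invented for the example, not drawn from any clinical protocol:

```python
import random

# A toy model of the collaborative hypothesis-testing loop described above.
# Every hypothesis, score, and threshold here is invented for illustration.

hypotheses = [
    "caffeine too late in the day",
    "low mood driving early waking",
    "irregular sleep/wake routine",
]

BASELINE_SLEEP_QUALITY = 4.0  # invented pre-treatment rating on a 0-10 scale


def sleep_quality_after_trial(hypothesis: str) -> float:
    """Stand-in for a trial period targeting one explanation, then
    re-measuring the outcome (e.g. a sleep-diary rating). Randomised
    here only so the sketch runs; real scores come from the client."""
    return random.uniform(0.0, 10.0)


def test_hypotheses(candidates: list[str]) -> list[str]:
    """Trial each explanation, keeping those where the outcome moved
    in the predicted direction (sleep quality above baseline)."""
    return [h for h in candidates
            if sleep_quality_after_trial(h) > BASELINE_SLEEP_QUALITY]


supported = test_hypotheses(hypotheses)
if supported:
    print("Explanations that held up under testing:", supported)
else:
    # No prediction came true: revise the original hypotheses and
    # run the loop again, as described above.
    print("No hypothesis supported - revise the formulation and retest.")
```

The structure is the point: each explanation carries a prediction about a measured outcome, and a failed prediction sends the clinician back to revise the formulation rather than pressing on with the same treatment.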

The question is: how do I know what to assess? If I’m uninformed about the literature on sleep problems, I may focus on inappropriate aspects of the presentation, such as mattress hardness or pillow shape. I may not look at sleep architecture or sleep apnoea as possible contributors to the sleep problem, for example.

Here is one way in which delving into the literature is a very good thing indeed. By keeping my knowledge up to date about the aspects of performance and function in which an individual is likely to have problems, I have a broad understanding of possible contributing factors.

The grouped data collected in most research is incredibly helpful, and the process of evidence-based health care is a well-accepted strategy:
(1) formulating the clinical question;
(2) searching efficiently for the best available evidence;
(3) critically analysing the evidence for its validity and usefulness;
(4) integrating the appraisal with personal clinical expertise and clients’ preferences; and
(5) evaluating one’s performance or the outcomes of actions.

In the words of Lin, Murphy and Robinson (2010), this information answers ‘background’ questions – questions that refer to general aspects of a phenomenon (i.e., who, what, where, when, how, and why; Sackett, Straus, Richardson, Rosenberg, & Haynes, 2000).

Therefore, one way to answer the critics of EBHC (or science-based health care, if you will) who suggest that grouped data is useless for individual cases, is to think of this as broad-brush, general information that informs the clinician as to areas that should be considered when assessing the individual, specific and unique person.  Lin, Murphy and Robinson cite Sackett and colleagues in defining best practice as using three critical ingredients:
(1) best available external evidence;
(2) clinical expertise; and
(3) consideration of client’s contexts, rights, and preferences (Sackett, Rosenberg, Gray, Haynes, & Richardson, 1996).

The ‘external evidence’ is the collective wisdom drawn from the individuals who have participated in research, while it’s the ‘clinical expertise’ that establishes how this applies to the individual. Because I espouse a collaborative approach, the individual contexts, rights and preferences of the client/patient/person with pain are integral to the process of developing a formulation (aka a set of hypotheses or explanations) to explain why this person presents in this way at this time.

You could call this ‘client-centred’, but I’m reluctant to use that term because it can rapidly become client-driven, and that pathway can lead to some very dubious practice – what do you do if the client will only consider crystals, or colour therapy, or further surgery, and so on?

How does an evidence-based clinician acknowledge a person’s preference for inappropriate or ineffective treatment? How do we incorporate a client’s belief that it’s OK to continue with a ‘boom and bust’ approach to activity, or the stance of a client who doesn’t value returning to work when their right to compensation depends on participating in efforts to promote return to work?

My take on this is to use motivational interviewing strategies: giving, with permission, clear information on the logical consequences of taking one path or another. Because MI is such a helpful approach, the person is typically able to tell me the ‘not so good’ consequences of following a path they’ve been down before, or one that is not helpful. By helping the person establish for themselves how a certain action might impact upon important values, it becomes much easier for them to take action motivated by their own sense of importance, while I work with them on confidence.

I’d love to hear about other ways clinicians have been able to work with client values that may be in conflict with an evidence based approach.  Your comments please!

Lin, S.H., Murphy, S.L., & Robinson, J.C. (2010). Facilitating evidence-based practice: process, strategies, and resources. The American Journal of Occupational Therapy, 64(1), 164–171. PMID: 20131576

5 comments

  1. I think the problem with “averages are not applicable to an individual” can come up because certain doctors do apply them as mindless recipes. I have met more than my share of those. Then there is also the issue of “only the best available treatment is worth exploring”. I have had a doctor tell me, sounding absolutely certain, that research shows CBT helps all patients with chronic pain and that nothing else is effective. The real picture (as far as I know) is that a Cochrane review concluded the effects are weak, and even the most positive studies I saw reported about 70% of patients improving, which still leaves roughly 1 in 3 for whom this particular treatment did not work.

    What really struck me about this post is that when you described your method, it matched perfectly the method used by physical therapists who were effective for me. I.e., come up with a set of hypotheses about what may be causing my pain, come up with a set of possible solutions (which usually include a combination of hands-on work and exercise), figure out what is likely to be relevant given my individual circumstances, and then test the hypotheses systematically. My current physio also has extensive electronic records for all patients, carefully tracking various functional measures to evaluate the effectiveness of treatments used.

    I also met very ineffective physical therapists. Those were the ones who perfectly matched the caricature you’ve used several times in your posts of why physical therapy does not work: “you have back pain? We have to teach you to lift safely”, or “you have back pain? We have to strengthen your core muscles”.

    I think the “averages are not good for individuals” objection comes up because too often the doctors who claim to practice evidence-based medicine take whatever the most recent recommendation happens to be (evidence based or not) as “one size fits all”. But science used the way you describe, to provide a framework of things to investigate, is the right way to go.

    1. Thanks Mary – it sounds as if a clinician who follows a client-centred (to use that word that I don’t like!) approach and includes the person with pain in the process of developing a treatment/management plan is more likely to succeed in this than a clinician who thinks that ‘one size fits all’ – and uses EBP as the argument for his or her approach. I recognise that working collaboratively can be quite scary at times, but to me it’s a much more effective approach, and more importantly in some ways, it’s much more respectful!

      1. I would agree with it being more effective. Not only in “practical” terms of having patients improve, but also in terms of having patients follow the advice. This reminded me of another story. I was getting some physical therapy, not so long ago, when my case came up for review with an orthopaedic specialist (after a 6-month-long NHS queue, hence the PT in the interim). He looked it over, and immediately said “OK, I think you should switch from physio to primarily exercise-based program”. When I tried to get the reasoning behind it, the answer (which he put in writing) was “more active treatment tends to be more effective, and the best evidence indicates that physical therapy should be given for at most 12 weeks”. Even though it was “evidence based”, the recommendation at the time really seemed to rest on a general philosophy (“active” is better than “passive”) and a fixed number from some study somewhere. He hadn’t even asked about the details of what I was doing with my PT.

        So I simply ignored that advice (and that doctor lost some of my trust as a clinician). But about 6 weeks later my physical therapist said “I think we should move you on to home exercise without hands-on intervention now”. I happily accepted the recommendation. The difference was that, in that instance, it was based on a discussion where we agreed that I had reached a stable state with respect to both pain and function, and that the time and effort I would spend coming to further sessions would not justify the returns. That mattered a whole lot more than an “evidence-based” constraint that didn’t really take into account the specifics of my situation.

  2. I think you have raised an important point. I think as therapists our role is often to suggest different tools that the client can use to address his or her individual problems. It is helpful to know the evidence so that we are suggesting tools that have a high likelihood of being effective, and also so that we can say to the client that this strategy has worked for many others, so there is a good chance it could work for you. We still need the clinical skills to be able to accurately assess what might be causing the issue, and to choose which evidence-based strategies to try. That does not mean we will always find all the answers on the first try, but as OTs, I think we are good at adapting. My concern is how difficult it is to find evidence for or against many commonly used OT interventions.

    1. Thanks Linda – it’s also about being explicit about our own reasoning process, and taking into account our cognitive biases and errors. We’re not always very good at recognising that we do have biases and how they affect our reasoning – and even when well informed we can make some odd decisions, simply because we’re not that good at spotting when we go wrong. And you are SO right – there isn’t a lot of evidence for (or against) occupational therapy interventions. Food for many a PhD, I think!
      Thanks for taking the time to comment!
      cheers
      Bronnie
