I pondered a bit about writing this post today. Yesterday I discussed some of the challenges of transferring research into daily practice, and maybe I’ve done enough on the topic – then again, there are some issues that can take a long time to explore. One of them for me is how to integrate client-centred practice with research evidence: how do I use the data gathered under strict research conditions, where grouped data is the outcome, when I come face-to-face with a person who is unique in presentation, outlook, values, hopes and dreams? Do I have answers that will help this person make changes so that he or she can be and do what they hope to, despite their persistent pain?
My take on this problem is to spend time working with the person to develop a set of clinical hypotheses or explanations that can be tested and in doing so become the intervention.
For example, if a person I see is having trouble sleeping, I’ll explore their habits around sleep, add in information about other daily habits such as using medication, caffeine and exercise, assess mood and anxiety, and arrive at a set of possible explanations for the sleep problem. Treatment is then based on collaboratively testing each of these explanations to determine whether they hold true for that person – and in doing this, the sleep problem is directly influenced. I’ve identified a number of outcomes that I aim to influence, and if they don’t change in the direction that I hypothesise they should, then I need to go back to my original hypotheses and revise them.
The question is: how do I know what to assess? If I’m uninformed about the literature on sleep problems, I may focus on inappropriate aspects of the presentation such as the mattress hardness, or pillow shape. I may not look at sleep architecture, or sleep apnoea as a possible contribution to the sleep problem, for example.
Here is one way in which delving into the literature is a very good thing indeed. By keeping my knowledge up to date about the aspects of performance and function an individual is likely to have problems with, I gain a broad understanding of possible contributing factors.
The grouped data collected in most research is incredibly helpful, and the process of evidence-based health care is a well-accepted strategy:
(1) formulating the clinical question;
(2) searching efficiently for the best available evidence;
(3) critically analysing the evidence for its validity and usefulness;
(4) integrating the appraisal with personal clinical expertise and clients’ preferences; and
(5) evaluating one’s performance or the outcomes of actions.
In the words of Lin, Murphy and Robinson (2010), this information answers ‘background’ questions – questions that refer to general aspects of a phenomenon (i.e., who, what, where, when, how, and why; Sackett, Straus, Richardson, Rosenberg, & Haynes, 2000).
Therefore, one way to answer the critics of EBHC (or science-based health care, if you will) who suggest that grouped data is useless for individual cases, is to think of this as broad-brush, general information that informs the clinician as to areas that should be considered when assessing the individual, specific and unique person. Lin, Murphy and Robinson cite Sackett and colleagues in defining best practice as using three critical ingredients:
(1) best available external evidence;
(2) clinical expertise; and
(3) consideration of client’s contexts, rights, and preferences (Sackett, Rosenberg, Gray, Haynes, & Richardson, 1996).
The ‘external evidence’ is the collective wisdom drawn from the individuals who have participated in research, while it’s the ‘clinical expertise’ that works to establish how this applies to the individual. Because I espouse a collaborative approach, the client/patient/person with pain’s individual contexts, rights and preferences are integral to the process of developing a formulation (aka set of hypotheses or explanations) to explain how this person presents in this way at this time.
You could call this ‘client-centred’, but I’m reluctant to use that term because it can rapidly become client-driven, and this pathway can lead to some very dubious practice – what do you do if the client will only consider crystals, or colour therapy, or further surgery, and so on?
How does an evidence-based clinician acknowledge a person’s preferences for inappropriate or ineffective treatment? How do we incorporate a client’s value or belief that says it’s OK to continue with a ‘boom and bust’ approach to activity, or who doesn’t value returning to work when the right to compensation is based on participating in efforts to promote return to work?
My take on this is to use motivational interviewing strategies, giving, with permission, clear information on the logical consequences of taking one path or another – and because MI is such a helpful approach, typically the person is able to give me the ‘not so good’ consequences of following a path that they’ve been down before, or that is not helpful. When I help the person establish for themselves how a certain action might impact upon important values, it’s so much easier for them to take action motivated by internal importance, while I work with them on confidence.
I’d love to hear about other ways clinicians have been able to work with client values that may be in conflict with an evidence-based approach. Your comments please!
Lin, S. H., Murphy, S. L., & Robinson, J. C. (2010). Facilitating evidence-based practice: Process, strategies, and resources. The American Journal of Occupational Therapy, 64(1), 164-171. PMID: 20131576