
Values and goals


I can’t recall exactly where I heard it, but values are like a compass – they provide general guidance as to what is important in life, while goals are the map of how we are going to get there. I’ve been mulling over this as I’ve worked with some people over the past couple of weeks, reviewing how they will measure whether the programme they’re on will have done anything for them.

Each person entering a programme of intervention has dreams of what it will achieve – pain reduction, better sleep, less grumpiness, more flexibility. And while assessment tools such as the Canadian Occupational Performance Measure (COPM) tap into constructs that reflect individual values, I personally find them difficult to use and a bit ‘clunky’. For those who don’t know the COPM, it’s an occupational therapy-only assessment of occupational performance domains (areas of activity in life, for the non-OTs reading this!) where importance and current satisfaction with performance are rated before and after interventions.

Now the idea of this is quite good – we work to find out what the person wants, see whether they’re happy with how well they’re doing these things, and check to see whether change in a positive direction occurs (hopefully as a result of our input!). The problem for me is that with long-term disability, and especially with low mood, people at the beginning of a programme may not have much sense that achieving ‘goals’ is even possible.

So, I’m toying with the idea of helping people find out what is important to them in a couple of ways – values card sorts are one option, and the version with instructions from the Motivational Interviewing website is a great resource, but it requires good reading skills and fairly good concentration. It’s pretty verbal and visual too, so not so terrific for people who are more practical or kinaesthetic.
The set developed for people with schizophrenia is somewhat better, but I’d like something more pictorial (yes, I like visual stuff!!).

Another way is to work through the ‘downward arrow’ technique – again this is quite verbal, but not quite as ‘teachy’ as the card sort. The downward arrow technique starts with the person identifying something that has occurred, something they’ve done in the past, or an activity they would like to do. As they describe the ‘thing’, you can ask them questions like:

‘Why is that important to you?’
‘What’s significant about that to you?’
‘Why do you want that?’
‘What would it mean to you to do that?’

I have scanned a load of pages on the internet looking for other ways to identify values – without an awful lot of success, I’m afraid. Many pages are about career values or business values, which don’t often relate to the kind of people I’m working with.

So here’s an alternative – if the person is visual, use a couple of magazines and ask them to clip out pictures of things that they like: people doing activities, things people can buy, headlines and so on. Then ask them to sort these in order of importance. And then go through the process of asking (gently) why each one is important to them (using the downward arrow technique).

Let me know how that works for you – try it out yourself, you may be surprised at what you find out about yourself!

Once values are identified, it can be a lot easier to find out where the current gaps are for the person in terms of actually getting those values expressed in daily life. For example, if a person says that they really value time with their family, but they identify that they spend more time resting than being with their family, the goal then becomes finding a way to carry out the valued activities.

And where there is a conflict between current action and things the person values, this provides an opportunity to discuss the priority that is being placed on the actions currently being undertaken. For example, if the person is spending more time resting than being with the family, what is being valued more than family time is relief from pain.

This can be quite a shock to someone who doesn’t recognise that their current actions reveal what is really important in their life. Actions reflect intentions, and intentions show up in actions. If they’re not happy with the outcomes at the moment, what are they really making important in their life? This helps with establishing real goals that the person can hold on to, because they have clarified why those goals matter to them.

Now I don’t advocate using this as a programme outcome measure – simply because the statistics we can do on 0 – 10 ‘satisfaction’ scales are not as robust as for other measures. But they do reflect patient/client satisfaction with the intervention. So as part of a set of outcome measures, they may supplement measures of other constructs.

Let me know what you think of this approach – and if you’ve enjoyed this post, want to read more, don’t forget you can subscribe using RSS feed (at the top left of the header, just click!), and you can always leave me comments (I love them!).

Evaluation of a CBT informed pain management programme



A few posts ago I discussed the challenges of transferring research into practice, using the examples of laterality training and graded exposure for CRPS. It’s difficult to know exactly what results to expect when moving from carefully selected participants to all-comers, and from highly detailed, prescribed protocols to more general principles applied, often variably, by a range of clinicians.
This report, then, by Morley, Williams and Hussein is a welcome review of the ‘real’ effects of ‘real’ therapy on ‘real’ patients in a typical clinical situation.

CBT-based pain management programmes have been implemented internationally and are probably the gold standard intervention for people who have completed all the biomedical interventions available, and have been told to ‘learn how to live with it’. CBT programmes are all about ‘how to live with it’!
This study reviews more than 1000 patients accepted into a 4-week programme over a 10-year period. ‘Data from more than 800 patients was available at pre-treatment and at one month post-treatment and for around 600 patients at pre-treatment and at 9 months follow-up. Measures reported in this analysis were pain experience and interference, psychological distress (depression and anxiety), self-efficacy, catastrophizing, and walking.’

As the authors of this study remark, quoting Barkham and Mellor-Clark (2003), ‘a model of research into efficacious and effective treatment should be based on a cycle in which evidence based practice (EBP) informs clinical practice which then generates evidence and questions (practice-based evidence) for testing under more controlled conditions of EBM.’

Now what is interesting about this study, apart from the realism of the setting and the patients included, is the use of ‘reliable change index (RCI)/clinically significant change (CSC) methodology’ to evaluate the outcomes of the programme. This differs from conventional inferential statistics (p values), whose assumptions are routinely broken in clinical research – for example, the requirement for a control group (a waiting list is not quite the same as a randomly assigned control group). In addition, this type of statistic isn’t referenced to any external criterion, such as ‘does this change matter clinically?’, and it’s sensitive to sample size.

I won’t review the programme content, measures used, or sample characteristics – to me, the programme content is ‘standard CBT’, based on the work of Fordyce, Keefe and Turk. The measures included pain intensity, the Beck Depression Inventory, the Coping Skills Questionnaire and the Pain Self-Efficacy Questionnaire – nothing new in these outcome measures, which have been used for many years! And the patients are very similar to those attending any chronic pain centre: mean pain duration of 113 months, 95% taking medication, few employed (6%), average age of around 45 years, and just over half women.

Using conventional statistical analyses, the results were statistically significant. The authors reason that this may simply reflect the large sample size, and given that the effect sizes were small (.3–.49) to medium (.5–.8), the statistics cannot be used to support claims that any change was ‘above and beyond that produced by the measurement error inherent in the scales, or that the changes were clinically important.’

So the authors set about using the ‘Reliable Change Index’ and ‘Clinically Significant Change’ – methods for determining whether change is greater than could be produced by measurement error alone (RCI) and whether it is clinically significant (CSC). By predetermining ‘acceptable’ change criteria, the scores on the various measures are given real meaning.

The RCI is determined from the standard deviation of the measure (obtainable from the sample) and an estimate of the measure’s reliability, usually taken from the literature. CSC, using Jacobson’s approach, employs statistical criteria to establish cut scores for continuous variables, essentially drawing on the properties of the normal distribution.
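
To make the arithmetic concrete, here is a minimal sketch of how an RCI might be calculated along Jacobson and Truax’s lines – note that the standard deviation, reliability and scores below are invented for illustration, not taken from the Morley, Williams and Hussein data.

```python
import math

def reliable_change_index(pre_score, post_score, sd_pretreatment, reliability):
    """Reliable Change Index along the lines of Jacobson & Truax.

    pre_score, post_score -- an individual's scores before and after treatment
    sd_pretreatment       -- standard deviation of the measure (from the sample)
    reliability           -- reliability estimate for the measure (from the literature)
    """
    # Standard error of measurement for the scale
    se_measurement = sd_pretreatment * math.sqrt(1 - reliability)
    # Standard error of the difference between two administrations
    se_difference = math.sqrt(2 * se_measurement ** 2)
    return (post_score - pre_score) / se_difference


# Hypothetical example: a depression score dropping from 28 to 17,
# assuming SD = 9.0 and reliability = 0.86 (illustrative values only).
rci = reliable_change_index(28, 17, sd_pretreatment=9.0, reliability=0.86)
print(f"RCI = {rci:.2f}, reliable change: {abs(rci) > 1.96}")
```

An absolute RCI greater than 1.96 suggests the change is unlikely to be accounted for by measurement error alone (at roughly the 5% level).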

Quoting directly from the article:

‘The criteria (a), (b) and (c) are defined as follows: (a) is achieved when the post-treatment (or follow-up) score lies outside of the range of the dysfunctional population, where the range is defined as extending 2 SD units beyond the mean for that population in the direction of a functioning population; (b) is achieved when the post-treatment (or follow-up) score lies within the range of the functioning population, where the range is defined as within 2 SD units of the mean of the functioning population; and (c) is defined as when the post-treatment (or follow-up) score is statistically more likely to be in the functional population – i.e. nearer to the mean of the functional population than the mean of the dysfunctional population. For practical reasons the sample of participants was regarded as the population.’
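
Read as a small algorithm, those three criteria might be checked something like this – a rough sketch only, assuming a measure on which lower scores mean better functioning (such as a depression scale), with entirely made-up population means and SDs rather than anything from the study.

```python
def clinically_significant(post_score, dys_mean, dys_sd, func_mean, func_sd):
    """Check Jacobson-style criteria (a), (b) and (c) for a measure on which
    LOWER scores mean better functioning (e.g. a depression scale).
    All population parameters here are illustrative assumptions."""
    # (a) score lies outside the dysfunctional range: more than 2 SD beyond the
    #     dysfunctional mean, in the direction of the functioning population
    criterion_a = post_score < dys_mean - 2 * dys_sd
    # (b) score lies within the functioning range: within 2 SD of the functional mean
    criterion_b = abs(post_score - func_mean) < 2 * func_sd
    # (c) score is nearer the functional mean than the dysfunctional mean
    criterion_c = abs(post_score - func_mean) < abs(post_score - dys_mean)
    return criterion_a, criterion_b, criterion_c


# Hypothetical example: post-treatment score of 12, with a dysfunctional population
# mean of 28 (SD 9) and a functional population mean of 8 (SD 7) -- invented numbers.
a, b, c = clinically_significant(12, dys_mean=28, dys_sd=9, func_mean=8, func_sd=7)
print(f"criterion (a): {a}, criterion (b): {b}, criterion (c): {c}")
```

In the full Jacobson-style approach, a patient is usually counted as clinically significantly improved only when they cross the chosen cut point and their change also exceeds the RCI – both conditions together, not either on its own.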

After using this approach, the following results were obtained: for measures of pain, emotional distress and self-efficacy, between one fifth and one third of patients achieved a clinically significant outcome. A considerably smaller number (6%, or about 1 in 17) achieved a clinically significant change on a measure of behavioural activity, the 5-minute walk test.

The RCI/CSC methodology explicitly separated patients above and below the CSC cut points pre-treatment, a feature which has not generally been reported in RCTs of psychological treatments for chronic pain. The results from this study show that criteria other than conventional significance testing can be used to evaluate the effects of an intervention – and the method used to derive the ‘clinically significant’ results is itself underpinned by sound statistics. Almost the best of both worlds!

Problems? Well, it is difficult to find treatment providers using the same clinical measures – a minimum data set would help, so that programmes can review their own effectiveness and perhaps allow aggregation of data across different programmes as well.

Barkham, M., Mellor-Clark, J. (2003). Bridging evidence-based practice and practice-based evidence: developing a rigorous and relevant knowledge for the psychological therapies. Clinical Psychology and Psychotherapy, 10, 319-327.

Morley, S., Williams, A., Hussein, S. (2008). Estimating the clinical effectiveness of cognitive behavioural therapy in the clinic: Evaluation of a CBT informed pain management programme. Pain, epub

This is a recording…this is a recording…


Preventing relapse has to be one of the most difficult parts of pain management – what do you do to keep someone going with their new skills while at the same time not allowing them to become dependent on your encouragement?

Some strategies have included spacing the final few sessions some time after the bulk of the intervention; providing access to a support group; providing explicit instruction on ‘ways to manage high-risk situations’; periodic telephone consultations – and now, ‘an automated, telephone-based tool for maintenance enhancement’ (Naylor, Keefe, Brigidi, Naud, & Helzer, 2008).

Therapeutic Interactive Voice Response (TIVR) has four components: a daily self-monitoring questionnaire, a didactic review of coping skills, pre-recorded behavioral rehearsals of coping skills, and monthly personalized feedback messages from the CBT therapist based on a review of the patient’s daily reports.  The first three components are pre-recorded and all four can be accessed remotely by patients via touch-tone telephone on demand.

Sounds great – and the response looks favourable.  Maybe this is one way to maximise outcomes, while minimising cost and therapist time? My only concern is the need for participants to be (1) adherent to completing questionnaires on a very regular basis, and (2) comfortable with auditory-only feedback and using a telephone.  Both of these aspects require high levels of commitment to the process – and good literacy.

Nevertheless, it does demonstrate that technology can provide a way for therapy to maintain input with lower costs, which can only be good for our patients.

Naylor MR, Keefe FJ, Brigidi B, Naud S, Helzer JE, (2008). Therapeutic Interactive Voice Response for chronic pain reduction and relapse prevention. Pain. 134(3):335-45. Epub 2008 Jan 4.