Numbers on a scale: How bad did you say your pain was?


Have you ever been asked to give your pain rating on a scale of 0 – 10 (where 0 = no pain at all and 10 = most extreme pain you can imagine)? Have you ever tried to work out whether today’s pain is worse than yesterday’s? What does a pain rating tell us?

I’ve struggled many times to work out how “bad” my pain is. Is it the pain intensity that makes it troublesome? Or, in the case of a migraine, is it the quality of the pain that makes it bad (or the nausea)? Health professionals often ask people to summarise their pain experience into a form that (hopefully) we can all understand – but just what does a pain that’s around 4/10 on a VAS actually mean?

Why do we use rating scales?

We know that pain is subjective, just like taste and colour. While we might be able to agree that both of us are tasting something we call “banana”, we don’t know whether the banana taste I experience is the same as the banana taste you experience. We can see that both of us are eating the same fruit, but we don’t know how our body/brain processes that experience. Instead we assume, or infer, that we’re experiencing it in a similar way because of the similarities in context.

With pain, the situation is even more complex: we can’t determine whether the pain I feel is similar to the pain another person feels, and we don’t even have the benefit of similar “tissue damage” in the case of a migraine headache.

So, we have to infer something about the experience through some sort of common mechanism. Mostly that’s language. We hope that someone can understand that a higher number means greater pain. We hope the person can recognise what “no pain” feels like and where it might be represented on a scale. We ask the person to remember their current pain intensity, translate it into a number that in turn represents to us some kind of common understanding of what pain given that number might feel like.

Of course, there are problems with numbers on a scale. For a child who doesn’t understand the association between numbers on a scale and intensity, we use the “Faces” scale. For a person with cognitive problems (brain injury, stroke, dementia), we observe their behaviour (and hope we can translate well enough). For a person who doesn’t speak the same language as us, we might try a sliding scale with green at the bottom and red at the top, to represent increasing intensity – appealing, perhaps, to a common understanding that green = OK and red = not OK.

Worse than the difficulty of translating from experience to a number is the common misunderstanding that pain severity alone represents the “what it is like” to experience pain. We know personally that it doesn’t – after all, who hasn’t had a toothache that represents “Oh no, I need a root canal and that’s going to cost a bomb!”, or “Ouch! That lemon juice in the paper cut on my finger is really annoying”, or “I feel so sick, this migraine is horrible”?

Hopefully most health professionals are taught that using just one measure of pain is not enough. It’s important to also include other aspects of pain such as its quality, how it affects function (interference), and how confident we are to deal with life despite the pain (self-efficacy).

So we use rating scales as a shorthand way to understand a tiny bit of what it is like to have pain. But the Visual Analogue Scale (VAS) is often used to estimate whether “this person’s pain is so bad they need medication”, or “this person’s pain means we can’t expect her to help move herself from the ambulance trolley to the wheelchair”. The VAS can be used in many ways it shouldn’t be.

Studying the relationship between VAS pain intensity and disability (SF36)

The study by Boonstra, Schiphorst Preuper, Balk & Stewart (in press) aimed to identify cut-off points on the VAS to establish “mild”, “moderate” and “severe” using three different statistical approaches.  They measured pain using a Verbal Rating Scale (mild, moderate and severe), the VAS, and used several scales from the SF36 (a measure of general health quality) to establish interference from pain.

What they found was that while “mild” pain was fairly consistently identified (a VAS score of 3.5 or less), and correlated with both severity and function, there was far less agreement when it came to “moderate” and “severe” pain. In fact, this group found that individuals could verbally rate their pain as “moderate” yet at the same time report severe levels of interference. This means verbal descriptors can under-represent the impact of pain on performance.

They also found that the cut-off point between “mild” and “moderate” pain in terms of interference with activity ranged from 2.5 to 4.5, and between “moderate” and “severe” pain from 4.5 to 7.4. The associations between pain intensity and disability or interference were low to moderate, and as a result these authors argue that it is “questionable” to translate VAS scores into verbal descriptors, because the different instruments measure different things.
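To make the arbitrariness concrete, here’s a minimal Python sketch (mine, not from the paper) showing how the same VAS score earns a different verbal label depending on which cut-offs we happen to choose – all of them defensible given the ranges Boonstra and colleagues report:

```python
def classify_vas(score, mild_cutoff, severe_cutoff):
    """Map a 0-10 VAS score to a verbal category, given a
    mild/moderate cut-off and a moderate/severe cut-off."""
    if not 0 <= score <= 10:
        raise ValueError("VAS scores run from 0 to 10")
    if score <= mild_cutoff:
        return "mild"
    if score <= severe_cutoff:
        return "moderate"
    return "severe"

# The study reported the mild/moderate boundary anywhere from 2.5-4.5,
# and the moderate/severe boundary anywhere from 4.5-7.4, so the same
# score of 5.0 lands in different categories under different choices:
print(classify_vas(5.0, mild_cutoff=4.5, severe_cutoff=7.4))  # moderate
print(classify_vas(5.0, mild_cutoff=2.5, severe_cutoff=4.5))  # severe
```

A score of 5.0 is “moderate” under one pair of cut-offs and “severe” under another – which is exactly why the authors call the translation “questionable”.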

What does this tell us?

It should be easy by now to tell that although we use numbers as a shorthand for “how bad is your pain?”, in reality they don’t directly translate the “what it is like” to have pain. Neither does the VAS correlate well with measures of disability or interference from pain. While people with mild pain might also be experiencing only a little disability, when the numbers go up the relationship between intensity and function disappears.

I think we might be trying to quantify an experience as a quick way to make clinical decisions. Really, when we ask “how bad is your pain?”, depending on the context we may be asking “do you need pain relief?”, “do you need help to move?”, “did my treatment help?” or any of a myriad of other questions. The trouble is, in research we can’t do statistics nearly as easily on a “yes” or “really bad” or “it didn’t change much” answer. But how many of us work routinely in research settings?

I wonder whether it’s worth asking ourselves: do I need to ask for a pain rating, or should I ask a more useful question? And take the time to listen to the answer.

 

Anne M. Boonstra, Henrica R. Schiphorst Preuper, Gerlof A. Balk, & Roy E. Stewart (2014). Cut-off points for mild, moderate and severe pain on the VAS for pain for patients with chronic musculoskeletal pain. Pain. DOI: 10.1016/j.pain.2014.09.014


What matters to people with persistent pain?


I’ve read many written expectations of people coming for pain management – and without a doubt, the majority of people want to get on with life, go back to doing what they enjoy, and feel better in themselves. The only problem with that? Most of them preface their goals with ‘reduce my pain so I can…’, or words to that effect. And the reality is that for many of them, that particular goal is frustratingly difficult to achieve.

I would think that most clinicians working in pain management want to practise patient-centred care – but what is it that patients really want when pain can’t be completely eliminated? Luckily for us (maybe), the team developing the IMMPACT (Initiative on Methods, Measurement, and Pain Assessment in Clinical Trials) recommendations have taken the time to “obtain the perspectives of individuals with chronic pain regarding what they consider the most relevant and important outcomes of treatments for chronic pain.”

The way they went about this was to ask people – radical huh?! Seriously, they recruited people to participate in focus groups from a number of clinics providing pain treatment, including cancer treatment.  Four separate focus groups were held with 31 people,  discussing symptoms of pain, impact of pain on daily life, and experiences with treatments.  These groups were moderated and structured using questions designed to keep participants talking about pain (and not straying off topic!).

Once these groups had been held, the discussions were analysed using content analysis, and an item pool of outcomes relevant to individuals who have chronic pain was drawn up.  These statements were then rated in a web-based survey to identify the degree of importance of each one, and participants in the survey also completed several other standardised questionnaires to help determine their general pain status (this also helps us as clinicians determine whether the people in the survey ‘look like’ the type of people we see – a nice touch!).

What did they find?

The key domains discussed in the focus groups were general pain symptoms; physical activities; family life; social/recreational activities; and emotional wellbeing.

The statements used in the web-based study were these:

1. Falling asleep at night
2. Staying asleep at night
3. Sex life
4. Taking care of family such as children, spouses, parents or other relatives
5. Relations with family, relatives or significant others
6. Relations with friends
7. Employment
8. Household activities (cleaning, cooking, running errands)
9. Planning activities
10. Participating in family events/activities
11. Participating in recreational and social activities
12. Physical activities (walking, climbing stairs, bending, squatting, lifting)
13. Hobbies
14. Enjoyment of life
15. Emotional well-being (feeling sad, depressed, less motivated)
16. Fatigue, feeling tired
17. Weakness
18. Difficulty concentrating
19. Difficulty remembering things

Quite a list!  And not surprising really – what is interesting is how well (or not) we currently measure these aspects of life.  This has enormous relevance for us as clinicians – note how highly valued home and family responsibilities are, and how poorly we currently measure these areas.  This was commented upon by a group of researchers looking at how well our disability measures match with the ICF domains – we really don’t assess family and community participation very well.

In the pain management centre where I work, we are reviewing our outcome measures.  I reflected to the team on the somewhat pitiful measures we have to assess functional ability (and disability).  Currently we use a 10-item measure with an 11-point Likert-type scale.  Each domain of functioning is weighted equally, so self care (which includes breathing, sleeping, eating) is just as important as sexual activity, which is just as important as work.  Does that really reflect the value you and I would place on each area?  This measure also tends not to be responsive to change, particularly improved functioning, and it has both a ceiling and a floor effect.  In other words, it doesn’t measure very high levels of disability, or very low levels of disability, particularly well.

Maybe it’s time to begin developing a patient-centred set of functional outcomes, where the weighting given to individual items reflects the degree of value people place on these activities.  While it’s always going to be different for individuals, the results in this study showed quite consistent ratings of importance across age and pain diagnosis.
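As a back-of-envelope illustration of the idea (the three domains, ratings and importance weights below are entirely invented, not drawn from any existing measure), here’s how patient-derived importance weights could shift a summary score compared with equal weighting:

```python
def disability_score(ratings, weights=None):
    """Combine 0-10 item ratings (0 = no difficulty, 10 = unable)
    into a 0-10 summary score. With no weights supplied, every item
    counts equally; with importance weights, the activities people
    value most dominate the total."""
    if weights is None:
        weights = {item: 1.0 for item in ratings}
    total_weight = sum(weights[item] for item in ratings)
    return sum(ratings[item] * weights[item] for item in ratings) / total_weight

# Hypothetical person: little difficulty with self care, lots of
# difficulty with the family and work roles they care most about.
ratings = {"self_care": 2, "family_tasks": 8, "work": 8}
importance = {"self_care": 0.5, "family_tasks": 2.0, "work": 2.0}

print(round(disability_score(ratings), 1))              # equal weighting: 6.0
print(round(disability_score(ratings, importance), 1))  # importance-weighted: 7.3
```

The equally weighted score understates this person’s disability in the roles that matter to them – which is the argument for letting the weights come from people with pain rather than from the instrument.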

Should all treatments be measured against these patient-centred outcomes?

Well, the authors of this study suggest no, it would be unrealistic for an analgesic to affect every area of functioning.  On the other hand, maybe if we can work to identify treatments that do target these important areas of functioning, we might help people feel better and live a life that meets their needs and expectations.

 

Turk, D., Dworkin, R., Revicki, D., Harding, G., Burke, L., Cella, D., Cleeland, C., Cowan, P., Farrar, J., & Hertz, S. (2008). Identifying important outcome domains for chronic pain clinical trials: An IMMPACT survey of people with pain. Pain, 137 (2), 276-285. DOI: 10.1016/j.pain.2007.09.002

Who will do well, who will not?


This post was chosen as an Editor's Selection for ResearchBlogging.org
If I had a crystal ball, and could decide who would do well in self managing their pain, and who would not, what would I do? A holy grail for insurance companies and health economists and yes, clinicians, is to find some precise way to decide who needs the most help with their pain, and who will manage well without as much assistance – with the ultimate aim of reducing disability and therefore costs (both human and fiscal).

The problem is that so far, the experience of pain can’t be objectively measured, and ultimately, it’s the person experiencing the pain who will do or not do, and we have no way of deciding whether the person ‘should’ or ‘should not’ be expected to function well.

This poses an enormous problem in health and insurance. How on earth do we decide who should remain ‘sick’ and receiving support, be excused from working, and remain a patient; how do we decide who should be obliged to pull themselves together and get back to work?

Oh if only it were simple.  The problem is that in pain, the relationship between things like pain intensity, tissue damage (impairment) and disability is not at all straightforward – and to add to the mix, doctors may not really believe in or have the confidence to manage the very factors that seem to have the most influence.

One assessment instrument I know well is the Multidimensional Pain Inventory (MPI). This was first developed by Turk and Rudy (1988), and aims to use the responses from the quite lengthy questionnaire to profile patients.  Three main classifications are found – Adaptive Copers, Interpersonally Distressed and Dysfunctional. Because this instrument has been used widely in both research and clinical settings, it seems reasonably clear that the three classifications are stable and can be used to predict current and future disability, and even response to self management of pain.

Adaptive Copers are found to be managing reasonably well; if they continue managing their pain the way they are, they will probably become somewhat less disabled and distressed over time. Interpersonally Distressed people are likely to feel socially isolated or unsupported, and their relationships may be at risk – and this is likely to have a long-term impact on their disability. People classified into the Dysfunctional group are probably going to remain disabled if they don’t receive assistance to manage their pain well – and even if they do receive treatment, may not respond as well to it as those in other groupings.

In the paper I’ve cited today, patients were monitored for their use of disability or sick days in the 7 years after a pain management programme.  What was found was that those classified at the treatment phase into the Dysfunctional group continued to use the most sick leave of the three groups, even after treatment.

While at face value this looks rather sad for the utility of pain management programmes, I should add that the difference between the Adaptive Copers and the Dysfunctional group wasn’t statistically significant.  A more telling characteristic was that those in the Dysfunctional group who had used a lot of sick leave prior to the pain management programme were more likely to continue to do so after the programme.

What does this suggest?

1.  Psychological profiles can, and do, accurately predict outcomes even up to 7 years later.  Impairment/diagnosis/pain intensity doesn’t predict nearly as well, so why oh why is there so much reliance on these in compensation and insurance management?

2. A cynic might say ‘Why even try to treat people in the Dysfunctional group?’  After all, despite treatment, they continue to use sick leave; why not ignore that group and simply treat the others – catch the ‘low hanging fruit’, as it were.  Focus rehabilitative efforts on those who will benefit, and stop pouring resources into people who may not manage well despite our best efforts.

There’s some sense in doing just this.  Choose people who will benefit from a specific strategy, give them the best, then watch the results come in – it makes their lives better, it ensures your programme’s outcomes are good, and it’s not nearly as challenging (and therefore it’s much easier) for clinicians to do the work.

The fly in the ointment is that the ongoing costs associated with those people who don’t do well are enormous.  Not only the direct costs of compensation or benefits, and not only the human costs of their distress and suffering, but also the associated costs of the other health services people in this group end up using.  It’s this group of people who tend to seek more treatment for their pain, often of the biomedical kind, who develop comorbidities from other health conditions, and who need supportive services like home help and social support more than those who do well.

It would be grand if psychosocial factors were truly accepted by all treatment providers (including doctors looking to abolish the pain) as more influential on outcomes than impairment and pain intensity.  The implication being that then these factors would get the attention much earlier and more intensively than they are currently – and maybe fewer people would land up in the ‘Dysfunctional’ group.

I would love to see self management promoted by all clinicians working in chronic pain management as the ideal.  I’d also love it if self management wasn’t seen as a failure of pain reduction approaches.

Bergström, G., Bergström, C., Hagberg, J., Bodin, L., & Jensen, I. (2010). A 7-year follow-up of multidisciplinary rehabilitation among chronic neck and back pain patients. Is sick leave outcome dependent on psychologically derived patient groups? European Journal of Pain, 14 (4), 426-433 DOI: 10.1016/j.ejpain.2009.06.008

“Process serving People”


RTW Matters’ latest newsletter explains why they wish they hadn’t had that tattoo done last year – and I couldn’t agree more.
This brief excerpt is from their update:

Last year, RTWMatters’ New Year’s Resolution was to flex our collective bicep, bite the pain bullet and get a “People over Process” tattoo.
A reader and soon-to-be blogger for RTW Matters wrote saying:

“I’ve been struggling with one of your resolutions—People over Process. I do understand the sentiment that drives you to that tattoo but I’ve spent a working life focusing on improving processes!

“If the staff of an organisation have no carefully thought through and established processes then they will be mired in uncontrollable work, forced to learn the same lessons over and over, to reinvent ways to do things again and again and have no time to deal with people.

The secret is to be clear about the purpose of the organisation (“what are we here for”) so the processes are not an end in themselves but exist to deliver better outcomes for people.

Developing, and more importantly, implementing and using ‘good’ processes can be bloody difficult. It might sound easy, but good intentions alone are not enough.

Why?

RTWMatters’ Publisher Robert Hughes believes that, “in some instances process does become an end in itself and then it can lose sight of the problem it was intended to resolve. This kind of lost process is often that which is developed at arm’s length from the problem the process is notionally intended to resolve.”

Oh yes indeed. It’s the same argument I have had for some time about ‘quality management’. Let’s not get all excited about ‘tidying up’ some of the messy processes involved in helping people with chronic pain – let’s think first about what we’re hoping to achieve by it, and how we’re going to measure whether it’s worked. Then once that’s identified I’m sure there will be more ways than one to get to the same end point – and that variation is what distinguishes humans from machine parts.

I hope you enjoy this taste of RTW Matters, and take a peek at their content – and maybe subscribe, it’s worth it!

A wish list for a pain management programme


After coming up with some of the content and structure for a programme, and discussing the need for a stable clinical team with effective skills in group-based CBT and an applied behavioural focus, today I thought I’d add in something about selecting, assessing and follow-up that’s required.

I get absolutely frustrated reading and hearing about interventions that either aren’t required to report outcomes, or don’t consider them at all – both psychometric questionnaire results (thought to indicate change in the ‘real world’) and real world outcomes.  The art of making sure that what we do makes a difference, and knowing how to do this well, seems to be quite lost on many clinicians.  I’d find it professionally unsatisfying to carry out an intervention and never have a clue whether it made a difference long after I’d finished seeing the person, so I can’t understand why so many clinicians (a) don’t measure outcomes, (b) measure them poorly through inadequate pre-treatment measures, inappropriate timing of post-treatment measures, or irrelevant measures, or (c) measure outcomes using only psychometric questionnaires or unidimensional measures.

I’ve mentioned outcomes first because part of selecting people for a programme is about taking baseline measures so that you know where you’re starting from, and you can ensure the programme is appropriate for the person’s needs.  Selection also includes identifying the person’s readiness to move towards self managing pain, because if someone’s not ready they can become resistant (just think of all the ways people avoid doing what they don’t want to do! And include yourself), become ‘inoculated’ against the concepts (‘tried that, it didn’t work’), and influence group process negatively.

Just as a surgeon selects people for surgery after careful assessment, and declines surgery for people who are unlikely to benefit from it, so we need to be similarly selective in pain management.  Pain management programmes are not ‘the last resort’ after everything has failed; they are a positive step forward for people who need to and want to take over the management of their own situation.  Like any other self management programme, such as alcohol and drug rehabilitation, until the person is ready to do what will be very difficult and life-long, it’s unlikely they’ll benefit.

Every participant in a pain management programme needs to be comprehensively, and recently, assessed from a biopsychosocial perspective. Medical issues need to be managed before programmes commence, and the person needs to be reassured that they are safe to begin to do things again – and I’m afraid this almost always needs to be reassurance from a doctor.  Psychosocial issues influencing the person need to be identified – note the word psychosocial. Without considering the social, it’s unlikely that the situational and contextual factors that often constrain behaviour change will be identified, including things like litigation, family, case management issues and work issues.  These factors influence beliefs, attitudes, behaviours and emotions, and it’s critical that the person is seen as one person within a whole network of others.

When screening to establish readiness for pain management, other factors to consider are concurrent activities like vocational management, other investigations and pending treatments, even things like holidays and training.  Some of the other areas that might make it difficult for someone to participate are communication style, cognitive functioning, learning style, fatigue, activity level, and needs that don’t ‘fit’ with the majority of the programme participants, content or structure.

I’ll post a screening semi-structured interview later today, that I’ve used to help identify whether a participant is ready and appropriate for a group pain management programme.

All participants need to have some baseline measurements taken before a programme.  In fact, there should be one set at comprehensive assessment, a second set before a programme, another set at completion of the programme, then at least two, but preferably three, follow-up sets – I think at 1, 4 and 9 months, or thereabouts.  As time progresses, intervening variables confound outcomes, and the number of respondents also drops, so it can be a challenge to obtain enough responses and for them to reflect programme changes over time.

What to measure and how?  I’ll leave that for another day; suffice to say that questionnaire results are not enough.  Not that they’re unimportant, because they are important – but until they have had predictive validity established within the community in which your patients live, they may not provide much useful information.  Real world actions are far more valid, but are much more difficult to measure accurately – on the other hand, I’d rather have a measure that captures something important and useful than measure something irrelevant incredibly accurately.  Otherwise we could all give participants a blood test for glucose levels and be done with it!

References?  Loads and loads of ’em.  Where do I start?

The first and probably most comprehensive reference is either of the two editions of Pain Management: Practical applications of the biopsychosocial perspective in clinical and occupational settings by Main, Sullivan & Watson.  The first edition was by Main & Spanswick; it’s a Churchill Livingstone publication under the Elsevier imprint.  The first edition contains almost a ‘recipe’ for how to run this type of programme, while the second edition contains more conceptual material but provides excellent information to support clinical practice.

Further references:

Fordyce, W.E. (1976). Behavioral methods for chronic pain and illness.  CV Mosby, St Louis, MO.

Turk, D, Meichenbaum, D, Genest, M. (1983). Pain and behavioral medicine: a cognitive-behavioural perspective.  The Guilford Press, New York.

Loeser, J., Sullivan, M. (1995). Disability in the chronic low back pain patient may be iatrogenic. Pain Forum, 4: 114-121

Main, C., Parker, H. (1989). The evaluation and outcome of pain management programmes for chronic low back pain.  In Roland, M., Jenner, J. (Eds.) Back pain: New approaches to rehabilitation and education.  Manchester University Press, Manchester, pp 129-156.

Keefe, F., & van Horn, Y. (1993). Cognitive-behavioral treatment of rheumatoid arthritis pain: maintaining treatment gains. Arthritis Care & Research, 6 (4), 213-222. DOI: 10.1002/art.1790060408

When patients set the goals of therapy…



If you’ve been following my blog over the past week or so, you’ll see I’ve been discussing goal-setting as part of pain management rehabilitation.  I’ve looked at the things patients may ask for, and the difference between these goals and the goals that clinicians may need to set directly related to the treatment aims. I’ve also looked at the place of goals in life generally, the subskills used to develop and achieve goals, and what happens to people when they can’t achieve the goals they set. I’ve also looked at using Goal Attainment Scaling as a form of outcome measurement. Today I want to look at a study where the effect of patients being involved in goal-setting was measured. Unfortunately, it’s not a study within the chronic pain setting; instead it’s about goal-setting within an inpatient neurological rehabilitation unit. There are clear differences between the model this unit uses compared with most chronic pain management settings – but there are also things we can learn, so here goes!
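As an aside, Goal Attainment Scaling is conventionally scored with Kiresuk and Sherman’s T-score formula. A quick sketch of the arithmetic (the goals and weights here are hypothetical, and 0.3 is the conventional assumed inter-goal correlation):

```python
import math

def gas_t_score(attainments, weights, rho=0.3):
    """Goal Attainment Scaling T-score (after Kiresuk & Sherman).
    attainments: each goal's outcome on the -2..+2 scale
                 (0 = achieved exactly as expected)
    weights:     relative importance of each goal
    rho:         assumed correlation between goal scores."""
    weighted_sum = sum(w * x for w, x in zip(weights, attainments))
    denom = math.sqrt((1 - rho) * sum(w * w for w in weights)
                      + rho * sum(weights) ** 2)
    return 50 + 10 * weighted_sum / denom

# Three equally weighted goals, all achieved exactly as expected,
# give the reference score of 50; exceeding any expectation pushes
# the score above 50, falling short pulls it below.
print(gas_t_score([0, 0, 0], [1, 1, 1]))       # 50.0
print(gas_t_score([1, 0, 0], [1, 1, 1]) > 50)  # True
```

Part of the appeal of GAS as an outcome measure is that the “expected” level is defined per patient, so a score of 50 always means “met their own goals”, whatever those goals were.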

This study was carried out in The Neurological Rehabilitation Unit at the National Hospital for Neurology and Neurosurgery, London, UK, and involved 200 in-patients, half of whom were involved in ‘normal practice’ and the other half were involved in a programme where increased participation in goal-setting was encouraged.  The patients had a range of neurological conditions including stroke, multiple sclerosis, spinal cord lesions, and a variety of other less common conditions like peripheral nerve disease and central nervous system tumours.

This Unit has a care pathway, which is a set of interlinked goals relating to five main dimensions:

(1) health maintenance,

(2) cognitive functioning,

(3) personal activities of daily living,

(4) participation and

(5) communication.

Method:

All admissions to the unit over an 18 month period were included in the study, except if they had limited ability to communicate in English.  This meant that a total of five patients were excluded.    A ‘repetitive block design’ was used to determine the treatment protocol used, with each block lasting 3 months.

‘At the onset of each phase all staff (physiotherapists, occupational therapists, speech and language therapists, nurses and doctors) working on the neurological rehabilitation unit attended a training session on either the ‘‘usual practice’’ (phase A) or the ‘‘increased participation’’ (phase B) approach which was to be used.’

Measures

Four measures were used. Patients’ beliefs about their involvement were rated on a four-point scale. Goal relevance was measured as a global rating using a 10 cm visual analogue scale, with the anchors ‘not at all relevant’ and ‘highly relevant’; each goal was also rated on a five-point scale from (1) highly relevant to (5) of no relevance whatsoever. Patients’ overall satisfaction was rated on a 10 cm visual analogue scale. Finally, the distribution of goal components across the five care pathway dimensions was recorded, as were the outcomes or end status of the goals, and reasons for non-completion of goals (variances) were collected for comparative purposes.  Three functional outcome measures were also taken on admission and discharge (unfortunately, not long-term outcomes) – the Functional Independence Measure (FIM), the London Handicap Scale, and the General Health Questionnaire (GHQ-28).

The flowchart shows you the two pathways patients may follow, depending on the month they are admitted.


In the experimental group, patients are asked to complete a structured goal-setting workbook before admission, and attend clinical goal-setting meetings.  In the ‘treatment as usual’, goals are set by therapists without direct input from patients.

Results

There were no significant differences between the two groups, except that the mean age of the ‘‘increased participation’’ group was 4 years younger than the ‘treatment as usual’ group.

Patients correctly identified the origin of goals, depending on the group they were in; those in the experimental group also rated their goals as more relevant and reported greater satisfaction.

Different types of goals were set when participants increased their input to goal-setting.  Notably, people in the experimental group identified a greater number of ‘participation’ goals compared with the ‘treatment as usual’ group.  There were fewer overall goals set in the experimental group, but there was no difference between the number of goals achieved in each group.  Finally, there were no differences in the functional outcome measures between the groups at discharge.

So, what does this mean?

Well, one aspect that did differ between the groups was the degree of satisfaction with treatment – in pain management at least, there is some evidence that expectations that are well met during treatment are associated with slightly better outcomes. It probably also means (although this wasn’t directly studied) that adherence to treatment activities during treatment was a little higher.  Happier patients probably means happier staff!

It’s interesting that there was no data on the long-term adherence to treatment activities, nor on outcomes between the groups.  I’m inclined to think that people who believe their goals are more relevant to their real life would be more likely to carry on with the treatment activities and therefore the outcomes over time might be more durable.

The main difficulty with generalising from this example to other settings is the methodology.  This isn’t a blinded RCT.  It would be really difficult to set up a full-blown RCT, but it is a limitation.  There could be systematic differences between the two groups that weren’t readily identifiable – or perhaps there were differences in the way the staff facilitated the processes.

Another difficulty is that this was an in-patient setting, while most pain management (in New Zealand, anyway) happens in an outpatient setting. I’m not sure how much this would influence the processes, apart from the probably much closer team working environment in-patient compared with outpatient.

Patients experiencing chronic pain often report that they feel they are not listened to, and that their concerns are not addressed. Perhaps a systematic goal-setting process, similar to the one in this study, could address this concern.

On the other hand, ACC asks claimants to determine their ‘functional goals’. As I mentioned when I first posted about goal-setting, lots of patients simply want their pain to be gone and life to return to normal. It’s not easy to elicit clear goals from patients, as many haven’t set goals routinely. Given the lack of direction and sense of disillusionment that many face as part of having chronic pain, perhaps we need a structured process to help people establish goal areas, then work through how they might achieve those goals – during both therapy and afterward.

This isn’t the last post from me on goal-setting.  I’m on the hunt to find some good material on which to base a process for developing good goal setting strategies.

What strikes me, though, is the real lack of good clinically based research demonstrating the effect of goal-setting on patients: what works, and what doesn’t. Even some nice descriptive studies exploring the experiences of patients as they participate in goal-setting would be worthwhile! This is a challenge, folks! Let’s do it!

Holliday, R.C., Cano, S., Freeman, J.A., & Playford, E.D. (2007). Should patients participate in clinical decision making? An optimised balance block design controlled study of goal setting in a rehabilitation unit. Journal of Neurology, Neurosurgery & Psychiatry, 78(6), 576-580. DOI: 10.1136/jnnp.2006.102509

Success! Why measuring outcome is so rewarding


Not a research post today, but a great experience that I hope will encourage anyone who is not already a fan of regular outcome measurement to get on with it!

I saw a person yesterday who has had pain for about 3 years.  Superficially she’d been managing quite well – still working, having a social life, managing all her household activities and in general, looking good.  BUT – and you knew there would be a ‘but’ – once I started to look a little deeper, it was absolutely amazing to see how much she had adapted her life to avoid specific movements.

I used the PHODA (photographs of daily activities) to assess the specific movements and activities she didn’t like to do. I’ve blogged about PHODA (Kugler et al, 1999) before – a set of photographs of everyday activities in a variety of settings that can be used to identify and score fearfulness and avoidance. The findings showed that although this woman was able to do things, the way she did them was to avoid ANY bending, twisting, reaching, jarring or lifting. She was the original Gadget Queen with things to help her do everything WITHOUT bending. An occupational therapist’s dream!

A couple of interesting pain sites


It’s been a while since I linked to pain websites, so I did a little trawl through the web pages to find these ones.

The NPEC (National Pain Education Council) has some FREE resources – notably some pdf documents on Patient forms, several pain assessment tools, two functional assessment tools, and two quality of life measures. Worth a look – and if you go to the home page, and are prepared to log in, there are some CME activities, and it’s FREE!

If you’ve ever struggled to find a pain assessment, Hardin Library at the University of Iowa describes some search strategies that can be used in common databases – and a bunch of web-based health assessment resources – so it’s a great place to go to refine your search techniques. You will need a way to obtain the commercial assessments, but for many pain assessments you can find the original research article and either contact the researcher directly, or find out where to buy the assessment if that’s necessary.

For the ACC (in New Zealand) ‘Pain Assessment Compendium’, you can go to the ACC ‘Provider publications order form’ and fill out the blanks to obtain a CD of the compendium. This provides you with a large number of psychometrically valid assessments – but be warned, they are not outcome measures, and as is usual with pain measures, the normative data are North American or British and won’t directly translate to New Zealand populations.

Chirogeek, despite the name, has some very useful resources online. Head to this page for links to four measures often used in musculoskeletal pain assessment: the Oswestry Disability Index, Roland-Morris Disability Questionnaire, Stanford Score, and the Neck Disability Index.

The final website I want to include today is the PROQOLID, or the Patient-Reported Outcome and Quality of Life Instruments Database. This lists a range of Quality of Life measures across various disabilities and describes the author, purpose, population, and other details to help you decide whether it would be helpful in your population. Details of the questionnaires are restricted to subscribers, but the summary alone is helpful – and you can always search for the original or for publishers with the name and author listed.

I hope this has been helpful – let me know what you think! And remember it’s lonely out here in cyberland – comments are always welcome and I DO respond! and you can subscribe using the RSS feed and/or bookmarking. I post most days except the weekend – so there’s usually something new to read!

Am I right or just dogmatic?


Even in health care, the loudest voice with the largest opinion can be the most persuasive – even with limited (or selective) use of scientific evidence. Sadly, fads exist in pain management too.

To counter our human biases we need to be critical of all research, and ask some serious questions about accepted practice as well. In most forms of allied health (as well as medicine) there are ways of working that are based far more on ‘what we’ve always done’ and ‘it seems to work’, or even ‘but it works for this person’ or ‘I’ve seen it work for people like this’, than on evidence from well-controlled trials.

Some people argue with me about this point, saying ‘but if we only did what there was evidence for, we’d have nothing to offer’! Ummmm. That doesn’t mean that what you’re offering is doing any good!

So… what is critical appraisal? This link leads to a great pdf doc summary by Alison Hill and Claire Spittlehouse of just what questions you should consider if you’re reading a research article. I thought I’d summarise it briefly today, but I strongly encourage you to read the full article yourself.

Their definition of critical appraisal reads “Critical appraisal is the process of systematically examining research evidence to assess its validity, results and relevance before using it to inform a decision.”
They go on to say “Critical appraisal is an essential part of evidence-based clinical practice that includes the process of systematically finding, appraising and acting on evidence of effectiveness.”

They agree that sometimes carrying out a critical appraisal can be disheartening – research on clinical interventions can be flawed, have poor methodology, and can highlight just how little reliance we can place on what we do being helpful. We are sometimes working in the dark but putting on a good show to suggest that what we learned in our training ‘works’.

And I’m going to cut and paste the complete set of questions they recommend using when trying to appraise a piece of research. These questions were developed by Guyatt et al. (1993).
A. Are the results of the study valid?
Screening questions
1. Did the trial address a clearly focused research question?
Tip: a research question should be ‘focused’ in terms of:
- The population studied
- The intervention given
- The outcomes considered.
2. Did the authors use the right type of study?
Tip: the right type of study would:
- Address the research question
- Have an appropriate study design.

Is it worth continuing?
Detailed questions
3. Was the assignment of patients to treatments randomised?
Tip: consider if this was done appropriately.
4. Were all of the patients who entered the trial properly accounted for at its conclusion?
Tip: look for:
- The completion of follow-up
- Whether patients were analysed in the groups to which they were randomised.
5. Were patients, health workers and study personnel ‘blind’ to treatment?
Tip: this is not always possible, but consider if it was possible – was every effort made to ensure ‘blinding’?
6. Were the groups similar at the start of the study?
Tip: think about other factors that might affect the outcome, such as age, sex, and social class.
7. Aside from the experimental intervention, were the groups treated equally?
Tip: for example, were they reviewed at the same time intervals?
B. What are the results?
8. How large was the treatment effect?
9. How precise was the estimate of the treatment effect?
Tip: look for the confidence limits.
C. Will the results help locally?
10. Can the results be applied to the local population?
Tip: consider whether the patients covered by the trial are likely to be very different from your population.
11. Were all clinically important outcomes considered?
12. Are the benefits worth the harms and costs?
© Critical Appraisal Skills Programme
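Question 9 above asks how precise the estimate of the treatment effect is, with the tip to look for the confidence limits. As a minimal sketch of what that means in practice (using entirely invented pain-score data, not figures from any study discussed here), a 95% confidence interval for the difference in mean improvement between two groups can be computed like this:

```python
import math

# Hypothetical improvements in pain ratings (0-10 scale) for two small
# groups. These numbers are invented purely for illustration.
treatment = [3.1, 2.4, 4.0, 1.8, 2.9, 3.5, 2.2, 3.8]
control = [1.2, 0.8, 2.1, 1.5, 0.9, 1.7, 1.1, 1.4]

def mean(xs):
    return sum(xs) / len(xs)

def sample_variance(xs):
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

# The treatment effect: difference in mean improvement between groups
effect = mean(treatment) - mean(control)

# Standard error of the difference for two independent groups
se = math.sqrt(sample_variance(treatment) / len(treatment)
               + sample_variance(control) / len(control))

# Approximate 95% confidence limits (normal approximation; a t-based
# interval would be a little wider for groups this small)
lower, upper = effect - 1.96 * se, effect + 1.96 * se

print(f"effect = {effect:.2f}, 95% CI ({lower:.2f}, {upper:.2f})")
```

A wide interval tells you the study gives only a vague idea of the true effect, even if the point estimate looks impressive; that is exactly why the checklist asks you to look at the limits and not just the headline number.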

The three questions to ask yourself when you read research
Three broad issues need to be considered when appraising research:
A Are the results of the study valid?
B What are the results?
C Will the results help locally?

I leave you with that today – I think it’s quite enough for a Tuesday. Stop and think about the treatment you are using today. Have you read a systematic review of the treatment you’re using? If you have, do the patients you see look anything like those included in the studies? And were the results robust enough for you to justify the treatment of your patients?

Hard questions but fair: let’s not just treat people on the basis that we ‘know we’re right’, or because we are dogmatic.
More tomorrow! Don’t forget to make comments, to subscribe using the RSS feed or to bookmark this blog. And it’s always great to know you’ve visited! You can email me too – go to the ‘About’ page for my email address.

Guyatt GH, Sackett DL, Cook DJ. Users’ guides to the medical literature. II: How to use an article about therapy or prevention. JAMA 1993; 270: 2598–2601 and 271: 59–63.

Hippocrates


Now I’m not going to post a lot about Hippocrates himself, but I want to start today’s post by quoting something that he is supposed to have said: ‘There are, in fact, two things, science and opinion; the former begets knowledge, the latter ignorance’.

Hippocrates proposed that if a new treatment was to be tried, we should use science to decide whether or not it works rather than relying on somebody’s opinion.

What makes science different? Apart from its reliance on experiments, observations, trials, argument and discussion – and its supposed adherence to objectivity – science continues to question what is accepted and assumed just in case it has got it wrong.

And this is important for us as clinicians – instead of relying on big budgets for advertising, incredible sales talk or persuasion, or even ‘received wisdom handed down the ages’ – we are encouraged, in fact required to maintain a critical eye on what we do, why we do it, and how we do it, to learn from our outcomes, and to endeavour to be objective about what occurs.

Of course this doesn’t happen nearly as wonderfully well as Hippocrates wanted, but it is an aim for us all. It means we need to know something about scientific method or how to systematically investigate our outcomes, and it means we really must know something about how to measure what we do, and some of the confounds that get in the way of being ‘objective’.

There are some hot debates about the place of objectivity in many parts of health care – especially nursing, occupational therapy, social work – areas where individual experience or constructions are important, and where the context of what happens is seen as influencing both the event under observation and the observer.

Actually, if you look up constructivism in Wikipedia, you are greeted by a whole page of different links (take a look!).
In some ways, hard science (empiricism, where experiments, facts and statistics hold sway) has been given a bad name in circles where constructivism has been emphasised. I think you can be both a constructivist and an empiricist – and later this week I’ll show you why.

Why would it matter to us as health care people? Well to me it’s important to know that what I do with a person is less about me personally and all my wonderful charisma, and much more about the methods and skills I have learned. Otherwise I’m concerned that once I’m not here the world will be soooo much less able to manage and I’ll have to live forever doing what I do!!

I may joke about this, but seriously, I want to know what is working in the mix of inputs I provide to someone, so hopefully I can learn to do it more effectively, and have the results last longer.

I’m also keen to know that the effects are not temporary ‘feel good’ effects – and I don’t want to find that the effects are all about natural remission, or ‘regression to the mean’ or reducing distress, when I think it’s something completely different!
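Regression to the mean is worth making concrete, because it so easily masquerades as a treatment effect. Here is a small simulation (all numbers invented for illustration): if people are only ‘enrolled’ when their pain rating happens to be unusually high, their next rating tends to be lower even with no treatment at all, simply because ratings fluctuate around each person’s true level.

```python
import random

random.seed(42)

def measure(true_level):
    # One pain rating = the person's true level plus day-to-day fluctuation
    return true_level + random.gauss(0, 1.5)

# A simulated population whose true pain levels sit around 5/10
true_levels = [random.gauss(5, 1) for _ in range(10000)]
first = [(t, measure(t)) for t in true_levels]

# "Enrol" only the people whose first rating happened to be 7/10 or worse...
enrolled = [(t, m1) for t, m1 in first if m1 >= 7]

# ...then rate them again, with NO treatment in between
second = [measure(t) for t, _ in enrolled]

mean_first = sum(m1 for _, m1 in enrolled) / len(enrolled)
mean_second = sum(second) / len(second)

print(f"mean first rating:  {mean_first:.2f}")
print(f"mean second rating: {mean_second:.2f}")  # lower on average, untreated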

If you’re interested in some of the things that influence treatment effects but aren’t necessarily about the treatment itself, this paper, although old and on a site that has been criticised heavily (check it out for yourself and make up your own mind), has some good information.

More tomorrow!
Don’t forget that you’re welcome to make comments, argue the point, agree with me (!), and you can subscribe using the RSS feed above – or just bookmark and head on back tomorrow!