Assessment


What do we do with those questionnaires?


Courtesy of many influences in pain management practice, you’d have to have been hiding under a rock or maybe be some sort of dinosaur not to have noticed the increasing emphasis on using questionnaires to measure factors such as pain catastrophising, depression or avoidance. The problem is I’m not sure we’ve all been certain about what to do with the results. It’s not uncommon for me to hear people saying “Oh but once I see psychosocial factors there, I just refer on”, or “they’re useful when the person’s not responding to my treatment, but otherwise…”, “we use them for outcome measures, but they’re not much use for my treatment planning”.

I think many clinicians regard psychosocial questionnaires as all very well – but that "intuition" will do, "…and what difference would it make to my treatment anyway?"

Today I thought I’d deconstruct the Pain Catastrophising Scale and show what it really means in clinical practice.

The Pain Catastrophising Scale is a well-known and very useful measure of an individual’s tendency to “think the worst” when they’re considering their pain. Catastrophising is defined as “an exaggerated negative mental set brought to bear during actual or anticipated painful experience” (Sullivan et al., 2001). The questionnaire was first developed by Sullivan, Bishop and Pivik in 1995, and the full copy including an extensive manual is available here. Keep returning to that page because updates are made frequently, providing more information about the utility of the measure.

The questionnaire itself is a 13-item measure using a 0 – 4 Likert-type scale from 0 = “not at all” to 4 = “all the time”. Respondents are instructed to “indicate the degree to which you have these thoughts and feelings when you are experiencing pain”.

There are three subscales measuring three major dimensions of catastrophising: rumination “I can’t stop thinking about how much it hurts”; magnification “I worry that something serious may happen”; and helplessness “It’s awful and I feel that it overwhelms me”.

To score the instrument, simply sum all the responses to all 13 items, but to get a better idea of how to help a person, the subscale calculations involve the following:

Rumination: sum items 8, 9, 10, and 11

Magnification: sum items 6, 7, and 13

Helplessness: sum items 1, 2, 3, 4, 5, and 12

There’s not a lot of point in having numbers without knowing what they mean, so the manual provides means and standard deviations relating to a population of individuals with injury leading to lost time from work in Nova Scotia, Canada.

Clinicians are typically interested in whether the person sitting in front of them is likely to have trouble managing their pain, so the manual also provides “cut off” scores for what could be described as “clinically relevant” levels of catastrophising. A total score of 30 or more is thought to represent the 75th percentile of scores obtained by individuals with chronic pain.
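The scoring rules above can be sketched in a few lines of code – a minimal illustration only; the function name and the dictionary-based interface are my own, not anything from the PCS manual:

```python
def score_pcs(responses):
    """Score the 13-item Pain Catastrophising Scale.

    `responses` maps item number (1-13) to the 0-4 rating given.
    Returns the total, the three subscale sums, and whether the
    total meets the manual's suggested clinical cut-off of 30.
    """
    if set(responses) != set(range(1, 14)):
        raise ValueError("expected ratings for all 13 items")
    if any(r not in (0, 1, 2, 3, 4) for r in responses.values()):
        raise ValueError("each rating must be on the 0-4 scale")
    subscales = {
        "rumination": (8, 9, 10, 11),
        "magnification": (6, 7, 13),
        "helplessness": (1, 2, 3, 4, 5, 12),
    }
    scores = {name: sum(responses[i] for i in items)
              for name, items in subscales.items()}
    scores["total"] = sum(responses.values())
    # 30+ is described as roughly the 75th percentile for people with chronic pain
    scores["clinically_relevant"] = scores["total"] >= 30
    return scores
```

For example, a respondent answering “3” to every item would score 39 in total and exceed the suggested cut-off.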

The “so what” question

Cutting to the chase, the question is “so what”? What difference will getting this information from someone make to my clinical reasoning?

Leaving aside the enormous body of literature showing a relationship between high levels of catastrophising and generally poor responses to traditional treatments that address pain alone (including surgery for major joint replacement, recovery from multiple orthopaedic trauma, low back pain, shoulder pain etc), I think it’s helpful to dig down into what the three subscales tell us about the person we’re working with. It’s once we understand these tendencies that we can begin to work out how our approach with someone who has high levels of rumination might differ from what we’ll do when working with someone who has high levels of helplessness.

As an aside and being upfront, I think it’s important to remember that a questionnaire score will only tell you what a person wants you to know. Questionnaires are NOT X-rays of the mind! They’re just convenient ways to ask the same questions more than once, to collect the answers and compare what this person says with the responses from a whole lot of other people, and they allow us to organise information in a way that we might not think to do otherwise.  I also think it’s really important NOT to label a person as “a catastrophiser” as if this is a choice the person has made. People will have all sorts of reasons for tending to think the way they do, and judging someone is unprofessional and unethical.

Rumination

Rumination is that thing we do when a thought just won’t get out of our mind. You know the one – the ear worm, the endless round and round, especially at night, when we can’t get our mind off the things we’re worrying about. If a person has trouble with being able to drag his or her attention away, there are some useful things we can suggest. One theory about rumination is that it’s there as a sort of problem solving strategy, but one that has gone haywire.

Mindfulness can help so that people can notice their thoughts but not get hooked up into them. I like to use this both as a thought strategy, but also as a way of scanning the body and just noticing not only where the pain is experienced, but also where it is not.

“Fifteen minutes of worry” can also help – setting aside one specific time of the day (I like 7.00pm – 7.15pm) where you have to write down everything you’re worried about for a whole fifteen minutes without stopping. By also telling yourself throughout the day “I’m not worrying about this until tonight” and afterwards saying “I’ve already worried about this so I don’t need to right now”, worrying and ruminating can be contained. By being present with the thoughts during that 15 minutes, the threat value of the thought content is also reduced.

Magnification

This is the tendency to think of the worst possible thing rather than the most likely outcome, and it’s common! Magnification can really increase the distress and “freeze” response to a situation. If a person is thinking of all the worst possible outcomes it’s really hard for them to focus on what is actually happening in the here and now. There are some adaptive features to magnification – if I’ve prepared for the worst, and it doesn’t happen, then I’m in a good situation to go on, but in some people this process becomes so overwhelming that their ability to plan is stopped in its tracks.

Once again, mindfulness can be really useful here, particularly paying attention to what is actually happening in the here and now, rather than what might happen or what has happened. Mindful attention to breathing, body and thoughts can help reduce the “freeze” response, and allow some space for problem solving.

Of course, accurate information presented in nonthreatening terms and in ways the person can process is important to de-threaten the experience of pain. This is at the heart of “explain pain” approaches – and it’s useful. What’s important, however, is to directly address the main concern of the person – and it may not be the pain itself, but the beliefs about what pain will mean in terms of being a good parent, holding down a job, maintaining intimacy, being responsible and reliable. It’s crucial to find out what the person is really concerned about – and then ensure your “reassurance” is really reassuring.

Helplessness

Helplessness is that feeling of “there’s nothing I can do to avoid this awful outcome so I won’t do anything”. It’s a precursor to feelings of depression and certainly part of feeling overwhelmed and out of control.

When a person is feeling helpless it’s important to help them regain a sense of self-efficacy, or confidence that they CAN do something to help themselves, to exert some sort of control over their situation. It might be tempting to focus on pain intensity and helping them gain control over pain intensity, but because it’s often so variable and influenced by numerous factors, it might be more useful to help the person achieve some small goals that are definitely achievable. I often begin with breathing because it’s a foundation for mindfulness and relaxation, and has a direct influence over physiological arousal.

You might also begin with some exercise or daily activities that are well within the capabilities of the person you’re seeing. I like walking as a first step (no pun intended) because it doesn’t require any equipment, it’s something we all do, and it can be readily titrated to add difficulty. It’s also something that can be generalised into so many different environments. In a physiotherapy situation I’d like to see PTs consider exercises as their medium for helping a person experience a sense of achievement, of control, rather than a means to an end (ie to “fix” some sort of deficit).

To conclude
Questionnaires don’t add value until they’re USED. I think it’s unethical to administer a questionnaire without knowing what it means, without using the results, and without integrating the results into clinical reasoning. The problem is that so many questionnaires are based on psychological models, and these haven’t been integrated into physiotherapy or occupational therapy clinical reasoning models. Maybe it’s time to work out how to do this?

Sullivan, M. J. L., Bishop, S., & Pivik, J. (1995). The Pain Catastrophizing Scale: Development and validation. Psychological Assessment, 7, 524-532.

Main, C. J., Foster, N., & Buchbinder, R. (2010). How important are back pain beliefs and expectations for satisfactory recovery from back pain? Best Practice & Research Clinical Rheumatology, 24(2), 205-217. doi:10.1016/j.berh.2009.12.012

Sturgeon, J. A., Zautra, A. J., & Arewasikporn, A. (2014). A multilevel structural equation modeling analysis of vulnerabilities and resilience resources influencing affective adaptation to chronic pain. Pain, 155(2), 292-298. doi:10.1016/j.pain.2013.10.007


Ambiguity and uncertainty


Humans vary in how comfortable we are with uncertainty or ambiguity. Tolerance of ambiguity is a construct discussed in the cognitive and experimental research literature, and refers to how readily a person accepts grey areas rather than preferring black and white situations. At the intolerant end, “there is an aversive reaction to ambiguous situations because the lack of information makes it difficult to assess risk and correctly make a decision. These situations are perceived as a threat and source of discomfort. Reactions to the perceived threat are stress, avoidance, delay, suppression, or denial” (Furnham & Marks, 2013, p. 718). Tolerance of uncertainty is often discussed in relation to the stress and emotions associated with being in an ambiguous situation, or it may refer to a future-oriented trait, where an individual is responding to an ambiguous situation in the present. Suffice to say, for some individuals the need to be certain and clear means they find it very difficult to be in situations where multiple outcomes are possible and where information is messy. As a result, they find ways to counter the unease, ranging from avoiding making a decision to authoritatively dictating what “should” be done (or not done).

How does this affect us in a clinical setting? Well, both parties in this setting can have varying degrees of comfort with ambiguity.

Our clients may find it difficult to deal with not knowing their diagnosis, the cause of their painful experience, the time-frame of its resolution, and managing the myriad uncertainties that occur when routines are disrupted by the unexpected. For example, workers from the UK were interviewed about their unemployment as a result of low back pain. Uncertainty (both physical and financial) was given as one of the major themes from interviews of their experience of unemployment (Patel, Greasley, Watson, 2007).  Annika Lillrank, in a study from 2003, found that resolving diagnostic uncertainty was a critical point in the trajectory of those living with low back pain (Lillrank, 2003).

But it’s not just clients who find it hard to deal with uncertainty – clinicians do too. Slade, Molloy and Keating (2012) found that physiotherapists believe patients want a clear diagnosis, but feel challenged when they’re faced with diagnostic uncertainty. What then happens is a temptation to be critical of patients if they fail to improve, to seek support from other more senior colleagues, and to end up feeling unprepared by their training for this common situation. The response to uncertainty, at least in this study, was for clinicians to “educate” care-seekers about their injury/diagnosis despite diagnostic uncertainty (my italics), combined with a strong desire to see rapid improvements and a tendency to attribute lack of progress to the client when either the client doesn’t want “education” or fails to improve (Slade, Molloy & Keating, 2012).

Physiotherapists are not alone in this tendency: There is a large body of literature discussing so-called “medically unexplained diseases” which, naturally, include chronic pain disorders. For example, Bekkelund and Salvesen (2006) found that more referrals were made to neurologists when the clinician felt uncertain about a diagnosis of migraine. GPs, in a study by Rosser (1996), were more likely to refer to specialists in part because they were uncertain – while specialists, dealing as they do with a narrower range of symptoms and body systems, face less diagnostic uncertainty. Surprisingly, despite the difference in degree of uncertainty, GPs order fewer tests and procedures yet often produce identical outcomes!

How do we manage uncertainty and ambiguity?

Some of us will want to apply subtypes, groupings, algorithms – means of controlling the degree of uncertainty and ambiguity in our clinical practice. Some of the findings from various tests (eg palpation or tender point examination) are used as reasons for following a certain clinical rule of thumb. In physiotherapy, medicine and to a certain extent my own field of occupational therapy, there is a tendency to “see nails because all I have is a hammer” in an attempt to fit a client into a certain clinical rule or process. We see endless publications identifying “subtypes” and various ways to cut down the uncertainty within our field, particularly with respect to low back pain where we really are dealing with uncertainty.

Some of these subgroupings may appear effective – I remember the enthusiasm for leg length discrepancies, muscle “imbalance”, and more recently neutral spine and core stability – because for some people these approaches were helpful! Over time, the enthusiasm has waned.

Others of us apply what we could call an eclectic approach – a bit of this, a bit of that, something I like to do, something that I just learned – and yes, even some of these approaches seem to work.

My concern is twofold. (1) What is the clinical reasoning behind adopting either a rule-governed algorithm or subtyping approach, or an eclectic approach? Why use X instead of Y? And are we reasoning after the fact to justify our approach? (2) What do we do if it doesn’t work? Where does that leave us? As Slade, Molloy & Keating (2012) found, do we begin blaming the patient when our hammer fails to find a nail?

I’ve long advocated working to generate multiple hypotheses to explain how and why a person is presenting in this way at this time. It’s a case formulation approach where, collaborating with the person and informed by broad assessment across multiple domains that are known to be associated with pain, a set of possible explanations (hypotheses) is generated. Then we systematically test these either through further clinical assessment, or by providing an intervention and carefully monitoring the outcome. This approach doesn’t resolve uncertainty – but it does allow some time to de-bias our clinical reasoning, it involves the client in sorting out what might be going on, it means we have more than one way to approach the problem (the one the client identifies, not just our own!), and it means we have some way of holding all this ambiguous and uncertain information in place so we can see what’s going on. I know case formulations are imperfect, and they don’t solve anything in themselves (see Delle-Vergini & Day (2016) for a recent review of case formulation in forensic practice – not too different from ordinary clinical practice in musculoskeletal management IMHO). What they do is provide a systematic process to follow that can incorporate uncertainty without needing a clinician to jump to conclusions.

I’d love your thoughts on managing uncertainty as a clinician in your daily practice. How do you deal with it? Is there room for uncertainty and ambiguity? What would happen if we could sit with this uncertainty without jumping in to treat for just a little longer? Could mindfulness be useful? What if you’re someone who experiences a great deal of empathy for people who are distressed – can you sit with not knowing while in the presence of someone who is hurting?

 

Bekkelund, S., & Salvesen, R. (2006). Is uncertain diagnosis a more frequent reason for referring migraine patients to neurologist than other headache syndromes? European Journal of Neurology, 13(12), 1370-1373. doi:10.1111/j.1468-1331.2006.01523.x
Delle-Vergini, V., & Day, A. (2016). Case formulation in forensic practice: Challenges and opportunities. The Journal of Forensic Practice, 18(3). doi:10.1108/JFP-01-2016-0005
Furnham, A., & Marks, J. (2013). Tolerance of ambiguity: A review of the recent literature. Psychology, 4(9). doi:10.4236/psych.2013.49102
Lillrank, A. (2003). Back pain and the resolution of diagnostic uncertainty in illness narratives. Social Science & Medicine, 57(6), 1045-1054. doi:10.1016/S0277-9536(02)00479-3
Patel, S., Greasley, K., Watson, P. J. (2007). Barriers to rehabilitation and return to work for unemployed chronic pain patients: A qualitative study. European Journal of Pain: Ejp, 11(8), 831-840.
Rosser, W. W. (1996). Approach to diagnosis by primary care clinicians and specialists: Is there a difference? Journal of Family Practice, 42(2), 139-144.
Slade, S. C., Molloy, E., & Keating, J. L. (2012). The dilemma of diagnostic uncertainty when treating people with chronic low back pain: A qualitative study. Clinical Rehabilitation, 26(6), 558-569. doi:10.1177/0269215511420179

Did it help? Questions and debate in pain measurement


Pain intensity, quality and location are three important domains to consider in pain measurement. And in our kete* of assessment tools we have many to choose from! A current debate (ongoing debate?) in the august pages of Pain (International Association for the Study of Pain) journal shows that the issue of how best to collate the various facets of our experience of pain is far from decided – or even understood.

The McGill Pain Questionnaire (MPQ) is one of the most venerable old measurement instruments in the pain world. It is designed to evaluate the qualities of pain – the “what does it feel like” of sensory-discriminative components, evaluative components, and cognitive-affective components. There are 20 categories in the tool, and these examine (or attempt to measure) mechanical qualities, thermal qualities, location and time. Gracely (2016), in an editorial piece, compares the McGill to a set of paint colour samples – if pain intensity equals shades of grey, then the other qualities are other colours – blue, green, red – in shades or tints, so we can mix and match to arrive at a unique understanding of what this pain is “like” for another person.

To begin to understand the MPQ, it’s important to understand how it was developed. Melzack recognised that pain intensity measurement, using a dolorimeter (yes, there is such a thing – this is not an endorsement, just to prove it’s there), doesn’t equate with the qualities of pain experienced, nor with the impact of previous experiences. At the time, Melzack and Wall were working on their gate control theory of pain, so it’s useful to remember that this had not yet been published, and specificity theory was holding sway – specificity theory arguing that pain is a “specific modality of cutaneous sensation”, while pattern theory held that the experience reflects the nervous system’s ability to “select and abstract” relevant information (Main, 2016). So Melzack adopted a previous list of 44 words, carried out a literature review, and recorded the words used by his patients. Guided by his own three-dimensional model of pain, he generated three groups of descriptors to begin to establish a sort of “quality intensity scale”. These were then whittled down to the 78 words that have been used since, and by used I mean probably the most used instrument ever! Except for the VAS.

There are arguments against the MPQ – I’m one who doesn’t find it helpful, and this undoubtedly reflects that I work in a New Zealand context, with people who may not have the language repertoire of those that Melzack drew on. The people I work with don’t understand many of the words (‘Lancinating‘ anyone?), and as with many pain measures, the importance or relevance of the terms used is based on expert opinion rather than the views of those who are experiencing pain themselves. This means the measure may not actually tap into aspects of the experience of pain that mean a lot to people living with it. Main (2016) also points out that interpreting the MPQ is problematic, and perhaps there are alternative measures that might be more useful in clinical practice. Some of the criticisms include the difficulty we have in separating the “perceptual” aspects of pain from the way pain functions in our lives and the way we communicate it, and the MPQ doesn’t have any way to factor in the social context, or the motivational aspects of both pain and its communication.

In a letter to the editor of Pain, Okkels, Kyle and Bech (2016) propose that there should be three factors in the measurement – symptom burden (they suggest pain intensity), side effects (of medication – but what if there’s no medication available?), and improved quality of life (WHO-5). But as Sullivan and Ballantyne (2016) point out in their reply – surely the point of treatment is to improve patients’ lives – “we want to know if it is possible for the patient’s life to move forward again. However it is also important that we do not usurp patients’ authority to judge whether their life has improved” (p. 1574). What weighting do we give to, for example, pain reduction vs improved quality of life? I concur. Even the MPQ with all its history doesn’t quite reflect the “what it means to me to experience this pain”.

Did it help? Answering this critical question is not easy. Pain measurement is needed for furthering our understanding of pain, to ensure clinical management is effective, and to allow us to compare treatment with treatment. But at this point, I don’t know whether our measures reflect relevant aspects of this common human experience.  Is it time to revisit some of these older measures of pain and disability, and critically appraise them in terms of how well they work from the perspectives of the people living with pain? Does this mean taking some time away from high tech measurement and back to conversations with people?

 

(*pronounced “keh-teh” – Maori word for kitbag, and often used to represent knowledge)

Gracely, R. H. (2016). Pain language and evaluation. Pain, 157(7), 1369-1372.

Main, C. J. (2016). Pain assessment in context: A state of the science review of the McGill Pain Questionnaire 40 years on. Pain, 157(7), 1387-1399.

Okkels, N., Kyle, P. R., & Bech, P. (2016). Measuring chronic pain. Pain, 157(7), 1574.

Sullivan, M. D., & Ballantyne, J. (2016). Reply. Pain, 157(7), 1574-1575.

 


Pain measurement: Measuring an experience is like holding water


Measurement in pain is complicated. Firstly it’s an experience, so inherently subjective – how do we measure “taste”, for example? Or “joy”? Secondly, there’s so much riding on its measurement: how much pain relief a person gets, whether a treatment has been successful, whether a person is thought sick enough to be excused from working, whether a person even gets treatment at all…

And even more than these, given it’s so important and we have to use surrogate ways to measure the unmeasurable, we have the language of assessment. In physiotherapy practice, what the person says is called “subjective” while the measurements the clinician takes are called “objective” – as if, because they’re conducted by a clinician using instruments, they’re not biased or “not influenced by personal feelings or opinions in considering and representing facts”. Subjective, in this instance, is defined by Merriam-Webster as “relating to the way a person experiences things in his or her own mind: based on feelings or opinions rather than facts.” Of course, we know that variability exists between clinicians even when carrying out seemingly “objective” tests of, for example, range of movement or muscle strength, or when interpreting radiological images or even conducting a Timed Up and Go test (take a look here at a very good review of this common functional test – click)

In the latest issue of Pain, Professor Stephen Morley reflects on bias and reliability in pain ratings, reminding us that “measurement of psychological variables is an interaction between the individual, the test material, and the context in which the measure is taken” (Morley, 2016). While there are many ways formal testing can be standardised to reduce the amount of bias, it doesn’t completely remove the variability inherent in a measurement situation.

Morley was providing commentary on a study published in the same journal, a study in which participants were given training and prompts each day when they were asked to rate their pain. Actually, three groups were compared: a group without training, a group with training but no prompts, and a group with training and daily prompts (Smith, Amtmann, Askew, Gewandter et al, 2016). The hypothesis was that people given training would provide more consistent pain ratings than those who weren’t. But no, in another twist to the pain story, the results showed that during the first post-training week, participants with training were less reliable than those who simply gave a rating as usual.

Morley considers two possible explanations for this. The first relates to the whole notion of reliability. Reliability is about identifying how much of the variability in a measure is due to the test itself being inaccurate, versus how much is due to genuine variability in the thing being measured, assuming that errors are only random. So perhaps one problem is simply that pain intensity does vary a great deal from day to day. The second relates to the way people make judgements about their own pain intensity. Smith and colleagues identify two main biases (bias = systematic error) – scale anchoring effects (the idea that by giving people a set word or concept to “anchor” their ratings, the tendency to wander off and report pain based only on emotion or setting or memory might be reduced), and daily variations in context that might also influence pain. Smith and colleagues believed that by providing anchors between least and “worst imaginable pain”, they’d be able to guide people to reflect on these same imagined experiences each day, that these imagined experiences would be pretty stable, and that people could compare what they were actually experiencing at the time with these imagined pain intensities.
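One way to see the reliability point is with a toy classical-test-theory decomposition – my framing, not anything from Morley or Smith et al. In a test-retest design, only the variance that is stable between occasions registers as “reliable”, so genuine day-to-day fluctuation in pain is lumped in with measurement error:

```python
def apparent_test_retest_reliability(trait_var, state_var, error_var):
    """Share of observed score variance that is stable across occasions.

    trait_var: variance of the stable, person-level component
    state_var: variance of genuine day-to-day fluctuation in pain
    error_var: variance of random measurement error
    Only the trait component repeats from occasion to occasion, so both
    real fluctuation and error drag the apparent reliability down.
    """
    return trait_var / (trait_var + state_var + error_var)

# With no genuine fluctuation, a slightly noisy measure still looks reliable:
stable = apparent_test_retest_reliability(9.0, 0.0, 1.0)      # 0.9
# Add real day-to-day variation and the coefficient drops, with no change
# in the quality of the instrument itself:
fluctuating = apparent_test_retest_reliability(9.0, 4.0, 1.0)  # ~0.64
```

In other words, a low test-retest figure doesn’t necessarily mean the rating scale is poor – it may mean the pain itself is moving.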

But, and it’s a big but, how do people scale and remember pain? And as Morley asks, “What aspect of the imagined pain is reimagined and used as an anchor at the point of rating?” He points out that re-experiencing the somatosensory-intensity aspect of pain is rare (though people can remember the context in which they experienced that pain, and they can give a summative evaluative assessment such as “oh it was horrible”). Smith and colleagues’ study attempted to control for contextual effects by asking people to reflect only on intensity and duration, and only on pain intensity rather than other associated experiences such as fatigue or stress. This, it must be said, is pretty darned impossible, and Morley again points out that “peak-end” phenomenon (which means that our estimate of pain intensity depends a great deal on how long we think an experience might go on, disparities between what we expect and what we actually feel, and differences between each of us) will bias self-report.

Smith et al (2016) carefully review and discuss their findings, and I strongly encourage readers to read the entire paper themselves. This is important stuff – even though this was an approach designed to help improve pain intensity measurement within treatment trials, what it tells us is that our understanding of pain intensity measurement needs more work, and that some of our assumptions about measuring our pain experience using a simple numeric rating scale might be challenged. The study used people living with chronic pain, and their experiences may be different from those with acute pain (eg post-surgical pain). The training did appear to help people correctly rank their pain in terms of least pain, average pain, and worst pain daily ratings.

What can we learn from this study? I think it’s a good reminder to us to think about our assumptions about ANY kind of measurement in pain. Including what we observe, what we do when carrying out pain assessments, and the influences we don’t yet know about on pain intensity ratings.

Morley, S. (2016). Bias and reliability in pain ratings. Pain, 157(5), 993-994.

Smith, S. M., Amtmann, D., Askew, R. L., Gewandter, J. S., Hunsinger, M., Jensen, M. P., . . . Dworkin, R. H. (2016). Pain intensity rating training: Results from an exploratory study of the ACTTION PROTECCT system. Pain, 157(5), 1056-1064.


Using a new avoidance measure in the clinic


A new measure of avoidance is a pretty good thing. Until now we’ve used self-report questionnaires (such as the Tampa Scale for Kinesiophobia, or the Pain Catastrophising Scale), often combined with a measure of disability like the Oswestry Disability Index, to determine who might be unnecessarily restricting daily activities out of fear of pain or injury. These are useful instruments, but don’t give us the full picture because many people with back pain don’t see that their avoidance might be because of pain-related fear – after all, it makes sense not to do movements that hurt or could be harmful, right?

Behavioural avoidance tests (BAT) are measures developed to assess observable avoidance behaviour. They’ve been used for many years for things like OCD and phobias for both assessments and treatments. The person is asked to approach a feared stimulus in a standardised environment to generate fear-related behaviours without the biases that arise from self-report (like not wanting to look bad, or being unaware of a fear).

This new measure involves asking a person to carry out 10 repetitions of certain movements designed to provoke avoidance. The full instructions for this test are available at this link: click

Essentially, the person is shown how to carry out the movements (demonstrated by the examiner/clinician), then they are asked to do the same set of movements ten times. Each movement is rated: 0 = performs exactly as the clinician does; 1 = movement is performed, but the client uses safety behaviours such as holding the breath, taking medication before doing the task, asking for help, or motor behaviours such as keeping the back straight (rotation and bending movements are involved); 2 = the person avoids doing the movement, and if the person performs fewer than 10 repetitions, those that are not completed are also coded 2. The range of scores obtainable is 0 – 60.
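The scoring rule can be sketched as below. This is a hypothetical helper, not an official scoring script: the grouping of ratings into movement tasks is inferred from the 0 – 60 range described above, so treat the interface as illustrative:

```python
def score_bat(ratings_per_task, reps=10):
    """Total a behavioural avoidance test scored as described above.

    `ratings_per_task` holds one list of ratings per movement task:
    0 = performs exactly as demonstrated,
    1 = performs, but with safety behaviours,
    2 = avoids the movement.
    Repetitions that were not attempted are coded 2, so shorter lists
    are padded with the avoidance code before summing.
    """
    total = 0
    for ratings in ratings_per_task:
        if len(ratings) > reps or any(r not in (0, 1, 2) for r in ratings):
            raise ValueError("expected up to `reps` ratings of 0, 1 or 2 per task")
        # completed repetitions, plus 2 points for each repetition not attempted
        total += sum(ratings) + 2 * (reps - len(ratings))
    return total
```

So someone who completes two tasks exactly as demonstrated but stops a third task after five repetitions would score 10 – all of it from the abandoned repetitions.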

How and when would you use this test?

It’s tempting to rush in and use a new test simply because it’s new and groovy, so some caution is required.

My questions are: (1) does it help me (or the person) obtain a deeper understanding of the contributing factors to their problem? (2) Is it more reliable or more valid than other tests? (3) Is it able to be used in a clinical setting? (4) Does it help me generate better hypotheses as to what’s going on for this person? (5) I also ask about cost, time required, scoring and whether special training is required.

This test is very useful for answering question (1). It provides me with a greater opportunity to review the thoughts, beliefs and behaviours of a person in the moment. This means I can very quickly identify even the subtle safety behaviours, and obtain the “what’s going through your mind” of the person. If I record the movements, I can show the person what’s going on. NB This is NOT intended to be a test of biomechanical efficiency, or to identify “flaws” in movement patterns. This is NOT a physical performance test, it’s a test of behaviour and belief. Don’t even try to use it as a traditional performance test, or I will find you and I will kill (oops, wrong story).

It is more valid than other tests – the authors indicate its scores are more strongly associated with measures of disability than self-report measures of pain-related fear and avoidance behaviour are. This is expected, because it’s possible to be afraid of something but actually do it (public speaking anyone?), and measures of disability don’t consider the cause of that disability (it could be wonky knees, or a dicky ticker!).

It’s easy to do in a clinical setting – a crate of water bottles (~8 kg) and a table (height ~68 cm) are needed to conduct the BAT-Back; the crate used in the study weighed 7.8 kg including six one-litre plastic bottles. One could argue that people might find doing this test in a clinic less threatening than doing it in real life, and this is quite correct. The setting is contained, there’s a health professional around, the load won’t break and there’s no time pressure, so it’s not ecologically valid for many real-world settings – but it’s better than doing a ROM assessment, or just asking the person!

Does it help me generate better hypotheses? Yes it certainly does, provided I take my biomechanical hat off and don’t mix up a BAT with a physical performance assessment. We know that biomechanics are important in some instances, but when it comes to low back pain they don’t seem to have as much influence as a person’s thoughts and beliefs – and more importantly, their tendency to just not do certain movements. This test allows me to go through the thoughts that flash through a person’s mind as they do the movement, helping me and the person more accurately identify what it is about the movement that’s bothering them. Then we can go on to test their belief and establish whether the consequences are, in fact, worse than the effects of avoidance.

Finally, is it cost-effective? Overall I’d say yes – with a caveat. You need to be very good at spotting safety behaviours, and you need to have a very clear understanding about the purpose of this test, and you may need training to develop these skills and the underlying conceptual understanding of behavioural analysis.

When would I use it? Any time I suspect a person is profoundly disabled as a result of their back pain, but does not present with depression or other tissue changes (limb fracture, wonky knees or ankles, etc) that would influence the level of disability. If a person has elevated scores on the TSK or PCS. If they have elevated scores on measures of disability. If I think they may respond to a behavioural approach.

Oh, the authors say very clearly that one of the confounds of this test is when a person has biological factors such as bony changes to the vertebrae, shortened muscles, arthritic knees and so on. So you can put your biomechanical hat on – but remember the overall purpose of this test is to understand what’s going on in the person’s mind when they perform these movements.

Scoring and normative data have not yet been compiled. Perhaps that’s a Masters research project for someone?

Holzapfel, S., Riecke, J., Rief, W., Schneider, J., & Glombiewski, J. A. (in press). Development and validation of the Behavioral Avoidance Test – Back Pain (BAT-Back) for patients with chronic low back pain. Clinical Journal of Pain.


give it a whirl

Fibro fog or losing your marbles: the effect of chronic pain on everyday executive functioning



There are days when I think I’m losing the plot! When my memory fades, I get distracted by random thin— ooh! Is that a cat?!

We all have brain fades, but people with chronic pain have more of them. Sometimes it’s due to the side effects of medication, and often it’s due to poor sleep, or low mood – but whatever the cause, the problem is that people living with chronic pain can find it very hard to direct their attention to what’s important, or to shift their attention away from one thing and on to another.

In an interesting study I found today, Baker, Gibson, Georgiou-Karistianis, Roth and Giummarra (in press) used a brief screening measure to compare the executive functioning of a group of people with chronic pain with that of a matched group of pain-free individuals. The test is the Behaviour Rating Inventory of Executive Function, Adult version (BRIEF-A), which measures Inhibition, Shift, Emotional Control, Initiate, Self-Monitor, Working Memory, Plan/Organize, Task Monitor, and Organization of Materials.

Executive functioning refers to “higher” cortical functions such as being able to attend to complex situations, make the right decision and evaluate the outcome. It’s the function that helps us deal with everyday situations that have novel features – like when we’re driving, doing the grocery shopping, or cooking a meal. It’s long been known that people living with chronic pain experience difficulty with these things, not just because of fatigue and pain when moving, but because of limitations on how well they can concentrate. Along with the impact on emotions (feeling irritable, anxious and down), and physical functioning (having poorer exercise tolerance, limitations in how often or far loads can be lifted, etc), it seems that cognitive impairment is part of the picture when you’re living with chronic pain.

Some of the mechanisms thought to be involved in this are the “interruptive” nature of pain – the experience demands attention, directing attention away from other things and towards pain and pain-related objects and situations; in addition, there are now known to be structural changes in the brain – not only sensory processing and motor function, but also the dorsolateral prefrontal cortex which is needed for complex cognitive tasks.

One of the challenges in testing executive functions in people living with chronic pain is that usually they perform quite well on standard pen and paper tasks – when the room is quiet, there are no distractions, they’re rested and generally feeling calm. But put them in a busy supermarket or shopping mall, or driving a car in a busy highway, and performance is not such an easy thing!

So, for this study the researchers used the self-report questionnaire to ask people about their everyday experiences, which does have some limitations – but the measure has been shown to compare favourably with the real-world experiences of people with other conditions such as substance abuse, prefrontal cortex lesions, and ADHD.

What did they find?

Well, quite simply, they found that 50% of patients showed clinical elevation on the Shift, Emotional Control, Initiate, and Working Memory subscales, with Emotional Control and Working Memory the most elevated.

What does this mean?

It means that chronic pain doesn’t only affect how uncomfortable it might be to move, or sit or stand; and it doesn’t only affect mood and anxiety; and it’s not just a matter of being fogged with medications (although these contribute), instead it shows that there are clear effects of experiencing chronic pain on some important aspects of planning and carrying out complex tasks in the real world.

The real impact of these deficits is not just on daily tasks, but also on how readily people with chronic pain can adopt and integrate all those coping strategies we talk about in pain management programmes. Deciding to use activity pacing, for example, means decision-making on the fly: regulating emotions to deal with the frustration of not getting jobs done, delaying the flush of pleasure of getting things completed, breaking a task down into many parts to work out which is the most important, holding part of a task in working memory to decide what to do next. All of these are complex cortical activities that living with chronic pain can affect.

It means clinicians need to help people learn new techniques slowly, supporting their generalising into daily life by ensuring people aren’t overwhelmed, and perhaps using tools like smartphone alarms or other environmental cues to help people know when to try a different technique. It also means clinicians need to think about assessing how well a person can carry out these complex functions at the beginning of therapy – it might change the way coping strategies are learned, and it might mean considering changes to medication (avoiding opiates, but not only these, because many pain medications affect cognition), and thinking about managing mood promptly.

The BRIEF-A is not the last word in neuropsych testing, but it may be a helpful screening measure to indicate areas for further testing and for helping people live more fully despite chronic pain.


Baker, K., Gibson, S., Georgiou-Karistianis, N., Roth, R., & Giummarra, M. (2015). Everyday executive functioning in chronic pain. The Clinical Journal of Pain. doi: 10.1097/AJP.0000000000000313

Tui

Faking pain – Is there a test for it?


One of the weird things about pain is that no-one knows if you’re faking. To date there hasn’t been a test that can tell whether you’re really in pain, or just faking it. Well, that’s about to change according to researchers in Israel and Canada.

While there have been a whole range of approaches to checking out faking such as facial expression, responses to questionnaires, physical testing and physical examinations, none of these have been without serious criticism. And the implications are pretty important to the person being tested – if you’re sincere, but someone says you’re not, how on earth do you prove that you’re really in pain? For clinicians, the problem is very troubling because allegations of faking can strain a working relationship with a person, and hardly lead to a sense of trust. Yet insurance companies routinely ask clinicians to make determinations about fraudulent access to insurance money – and worst of all, clinicians often feel they have little choice other than to participate.

In this study by Kucyi, Sheinman and Defrin, three hypotheses were tested: 1) Whether feigned performance could be detected using warmth and pain threshold measurements; 2) whether there were changes in the statistical properties of performance when participants were faking; and 3) whether an “interference” or distractor presented during testing interferes with the ability to fake and therefore provide a clue to when someone is being sincere or not.

Using university students (I hope they got course credits for participating!) who were not health science students and were otherwise healthy, the investigators gave very little information about the procedure or hypotheses, to minimise expectancy bias. Participants were then tested using a thermal stimulator to obtain “warmth sensation threshold” and “heat-pain thresholds” – a form of quantitative sensory testing (QST). TENS was used as a distractor in the experimental condition, applied for 2 minutes before measuring the pain threshold and during the heat pain threshold test; the procedure was then repeated in the reverse order, threshold test first, then TENS. Participants were asked to pretend they were in an insurance office, being tested to establish whether they were experiencing genuine pain, after being told the test would be able to tell whether their pain was real.

What did they find out?

Well, in the first condition, where both threshold and warmth detection were used and participants were asked to fake their pain intensity, respondents gave higher warmth detection ratings than normal. Not only this, but their ability to repeat the same response at the same temperature was poorer. Heat pain threshold was also consistently different between the sincere and faked conditions, with heat pain threshold lower (by around 3 degrees) when people were faking.

When the second testing option was carried out (using TENS to distract), heat pain threshold was significantly lower when participants were faking; the variance of the feigned + interference condition was three times that of the sincere condition, and its coefficient of variation (CV) was twice that of the sincere condition.
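To make the variability finding concrete, here’s a small sketch of how the spread of repeated threshold measurements might be summarised; the numbers are invented for illustration and are not the study’s data:

```python
import statistics

def threshold_variability(thresholds):
    """Return mean, SD and coefficient of variation (CV = SD / mean)
    for a series of repeated threshold measurements."""
    m = statistics.mean(thresholds)
    sd = statistics.stdev(thresholds)
    return m, sd, sd / m

# Hypothetical repeated heat-pain thresholds (degrees C):
sincere = [46.8, 47.1, 46.9, 47.0]   # consistent responses
feigned = [44.0, 41.5, 45.2, 42.1]   # lower and more scattered
_, _, cv_sincere = threshold_variability(sincere)
_, _, cv_feigned = threshold_variability(feigned)
print(cv_feigned > cv_sincere)  # -> True: faking shows up as variability
```

The point of the study is exactly this pattern: the mean shifts a little, but it’s the scatter across repetitions, under distraction, that separates feigned from sincere reports.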

What does this mean?

Well, first of all, it means there are some consistent effects of faking in response to tests of warmth and heat-pain threshold when a distractor like TENS is used. Increased warmth thresholds and reduced heat pain thresholds were observed, and these differences were statistically significant. Interestingly, it was only when a distractor was used that increased variability of reports emerged – these authors suggest that people are pretty skilled at giving consistent reports when they’re not being distracted by an additional sensory stimulus.

Now here’s where I begin to pull this apart from a clinical and practical perspective. The authors, to give them credit, indicate that the research is both new and that it may identify some people who do have pain as malingerers. My concerns are that people with chronic pain may not look at all like healthy young university students.

We know very little about the responses to QST by people with different forms of chronic pain. We already know that people with impaired descending noxious inhibitory control respond differently to some forms of QST. We also know that contextual factors including motivation can influence how nervous systems respond to input. But my concerns are far more about the potential harm to those who are tested and found to be malingering when they’re not.

What do you do if you think a person is faking? How do you deal with this? What good does it do to suggest to someone their pain is not real, or isn’t nearly as bad as they make out? Once the words are out of your mouth (or written in a report) any chance of a therapeutic relationship has just flown right out the door. And you’re still left with a person who says they’re having trouble – but now you have an angry, resentful person who has a strong need to prove that they DO have pain.

You see, I think it might be more fruitful to ask why is this person faking pain? If it’s simply for money, surely there are far easier ways to get money than pretending to be disabled by pain? If it’s the case that a person is off out fishing or playing golf or living it up when “supposed” to be in pain, wouldn’t it make more sense to reframe their response as either recovering well (doing what’s healthy) and therefore get on with returning to work; or use a private investigator to demonstrate that he or she is actually capable of doing more than they indicate?

The presence or absence of pain is not really the problem, IMHO. To me we need to address the degree of disability that’s being attributed to pain and work on that. Maybe a greater focus on reducing disability rather than on expensive procedures to remove pain or otherwise get rid of pain is in order?

Kucyi, A., Sheinman, A., Defrin, R. (in press). Distinguishing feigned from sincere performance in psychophysical pain testing. The Journal of Pain.

The bad boys made us do it

How good is the TSK as a measure of “kinesiophobia”?


The Tampa Scale for Kinesiophobia is a measure commonly used to determine whether a person is afraid of moving because of beliefs about harm or damage, with a second scale assessing current avoidance behaviour. It has been a popular measure along with the pain-related fear and avoidance model and together with the model and measures of disability, catastrophising and pain-related anxiety, has become one of the mainstays within pain assessment.

There have been numerous questions raised about this measure in terms of reliability and validity, but it continues to be widely used. The problems with reliability relate mainly to the long version (TSK-17), in which several items are reverse scored. Reverse-scored items often state a negative version of one of the concepts being assessed by the measure, but pose problems for people completing the measure because it’s hard to respond to a double negative. In terms of validity, although the measure has been used a great deal and the original studies examining the psychometric properties of the instrument showed predictive validity, the TSK’s ability to predict response to treatment hasn’t been evaluated.

Chris Gregg and colleagues from The Back Institute and CBI Health Group studied a cohort of 313 people with low back pain attending one of the rehabilitation clinics in New Zealand. Participants completed the TSK at the beginning of treatment, and again at programme completion.  Along with the TSK, participants also completed a numeric pain scale, a modified Low Back Outcome score, and indicated whether they were working or not. These latter measures were considered to be “Quality of Life” measures, although they’re not officially QoL scales.

Before I turn to the study design and statistics, I’ll take a look at the modified Low Back Outcome score. Now I don’t know if you’ve ever searched for something like this, but believe me when I say there are SO many versions of SO many different “modified” back pain questionnaires that it’s really hard to work out exactly which one was used in this study, or how it was modified. I’m assuming it’s the one mentioned in Holt, Shaw, Shetty and Greenough (2002), because that paper is in the references, but I don’t know the modifications made to it. The LBOS is a fairly brief 12-item measure looking at pain intensity “on average” over the last week, work status, functional activities, rest, treatment seeking, analgesic use, and another five broad activities (sex life, sleeping, walking, travelling and dressing). It’s been described as having good internal consistency and test-retest reliability, but validity isn’t mentioned in the 2002 paper.

Now, coming to this study, overall people improved at the completion of the programme. Pain reduced by 1.84 on the NPS, m-LBOS scores increased by 10.4 (a 28% improvement), and TSK scores also improved, by 5.5. Of course, we’d hope that at the end of a programme people would be doing better – though I’d prefer to see outcomes measured at least another three to nine months after programme completion.

The authors looked at the relationship between the TSK and initial scores – there were small statistical relationships between these measures. They then examined the scores between pre-treatment TSK and QoL measures at the end of treatment to establish whether there was a relationship between kinesiophobia and eventual outcome. There wasn’t. At least, not much of a relationship. These authors conclude that the TSK is therefore not a good measure to employ to predict those at high risk of chronicity due to fear of movement. I was a bit disappointed to see that a subscale analysis of the TSK wasn’t carried out – so it’s not possible to know whether change was associated with reduced beliefs about fear of harm/reinjury, with reduced avoidance, or both.

Now here’s where I get a bit tangled up. Wouldn’t you expect the underlying constructs of the TSK (fear of harm/reinjury, and avoidance) to be the targets of a back pain related treatment? Especially one that includes cognitive behavioural therapy, education and movement? If we’re using a measure I think we should USE it within our clinical reasoning, and deliberately target those factors thought to be associated with poor outcomes. If we’re successful, then we should be able to see a change in domains thought to be associated with those constructs. In this programme, given that people were given treatment based on sub-typing, including education and CBT, I would hope that pain-related fear and avoidance would be directly targeted so that people develop effective ways of dealing with unhelpful beliefs and behaviours. To establish whether that had happened I’d want to look at the association between post-treatment TSK and measures of function or disability.

And getting back to the timing of outcome assessment, given that we’re interested in people managing any residual back pain (and in this study people were left with pain scores on the NPS of 3.4 ± 2.4, so they still had some pain), wouldn’t you be interested in how they were managing a bit further down the track? We can (almost) guarantee that people will make changes directly as an effect of attention and structured activities. Measuring what occurs immediately at the completion of a programme may not show us much about what happens once that person has carried on by him or herself for a few months. My experience with chronic pain programmes shows a typical pattern of improvement immediately at the end of a programme, then six weeks later, what can be called regression to the mean, or what we often describe as “the dip” or “the slump” as reality hits the road. A further six months down the track, results have usually improved a bit, and these are generally sustained (or thereabouts) at the following twelve-month follow-up.

So, does this study provide us with evidence that the TSK isn’t useful as a predictive tool? I’m not so sure. I think it does show that there are improvements in TSK, pain, disability and work status immediately at the end of a programme. Unfortunately TSK scores at the end of the programme are not analysed into subscales, so we don’t know which aspects of pain-related fear and avoidance were affected – but we know that they were.

For clinicians working in chronic pain programmes, where people are referred after having remained disabled and/or distressed despite having had prior treatment, the TSK may not be the most useful tool ever. The problems I’ve had with it are that scores in the fear of injury/reinjury subscale are lower when people have been given good pain “education” – but often present with a combined high score because of very high scores on the avoidance subscale.

A lovely study by Bunzli, Smith, Watkins, Schütze and O’Sullivan (2014) looked at what people actually believe about their pain and the associated TSK items. They found that many people DO believe their pain indicates harm, and they also found that people were worried about the effect pain would have on other things – and it’s this part that I find particularly interesting. It may not be the pain that matters as much as the anticipated losses and disruption to normal life that could occur.

The original authors of the “fear-avoidance” model, Vlaeyen and Linton (2012), reviewed the model after 12 years, and agree there is much to be done to refine assessment of pain-related fear. Self-report measures are only as good as the ability, insight and willingness of participants to complete them accurately.

So, is it time to throw the TSK out the window? I don’t think so – at least not yet. There’s more we need to do to understand pain-related fear and subsequent avoidance.


Gregg, C. D., McIntosh, G., Hall, H., Watson, H., Williams, D., & Hoffman, C. (2015). The relationship between the Tampa Scale of Kinesiophobia and low back pain rehabilitation outcomes. The Spine Journal. doi: 10.1016/j.spinee.2015.08.018

Bunzli, S., Smith, A., Watkins, R., Schütze, R., & O’Sullivan, P. (2014). What do people who score highly on the Tampa Scale of Kinesiophobia really believe? A mixed methods investigation in people with chronic non-specific low back pain. The Clinical Journal of Pain. doi: 10.1097/AJP.0000000000000143

Vlaeyen, J. W., & Linton, S. J. (2012). Fear-avoidance model of chronic musculoskeletal pain: 12 years on. Pain, 153(6), 1144-1147. doi: dx.doi.org/10.1016/j.pain.2011.12.009

tanglewood

Central sensitisation – can a questionnaire help find out who is, and who isn’t?


My orthopaedic colleagues have been asking for a way to identify which surgical candidates are unlikely to have a good outcome after major joint surgery. They know that between 10 – 50% of people undergoing surgery will have chronic pain, with 5 – 10% of those people experiencing pain rated >5/10 on a numeric rating scale, where 0 = no pain and 10 = the most severe pain you can imagine (Kehlet, Jensen, & Woolf, 2006). The people with severe pain are the kind of people who hear “well the surgery I did went well…” and can be left wondering why they ever decided to go ahead with their surgery.

Two main factors seem to be important in postsurgical chronic pain: the presence of central sensitisation (usually indicated by reporting chronic pain in at least two other areas of the body) and catastrophising. I’ve discussed catastrophising a great deal here and here .

What I haven’t talked about is central sensitisation. Now, the idea that people can experience chronic pain associated with changes in the way the nervous system responds to stimuli isn’t new, but the neurobiology of it is still slowly being unravelled.  I’m not going to get into definitions or whether having changes in the nervous system equates with “chronic pain” (because pain is an experience and the neurobiology is just the scaffolding that seems present, the two are not equivalent). I want to talk about the measurement of this “sensitisation” and whether a pen and paper tool might be one way of screening people who are at greatest risk of developing problems if they proceed with surgery.

First of all, what symptoms come under this broad heading of “response to an abnormally sensitised nervous system”? Well, Yunus (2007) proposed that because there are similarities between several so-called “medically unexplained symptoms” such as fibromyalgia, chronic fatigue, irritable bowel disorder and so on, perhaps there is a common aetiology for them. Based on evidence that central sensitisation involves enhanced processing of many sensory experiences, Yunus proposed the term “central sensitivity syndrome” – basically a disorder of the nociceptive system. Obviously it’s pretty complicated, but various researchers have proposed “dysregulation in both ascending and descending central nervous system pathways as a result of physical trauma and sustained pain impulses, and the chronic release of pro-inflammatory cytokines by the immune system, as a result of physical trauma or viral infection… including a dysfunction of the stress system, including the hypothalamic–pituitary–adrenal axis” (Mayer, Neblett, Cohen, Howard, Choi et al., 2012, p. 277). (What are “pain impulses”?!)

By proposing this mechanism, various researchers have been able to pull together a number of symptoms that people experience, and their premise is that the more symptoms individuals endorse, the more likely it is that they have an underlying central sensitisation disorder.

The authors completed a literature review to identify symptoms and comorbidities associated with fibromyalgia and the other disorders they believe indicate a sensitised central nervous system. They then developed a self-report instrument and asked people with these problems to complete it, comparing their results with a group of people who wouldn’t usually be thought to have any sensitisation problems (students and staff at a university – we could argue this, but let’s not!).

What they found, after much statistical analysis, is a four-factor measure:

Factor 1 – Physical Symptoms (30.9%)
Factor 2 – Emotional Distress (7.2%)
Factor 3 – Headache/Jaw Symptoms (10.1%)
Factor 4 – Urological Symptoms (5.2%)

Test-retest reliability was established, and because the questionnaire could discriminate between those who reported widespread pain (aka fibromyalgia) and those who had no pain, it’s thought to have discriminant validity as well. (BTW a copy of this measure is included in the appendix of the Mayer, Neblett, Cohen, Howard, Choi, Williams et al (2012) paper – go get it!)

The researchers then went on to look at some norms for the measure and found that amongst people with chronic pain, referred to an outpatient multidisciplinary pain centre, those with more diagnosed “central sensitisation syndromes” scored more highly on this measure, and that a score of 40 on the measure was able to discriminate between those who didn’t have sensitisation and those who did (Neblett, Cohen, Choi, Hartzell, Williams, Mayer & Gatchel, 2013).
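In practice that cutoff makes screening straightforward to compute. A minimal sketch, assuming the CSI’s Part A format of 25 items each scored 0 – 4 (so totals range 0 – 100), and treating scores of 40 or more as clinically significant – check Neblett et al. (2013) for the exact boundary handling:

```python
CSI_CUTOFF = 40  # clinically significant score reported by Neblett et al. (2013)

def csi_screen(item_scores):
    """Sum the CSI items (assumed: 25 items, each rated 0-4) and
    flag totals at or above the published cutoff."""
    if len(item_scores) != 25 or not all(0 <= s <= 4 for s in item_scores):
        raise ValueError("expected 25 item scores in the range 0-4")
    total = sum(item_scores)
    return total, total >= CSI_CUTOFF

print(csi_screen([2] * 25))  # -> (50, True): above the cutoff
```

As the discussion below makes clear, a flag like this only says the person reports many of the listed symptoms – it doesn’t, by itself, demonstrate an underlying sensitised nervous system.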

Well and good. What does it actually mean?

This is where I think this measure can come unstuck. I like the idea of people being asked about their pain and associated symptoms. We often don’t have time in a clinical interview to ask about the enormous range of symptoms people experience, so being able to get people to fill out a pen and paper measure to take stock of the different things people know about themselves is a good thing.

What this measure doesn’t yet do is indicate whether there is any underlying common causal link between these experiences. It’s tautological to list the symptoms people might experience with central sensitisation based on the literature, then ask them to indicate which ones they experience and then conclude “oh yes! this means they have central sensitisation!” All it means is that these people report similar symptoms.

What needs to happen, and is now beginning to occur, are studies examining central nervous system processing and the scores individuals obtain on this measure. That, and establishing whether, by completing this questionnaire, it is possible to predict who is more or less likely to develop things like post-surgical chronic pain. Now that would be a really good measure, and very likely to be used by my orthopaedic colleagues.

In the meantime, whatever this measure indicates, it seems to be able to differentiate between people who are more likely to report “medically unexplained symptoms” and people who don’t. This might be useful as we begin to look at targeting treatment to suit different types of persistent pain. At this point in time, though, I think this measure is more useful in research than clinical practice.


Kehlet, H., Jensen, T. S., & Woolf, C. J. (2006). Persistent postsurgical pain: Risk factors and prevention. Lancet, 367, 1618–1625.

Mayer, T.G., Neblett, R., Cohen, H., Howard, K.J., Choi, Y.H., Williams, M.J., . . . Gatchel, R.J. (2012). The development and psychometric validation of the central sensitization inventory. Pain Practice, 12(4), 276-285. doi: 10.1111/j.1533-2500.2011.00493.x

Neblett, R., Cohen, H., Choi, Y., Hartzell, M.M., Williams, M., Mayer, T.G., & Gatchel, R.J. (2013). The central sensitization inventory (csi): Establishing clinically significant values for identifying central sensitivity syndromes in an outpatient chronic pain sample. The Journal of Pain, 14(5), 438-445. doi: http://dx.doi.org/10.1016/j.jpain.2012.11.012

Roussel, N.A., Nijs, J., Meeus, M., Mylius, V., Fayt, C., & Oostendorp, R. (2013). Central sensitization and altered central pain processing in chronic low back pain: Fact or myth? Clin J Pain, 29, 625-638. doi: 10.1097/AJP.0b013e31826f9a71

Van Oosterwijck, J., Nijs, J., Meeus, M., & Paul, L. (2013). Evidence for central sensitization in chronic whiplash: A systematic literature review. European Journal of Pain, 17(3), 299-312. doi: 10.1002/j.1532-2149.2012.00193.x

Yunus, M.B. (2007). Fibromyalgia and overlapping disorders: The unifying concept of central sensitivity syndromes. Seminars in Arthritis & Rheumatism, 36(6), 339-356.


Accepting pain – or are we measuring something else?


Acceptance. Ask a person living with chronic pain whether they accept their pain and the answer is very likely to be a resounding “No!”. It’s a word that evokes resignation, helplessness and giving up. Or at least that’s what many qualitative papers seem to show (Afrell, Biguet & Rudebeck, 2007; Baker, Gallois, Driedger & Santesso, 2011; Budge, Carryer & Boddy, 2012; Clarke & Iphofen, 2007; Lachapelle, Lavoie & Boudreau, 2008; Risdon, Eccleston, Crombez & McCracken, 2003). I remember thinking, when hearing a person tell me “Oh, I accept my pain”, that this was often a clear indication that underneath it all, the person was pretty angry about the unfairness of pain impacting on their life.

Acceptance is defined in Acceptance and Commitment Therapy (ACT) as “a willingness to remain in contact with and to actively experience particular private experiences” (Hayes, Jacobson, Follette & Dougher, 1994), and from this Lance McCracken and colleagues developed the Chronic Pain Acceptance Questionnaire. This measure has two dimensions: willingness to experience pain, and engaging in values-directed activity despite pain. The other way acceptance has been defined draws from self-regulation theory, which argues that withdrawing from goals that can’t be achieved, in order to turn to goals that can be achieved, is a positive way to cope with life – here, acceptance is defined as disengaging from the goal of getting rid of pain and instead re-engaging in other goals that aren’t affected as much by pain.

Lauwerier, Caes, Van Damme, Goubert, Rosseel and Crombez (2015) have recently published a paper reviewing the various instruments that purport to measure pain acceptance. In their analysis, a coding scheme was developed consisting of the three main aspects of acceptance that seem to represent the concept: disengaging from controlling pain, pain willingness (in certain circumstances), and engaging in other valued activities. These three concepts were drawn from the literature – and then there were the left-over concepts that were also present in measures of acceptance. These are the interesting ones!

The additional five codes were: controlling pain, pain costs, pain benefits, unclear, and no fit.

They identified 18 different instruments, of which five didn’t specifically measure acceptance of chronic pain or illness and were therefore excluded from the study, leaving 13 measures to review. The one mentioned most often in the studies reviewed was the Chronic Pain Acceptance Questionnaire-20.

Moving on to the results, what did these researchers find? And of course, why does it matter?

Well, most of the instruments measured some aspects of acceptance – the Brief Pain Coping Inventory, the Chronic Pain Acceptance Questionnaire-A and CPAQ-20, and the Pain Solutions Questionnaire. The original CPAQ and the PASOL were the only two measures with a moderate (but the highest) percentage of items covering all three acceptance features (disengagement from pain control, pain willingness, and engaging in activity other than pain control). Interestingly, most instruments included “engaging in activities other than pain control”, while the other two factors were less well represented.

Even more interesting is that many of the items in these instruments were classified as “controlling pain” – that is, items about attempts to control pain, typically reverse-scored to indicate how willing individuals are to carry on with life without trying to control pain. At the same time, many of the instruments also measured “pain costs” – for example, “because of my illness, I miss the things I like to do best”.

Then these researchers did some pretty fancy analysis, looking at dimensions contained within all the items from all the measures. What they found was a 2-dimensional solution, with one dimension going from “fully engaged in valued activities” (my description!) to “pain costs”, and the other axis going from “pain willingness” to “controlling pain”.

Conclusions and why this is important

Most of the assessment measures contained some of the concepts thought to be important in pain acceptance, but the aspect most commonly found was engaging in activities other than controlling pain. Items measuring disengagement from trying to control pain, and pain willingness, were found less often, while many measures incorporated pain control, and some included items reflecting pain costs or items that were unclear. This research seems to show that engagement in activities other than pain control, and pain willingness, are key features of items measuring acceptance, but at the same time it shows that not many measures look at both of these concepts together. Additionally, this research shows that many supposedly “acceptance” instruments actually measure attempts to control pain and then reverse-score these items – this can mean that people using these measures interpret them as avoidance measures rather than measures of willingness to experience pain, appealing to quite a different theoretical model (the avoidance or fear-avoidance model) rather than a pain acceptance model.

Why is this research important? Well, acceptance is still a relatively new concept in pain research and clinical practice. While it has been talked about a great deal, and there are numerous studies of acceptance, the instruments developed for such research have not been around very long, and as we can see, don’t always adequately represent the fullness of the theoretical domains. Some aspects are not well-represented or are at risk of being misinterpreted. What works in a research setting may not always be accurately transferred to a clinical setting, especially if clinicians pick up a new measure without reading the theoretical basis for its development.

I also argue on the basis of my research that “disengaging from trying to control pain” doesn’t only need to be represented by items suggesting that people no longer seek treatment. From my findings based on people who live well with chronic pain, treatment is still a feature – but the investment in the outcome of treatment is far less. It’s less important that the pain is removed; treatment is “an option” rather than a necessary part of “returning to normal”.

I also argue that pain willingness is conditional upon the value placed on the activities the individual wants to do. If an activity is boring, unpleasant, hard work, or doesn’t hold rewards for the individual, the person is more than likely to avoid it; but if it’s highly valued, then pain becomes a less dominant factor in the decision to do it.

Why should clinicians care? Because acceptance is an exciting and fruitful aspect of living well with pain that we can incorporate into our treatments. Acceptance is about learning to live well, “being with” or “making space for” the presence of pain, so that the other aspects of life can be engaged in. That’s important given how few people can have their pain completely relieved.

Lauwerier, E., Caes, L., Van Damme, S., Goubert, L., Rosseel, Y., & Crombez, G. (2015). Acceptance: What’s in a name? A content analysis of acceptance instruments in individuals with chronic pain. The Journal of Pain, 16(4), 306-317. doi: 10.1016/j.jpain.2015.01.001

Afrell, M., Biguet, G., & Rudebeck, C.E. (2007). Living with a body in pain — between acceptance and denial. Scandinavian Journal of Caring Sciences, 21(3), 291-296.

Baker, S.C., Gallois, C., Driedger, S., & Santesso, N. (2011). Communication accommodation and managing musculoskeletal disorders: Doctors’ and patients’ perspectives. Health Communication, 26(4), 379-388. doi: 10.1080/10410236.2010.551583

Budge, C., Carryer, J., & Boddy, J. (2012). Learning from people with chronic pain: Messages for primary care practitioners. Journal of Primary Health Care, 4(4), 306-312.

Clarke, K.A., & Iphofen, R. (2007). Accepting pain management or seeking pain cure: An exploration of patients’ attitudes to chronic pain. Pain Management Nursing, 8(2), 102-110.

Eccleston, C., & Crombez, G. (2007). Worry and chronic pain: A misdirected problem solving model. Pain, 132(3), 233-236.

Hayes, S.C., Jacobson, N.S., Follette, V.M., & Dougher, M.J. (Eds.). (1994). Acceptance and Change: Content and Context in Psychotherapy. Reno, NV: Context Press.

Lachapelle, D.L., Lavoie, S., & Boudreau, A. (2008). The meaning and process of pain acceptance. Perceptions of women living with arthritis and fibromyalgia. Pain Research & Management, 13(3), 201-210.

Risdon, A., Eccleston, C., Crombez, G., & McCracken, L. (2003). How can we learn to live with pain? A Q-methodological analysis of the diverse understandings of acceptance of chronic pain. Social Science & Medicine, 56(2), 375-386. doi: 10.1016/S0277-9536(02)00043-6