Assessment

Targeting the people who need it most


A couple of things came to mind today as I thought about this post: the first was an article in the local newspaper about a man complaining that the government is "promoting disability" because he couldn't get surgery for a disc prolapse – and the pain was affecting his ability to work. The second was how to direct the right treatment at the right person at the right time – and how we can be derailed either by wholesale over-servicing ("everyone needs treatment X"), or by overburdening people with assessment just to give a fairly basic treatment.

Now with the first man, I don't know his clinical situation – what I do know is that there are many people every day who must learn to live with their pain because there simply is not an effective treatment of any kind, and that amongst these people are those who go on to live wonderful lives despite their pain. I wonder if this man has ever been offered comprehensive self-management while he waits for his surgery, and whether the government could spread some funding away from surgery as the primary option for such pain problems – and instead provide better funding for the wider range of approaches offered through the interdisciplinary pain management centres (approaches which include injection procedures, physiotherapy, psychology, occupational therapy and medications). When there is an effective treatment (and this is arguable in the case of disc prolapse – in fact, it's difficult to know whether even MRI imaging can give a clear indication of who might respond best to which treatment (Steffens, Hancock, Pereira et al., 2016)), we should be able to give it, provided it fits within our country's health budget. Ahh – that's the problem, isn't it… expensive treatments mean fewer people can get basic treatment. And with lumbar disc prolapse, the evidence for surgery is less favourable than many people recognise (Deyo & Mirza, 2016) – they state:

“Patients with severe or progressive neurologic deficits require a referral for surgery. Elective surgery is an option for patients with congruent clinical and MRI findings and a condition that does not improve within 6 weeks. The major benefit of surgery is relief of sciatica that is faster than relief with conservative treatment, but results of early surgical and prolonged conservative treatment tend to be similar at 1 year of follow-up. Patients and physicians should share in decision making.”

So here we have a person with lots of pain, experiencing a great deal of distress, and reducing his work because of pain and disability. My question now (and not for this person in particular) is whether being distressed is equivalent to needing psychological help. How would we know?

There's been a tendency in pain management to bring in psychologists to help people in this kind of situation. Sometimes people being referred for such help feel aggrieved: "My problem isn't psychological!" they say, and they're quite correct. But having a problem that isn't psychological doesn't mean some psychological help can't be useful – unless, by doing so, we prevent people who have serious psychological health problems from being seen. And in New Zealand there are incredible shortages in mental health service delivery – in Christchurch alone we've had an increase in use of mental health services of more than 60% over the past six years since the massive 2010/2011 earthquakes (The Press).

People living with persistent pain often do experience depression, anxiety, poor sleep, challenges to relationships and, in general, feel demoralised and frustrated. In a recent study of those attending a specialist pain management centre, 60% met criteria for "probable depression" while 33.8% met criteria for "severe depression" (Rayner, Hotopf, Petkova, Matcham, Simpson & McCracken, 2016). BUT that's 40% who don't – and it's my belief that providing psychological services to that 40% by default takes resources away from people who really need them.

So, what do we do? Well, one step forward might be to use effective screening tools to establish who has a serious psychological need, and who may respond just as well to reactivation and return to usual activities with the support of the less expensive (but no less skilled) occupational therapy and physiotherapy teams. Vaegter, Handberg, & Kent (in press) have just published a study showing that brief psychological screening measures can be useful for ruling out those with psychological conditions. While we would never use just a questionnaire for diagnosis, when combined with clinical assessment and interview, brief forms of questionnaires can be really helpful for establishing risk and identifying areas for further assessment. This study provides some support for using single-item questions to identify those who need more in-depth assessment, and those who don't need this level of attention. I like that! The idea that we can triage out those who probably don't need the whole toolbox hurled at them is a great one.
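
As a rough illustration of what "ruling out" means in screening terms (the numbers below are entirely hypothetical, not drawn from Vaegter and colleagues' data): a brief question with high sensitivity misses few true cases, so a negative screen carries a high negative predictive value and can reasonably be used to triage someone away from in-depth psychological assessment.

```python
# Hypothetical sketch: why a sensitive screening question is good at "ruling out".
# The counts below are invented for illustration only.

def screening_summary(tp, fp, fn, tn):
    """Return sensitivity, specificity and negative predictive value
    from a 2x2 table of screening question vs full clinical assessment."""
    sensitivity = tp / (tp + fn)   # proportion of true cases the question picks up
    specificity = tn / (tn + fp)   # proportion of non-cases correctly screened negative
    npv = tn / (tn + fn)           # if the screen is negative, how likely is "no condition"?
    return sensitivity, specificity, npv

# Imagine 200 people attending a pain service, 80 of whom meet criteria on full assessment.
# A single-item question that flags 72 of those 80 (plus 30 false alarms) looks like this:
sens, spec, npv = screening_summary(tp=72, fp=30, fn=8, tn=90)
print(f"sensitivity {sens:.2f}, specificity {spec:.2f}, NPV {npv:.2f}")
# High sensitivity -> few missed cases -> a negative screen is reasonably safe to act on,
# which is the "ruling out" role described above.
```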

Perhaps the New Zealand politicians, as they begin the run-up to the general election at the end of the year, could be asked to thoughtfully consider the rational distribution of healthcare, with a greater emphasis on targeted use of allied health and of expensive surgery.

 

Deyo, R. A., & Mirza, S. K. (2016). Herniated Lumbar Intervertebral Disk. New England Journal of Medicine, 374(18), 1763-1772.

Hahne, A. J., Ford, J. J., & McMeeken, J. M. (2010). Conservative management of lumbar disc herniation with associated radiculopathy: A systematic review. Spine, 35(11), E488-504.

Koffel, E., Kroenke, K., Bair, M. J., Leverty, D., Polusny, M. A., & Krebs, E. E. (2016). The bidirectional relationship between sleep complaints and pain: Analysis of data from a randomized trial. Health Psychology, 35(1), 41-49.

Rayner, L., Hotopf, M., Petkova, H., Matcham, F., Simpson, A., & McCracken, L. M. (2016). Depression in patients with chronic pain attending a specialised pain treatment centre: Prevalence and impact on health care costs. Pain, 157(7), 1472-1479. doi:10.1097/j.pain.0000000000000542

Steffens, D., Hancock, M. J., Pereira, L. S., et al. (2016). Do MRI findings identify patients with low back pain or sciatica who respond better to particular interventions? A systematic review. European Spine Journal, 25, 1170. doi:10.1007/s00586-015-4195-4

Vaegter, H. B., Handberg, G., & Kent, P. (in press). Brief psychological screening questions can be useful for ruling out psychological conditions in patients with chronic pain. Clinical Journal of Pain.

What do we do with those questionnaires?


Courtesy of many influences in pain management practice, you'd have to have been hiding under a rock, or maybe be some sort of dinosaur, not to have noticed the increasing emphasis on using questionnaires to measure factors such as pain catastrophising, depression or avoidance. The problem is I'm not sure we've all been certain about what to do with the results. It's not uncommon for me to hear people saying "Oh but once I see psychosocial factors there, I just refer on", or "they're useful when the person's not responding to my treatment, but otherwise…", or "we use them for outcome measures, but they're not much use for my treatment planning".

I think many clinicians regard psychosocial questionnaires as all very well – but "intuition" will do, "…and what difference would it make to my treatment anyway?"

Today I thought I’d deconstruct the Pain Catastrophising Scale and show what it really means in clinical practice.

The Pain Catastrophising Scale is a well-known and very useful measure of an individual’s tendency to “think the worst” when they’re considering their pain. Catastrophising is defined as “an exaggerated negative mental set brought to bear during actual or anticipated painful experience” (Sullivan et al., 2001). The questionnaire was first developed by Sullivan, Bishop and Pivik in 1995, and the full copy including an extensive manual is available here. Keep returning to that page because updates are made frequently, providing more information about the utility of the measure.

The questionnaire itself is a 13-item measure using a 0 – 4 Likert-type scale from 0 = “not at all” to 4 = “all the time”. Respondents are instructed to “indicate the degree to which you have these thoughts and feelings when you are experiencing pain”.

There are three subscales measuring three major dimensions of catastrophising: rumination “I can’t stop thinking about how much it hurts”; magnification “I worry that something serious may happen”; and helplessness “It’s awful and I feel that it overwhelms me”.

To score the instrument, simply sum the responses to all 13 items; to get a better idea of how to help a person, the subscales are calculated as follows:

Rumination: sum items 8, 9, 10 and 11

Magnification: sum items 6, 7 and 13

Helplessness: sum items 1, 2, 3, 4, 5 and 12

There's not a lot of point in having numbers without knowing what they mean, so the manual provides means and standard deviations from a population of individuals with injuries leading to lost time from work in Nova Scotia, Canada.

Clinicians are typically interested in whether the person sitting in front of them is likely to have trouble managing their pain, so the manual also provides "cut off" scores for what could be described as "clinically relevant" levels of catastrophising. A total score of 30 or more is thought to represent the 75th percentile of scores obtained by individuals with chronic pain.
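
For clarity, here's a minimal scoring sketch following the item groupings and the 30-point cut-off described above; it assumes the standard 13-item version and is illustrative only.

```python
# Minimal sketch of PCS scoring, following the item groupings listed above.
# `responses` maps item number (1-13) to the 0-4 rating the person gave.

SUBSCALES = {
    "rumination":    [8, 9, 10, 11],
    "magnification": [6, 7, 13],
    "helplessness":  [1, 2, 3, 4, 5, 12],
}

def score_pcs(responses: dict[int, int]) -> dict:
    if set(responses) != set(range(1, 14)):
        raise ValueError("Expected ratings for all 13 items")
    if not all(0 <= v <= 4 for v in responses.values()):
        raise ValueError("Ratings must be on the 0-4 scale")
    scores = {name: sum(responses[i] for i in items) for name, items in SUBSCALES.items()}
    scores["total"] = sum(responses.values())               # 0-52 possible
    scores["clinically_relevant"] = scores["total"] >= 30   # 75th percentile cut-off noted above
    return scores

# Example: a person rating most items 2, with heavier rumination items
example = {i: 2 for i in range(1, 14)}
example.update({8: 3, 9: 3, 10: 4})
print(score_pcs(example))
```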

The “so what” question

Cutting to the chase, the question is “so what”? What difference will getting this information from someone make to my clinical reasoning?

Leaving aside the enormous body of literature showing a relationship between high levels of catastrophising and generally poor responses to traditional treatments that address pain alone (including surgery for major joint replacement, recovery from multiple orthopaedic trauma, low back pain, shoulder pain etc), I think it’s helpful to dig down into what the three subscales tell us about the person we’re working with. It’s once we understand these tendencies that we can begin to work out how our approach with someone who has high levels of rumination might differ from what we’ll do when working with someone who has high levels of helplessness.

As an aside and being upfront, I think it’s important to remember that a questionnaire score will only tell you what a person wants you to know. Questionnaires are NOT X-rays of the mind! They’re just convenient ways to ask the same questions more than once, to collect the answers and compare what this person says with the responses from a whole lot of other people, and they allow us to organise information in a way that we might not think to do otherwise.  I also think it’s really important NOT to label a person as “a catastrophiser” as if this is a choice the person has made. People will have all sorts of reasons for tending to think the way they do, and judging someone is unprofessional and unethical.

Rumination

Rumination is that thing we do when a thought just won't get out of our mind. You know the one – the ear worm, the endless round and round, especially at night, when we can't get our mind off the things we're worrying about. If a person has trouble dragging his or her attention away, there are some useful things we can suggest. One theory about rumination is that it's there as a sort of problem-solving strategy, but one that has gone haywire.

Mindfulness can help, so that people can notice their thoughts but not get hooked into them. I like to use this both as a thought strategy and as a way of scanning the body, just noticing not only where the pain is experienced, but also where it is not.

"Fifteen minutes of worry" can also help – setting aside one specific time of the day (I like 7.00pm – 7.15pm) where you have to write down everything you're worried about for a whole fifteen minutes without stopping. By also telling yourself throughout the day "I'm not worrying about this until tonight" and afterwards saying "I've already worried about this so I don't need to right now", worrying and ruminating can be contained. Being present with the thoughts during that 15 minutes also reduces the threat value of the thought content.

Magnification

This is the tendency to think of the worst possible thing rather than the most likely outcome, and it's common! Magnification can really increase the distress and the "freeze" response to a situation. If a person is thinking of all the worst possible outcomes, it's really hard for them to focus on what is actually happening in the here and now. There are some adaptive features to magnification – if I've prepared for the worst and it doesn't happen, then I'm in a good position to carry on – but in some people this process becomes so overwhelming that their ability to plan is stopped in its tracks.

Once again, mindfulness can be really useful here, particularly paying attention to what is actually happening in the here and now, rather than what might happen or what has happened. Mindful attention to breathing, body and thoughts can help reduce the “freeze” response, and allow some space for problem solving.

Of course, accurate information presented in nonthreatening terms and in ways the person can process is important to de-threaten the experience of pain. This is at the heart of “explain pain” approaches – and it’s useful. What’s important, however, is to directly address the main concern of the person – and it may not be the pain itself, but the beliefs about what pain will mean in terms of being a good parent, holding down a job, maintaining intimacy, being responsible and reliable. It’s crucial to find out what the person is really concerned about – and then ensure your “reassurance” is really reassuring.

Helplessness

Helplessness is that feeling of “there’s nothing I can do to avoid this awful outcome so I won’t do anything”. It’s a precursor to feelings of depression and certainly part of feeling overwhelmed and out of control.

When a person is feeling helpless, it's important to help them regain a sense of self-efficacy, or confidence that they CAN do something to help themselves, to exert some sort of control over their situation. It might be tempting to focus on helping them gain control over pain intensity, but because pain intensity is often so variable and influenced by numerous factors, it might be more useful to help the person achieve some small goals that are definitely achievable. I often begin with breathing because it's a foundation for mindfulness and relaxation, and has a direct influence on physiological arousal.

You might also begin with some exercise or daily activities that are well within the capabilities of the person you're seeing. I like walking as a first step (no pun intended) because it doesn't require any equipment, it's something we all do, and it can be readily titrated to add difficulty. It's also something that can be generalised into so many different environments. In a physiotherapy situation I'd like to see PTs consider exercise as their medium for helping a person experience a sense of achievement and control, rather than as a means to an end (i.e. to "fix" some sort of deficit).

To conclude
Questionnaires don't add value until they're USED. I think it's unethical to administer a questionnaire without knowing what it means, without using the results, and without integrating the results into clinical reasoning. The problem is that so many questionnaires are based on psychological models, and these haven't been integrated into physiotherapy or occupational therapy clinical reasoning models. Maybe it's time to work out how to do this?

Sullivan, M. J. L., Bishop, S., & Pivik, J. (1995). The Pain Catastrophizing Scale: Development and validation. Psychological Assessment, 7, 524-532.

Main, C. J., Foster, N., & Buchbinder, R. (2010). How important are back pain beliefs and expectations for satisfactory recovery from back pain? Best Practice & Research Clinical Rheumatology, 24(2), 205-217. doi:10.1016/j.berh.2009.12.012

Sturgeon, J. A., Zautra, A. J., & Arewasikporn, A. (2014). A multilevel structural equation modeling analysis of vulnerabilities and resilience resources influencing affective adaptation to chronic pain. Pain, 155(2), 292-298. doi:10.1016/j.pain.2013.10.007

Ambiguity and uncertainty


Humans vary in how comfortable we are with uncertainty or ambiguity. Tolerance of ambiguity is a construct discussed in the cognitive and experimental research literature, and refers to the degree to which a person prefers black and white situations, where "there is an aversive reaction to ambiguous situations because the lack of information makes it difficult to assess risk and correctly make a decision. These situations are perceived as a threat and source of discomfort. Reactions to the perceived threat are stress, avoidance, delay, suppression, or denial" (Furnham & Marks, 2013, p. 718). Tolerance of uncertainty is often discussed in relation to the stress and emotions associated with being in an ambiguous situation, or as a future-oriented trait, whereas tolerance of ambiguity concerns responding to an ambiguous situation in the present. Suffice to say, for some individuals the need to be certain and clear means they find it very difficult to be in situations where multiple outcomes are possible and where information is messy. As a result, they find ways to counter the unease, ranging from avoiding making a decision to authoritatively dictating what "should" be done (or not done).

How does this affect us in a clinical setting? Well, both parties in this setting can have varying degrees of comfort with ambiguity.

Our clients may find it difficult to deal with not knowing their diagnosis, the cause of their painful experience, or the time-frame of its resolution, and with managing the myriad uncertainties that occur when routines are disrupted by the unexpected. For example, workers from the UK were interviewed about their unemployment as a result of low back pain. Uncertainty (both physical and financial) emerged as one of the major themes from interviews about their experience of unemployment (Patel, Greasley, Watson, 2007). Annika Lillrank, in a study from 2003, found that resolving diagnostic uncertainty was a critical point in the trajectory of those living with low back pain (Lillrank, 2003).

But it's not just clients who find it hard to deal with uncertainty – clinicians do too. Slade, Molloy and Keating (2012) found that physiotherapists believe patients want a clear diagnosis but feel challenged when they're faced with diagnostic uncertainty. What then happens is a temptation to be critical of patients if they fail to improve, a tendency to seek support from other more senior colleagues, and a sense of feeling unprepared by their training to deal with this common situation. The response to uncertainty, at least in this study, was for clinicians to "educate" care-seekers about their injury/diagnosis despite diagnostic uncertainty (my italics), to hold a strong desire to see rapid improvements, and to attribute lack of progress to the client when the client either doesn't want "education" or fails to improve (Slade, Molloy & Keating, 2012).

Physiotherapists are not alone in this tendency: there is a large body of literature discussing so-called "medically unexplained diseases" which, naturally, include chronic pain disorders. For example, Bekkelund and Salvesen (2006) found that more referrals were made to neurologists when the clinician felt uncertain about a diagnosis of migraine. GPs, in a study by Rosser (1996), were more likely to refer to specialists in part because they were uncertain – while specialists, dealing as they do with a narrower range of symptoms and body systems, face less diagnostic uncertainty. Surprisingly, despite the difference in degree of uncertainty, GPs order fewer tests and procedures yet often produce identical outcomes!

How do we manage uncertainty and ambiguity?

Some of us will want to apply subtypes, groupings, algorithms – means of controlling the degree of uncertainty and ambiguity in our clinical practice. Some of the findings from various tests (eg palpation or tender point examination) are used as reasons for following a certain clinical rule of thumb. In physiotherapy, medicine and to a certain extent my own field of occupational therapy, there is a tendency to “see nails because all I have is a hammer” in an attempt to fit a client into a certain clinical rule or process. We see endless publications identifying “subtypes” and various ways to cut down the uncertainty within our field, particularly with respect to low back pain where we really are dealing with uncertainty.

Some of these subgroupings may appear effective – I remember the enthusiasm for leg length discrepancies, muscle “imbalance”, and more recently neutral spine and core stability – because for some people these approaches were helpful! Over time, the enthusiasm has waned.

Others of us apply what we could call an eclectic approach – a bit of this, a bit of that, something I like to do, something that I just learned – and yes, even some of these approaches seem to work.

My concern is twofold. (1) What is the clinical reasoning behind adopting either a rule-governed algorithm or subtyping approach, or an eclectic approach? Why use X instead of Y? And are we reasoning after the fact to justify our approach? (2) What do we do if it doesn't work? Where does that leave us? As Slade, Molloy and Keating (2012) found, do we begin blaming the patient when our hammer fails to find a nail?

I've long advocated working to generate multiple hypotheses to explain how and why a person is presenting in this way at this time. It's a case formulation approach where, collaborating with the person and informed by broad assessment across multiple domains that are known to be associated with pain, a set of possible explanations (hypotheses) is generated. Then we systematically test these, either through further clinical assessment or by providing an intervention and carefully monitoring the outcome. This approach doesn't resolve uncertainty – but it does allow some time to de-bias our clinical reasoning, it involves the client in sorting out what might be going on, it means we have more than one way to approach the problem (the one the client identifies, not just our own!), and it means we have some way of holding all this ambiguous and uncertain information in place so we can see what's going on. I know case formulations are imperfect, and they don't solve anything in themselves (see Delle-Vergini & Day (2016) for a recent review of case formulation in forensic practice – not too different from ordinary clinical practice in musculoskeletal management, IMHO). What they do provide is a systematic process to follow that can incorporate uncertainty without needing a clinician to jump to conclusions.

I'd love your thoughts on managing uncertainty as a clinician in your daily practice. How do you deal with it? Is there room for uncertainty and ambiguity? What would happen if we could sit with this uncertainty without jumping in to treat, for just a little longer? Could mindfulness be useful? What if you're someone who experiences a great deal of empathy for people who are distressed – can you sit with not knowing while in the presence of someone who is hurting?

 

Bekkelund, S., & Salvesen, R. (2006). Is uncertain diagnosis a more frequent reason for referring migraine patients to neurologist than other headache syndromes? European Journal of Neurology, 13(12), 1370-1373. doi:10.1111/j.1468-1331.2006.01523.x
Delle-Vergini, V., & Day, A. (2016). Case formulation in forensic practice: Challenges and opportunities. The Journal of Forensic Practice, 18(3). doi:10.1108/JFP-01-2016-0005
Furnham, A., & Marks, J. (2013). Tolerance of ambiguity: A review of the recent literature. Psychology, 4(9). doi:10.4236/psych.2013.49102
Lillrank, A. (2003). Back pain and the resolution of diagnostic uncertainty in illness narratives. Social Science & Medicine, 57(6), 1045-1054. doi:10.1016/S0277-9536(02)00479-3
Patel, S., Greasley, K., & Watson, P. J. (2007). Barriers to rehabilitation and return to work for unemployed chronic pain patients: A qualitative study. European Journal of Pain, 11(8), 831-840.
Rosser, W. W. (1996). Approach to diagnosis by primary care clinicians and specialists: Is there a difference? Journal of Family Practice, 42(2), 139-144.
Slade, S. C., Molloy, E., & Keating, J. L. (2012). The dilemma of diagnostic uncertainty when treating people with chronic low back pain: A qualitative study. Clinical Rehabilitation, 26(6), 558-569. doi:10.1177/0269215511420179

Did it help? Questions and debate in pain measurement


Pain intensity, quality and location are three important domains to consider in pain measurement. And in our kete* of assessment tools we have many to choose from! A current (ongoing?) debate in the august pages of Pain, the journal of the International Association for the Study of Pain, shows that the issue of how best to collate the various facets of our experience of pain is far from decided – or even understood.

The McGill Pain Questionnaire (MPQ) is one of the most venerable old measurement instruments in the pain world. It is designed to evaluate the qualities of pain – the "what does it feel like" of sensory-discriminative components, evaluative components, and cognitive-affective components. There are 20 categories in the tool, and these examine (or attempt to measure) mechanical qualities, thermal qualities, location and time. Gracely (2016), in an editorial piece, compares the McGill to a set of paint colour samples – if pain intensity equals shades of grey, then the other qualities are other colours – blue, green, red – in shades or tints, so we can mix and match to arrive at a unique understanding of what this pain is "like" for another person.

To begin to understand the MPQ, it's important to understand how it was developed. Melzack recognised that pain intensity measurement, using a dolorimeter (yes, there is such a thing – this is not an endorsement, just to prove it's there), doesn't capture the qualities of pain experienced, nor the impact of previous experiences. At the time, Melzack and Wall were working on their gate control theory of pain, so it's useful to remember that this had not yet been published, and specificity theory was holding sway – arguing that pain is a "specific modality of cutaneous sensation" – while pattern theory held that the experience reflects the nervous system's ability to "select and abstract" relevant information (Main, 2016). So Melzack adopted a previous list of 44 words, carried out a literature review, and recorded the words used by his patients. Guided by his own three-dimensional model of pain, he generated three groups of descriptors to begin to establish a sort of "quality intensity scale". These were then whittled down to the 78 words that have been used since – and by used, I mean probably the most used instrument ever! Except for the VAS.

There are arguments against the MPQ – I'm one who doesn't find it helpful, and this undoubtedly reflects that I work in a New Zealand context, with people who may not have the language repertoire of those that Melzack drew on. The people I work with don't understand many of the words ('Lancinating' anyone?), and, like many pain measures, the importance or relevance of the terms used in this measure is based on expert opinion rather than the views of those who are experiencing pain themselves. This means the measure may not actually tap into aspects of the experience of pain that mean a lot to people living with it. Main (2016) also points out that interpreting the MPQ is problematic, and perhaps there are alternative measures that might be more useful in clinical practice. Some of the criticisms include the difficulty we have in separating the "perceptual" aspects of pain from the way pain functions in our lives and the way we communicate it, and the fact that the MPQ doesn't have any way to factor in the social context, or the motivational aspects of both pain and its communication.

In a letter to the editor of Pain, Okkels, Kyle and Bech (2016) propose that there should be three factors in the measurement – symptom burden (they suggest pain intensity), side effects (of medication – but what if there's no medication available?), and improved quality of life (WHO-5). But as Sullivan and Ballantyne (2016) point out in their reply – surely the point of treatment is to improve patients' lives: "we want to know if it is possible for the patient's life to move forward again. However it is also important that we do not usurp patients' authority to judge whether their life has improved" (p. 1574). What weighting do we give to, for example, pain reduction vs improved quality of life? I concur. Even the MPQ, with all its history, doesn't quite reflect the "what it means to me to experience this pain".

Did it help? Answering this critical question is not easy. Pain measurement is needed to further our understanding of pain, to ensure clinical management is effective, and to allow us to compare treatment with treatment. But at this point, I don't know whether our measures reflect relevant aspects of this common human experience. Is it time to revisit some of these older measures of pain and disability, and critically appraise them in terms of how well they work from the perspectives of the people living with pain? Does this mean taking some time away from high-tech measurement and back to conversations with people?

 

(*pronounced “keh-teh” – Maori word for kitbag, and often used to represent knowledge)

Gracely, R. H. (2016). Pain language and evaluation. Pain, 157(7), 1369-1372.

Main, C. J. (2016). Pain assessment in context: A state of the science review of the McGill Pain Questionnaire 40 years on. Pain, 157(7), 1387-1399.

Okkels, N., Kyle, P. R., & Bech, P. (2016). Measuring chronic pain. Pain, 157(7), 1574.

Sullivan, M. D., & Ballantyne, J. (2016). Reply. Pain, 157(7), 1574-1575.

 

Pain measurement: Measuring an experience is like holding water


Measurement in pain is complicated. Firstly it’s an experience, so inherently subjective – how do we measure “taste”, for example? Or “joy”? Secondly, there’s so much riding on its measurement: how much pain relief a person gets, whether a treatment has been successful, whether a person is thought sick enough to be excused from working, whether a person even gets treatment at all…

And even more than these, given it's so important and we have to use surrogate ways to measure the unmeasurable, there is the language of assessment. In physiotherapy practice, what the person says is called "subjective" while the measurements the clinician takes are called "objective" – as if, by being conducted by a clinician and by using instruments, they're not biased or "not influenced by personal feelings or opinions in considering and representing facts". Subjective, in this instance, is defined by Merriam-Webster as "relating to the way a person experiences things in his or her own mind; based on feelings or opinions rather than facts." Of course, we know that variability exists between clinicians even when carrying out seemingly "objective" tests of, for example, range of movement or muscle strength, when interpreting radiological images, or even when conducting a Timed Up and Go test (take a look here at a very good review of this common functional test – click).

In the latest issue of Pain, Professor Stephen Morley reflects on bias and reliability in pain ratings, reminding us that “measurement of psychological variables is an interaction between the individual, the test material, and the context in which the measure is taken” (Morley, 2016). While there are many ways formal testing can be standardised to reduce the amount of bias, it doesn’t completely remove the variability inherent in a measurement situation.

Morley was providing commentary on a study published in the same journal, a study in which participants were given training and prompts each day when they were asked to rate their pain. Actually, three groups were compared: a group without training, a group with training but no prompts, and a group with training and daily prompts (Smith, Amtmann, Askew, Gewandter et al, 2016). The hypothesis was that people given training would provide more consistent pain ratings than those who weren’t. But no, in another twist to the pain story, the results showed that during the first post-training week, participants with training were less reliable than those who simply gave a rating as usual.

Morley considers two possible explanations for this – the first relates to the whole notion of reliability. Reliability is about identifying how much of the variability in scores is due to the test being a bit inaccurate, versus how much is due to genuine variability in the thing being measured, assuming that errors are only random. So perhaps one problem is that pain intensity really does vary a great deal from day to day. The second reason relates to the way people make judgements about their own pain intensity. Smith and colleagues identify two main sources of bias (bias = systematic error) – scale anchoring effects (the idea that by giving people a set word or concept to "anchor" their ratings, the tendency to wander off and report pain based only on emotion or setting or memory might be reduced), and daily variations in context that might also influence pain. Smith and colleagues believed that by providing anchors between least pain and "worst imaginable pain", they'd be able to guide people to reflect on these same imagined experiences each day, that these imagined experiences would be pretty stable, and that people could compare what they were actually experiencing at the time with these imagined pain intensities.
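
To make the reliability point concrete, here's a toy simulation (entirely hypothetical numbers, not data from Smith and colleagues) showing that day-to-day agreement in ratings drops whether the "error" comes from noisy rating or from genuine fluctuation in the pain itself:

```python
# Toy simulation: low day-to-day correlation in 0-10 pain ratings can come from
# rating "noise" OR from genuine fluctuation in pain. Hypothetical values only.
import numpy as np

rng = np.random.default_rng(0)

def two_days(n_people, true_sd, fluctuation_sd, noise_sd):
    """Simulate two days of ratings: stable person-level pain
    plus genuine daily fluctuation plus rating noise."""
    true_level = 5 + rng.normal(0, true_sd, n_people)   # each person's typical pain
    def one_day():
        return np.clip(true_level
                       + rng.normal(0, fluctuation_sd, n_people)
                       + rng.normal(0, noise_sd, n_people), 0, 10)
    return one_day(), one_day()

for label, fluct, noise in [("accurate raters, stable pain", 0.5, 0.5),
                            ("accurate raters, fluctuating pain", 2.0, 0.5),
                            ("noisy raters, stable pain", 0.5, 2.0)]:
    day1, day2 = two_days(10_000, true_sd=1.5, fluctuation_sd=fluct, noise_sd=noise)
    r = np.corrcoef(day1, day2)[0, 1]
    print(f"{label}: day-to-day correlation ~ {r:.2f}")
# Both sources of variability lower the correlation, so a "less reliable" rating
# isn't automatically a less accurate rater - the pain itself may simply vary.
```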

But, and it's a big but, how do people scale and remember pain? And as Morley asks, "What aspect of the imagined pain is reimagined and used as an anchor at the point of rating?" He points out that re-experiencing the somatosensory-intensity aspect of pain is rare (though people can remember the context in which they experienced that pain, and they can give a summative evaluative assessment such as "oh it was horrible"). Smith and colleagues' study attempted to control for contextual effects by asking people to reflect only on intensity and duration, and only on pain intensity rather than other associated experiences such as fatigue or stress. This, it must be said, is pretty darned impossible, and Morley again points out that the "peak-end" phenomenon (which means that our estimate of pain intensity depends a great deal on how long we think an experience might go on, on disparities between what we expect and what we actually feel, and on differences between each of us) will bias self-report.

Smith et al (2016) carefully review and discuss their findings, and I strongly encourage readers to read the entire paper themselves. This is important stuff – even though this was an approach designed to help improve pain intensity measurement within treatment trials, what it tells us is that our understanding of pain intensity measurement needs more work, and that some of our assumptions about measuring our pain experience using a simple numeric rating scale might be challenged. The study used people living with chronic pain, and their experiences may be different from those with acute pain (eg post-surgical pain). The training did appear to help people correctly rank their pain in terms of least pain, average pain, and worst pain daily ratings.

What can we learn from this study? I think it’s a good reminder to us to think about our assumptions about ANY kind of measurement in pain. Including what we observe, what we do when carrying out pain assessments, and the influences we don’t yet know about on pain intensity ratings.

Morley, S. (2016). Bias and reliability in pain ratings. Pain, 157(5), 993-994.

Smith, S. M., Amtmann, D., Askew, R. L., Gewandter, J. S., Hunsinger, M., Jensen, M. P., . . . Dworkin, R. H. (2016). Pain intensity rating training: Results from an exploratory study of the ACTTION PROTECCT system. Pain, 157(5), 1056-1064.

Using a new avoidance measure in the clinic


A new measure of avoidance is a pretty good thing. Until now we've used self-report questionnaires (such as the Tampa Scale for Kinesiophobia, or the Pain Catastrophising Scale), often combined with a measure of disability like the Oswestry Disability Index, to determine who might be unnecessarily restricting daily activities out of fear of pain or injury. These are useful instruments, but they don't give us the full picture because many people with back pain don't see that their avoidance might be because of pain-related fear – after all, it makes sense not to do movements that hurt or could be harmful, right?

Behavioural avoidance tests (BAT) are measures developed to assess observable avoidance behaviour. They've been used for many years for things like OCD and phobias, for both assessment and treatment. The person is asked to approach a feared stimulus in a standardised environment to elicit fear-related behaviours without the biases that arise from self-report (like not wanting to look bad, or being unaware of a fear).

This new measure involves asking a person to carry out 10 repetitions of certain movements designed to provoke avoidance. The link for the full instructions for this test is this: click

Essentially, the person is shown how to carry out the movements (demonstrated by the examiner/clinician), then they are asked to do the same set of movements ten times. Each repetition is rated: 0 = performs exactly as the clinician does; 1 = the movement is performed but the client uses safety behaviours such as holding the breath, taking medication before doing the task, asking for help, or motor behaviours such as keeping the back straight (rotation and bending movements are involved); 2 = the person avoids doing the movement, and if the person performs fewer than 10 repetitions, those that are not completed are also coded 2. The range of scores obtainable is 0 – 60.
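
A hedged sketch of how that scoring could be implemented is below. The 0-60 range implies three movement tasks of ten repetitions each, which is my inference rather than something spelled out above, and the task names are placeholders rather than the published protocol.

```python
# Hedged sketch of BAT-Back-style scoring as described above: each repetition is
# rated 0, 1 or 2, repetitions not performed are coded 2, and totals span 0-60
# (assuming three movement tasks of ten repetitions each).

def score_task(repetition_ratings: list[int], required_reps: int = 10) -> int:
    """Sum ratings for one movement task; repetitions not performed are coded 2."""
    if len(repetition_ratings) > required_reps:
        raise ValueError("More ratings than required repetitions")
    if not all(r in (0, 1, 2) for r in repetition_ratings):
        raise ValueError("Each repetition is rated 0, 1 or 2")
    missing = required_reps - len(repetition_ratings)
    return sum(repetition_ratings) + 2 * missing

def score_bat(tasks: dict[str, list[int]]) -> int:
    return sum(score_task(reps) for reps in tasks.values())

# Example: lifting performed fully as demonstrated, bending with some safety
# behaviours, rotation attempted four times with safety behaviours then stopped
# (the remaining six repetitions are coded 2).
observed = {
    "lifting":  [0] * 10,
    "bending":  [0, 1, 1, 0, 1, 0, 0, 1, 0, 0],
    "rotation": [1, 1, 1, 1],
}
print(score_bat(observed))   # higher totals = more avoidance and safety behaviour
```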

How and when would you use this test?

It’s tempting to rush in and use a new test simply because it’s new and groovy, so some caution is required.

My questions are: (1) does it help me (or the person) obtain a deeper understanding of the contributing factors to their problem? (2) Is it more reliable or more valid than other tests? (3) Is it able to be used in a clinical setting? (4) Does it help me generate better hypotheses as to what’s going on for this person? (5) I also ask about cost, time required, scoring and whether special training is required.

This test is very useful for answering question (1). It provides me with a greater opportunity to review the thoughts, beliefs and behaviours of a person in the moment. This means I can very quickly identify even the subtle safety behaviours, and obtain the “what’s going through your mind” of the person. If I record the movements, I can show the person what’s going on. NB This is NOT intended to be a test of biomechanical efficiency, or to identify “flaws” in movement patterns. This is NOT a physical performance test, it’s a test of behaviour and belief. Don’t even try to use it as a traditional performance test, or I will find you and I will kill (oops, wrong story).

It is more valid than other tests – the authors indicate it is more strongly associated with measures of disability than measures of pain-related fear and avoidance behaviour. This is expected, because it’s possible to be afraid of something but actually do it (public speaking anyone?), and measures of disability don’t consider the cause of that disability (it could be wonky knees, or a dicky ticker!).

It's easy to do in a clinical setting – a crate of water bottles (~8 kg) and a table (height ~68 cm) are needed to conduct the BAT-Back. The crate weighs 7.8 kg including six one-litre plastic bottles. One could argue that people might find doing this test in a clinic less threatening than doing it in real life, and this is quite correct. The setting is contained, there's a health professional around, the load won't break and there's no time pressure, so it's not ecologically valid for many real-world settings – but it's better than doing a ROM assessment, or just asking the person!

Does it help me generate better hypotheses? Yes it certainly does, provided I take my biomechanical hat off and don't mix up a BAT with a physical performance assessment. We know that biomechanics is important in some instances, but when it comes to low back pain it doesn't seem to have as much influence as a person's thoughts and beliefs – and, more importantly, their tendency to just not do certain movements. This test allows me to go through the thoughts that flash through a person's mind as they do the movement, thus helping me and the person more accurately identify what it is about the movement that's bothering them. Then we can go on to test that belief and establish whether the consequences are, in fact, worse than the effects of avoidance.

Finally, is it cost-effective? Overall I'd say yes – with a caveat. You need to be very good at spotting safety behaviours, you need a very clear understanding of the purpose of this test, and you may need training to develop these skills and the underlying conceptual understanding of behavioural analysis.

When would I use it? Any time I suspect a person is profoundly disabled as a result of their back pain, but does not present with depression or other tissue changes (limb fracture, wonky knees or ankles etc) that would influence the level of disability. If a person has elevated scores on the TSK or PCS. If they have elevated scores on measures of disability. If I think they may respond to a behavioural approach.

Oh, the authors say very clearly that one of the confounds of this test is when a person has biological factors such as bony changes to the vertebrae, shortened muscles, arthritic knees and so on. So you can put your biomechanical hat on – but remember the overall purpose of this test is to understand what’s going on in the person’s mind when they perform these movements.

Scoring and normative data have not yet been compiled. Perhaps that's a Masters research project for someone?

Holzapfel, S., Riecke, J., Rief, W., Schneider, J., & Glombiewski, J. A. (in press). Development and validation of the Behavioral Avoidance Test – Back Pain (BAT-Back) for patients with chronic low back pain. Clinical Journal of Pain.

 

 

Fibro fog or losing your marbles: the effect of chronic pain on everyday executive functioning



There are days when I think I’m losing the plot! When my memory fades, I get distracted by random thin—-ooh! is that a cat?!

We all have brain fades, but people with chronic pain have more of them. Sometimes it’s due to the side effects of medication, and often it’s due to poor sleep, or low mood – but whatever the cause, the problem is that people living with chronic pain can find it very hard to direct their attention to what’s important, or to shift their attention away from one thing and on to another.

In an interesting study I found today, Baker, Gibson, Georgiou-Karistianis, Roth and Giummarra (in press) used a brief screening measure to compare the executive functioning of a group of people with chronic pain with a matched group of pain-free individuals. The test is the Behaviour Rating Inventory of Executive Function – Adult version (BRIEF-A), which measures Inhibition, Shift, Emotional Control, Initiate, Self-Monitor, Working Memory, Plan/Organize, Task Monitor, and Organization of Materials.

Executive functioning refers to “higher” cortical functions such as being able to attend to complex situations, make the right decision and evaluate the outcome. It’s the function that helps us deal with everyday situations that have novel features – like when we’re driving, doing the grocery shopping, or cooking a meal. It’s long been known that people living with chronic pain experience difficulty with these things, not just because of fatigue and pain when moving, but because of limitations on how well they can concentrate. Along with the impact on emotions (feeling irritable, anxious and down), and physical functioning (having poorer exercise tolerance, limitations in how often or far loads can be lifted, etc), it seems that cognitive impairment is part of the picture when you’re living with chronic pain.

Some of the mechanisms thought to be involved in this are the "interruptive" nature of pain – the experience demands attention, directing attention away from other things and towards pain and pain-related objects and situations. In addition, there are now known to be structural changes in the brain – not only in areas for sensory processing and motor function, but also in the dorsolateral prefrontal cortex, which is needed for complex cognitive tasks.

One of the challenges in testing executive functions in people living with chronic pain is that they usually perform quite well on standard pen and paper tasks – when the room is quiet, there are no distractions, and they're rested and generally feeling calm. But put them in a busy supermarket or shopping mall, or driving a car on a busy highway, and performance is not such an easy thing!

So, for this study the researchers used the self-report questionnaire to ask people about their everyday experiences, which does have some limitations – but the measure has been shown to compare favourably with the real-world experiences of people with other conditions such as substance abuse, prefrontal cortex lesions, and ADHD.

What did they find?

Well, quite simply, they found that 50% of patients showed clinical elevation on the Shift, Emotional Control, Initiate, and Working Memory subscales, with Emotional Control and Working Memory the most elevated.

What does this mean?

It means that chronic pain doesn't only affect how uncomfortable it might be to move, or sit, or stand; it doesn't only affect mood and anxiety; and it's not just a matter of being fogged with medications (although these contribute). Instead, it shows that there are clear effects of experiencing chronic pain on some important aspects of planning and carrying out complex tasks in the real world.

The real impact of these deficits is not just on daily tasks, but also on how readily people with chronic pain can adopt and integrate all those coping strategies we talk about in pain management programmes. Something like deciding to use activity pacing means decision-making on the fly, regulating emotions to deal with the frustration of not getting jobs done, delaying the flush of pleasure that comes from getting things completed, breaking a task down into many parts to work out which is the most important, and holding part of a task in working memory in order to decide what to do next. All of these are complex cortical activities that living with chronic pain can affect.

It means clinicians need to help people learn new techniques slowly, supporting their generalising into daily life by ensuring the techniques aren't overwhelming, and perhaps using tools like smartphone alarms or other environmental cues to help people know when to try a different technique. It also means clinicians need to think about assessing how well a person can carry out these complex functions at the beginning of therapy – it might change the way coping strategies are taught, it might mean considering changes to medication (avoiding opiates, but not only these, because many pain medications affect cognition), and it might mean managing mood promptly.

The BRIEF-A is not the last word in neuropsych testing, but it may be a helpful screening measure to indicate areas for further testing and for helping people live more fully despite chronic pain.

 

Baker, K., Gibson, S., Georgiou-Karistianis, N., Roth, R., & Giummarra, M. (2015). Everyday executive functioning in chronic pain. The Clinical Journal of Pain. doi:10.1097/AJP.0000000000000313

Faking pain – Is there a test for it?


One of the weird things about pain is that no-one knows if you’re faking. To date there hasn’t been a test that can tell whether you’re really in pain, or just faking it. Well, that’s about to change according to researchers in Israel and Canada.

While there have been a whole range of approaches to checking out faking such as facial expression, responses to questionnaires, physical testing and physical examinations, none of these have been without serious criticism. And the implications are pretty important to the person being tested – if you’re sincere, but someone says you’re not, how on earth do you prove that you’re really in pain? For clinicians, the problem is very troubling because allegations of faking can strain a working relationship with a person, and hardly lead to a sense of trust. Yet insurance companies routinely ask clinicians to make determinations about fraudulent access to insurance money – and worst of all, clinicians often feel they have little choice other than to participate.

In this study by Kucyi, Sheinman and Defrin, three hypotheses were tested: 1) whether feigned performance could be detected using warmth and pain threshold measurements; 2) whether there were changes in the statistical properties of performance when participants were faking; and 3) whether an "interference" or distractor presented during testing interferes with the ability to fake, and therefore provides a clue as to whether someone is being sincere or not.

Using university students (I hope they got course credits for participating!) who were not health science students and were otherwise healthy, the investigators gave very little information about the procedure or hypotheses, to minimise expectancy bias. Participants were then tested using a thermal stimulator to obtain "warmth sensation threshold" and "heat-pain thresholds" – this is a form of quantitative sensory testing (QST). TENS was used as a distractor in the experimental condition, applied for 2 minutes before measuring the pain threshold, and during the heat pain threshold test. This was then repeated with the threshold test first, followed by TENS. Participants were asked to pretend they were in an insurance office, being tested to establish whether they were experiencing genuine pain, after being told the test would be able to tell whether their pain was real.

What did they find out?

Well, in situation one, where both threshold and warmth detection were used and participants were asked to fake the pain intensity, respondents gave higher warmth detection ratings than normal. Not only this, but the ability to repeat the same response at the same temperature was poorer. Heat pain threshold was also consistently different between the sincere and faked conditions, with heat pain threshold lower (by around 3 degrees) when people were faking.

When the second testing option was carried out (using TENS as a distractor), heat pain threshold was significantly lower when participants were faking, the variance of the feigned + interference condition was three times that of the sincere condition, and the CV (coefficient of variation) of the feigned + interference condition was twice that of the sincere condition.
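
For readers unfamiliar with those statistical markers, here's a small sketch computing the variance and coefficient of variation (CV = standard deviation divided by the mean) of repeated threshold measurements; the temperatures are invented for illustration, not data from Kucyi and colleagues.

```python
# Sketch of the statistical markers mentioned above: variance and coefficient of
# variation (CV = SD / mean) of repeated heat-pain threshold measurements.
# The temperature values are hypothetical.
import statistics as stats

def variability(thresholds_celsius):
    mean = stats.mean(thresholds_celsius)
    var = stats.variance(thresholds_celsius)      # sample variance
    cv = stats.stdev(thresholds_celsius) / mean   # relative (scale-free) variability
    return mean, var, cv

sincere              = [46.8, 47.1, 46.9, 47.0, 46.7]   # tight, repeatable reports
feigned_interference = [43.5, 45.9, 42.8, 46.4, 44.0]   # lower and more scattered

for label, data in [("sincere", sincere), ("feigned + interference", feigned_interference)]:
    mean, var, cv = variability(data)
    print(f"{label}: mean {mean:.1f} C, variance {var:.2f}, CV {cv:.3f}")
# The pattern reported in the study: lower thresholds and several-fold larger
# variance/CV when participants fake while a distracting stimulus is applied.
```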

What does this mean?

Well, first of all, it means there are some consistent effects of faking in responses to tests of warmth and heat-pain threshold when a distractor like TENS is used. Increased warmth thresholds and reduced heat pain thresholds were observed, and these were statistically significant. Interestingly, it was only when a distractor was used that the increased variability of reports was found – these authors suggest that people are pretty skilled at giving consistent reports when they're not being distracted by an additional sensory stimulus.

Now here’s where I begin to pull this apart from a clinical and practical perspective. The authors, to give them credit, indicate that the research is both new and that it may identify some people who do have pain as malingerers. My concerns are that people with chronic pain may not look at all like healthy young university students.

We know very little about the responses to QST by people with different forms of chronic pain. We already know that people with impaired descending noxious inhibitory control respond differently to some forms of QST. We also know that contextual factors including motivation can influence how nervous systems respond to input. But my concerns are far more about the potential harm to those who are tested and found to be malingering when they’re not.

What do you do if you think a person is faking? How do you deal with this? What good does it do to suggest to someone their pain is not real, or isn’t nearly as bad as they make out? Once the words are out of your mouth (or written in a report) any chance of a therapeutic relationship has just flown right out the door. And you’re still left with a person who says they’re having trouble – but now you have an angry, resentful person who has a strong need to prove that they DO have pain.

You see, I think it might be more fruitful to ask why this person is faking pain. If it's simply for money, surely there are far easier ways to get money than pretending to be disabled by pain? If it's the case that a person is off out fishing or playing golf or living it up when "supposed" to be in pain, wouldn't it make more sense either to reframe their behaviour as recovering well (doing what's healthy) and therefore getting on with returning to work, or to use a private investigator to demonstrate that he or she is actually capable of doing more than they indicate?

The presence or absence of pain is not really the problem, IMHO. To me, we need to address the degree of disability that's being attributed to pain and work on that. Maybe a greater focus on reducing disability, rather than on expensive procedures to remove or otherwise get rid of pain, is in order?

Kucyi, A., Sheinman, A., & Defrin, R. (in press). Distinguishing feigned from sincere performance in psychophysical pain testing. The Journal of Pain.

How good is the TSK as a measure of “kinesiophobia”?


The Tampa Scale for Kinesiophobia (TSK) is a measure commonly used to determine whether a person is afraid of moving because of beliefs about harm or damage, with a second scale assessing current avoidance behaviour. It has been popular alongside the pain-related fear and avoidance model and, together with measures of disability, catastrophising and pain-related anxiety, has become one of the mainstays of pain assessment.

There have been numerous questions raised about this measure in terms of reliability and validity, but it continues to be widely used. The problems with reliability relate mainly to the long version (TSK-17), in which several items are reverse-scored. Reverse-scored items state a negative version of one of the concepts being assessed by the measure, but they pose problems for people completing the measure because it's hard to respond to a double negative. In terms of validity, although the measure has been used a great deal and the original studies examining the psychometric properties of the instrument showed predictive validity, the TSK's ability to predict response to treatment hasn't been evaluated.
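
For anyone scoring the TSK-17 by hand, reverse scoring simply re-codes an item as 5 minus the raw 1-4 rating before summing. The sketch below assumes items 4, 8, 12 and 16 are the reverse-scored ones, which is common for the 17-item version but worth checking against the version you actually use.

```python
# Sketch of reverse scoring on the TSK-17. Items are rated 1-4; a reverse-scored
# item is re-coded as 5 minus the raw rating before items are summed.

REVERSE_ITEMS = {4, 8, 12, 16}   # assumed reverse-scored items for the 17-item version

def score_tsk17(responses: dict[int, int]) -> int:
    if set(responses) != set(range(1, 18)):
        raise ValueError("Expected ratings for all 17 items")
    if not all(1 <= v <= 4 for v in responses.values()):
        raise ValueError("Ratings must be on the 1-4 scale")
    return sum(5 - v if item in REVERSE_ITEMS else v
               for item, v in responses.items())   # totals range from 17 to 68

# Example: someone who rates every item 3 ("somewhat agree")
print(score_tsk17({i: 3 for i in range(1, 18)}))   # reverse items become 2, total = 47
```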

Chris Gregg and colleagues from The Back Institute and CBI Health Group studied a cohort of 313 people with low back pain attending one of the rehabilitation clinics in New Zealand. Participants completed the TSK at the beginning of treatment, and again at programme completion. Along with the TSK, participants also completed a numeric pain scale and a modified Low Back Outcome score, and indicated whether they were working or not. These latter measures were considered to be "Quality of Life" measures, although they're not officially QoL scales.

Before I turn to the study design and statistics, I'll take a look at the modified Low Back Outcome score. Now I don't know if you've ever searched for something like this, but believe me when I say there are SO many versions of SO many different "modified" back pain questionnaires that it's really hard to work out exactly which one was used in this study, or how it was modified. I'm assuming it's the one mentioned in Holt, Shaw, Shetty and Greenough (2002), because that paper is in the references, but I don't know the modifications made to it. The LBOS is a fairly brief 12-item measure looking at pain intensity "on average" over the last week, work status, functional activities, rest, treatment seeking, analgesic use, and another five broad activities (sex life, sleeping, walking, travelling and dressing). It's been described as having good internal consistency and test-retest reliability, but validity isn't mentioned in the 2002 paper.

Now, coming to the study itself: overall, people improved by programme completion. Pain reduced by 1.84 points on the NPS, m-LBOS scores increased by 10.4 points (a 28% improvement), and TSK scores also improved, by 5.5 points. Of course, we’d hope that people would be doing better at the end of a programme – though I’d prefer to see outcomes measured at least another three to nine months after programme completion.
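Just to make the arithmetic behind that “28% improvement” explicit, here’s a quick back-of-the-envelope sketch. The baseline score isn’t stated in my summary above, so the figure below is simply what’s implied if the percentage is relative to the mean score at programme entry:

```python
# Back-of-the-envelope check on the reported m-LBOS change.
# The baseline is *inferred* from the reported figures, not quoted from the paper.
change = 10.4            # reported mean improvement in m-LBOS points
relative_gain = 0.28     # reported "28% improvement"

implied_baseline = change / relative_gain      # roughly 37 points at entry
implied_completion = implied_baseline + change

print(f"Implied baseline m-LBOS:   {implied_baseline:.1f}")
print(f"Implied completion m-LBOS: {implied_completion:.1f}")
print(f"Check: {change / implied_baseline:.0%} improvement")
```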

The authors looked at the relationship between the TSK and initial scores – there were only weak statistical relationships between these measures. They then examined the relationship between pre-treatment TSK scores and the QoL measures at the end of treatment, to establish whether kinesiophobia predicted eventual outcome. There wasn’t a relationship. At least, not much of one. The authors conclude that the TSK is therefore not a good measure for identifying people at high risk of chronicity due to fear of movement. I was a bit disappointed that a subscale analysis of the TSK wasn’t carried out – so it’s not possible to know whether change was associated with reduced beliefs about harm/reinjury, reduced avoidance, or both.
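For what it’s worth, the analysis I’d have liked to see isn’t hard to run once item-level data are available. Here’s a rough sketch (with entirely hypothetical column names and made-up numbers) of correlating baseline TSK totals and subscale scores with an end-of-treatment outcome:

```python
# Sketch of the prediction analysis discussed above - hypothetical data and
# column names, purely to illustrate the approach.
import pandas as pd
from scipy.stats import pearsonr

# Imagine one row per participant, scored before and after the programme.
df = pd.DataFrame({
    "tsk_total_pre": [38, 45, 29, 51, 42, 33],
    "tsk_harm_pre":  [14, 18, 10, 22, 16, 12],   # harm/reinjury subscale
    "tsk_avoid_pre": [24, 27, 19, 29, 26, 21],   # activity-avoidance subscale
    "lbos_post":     [48, 35, 60, 30, 41, 55],   # outcome at programme end
})

for predictor in ["tsk_total_pre", "tsk_harm_pre", "tsk_avoid_pre"]:
    r, p = pearsonr(df[predictor], df["lbos_post"])
    print(f"{predictor}: r = {r:.2f}, p = {p:.3f}")
```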

Now here’s where I get a bit tangled up. Wouldn’t you expect the underlying constructs of the TSK (fear of harm/reinjury, and avoidance) to be the targets of a back-pain-related treatment? Especially one that includes cognitive behavioural therapy, education and movement? If we’re using a measure, I think we should USE it within our clinical reasoning and deliberately target those factors thought to be associated with poor outcomes. If we’re successful, we should see change in the domains associated with those constructs. In this programme, given that treatment was based on sub-typing and included education and CBT, I would hope that pain-related fear and avoidance were directly targeted so that people developed effective ways of dealing with unhelpful beliefs and behaviours. To establish whether that had happened, I’d want to look at the association between post-treatment TSK scores and measures of function or disability.

And getting back to the timing of outcome assessment: given that we’re interested in how people manage any residual back pain (in this study people were left with NPS pain scores of 3.4 ± 2.4, so they still had some pain), wouldn’t you want to know how they were managing a bit further down the track? We can (almost) guarantee that people will make changes simply as an effect of attention and structured activity. Measuring what happens immediately at programme completion may not tell us much about what happens once a person has carried on by him or herself for a few months. My experience with chronic pain programmes is a typical pattern: improvement immediately at the end of a programme, then, six weeks later, something like regression to the mean – what we often described as “the dip” or “the slump” as reality sets in. At six months down the track results tend to have improved a bit, and these are usually sustained (or thereabouts) at the twelve-month follow-up.

So, does this study provide evidence that the TSK isn’t useful as a predictive tool? I’m not so sure. It does show improvements in TSK, pain, disability and work status immediately at the end of a programme. Unfortunately the end-of-programme TSK scores weren’t analysed by subscale, so we don’t know which aspects of pain-related fear and avoidance were affected – only that they were.

For clinicians working in chronic pain programmes, where people are referred after having remained disabled and/or distressed despite prior treatment, the TSK may not be the most useful tool ever. The problem I’ve had with it is that scores on the fear of injury/reinjury subscale are lower once people have been given good pain “education”, yet people often still present with a high total score because of very high scores on the avoidance subscale.

A lovely study by Bunzli, Smith, Watkins, Schütze and O’Sullivan (2014) looked at what people actually believe about their pain and the associated TSK items. They found that many people DO believe their pain indicates harm, and they also found that people were worried about the effect pain would have on other things – and it’s this part that I find particularly interesting. It may not be the pain that matters as much as the anticipated losses and disruption to normal life that could occur.

The original authors of the “fear-avoidance” model, Vlaeyen and Linton (2012), reviewed the model after 12 years and agreed there is much to be done to refine the assessment of pain-related fear. Self-report measures are only as good as the ability, insight and willingness of participants to complete them accurately.

So, is it time to throw the TSK out the window? I don’t think so – at least not yet. There’s more we need to do to understand pain-related fear and subsequent avoidance.

 

Gregg, C.D., McIntosh, G., Hall, H., Watson, H., Williams, D., & Hoffman, C. (2015). The relationship between the Tampa Scale of Kinesiophobia and low back pain rehabilitation outcomes. The Spine Journal. http://dx.doi.org/10.1016/j.spinee.2015.08.018

Bunzli, S., Smith, A., Watkins, R., Schütze, R., & O’Sullivan, P. (2014). What do people who score highly on the Tampa Scale of Kinesiophobia really believe? A mixed methods investigation in people with chronic non-specific low back pain. The Clinical Journal of Pain. doi: 10.1097/AJP.0000000000000143

Vlaeyen, J.W., & Linton, S.J. (2012). Fear-avoidance model of chronic musculoskeletal pain: 12 years on. Pain, 153(6), 1144-1147. doi: 10.1016/j.pain.2011.12.009

Central sensitisation – can a questionnaire help find out who has it, and who doesn’t?


My orthopaedic colleagues have been asking for a way to identify which surgical candidates are unlikely to have a good outcome after major joint surgery. They know that between 10 and 50% of people undergoing surgery will develop chronic pain, with 5–10% of those experiencing pain rated >5/10 on a numeric rating scale, where 0 = no pain and 10 = the most severe pain you can imagine (Kehlet, Jensen, & Woolf, 2006). The people with severe pain are the kind of people who hear “well, the surgery I did went well…” and can be left wondering why they ever decided to go ahead with it.

Two main factors seem to be important in postsurgical chronic pain: the presence of central sensitisation (usually indicated by reports of chronic pain in at least two other areas of the body) and catastrophising. I’ve discussed catastrophising a great deal in earlier posts.

What I haven’t talked about is central sensitisation. Now, the idea that people can experience chronic pain associated with changes in the way the nervous system responds to stimuli isn’t new, but the neurobiology is still slowly being unravelled. I’m not going to get into definitions, or into whether having changes in the nervous system equates with “chronic pain” (pain is an experience, while the neurobiology is just the scaffolding that seems to be present; the two are not equivalent). I want to talk about the measurement of this “sensitisation”, and whether a pen-and-paper tool might be one way of screening the people at greatest risk of developing problems if they proceed with surgery.

First of all, what symptoms come under this broad heading of “response to an abnormally sensitised nervous system”? Yunus (2007) proposed that because there are similarities between several so-called “medically unexplained symptoms” – fibromyalgia, chronic fatigue, irritable bowel syndrome and so on – perhaps there is a common aetiology underlying them. Based on evidence that central sensitisation involves enhanced processing of many sensory experiences, Yunus proposed the term “central sensitivity syndrome” – basically a disorder of the nociceptive system. Obviously it’s pretty complicated, but various researchers have proposed mechanisms including “dysregulation in both ascending and descending central nervous system pathways as a result of physical trauma and sustained pain impulses, and the chronic release of pro-inflammatory cytokines by the immune system, as a result of physical trauma or viral infection… including a dysfunction of the stress system, including the hypothalamic–pituitary–adrenal axis” (Mayer, Neblett, Cohen, Howard, Choi et al, 2012, p. 277). (What are “pain impulses”?!)

By proposing this mechanism, various researchers have been able to pull together a number of symptoms that people experience, and their premise is that the more symptoms individuals endorse, the more likely it is that they have an underlying central sensitisation disorder.

The authors completed a literature review to identify symptoms and comorbidities associated with fibromyalgia and the other disorders they believe indicate a sensitised central nervous system. They then developed a self-report instrument, asked people with these problems to complete it, and compared their results with those of a group who wouldn’t usually be thought to have any sensitisation problems (students and staff at a university – we could argue about this, but let’s not!).

What they found, after much statistical analysis, is a four-factor measure:

Factor 1 – Physical Symptoms (30.9%)
Factor 2 – Emotional Distress (7.2%)
Factor 3 – Headache/Jaw Symptoms (10.1%)
Factor 4 – Urological Symptoms (5.2%)

Test-retest reliability was established, and because the questionnaire could discriminate between those who reported widespread pain (aka fibromyalgia) and those who had no pain, it’s thought to have discriminant validity as well. (BTW a copy of this measure is included in the appendix of the Mayer, Neblett, Cohen, Howard, Choi, Williams et al (2012) paper – go get it!)

The researchers then went on to develop norms for the measure. They found that among people with chronic pain referred to an outpatient multidisciplinary pain centre, those with more diagnosed “central sensitisation syndromes” scored more highly, and that a cutoff score of 40 discriminated between those who had a central sensitivity syndrome and those who didn’t (Neblett, Cohen, Choi, Hartzell, Williams, Mayer & Gatchel, 2013).
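To make the screening logic concrete, here’s a tiny sketch of how a cutoff like that gets evaluated. The CSI scores and diagnostic labels below are entirely made up; the 40-point threshold is simply the clinically significant value reported by Neblett and colleagues:

```python
# Illustration of evaluating a screening cutoff - hypothetical data only.
# The 40-point threshold is the value reported by Neblett et al. (2013);
# everything else here is invented for the example.
CUTOFF = 40

# (CSI total score, has a diagnosed central sensitivity syndrome?)
sample = [(62, True), (48, True), (35, True), (55, True),
          (28, False), (41, False), (19, False), (33, False)]

tp = sum(score >= CUTOFF and css for score, css in sample)
fn = sum(score < CUTOFF and css for score, css in sample)
tn = sum(score < CUTOFF and not css for score, css in sample)
fp = sum(score >= CUTOFF and not css for score, css in sample)

sensitivity = tp / (tp + fn)   # proportion of CSS cases flagged by the cutoff
specificity = tn / (tn + fp)   # proportion of non-CSS cases correctly cleared
print(f"Sensitivity: {sensitivity:.2f}, Specificity: {specificity:.2f}")
```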

Well and good. What does it actually mean?

This is where I think this measure can come unstuck. I like the idea of people being asked about their pain and associated symptoms. We often don’t have time in a clinical interview to ask about the enormous range of symptoms people experience, so being able to have people fill out a pen-and-paper measure to take stock of what they know about themselves is a good thing.

What this measure doesn’t yet do is indicate whether there is any underlying common causal link between these experiences. It’s tautological to list the symptoms people might experience with central sensitisation based on the literature, ask them to indicate which ones they experience, and then conclude “oh yes! this means they have central sensitisation!” All it shows is that these people report similar symptoms.

What needs to happen, and is now beginning to occur, are studies examining central nervous system processing alongside the scores individuals obtain on this measure – and studies establishing whether completing the questionnaire can predict who is more or less likely to develop problems such as post-surgical chronic pain. Now that would be a really useful measure, and one very likely to be taken up by my orthopaedic colleagues.

In the meantime, whatever this measure indicates, it does seem able to differentiate between people who are likely to report “medically unexplained symptoms” and people who aren’t. That might be useful as we begin to target treatment to different types of persistent pain. At this point, though, I think the measure is more useful in research than in clinical practice.

 

Kehlet, H., Jensen, T.S., & Woolf, C.J. (2006). Persistent postsurgical pain: Risk factors and prevention. Lancet, 367, 1618-1625.

Mayer, T.G., Neblett, R., Cohen, H., Howard, K.J., Choi, Y.H., Williams, M.J., . . . Gatchel, R.J. (2012). The development and psychometric validation of the central sensitization inventory. Pain Practice, 12(4), 276-285. doi: 10.1111/j.1533-2500.2011.00493.x

Neblett, R., Cohen, H., Choi, Y., Hartzell, M.M., Williams, M., Mayer, T.G., & Gatchel, R.J. (2013). The central sensitization inventory (CSI): Establishing clinically significant values for identifying central sensitivity syndromes in an outpatient chronic pain sample. The Journal of Pain, 14(5), 438-445. doi: http://dx.doi.org/10.1016/j.jpain.2012.11.012

Roussel, N.A., Nijs, J., Meeus, M., Mylius, V., Fayt, C., & Oostendorp, R. (2013). Central sensitization and altered central pain processing in chronic low back pain: Fact or myth? Clin J Pain, 29, 625-638. doi: 10.1097/AJP.0b013e31826f9a71

Van Oosterwijck, J., Nijs, J., Meeus, M., & Paul, L. (2013). Evidence for central sensitization in chronic whiplash: A systematic literature review. European Journal of Pain, 17(3), 299-312. doi: 10.1002/j.1532-2149.2012.00193.x

Yunus, M.B. (2007). Fibromyalgia and overlapping disorders: The unifying concept of central sensitivity syndromes. Seminars in Arthritis & Rheumatism, 36(6), 339-356.