Faking pain – Is there a test for it?

One of the weird things about pain is that no one knows if you’re faking. To date there hasn’t been a test that can tell whether you’re really in pain or just faking it. Well, that’s about to change, according to researchers in Israel and Canada.

While there has been a whole range of approaches to detecting faking – facial expression, responses to questionnaires, physical testing and physical examinations – none of these has escaped serious criticism. And the implications are pretty important for the person being tested: if you’re sincere, but someone says you’re not, how on earth do you prove that you’re really in pain? For clinicians, the problem is very troubling, because an allegation of faking can strain the working relationship with a person and hardly leads to a sense of trust. Yet insurance companies routinely ask clinicians to make determinations about fraudulent access to insurance money – and worst of all, clinicians often feel they have little choice other than to participate.

In this study by Kucyi, Sheinman and Defrin, three hypotheses were tested: 1) whether feigned performance could be detected using warmth and pain threshold measurements; 2) whether the statistical properties of performance changed when participants were faking; and 3) whether an “interference” or distractor presented during testing disrupts the ability to fake, and therefore provides a clue as to whether someone is being sincere or not.

Using university students (I hope they got course credits for participating!) who were not health science students and were otherwise healthy, the investigators gave very little information about the procedure or hypotheses, to minimise expectancy bias. Participants were then tested using a thermal stimulator to obtain “warmth sensation threshold” and “heat-pain threshold” – a form of quantitative sensory testing (QST). In the experimental condition, TENS was used as a distractor, applied for two minutes before measuring the pain threshold and continued during the heat-pain threshold test. The procedure was repeated with the threshold test first, then TENS. Participants were asked to pretend they were in an insurance office, being tested to establish whether they were experiencing genuine pain, after being told the test would be able to tell whether their pain was real.

What did they find out?

Well, in situation one, where both threshold and warmth detection were measured and participants were asked to fake the pain intensity, respondents gave higher warmth detection ratings than normal. Not only that, their ability to repeat the same response at the same temperature was poorer. Heat-pain threshold was also consistently different between the sincere and faked conditions, with heat-pain threshold lower (by around 3 degrees) when people were faking.

When the second testing option was carried out (using TENS to distract), heat-pain threshold was significantly lower when participants were faking; the variance of the feigned + interference condition was three times that of the sincere condition, and its coefficient of variation (CV) was twice that of the sincere condition.
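Just to make those summary statistics concrete: the coefficient of variation is simply the standard deviation divided by the mean, so a doubled CV means the relative spread of responses has doubled. Here’s a minimal Python sketch – the threshold readings are made-up numbers for illustration only, not the study’s data:

```python
import statistics

def coefficient_of_variation(values):
    """CV = sample standard deviation / mean; a unitless measure of relative spread."""
    return statistics.stdev(values) / statistics.mean(values)

# Hypothetical repeated heat-pain threshold readings in degrees C
# (illustrative only -- not taken from Kucyi, Sheinman & Defrin).
sincere = [46.1, 46.4, 45.9, 46.2, 46.0]
feigned = [43.0, 41.2, 44.5, 40.8, 43.9]

print("variance:", statistics.variance(sincere), statistics.variance(feigned))
print("CV:", coefficient_of_variation(sincere), coefficient_of_variation(feigned))
```

In this toy data the feigned series has both a much larger variance and a larger CV than the sincere series – the same pattern the authors report.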

What does this mean?

Well, first of all, it means there are some consistent effects of faking on tests of warmth and heat-pain threshold when a distractor like TENS is used. Increased warmth thresholds and reduced heat-pain thresholds were observed, and these differences were statistically significant. Interestingly, increased variability of reports was found only when a distractor was used – the authors suggest that people are pretty skilled at giving consistent reports when they’re not being distracted by an additional sensory stimulus.

Now here’s where I begin to pull this apart from a clinical and practical perspective. The authors, to give them credit, acknowledge both that the research is new and that it may misidentify some people who do have pain as malingerers. My concern is that people with chronic pain may not look at all like healthy young university students.

We know very little about the responses to QST by people with different forms of chronic pain. We already know that people with impaired descending noxious inhibitory control respond differently to some forms of QST. We also know that contextual factors including motivation can influence how nervous systems respond to input. But my concerns are far more about the potential harm to those who are tested and found to be malingering when they’re not.

What do you do if you think a person is faking? How do you deal with this? What good does it do to suggest to someone their pain is not real, or isn’t nearly as bad as they make out? Once the words are out of your mouth (or written in a report) any chance of a therapeutic relationship has just flown right out the door. And you’re still left with a person who says they’re having trouble – but now you have an angry, resentful person who has a strong need to prove that they DO have pain.

You see, I think it might be more fruitful to ask why this person is faking pain. If it’s simply for money, surely there are far easier ways to get money than pretending to be disabled by pain? And if a person is off fishing, playing golf or living it up when “supposed” to be in pain, wouldn’t it make more sense to reframe their behaviour as recovering well (doing what’s healthy) and get on with returning them to work – or use a private investigator to demonstrate that he or she is actually capable of doing more than they indicate?

The presence or absence of pain is not really the problem, IMHO. To me, we need to address the degree of disability that’s being attributed to pain and work on that. Maybe a greater focus on reducing disability, rather than on expensive procedures to get rid of pain, is in order?

Kucyi, A., Sheinman, A., Defrin, R. (in press). Distinguishing feigned from sincere performance in psychophysical pain testing. The Journal of Pain.

What should I include in my pain assessment?

With such a wide array of factors influencing a person’s pain experience, it can be difficult to decide exactly what to include in a pain assessment.

We do know that the model we use to view pain will influence the factors that are included – and although the internationally accepted model of pain is a biopsychosocial one, there are any number of versions of this model that can be adopted.

Within each domain of the biopsychosocial model, the research over the past few years has exploded, meaning there are more and more factors that can be considered – and these need to be organised in a systematic way so that we can make sense of them, make good clinical decisions about interventions, and then work with the person who has the pain so they can understand them and contribute.

There are a couple of fundamental things we should always have as guiding principles:

  1. No single element in the biopsychosocial model of pain is more (or less) important than any other
  2. All three domains must be assessed to fully understand the ‘four p’s’ of a pain presentation:
    1. Predisposing factors
    2. Precipitating factors
    3. Perpetuating factors
    4. Protective factors
  3. The fundamental questions to be answered through assessment are
    1. ‘What brought this person to this place with this problem today?’
    2. ‘What can be done to reduce distress and disability?’
  4. Simply asking the person with pain provides some good information, but on its own is probably inadequate.  Interviews need to be supplemented with:
    1. History – from relevant documentation (from the referrer, other health care notes, previous consultations within your facility)
    2. Observation – structured or unstructured observation from the moment the person enters your clinic, to the time they leave
    3. Clinical examination or testing – including functional performance as well as pen and paper questionnaires
    4. Other people – particularly partners or other family members
  5. Assessment only begins the process of developing a working set of hypotheses about what might be ‘true’ for this person at this time for these problems

A couple of models that can be helpful:

This one is from Robert Gatchel (Gatchel, 2004).


Another model I like is by Tim Sharp, published in 2000, which is somewhat less complicated than Gatchel’s, but still has a whole lot of arrows!  Dr Sharp now runs a successful consulting practice listed in my blogroll – worth a look!

Of course, no matter what model you use, under each ‘heading’ you will need to continue to update relevant research into specific factors to include (eg ‘appraisals’ would now routinely include catastrophising and pain-related anxiety, while ‘motor behaviours’ would include avoidance, safety behaviours, as well as task persistence).  And after deciding what to include, it will be just as important to determine the best way to access the information – through questionnaire, observation, history, testing or interview.

Finally, it will be important to work out a structured way to put the information collected together so it can be readily understood and used as the basis for hypothesis testing.

I’m not sure I’ve got a handle on this part yet – but I’m keen to hear what you use, or how you think this part can be structured.  I think we’ll have to draw on research from small group/teamwork literature into decision-making, and on human cognition and information processing to inform us on the best way to integrate such complex information without jumping to conclusions.

Isn’t it great the way that answering one question leads to a whole lot of new bits of research?  Can ya tell how much I love questions?!

If you’ve enjoyed reading this post, and want to keep up to date with my blog, you can click on the ‘RSS’ button at the top of the page to subscribe to updated feeds (I post most days during the working week), or you can simply bookmark this page, and come back every now and then.

I love comments – so please, keep ’em coming!  I do reply usually – oh and you can get hold of me using my email, just head over to my ‘about’ page and you’ll find it there.

(One day I’ll work out how to do a form so you can simply complete a form and it will send it to me, but that’s for another day!)

Gatchel, R. J. (2004). American Psychologist, 59, 792–805.

Sharp, T. (2000). Chronic pain: A reformulation of the cognitive-behavioural model.  Behaviour Research and Therapy, 39, 787–800


But does it measure what I want it to?


While there are thousands of assessment tools available for various aspects of pain and function, one of the most important things to consider is content validity – does the assessment measure what I want it to measure? Reliability is all very well, and ensures consistency, but if the test doesn’t measure anything useful or important, then it’s not going to be very helpful!

This article, published in 2006, is one of the few that seeks to conduct a qualitative evaluation of the content of several questionnaires while basing it on a reasonably sound theoretical framework, with methodology solid enough that other researchers can replicate the process. So far, however, I haven’t found much to compare it with – but it’s a helpful study in terms of helping clinicians define exactly what they want to include in an assessment battery, even if it concludes that there are gaps in the existing repertoire!

Sigl, Cieza, Brockow, Chatterji, Kostanjsek, and Stucki set about comparing three very common low back pain measures using the International Classification of Functioning, Disability and Health (ICF), approved by the World Health Assembly in May 2001. Their intention was twofold: to review whether three common instruments cover the areas in the ICF, and whether the ICF can function as a somewhat atheoretical framework for comparing different instruments.

Just to review: the ICF is a multipurpose classification belonging to the WHO family of international health classifications. Part 1 covers functioning and disability and includes the components ‘‘body functions’’ (b), ‘‘body structures’’ (s) and ‘‘activities and participation’’ (d). Part 2 covers contextual factors and includes the components ‘‘environmental factors’’ (e) and ‘‘personal factors.’’

To quote directly from the WHO, ‘The ICF puts the notions of ‘health’ and ‘disability’ in a new light. It acknowledges that every human being can experience a decrement in health and thereby experience some degree of disability. Disability is not something that only happens to a minority of humanity. The ICF thus ‘mainstreams’ the experience of disability and recognises it as a universal human experience. [my emphasis – BFT] By shifting the focus from cause to impact it places all health conditions on an equal footing allowing them to be compared using a common metric – the ruler of health and disability. Furthermore ICF takes into account the social aspects of disability and does not see disability only as ‘medical’ or ‘biological’ dysfunction. By including Contextual Factors, in which environmental factors are listed, ICF allows to record the impact of the environment on the person’s functioning.’

I quite like the idea of ‘everyone’ having both limitations and abilities, and especially the idea that limitations are contextual. I’m not sure that this model has yet had an impact on the systems in which we usually work, however! I use the idea that everyone has abilities and everyone has limitations when working with people experiencing chronic pain – it has the effect of encouraging people to focus on their abilities rather than defining themselves by their limitations. The flow from conceptual ideals to measurement and implementation takes time, and because it’s a nonmedical concept, it’s unlikely to have a significant impact on health delivery systems for many years yet.

Back to the article…
The methodology is well described in the article: three clinicians already trained in the ICF were used. Two reviewed the content and linked the items in each questionnaire to a content area in the ICF, applying 10 linking rules to the items; the identified concepts and selected ICF categories were then compared to establish a Kappa statistic. If disagreement occurred, a third person trained in the ICF and in the linking rules was consulted, and independently determined how the item should be classified.

Clear guidelines existed for how linkages were to be developed, although these are not provided in the article itself – several examples, however, demonstrate how different items were allocated to categories: ‘If an item of a measure contains more than one concept, each concept has to be linked separately. For example, in the item of the ODI ‘‘Pain doesn’t prevent me from walking any distance,’’ the concepts ‘‘pain’’ and ‘‘walking’’ were linked to ‘‘b28013 pain in back’’ and ‘‘d450 walking,’’ respectively. The response options of an item are linked to the ICF if they refer to concepts other than those contained in the corresponding item. For example, in the item 14 ‘‘sleeping’’ of the NASS, in which two of the response categories of the item are ‘‘I sleep well’’ and ‘‘pain interrupts my sleep,’’ the concept ‘‘sleeping’’ was linked to the ICF category ‘‘b134 sleep functions,’’ the concept ‘‘sleep well’’ to ‘‘b1343 quality of sleep,’’ and the concept ‘‘interrupts my sleep’’ to ‘‘b1342 maintenance of sleep.’’ If an item/concept is not contained in the ICF classification, it is labeled ‘‘nc’’ (not covered by the ICF). ‘‘nc’’ does not differentiate between concepts relating to function not covered by the ICF, concepts relating to personal factors for which no categories currently exist, and other concepts relating to aspects like time and space.’

Although this sounds tedious to read here, I’m certain that the process ensures precision and enables the majority of items to be appropriately categorised.
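In essence, the linking step is a lookup from extracted concepts to ICF category codes, with ‘‘nc’’ as the fallback. A minimal Python sketch – the dictionary below contains only the codes quoted in the article, and the real linking rules are far richer than a simple lookup:

```python
# Illustrative mapping of questionnaire concepts to ICF category codes.
# Only codes quoted in the Sigl et al. examples are included here.
ICF_LINKS = {
    "pain": "b28013",                 # pain in back
    "walking": "d450",                # walking
    "sleeping": "b134",               # sleep functions
    "sleep well": "b1343",            # quality of sleep
    "interrupts my sleep": "b1342",   # maintenance of sleep
}

def link_concepts(concepts):
    """Link each extracted concept to an ICF category; 'nc' = not covered."""
    return {c: ICF_LINKS.get(c.lower(), "nc") for c in concepts}

# ODI item: "Pain doesn't prevent me from walking any distance" -> two concepts;
# 'time' is one of the aspects the article notes the ICF does not cover.
print(link_concepts(["pain", "walking", "time"]))
```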

Well, the first thing to establish is whether the two (and occasionally three) clinicians agreed on the categories to which they allocated items. The Kappa statistic, with adjustment made for the skewness of the sample (arising from high Kappa values and a small sample size) by using a bootstrapping technique of sampling from percentiles based on the observed data, was used to determine agreement. The results showed that agreement ranged from 0.67 at the broadest level of category through to 1.0 (total agreement) at the fourth level. To illustrate this, an example selected from the component ‘‘body functions’’ is presented below:
b2: Sensory functions and pain (first level) – at this level there was a small level of disagreement
b280: Sensation of pain (second level)
b2801: Pain in body part (third level)
b28013: Pain in back (fourth level)
b28018: Pain in body part, other specified (fourth level) – at this level, there was total agreement

This demonstrates very good inter-rater reliability, although it should be appreciated that there were only three individuals involved. A larger number of raters would have provided a much better determination of the accuracy of this approach to content validation – but would also increase the time required to do it!

Now, for the real work of this study: what areas were covered by the three assessment tools, and which areas were not well-covered?

  • The representation of body functions is similar in all three measures incorporating pain and sleep.
  • All three questionnaires contain a similar number of concepts representing the ICF component “activities and participation.’’
  • None of the selected instruments covered aspects of remunerative work (d850). ‘‘Domestic life, other specified’’ (d698), which had to be linked for carrying out household tasks (‘‘doing any of the jobs that I usually do around the house,’’ ‘‘heavy jobs around the house’’), is applicable only for the RMQ.

The two research questions were: whether three common instruments cover the areas in the ICF, and whether the framework was a useful way to determine content.

  1. It was found that yes, all three instruments cover aspects of the ICF – to varying extents. Only one looked at the psychological impact of pain, and none looked at factors such as fatigue that are well-known to be associated with poorer function. Interestingly, none of the measures looked at ‘context’ – for example, ‘attitudes of immediate family members or friends or society are important prognostic determinants for life satisfaction, work performance, and disability in patients with back pain. This also holds true for remunerative work, which is not covered by any of the measures.’
  2. The second question was whether the ICF could be helpful as a framework – one use of this type of comparison work is to create an item bank. Item banks consist of large sets of questions representing various levels of a latent variable that can be used to develop brief, efficient scales for measuring that latent variable. Using Rasch analysis, items that measure the variable of interest can be identified and selected to form a measurement tool that precisely assesses that specific level of function.
  3. The first finding alone is interesting – why have these very important areas of function been ignored? Does this reflect the western idea that ‘the person with the disability’ exists in isolation?
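For the curious, the simplest item response model used in Rasch analysis (the one-parameter logistic) relates a person’s latent trait level to their probability of endorsing an item. This is a purely illustrative sketch, not code or parameters from the article:

```python
import math

def rasch_probability(theta, b):
    """One-parameter logistic (Rasch) model: probability that a person with
    latent trait level theta (e.g. level of disability) endorses an item of
    difficulty b. When theta == b the probability is exactly 0.5."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# A person whose trait level matches the item's difficulty endorses it
# half the time; items 'easier' than the person's level are endorsed more often.
print(rasch_probability(0.0, 0.0))               # matched item
print(round(rasch_probability(1.0, -1.0), 3))    # easy item, high-trait person
```

Item banking then amounts to calibrating many items’ difficulties on a common scale so that short, targeted subsets can be drawn for any particular level of function.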

The final comment I want to make is about the usefulness of this research from a clinical perspective. Key areas that are well-known to be important both to people with pain, and to funders of health care and compensation are not included in three commonly-used assessment tools. Perhaps if these agencies could see their way to fund this type of comparison, it might be possible to develop supplementary measures to ensure this information is available for use in clinical situations.

Sigl, T., Cieza, A., Brockow, T., Chatterji, S., Kostanjsek, N., Stucki, G. (2006). Content comparison of low back pain-specific measures based on the International Classification of Functioning, Disability and Health (ICF). Clinical Journal of Pain, 22(2), 147–153.

World Health Organization. (2001). International Classification of Functioning, Disability and Health: ICF. Geneva: WHO.

Schultz, I. Z., Crook, J. M., Berkowitz, J., et al. (2002). Biopsychosocial multivariate predictive model of occupational low back disability. Spine, 27:

Takeyachi, Y., Konno, S., Otani, K., et al. (2003). Correlation of low back pain with functional status, general health perception, social participation, subjective happiness, patient satisfaction. Spine, 28, 1461–1466.

‘it’s taken over my life’…

Each time I listen to someone who is really finding it hard to cope with his or her pain, I hear the unspoken cry that pain has taken over everything. It can be heartbreaking to hear someone talk about their troubled sleep, poor concentration, difficult relationships, losing their job and ending up feeling out of control and at the mercy of the grim slave-driver we call chronic pain. The impact of pain can be all-pervasive, and it can be hard to work out what the key problems are.

To help break the areas down a little, I’ve been quite arbitrary really. I’m going to explore functional limitations in terms of the following:
1. Movement changes such as mobility (walking), manual handling, personal activities of daily living
2. Disability – participation in usual activities and roles such as grocery shopping, household management, parenting, relationships/intimacy/communication
3. Sleep – because it is such a common problem in pain
4. Work disability – mainly because this is such a complex area
5. Quality of life measures

The two following areas are ones I’ll discuss in a day or so – they’re associated with disability because they mediate the pain experience and disability…as I mentioned yesterday, they’re the ‘suffering’ component of the Loeser ‘rings’ model.
6. Affective impact – things like anxiety, fear, mood, anger that are influenced by thoughts and beliefs about pain and directly influence behaviour
7. Beliefs and attitudes – these mediate behaviour often through mood, but can directly influence behaviour also (especially treatment seeking)

There are so many other areas that could be included as well, but these are some that I think are important.
Before I discuss specific instruments, I want to spend yet more time looking at who and how – and the factors that may influence the usefulness of any assessment measure.

Who should assess these areas? Well, perhaps it’s not who ‘should’, but how these areas can be assessed in a clinical setting.

Most clinicians working in pain management (doctors, psychologists, occupational therapists, physiotherapists, nurses, social workers – have I missed anyone?) will want to know about these areas of disability but will interpret findings in slightly different ways, and perhaps assess by focusing on different aspects of these areas.

As I pointed out yesterday, there are many confounding factors when we start to look at pain assessment, and these need to be borne in mind throughout the assessment process.

How can the functional impact of pain be assessed?

  • Self report, eg interview, questionnaires – and the limitations of these approaches are reliability, validity threats as well as ‘motivation’ or expectancies
  • Observation, either in a ‘natural’ setting such as home or work, or a clinical setting
  • Functional testing, again either in a ‘natural’ setting such as home or work, or a clinical setting – and functional testing can include naturalistic procedures such as the AMPS assessment, formal and structured testing such as the 6 minute walk test, the sock test, or even certain functional capacity tests; or it may be clinical testing such as manual muscle testing or range of movement, or even Waddell’s signs

All self report measures, whether they’re verbal questions, interviews or pen and paper measures, are subject to the problem that they are simply the individual’s own perception of the degree of interference they attribute to pain. The accuracy of this perception can be called into question, especially if the person hasn’t carried out a particular activity recently – but in the end, it is the person’s perception of their abilities.

All measures need to be evaluated in terms of their reliability and validity – how much can we depend on this measure to (1) assess current status (2) contribute to a useful diagnosis (or formulation) (3) provide a basis for treatment decisions (4) evaluate or measure function over time (Dworkin & Sherman, 2001).

Reliability refers to how consistently a measure performs over time, person, clinician.

Validity refers to how well a test actually measures what it says it’s measuring.  The best way to determine validity is against a ‘gold standard’ with which the test can be compared – of course, in pain and functional performance this is not easy, because there is no gold standard!  The closest we can come is a comparison between, for example, a self report on a pen and paper test in a clinic and a naturalistic observation in the person’s home or workplace – when they’re not being observed.

Probably one of the best discussions of these aspects of pain assessment is Chapter 32, written by Dworkin & Sherman, in the 2nd Edition of the Handbook of Pain Assessment (2001, DC Turk & R Melzack, Eds), The Guilford Press.

Importantly for clinicians working in New Zealand, or outside of North America and the UK, the reference group against which the client’s performance is being compared, needs to be somewhat similar to the population the client comes from.  Unfortunately, there are very few assessment instruments that have normative data derived from a New Zealand or Australasian population – and we simply don’t know whether the people seeking treatment in New Zealand are the same on many dimensions as those in North America.

I’m also interested in how well any instruments, whether pen and paper, observation or performance-based assessment translate into the everyday context of the person.  This is a critical aspect of pain assessment validity that hasn’t really been examined well.  For example, the predictive validity (which is what I’m talking about) of functional capacity tests such as Isernhagen, Blankenship or other systems have never been satisfactorily established, despite the extensive reliance on these tests by insurers.

Observation is almost always included in disability assessment. The main problems with observation are:
– there are relatively few formal observation assessments available for routine clinical use
– they do take time to carry out
– maintaining inter-rater reliability over time can be difficult (while people may initially maintain a high level of integrity with the original assessment process, it’s common to ‘drift’ over time, and ‘recalibration’ is rarely carried out)

While it’s tempting to think that observation, and even functional testing, is more ‘objective’ than self report, it’s also important to consider that these are tests of what a person will do rather than what a person can do (performance rather than capacity). As a result, these tests can’t be considered infallible or completely reliable indicators of actual performance in another setting or over a different time period.

Influences on observation or performance-based assessments include:
– the person’s beliefs about the purpose of the test
– the person’s beliefs about his or her pain (for example, the meaning of it such as hurt = harm, and whether they believe they can cope with fluctuations of intensity)
– the time of day, previous activities
– past experience of the testing process

And of course, all the usual validity and reliability issues.
More on this tomorrow, in the meantime you really can’t go far past the 2nd Edition of the Handbook of Pain Assessment 2001 (DC Turk & R Melzack, Eds), The Guilford Press.

Here’s a review of the book when the 2nd Edition was published. And it’s still relevant.

Colour therapy…

With only a small proportion of the people experiencing acute low back pain becoming chronically disabled by their pain, a holy grail of sorts has been to quickly and effectively identify those who need additional help and those who don’t.

The ‘Psychosocial Yellow Flags’, initially developed in New Zealand by Kendall, Linton & Main (1999), provide a useful mnemonic for the factors that have been established as predicting long-term disability – but they require clinicians to be aware of the flags, and to record them. Because the ‘Yellow Flags’ are not ‘objective’ and can’t be summed or scored, there is no way to determine a cut-off point that identifies people only just at risk – so the tendency is to under-estimate those who need more assistance, while many clinicians report that they don’t feel comfortable or confident assessing ‘Yellow Flags’ in a primary health setting.

For those who can’t remember, the risk factors known to be associated with ongoing disability in people with acute low back pain are:

A: Attitudes and beliefs – e.g. catastrophising, a passive approach to rehabilitation, ‘Doctor fix me’, hurt = harm

B: Behaviours – e.g. resting for extended periods, poor sleep, using aids and appliances such as crutches or braces, inappropriate use of medication, self medication

C: Compensation – e.g. difficulty obtaining cover, inadequate or ineffective case management, multiple claims in the past, poor knowledge of what is available for assistance

D: Diagnosis/Doctor or treatment provider effects – unexplained technical language that is misunderstood, multiple diagnoses, multiple investigations, multiple ineffective treatments, assuring a ‘techno-fix’ is available, recommending changing jobs or stopping jobs

E: Emotions – anger, depression, sense of helplessness or feeling out of control, numbed emotions

F: Family and friends – unintentionally reinforcing pain behaviour, unsupportive of returning to work, punishing responses, or being socially isolated

W: Work – an employer who is unsupportive, history of frequent job changes or limited employment history, heavy manual work, monotonous work, high responsibility with limited control, shiftwork, working alone or while isolated, disliking the job

As I mentioned, these factors are well known, and relatively easily recognised. Many people tell me they have an intuitive ‘feel’ for those who will have trouble recovering from ALBP – but feel that if they ask about these factors, they risk ‘opening Pandora’s box’, or being unable to extricate themselves from a complex or emotionally charged situation. Many people don’t feel adequately skilled in managing the issues involved, and would prefer either to let well enough alone, or to quickly refer to someone else (Crawford, Ryan & Shipton, 2007).

Well, I don’t agree with any of those options. Although sometimes people will have ‘saved up’ a lot of their concerns and want to offload with a lot of emotion, for many people it’s a simple case of exploring what their concerns are and problem-solving around the practical issues. A referral to a psychologist or counsellor isn’t always necessary, and can for some people escalate their distress and disability.

What skills can you use to identify and manage ‘Yellow Flags’?

Open-ended questions like ‘How do you feel about your recovery so far?’, ‘Are there any things that concern you about your recovery?’, ‘What do you think is going on in your back?’, ‘What do you think this [diagnosis] means for you?’

Reflective listening demonstrates two things: (1) that you are listening and (2) that you want to understand. It should be used whenever someone begins to display emotional responses. Reflective listening can be simple ‘So from what you’ve said, I think you mean….’, or more complex ‘It seems that you think your boss wants you to go back to full duties and you’re not sure you can. I wonder if you’re feeling really anxious?’ When in doubt, reflect!

Action and responsibility – Then it can be really useful to pose this question: ‘So where does this leave you?’ or ‘What do you think you need to have happen next?’

For many people the step of demonstrating your acceptance of their point of view and understanding their distress is a good start. And many ‘Yellow Flags’ can be simply influenced by the person themselves – just being given permission to think of a solution that they’re ready to do, or being asked whether they want to hear of other options allows the person to feel more in control.

And for slightly more complex situations – such as financial strain from time off work, or difficulty within a relationship because of changed roles – these can be helped by budgeting advice, bringing the partner in to the clinic to be a part of the rehabilitation, or asking the person to think of community-based self-help organisations. Intense psychological therapy isn’t always necessary, and if misguided or not consistent with other messages about engaging in activity despite pain, can impede recovery.

And if you’re REALLY pressed for time, but still want to ‘pick winners and losers’ – a study by Westman, Linton, Ohrvik, Wahlén & Leppert (2007) finds support for the use of the ‘Orebro Musculoskeletal Pain Screening Questionnaire’ (OMPSQ), a 25-item questionnaire covering five groups: function, pain, psychological factors, fear avoidance, and a miscellaneous group that includes things like sick leave, age, gender, nationality, monotonous or heavy work, and job satisfaction. It’s been used extensively in New Zealand in a compensation setting since 1999 as a screening tool to identify those who may be at increased risk of ongoing disability. This study reviews its use in sub-acute pain, with a three-year follow-up to establish the predictive validity of the instrument for sick leave.
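
For readers who like to see the mechanics, the arithmetic behind a screening tool like this is simply a sum of item responses compared against a cutoff. The sketch below is purely illustrative – the item responses, the reverse-scored items and the cutoff are placeholders, not the published OMPSQ scoring key:

```python
# Illustrative sketch of scoring a pain screening questionnaire.
# The reverse-scored items and risk cutoff here are placeholders,
# NOT the published OMPSQ scoring instructions.

def score_questionnaire(responses, reverse_items=(), max_score=10):
    """Sum 0-10 item responses, reversing items whose (0-based)
    indices appear in reverse_items."""
    total = 0
    for i, r in enumerate(responses):
        total += (max_score - r) if i in reverse_items else r
    return total

def risk_band(total, cutoff=105):
    """Map a total score to a coarse risk band (cutoff is illustrative)."""
    return "higher risk" if total >= cutoff else "lower risk"
```

In practice you’d follow the published scoring instructions and use cutoffs validated for your own setting and population.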

The results are very strong – psychosocial factors as measured by the OMPSQ were related to work disability and perceived health even 3 years after treatment in primary care. The screening questionnaire had discriminative power even for patients with non-acute or recurrent pain problems. The OMPSQ had better predictive power than any of the other questionnaires included in the study – the Job Strain measure, the Coping Strategies Questionnaire (CSQ), the Pain Catastrophizing Scale (PCS) and the Tampa Scale for Kinesiophobia (TSK). Among the factors measured, pain and function were the most strongly related to sick leave 3 years later.

Interestingly for me, the study demonstrated that function with focus on daily living, sleep capacity and pain experience had the most powerful predictive value concerning sick leave at 3 years. While earlier studies have shown that emotional and cognitive variables such as distress and fear avoidance beliefs have been strong predictors for 6–12-month outcomes, the best predictor in this study is having problems functioning.

The authors suggest that this probably reflects the length of the follow-up, and that different variables may be predictive at various stages in the process of chronification. Even though it is recognised that psychological variables are influential factors, little is known about how and when these variables interact in the process toward disability. Furthermore, psychological variables might operate differently for different people and at different time points.

So, this particular instrument, which has been widely used at least within New Zealand for many years, is readily available and gives clinicians and others very useful guidance on who might benefit the most from high intensity therapeutic input early.

Westman, A. (2007). Do psychosocial factors predict disability and health at a 3-year follow-up for patients with non-acute musculoskeletal pain? A validation of the Orebro Musculoskeletal Pain Screening Questionnaire. European Journal of Pain. DOI: 10.1016/j.ejpain.2007.10.007

Crawford, C., Ryan, K., & Shipton, E. (2007). Exploring general practitioner identification and management of psychosocial Yellow Flags in acute low back pain. New Zealand Medical Journal, 120(1254), U2536.



There are some very weird and crazy measures out there in pain assessment land… some of them take a little stretch of the imagination to work out how they were selected and what they’re meant to mean in the real world.

Functional measures are especially challenging – given that they are about what a person will do on a given day in a given setting, they are inherently prone to performance variation (which limits test-retest reliability) and can’t really be held up as gold standards in terms of objectivity. Nevertheless, most pain management programmes are asked to provide measures of performance, and over the years I’ve seen quite a few different ones. For example, the ‘how long can you stand on one leg’ timed measure… the ‘sock test’ measure… the ‘pick up a crate from the floor and put it on a table’ measure… the ‘timed 50 m walk test’… the ‘step up test’… – and I could go on.

Some of these tests have normative data for age and gender, some even have standardised instructions (and some of those instructions are even followed!), and some even have predictive validity – but all of these measures raise the question: ‘why?’

I’m not being deliberately contentious here, not really… I think we as clinicians should always ask ‘why’ of ourselves and what we do, and reflect on what we do in light of new evidence over time. At the same time I know that each of us will come up with slightly different answers to the question ‘why’ depending on our professional background, experience, the purpose of the measure, and even our knowledge of scientific methodology. So, given that I’m in a thinking sort of mood, I thought I’d spend a moment or two noting down some of the thoughts I have about measures of function in a pain management setting.

  1. The first thing I’d note is that functional performance is at least in part, a measure of pain behaviour. That is, it’s about what a person is prepared to do, upon request, in a specific setting, at a certain time of day, for a certain purpose. And each person who is asked to carry out a functional task will bring a slightly different context to the functional performance task. For example, one person may want to demonstrate that their pain is ‘really bad’, another may want to ‘fake good’ because their job is on the line, another may be fearful of increased pain or harm and self-limit, while another may be keen to show ‘this new therapist just what it’s like for me with pain’. As a result, there will be variations in performance depending on the instructions given, the beliefs of the person about their pain – and about the way the assessment results will be used, and even on the gender, age and other characteristics of the therapist conducting the testing. And this is normal, and extremely difficult to control.
  2. The second is that the purpose of the functional performance testing must be clear to the therapist and the participant. Let’s look at the purpose of the test for the therapist. Is it to act as a baseline before any intervention is undertaken? Is it to be used diagnostically (i.e. to help assess the performance style or approach to activity that the client has)? Is it to establish whether the participant meets certain performance criteria (e.g. able to sustain manual handling safely in order to carry out a work task)? Is it to help the participant learn something about him or herself (e.g. that this movement is safe, or that this is the baseline and they are expected to improve over time)?  And for the participant? Is this test to demonstrate that they are ‘faking’ (or do they think that’s what it’s about)? Is it to help them test out for themselves whether they are safe? Is it a baseline measure, something to improve on?  Is it something they’ve done before and know how to do, or is it something they’ve not done since before they hurt themselves? You see, I can go on!!
  3. Then the functional measures must be relevant to the purpose of the testing. It’s no use measuring ‘timed get up and go’, for example, if the purpose of the assessment is to determine whether this person with back pain can manage his or her job as a dock worker. Likewise, if it’s to help the person learn about his or her ability to approach a feared task, then it’s not helpful to have a standardised set of measures (unless this is a set that is taken pre-treatment and again at post-treatment). This means the selection of the measures should at least include consideration of predictive validity for the purpose of the test. For example, while a ‘timed get up and go’ may be predictive of falls risk in an elderly population, it may be an inappropriate measure in a young person who is being assessed for hand pain. It’s probably more useful to have a slightly inaccurate measure that measures something relevant than a highly accurate measure that measures something irrelevant. For example, we may know the normative data for (plucking something out of the air here…) ‘standing on one leg’, but unless this predicts something useful in the ‘real world’, then it may be a waste of time.
  4. Once we’ve determined a useful, hopefully predictive measure, then it’s critical that the assessment process is carried out in a standard way. That means the whole process, not just the task itself. What do I mean? Well, there are multiple influences on performance, such as time of day, the presence or absence of other people, and even the way the test is measured (e.g. if it’s timed with a stop-watch, when is the button pushed to start? When is it pushed to stop? Is this documented so everyone carries it out exactly the same way?). There is a phenomenon I call ‘assessment drift’, where the person carrying out the assessment drifts from the original measurement criteria over time. This happens to all of us as we get more experienced, and as we forget the original instructions. Essentially we are a bit like a set of scales – we need to be calibrated just as much as any other piece of equipment. So the entire assessment needs to be documented, right down to the words used and the exact criteria used for each judgement.
  5. And finally, probably for me a plea from the heart – that the measures are recorded, analysed, repeated appropriately, and returned to the participant, along with the interpretation of the findings. This means the person being assessed gains from the process, not just the clinician, or the funder or requester of the assessment.
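
The ‘calibration’ point above can even be made concrete. Here’s a minimal sketch, assuming a clinic periodically has each assessor re-time a reference trial (say, a video) with a known criterion time – the half-second tolerance is an arbitrary illustration, not an established standard:

```python
# Sketch of a periodic "calibration check" for a timed functional test:
# compare an assessor's timings of reference trials against criterion
# timings, and flag drift beyond a tolerance. Tolerance is illustrative.

def calibration_drift(assessor_times, criterion_times):
    """Mean absolute difference (seconds) between an assessor's timings
    and the criterion timings for the same reference trials."""
    diffs = [abs(a - c) for a, c in zip(assessor_times, criterion_times)]
    return sum(diffs) / len(diffs)

def needs_recalibration(assessor_times, criterion_times, tolerance=0.5):
    """True if the assessor has drifted beyond the agreed tolerance."""
    return calibration_drift(assessor_times, criterion_times) > tolerance
```

The design point is simply that assessors, like instruments, need a documented reference to be checked against; what the reference trial and tolerance should be is a decision for each service.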

So over the Easter break (have a good one!), take a moment or two to think about the validity and reliability of the functional assessments you take. Know the confounds that may influence an individual’s performance and try to take these into account when interpreting the findings. Consider why you are using these specific measures, and when you were last ‘calibrated’. Make a resolution: ask yourself ‘what will this measure mean in the real world?’ And if, as I suspect most of us know, your assessments don’t reflect the reality of carrying the groceries in from the boot of the car, or pushing a supermarket trolley around a busy supermarket, or squeezing the pegs above the head to hang out the washing – well, there might be a research project in it!!

‘Faking’ or ‘Malingering’ or ‘Exaggerated Pain Behaviour’

Hot words!!

It’s amazing how often health providers get asked directly or indirectly whether someone experiencing pain is ‘faking’ it. The short answer is the most accurate – we can’t tell. We’re not lie detectors, there is no ‘gold standard’ to work out whether someone is pretending or not, and the question is based on erroneous thinking about pain and pain behaviour.

I can almost feel the spluttering at my last sentence from some readers!

Let’s look at this more closely.

Remember the biopsychosocial model of pain states that the experience of pain and pain behaviour is influenced by three broad groups of factors: the biomedical/biophysical factors such as extent of tissue disruption at the periphery (or site of trauma), neurological changes of transmission and transduction (throughout the peripheral and central nervous system), and disturbance of the neuromatrix.

At the same time, there are psychological factors such as the level of alertness and arousal, attention, past learning, expectations, beliefs, attitudes, mood, contingencies and so on.

And there is also a range of social factors such as the presence or absence of social support, the systems in which the event occurs (such as compensation, availability of health care and technology), cultural expectations, religious beliefs at the same time as the other two factors.

Recall that pain is not the same as pain behaviour – pain behaviour is everything that we do in response to pain, including involuntary physiological responses (flushing, sweating), reflexes (withdrawal), verbal utterances (groans, gasps, requests for help), as well as complex behaviours such as reaching for medication, going to see a doctor, asking for time off work etc.

Pain behaviour is subject to all the usual influences on any behaviour – that is, operant conditioning can be involved, as well as classical conditioning. And pain behaviour has developed from the behaviour we displayed as a baby to the more complex and modulated behaviours we demonstrate now.

So, it’s easy to see that pain behaviours vary hugely between individuals even if the original trauma is exactly the same.

I can understand several things about the question ‘can you tell if he’s faking’. Pain behaviour elicits strong emotions in observers – it’s designed to do just that! It communicates, and something we like to do as humans is work out whether someone is lying or not. The problem is – we’re not very good at telling who is and is not lying (but we like to fool ourselves that we personally don’t fall for liars!).

In a litigation or compensation situation, it would be great to work out exactly ‘how much’ each pain is worth in order to give it a dollar value, and determine compensation. But – pain can’t be measured directly, we have to use pain behaviour as the next best thing – and pain behaviours are influenced by a whole lot of things. So it’s not a very reliable measure.

Another reason for wanting to know ‘is he faking’ is how far to ‘push’ the person into doing more. The underlying concern is ‘will I cause harm’. And again we really don’t have any useful measure if we try to have pain or pain behaviour as our guide. We need to use something else – radiological union perhaps, control of a load, heart rate and respiration.

BUT the question is based on the assumption that there ‘should’ be a certain amount of pain behaviour for a certain amount (or length of time since) tissue damage. And there simply isn’t.

Some allied questions….‘can’t you use functional capacity testing to work out whether someone’s faking?’ No – sorry. A functional assessment, just like any physical examination or test, tells you what the person will do, and perhaps how consistently they will do it – today. Few, if any, FCEs have demonstrated predictive validity – that is, they don’t accurately predict how much someone will or won’t do in a day-to-day ‘real’ situation; in fact, they won’t tell you what the person can and cannot do at all.

What does this ‘consistency of effort’ tell me? Just that – how consistent this person carried out this activity this time. It doesn’t predict anything, and certainly doesn’t tell whether someone is ‘faking’. People vary in their consistency of performance depending on: their initial expectation of the activity (it may have been harder or easier than they first thought); their prediction of the effect of doing that activity (they may predict they will have an increase in pain – and therefore reduce effort, or perhaps increase effort in order to convince the examiner that they are trying hard); their fear/anxiety may vary throughout an activity; their past experiences may influence what they are prepared to do. Even when given the same instructions ‘use maximal effort’ – if a person hasn’t done anything very physical for a while, ‘maximal effort’ may be hard for them to predict. Some may even be ‘saving themselves’ for activities later in the assessment battery.
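
For what it’s worth, the ‘consistency of effort’ indices reported by some FCEs are often little more than a coefficient of variation across repeated trials. Here’s a minimal sketch (the grip-strength framing is an illustrative assumption – and, as argued above, neither a low nor a high CV tells you whether someone is faking):

```python
import statistics

def coefficient_of_variation(trials):
    """Coefficient of variation (%) across repeated effort trials,
    e.g. three grip-strength attempts in kilograms."""
    mean = statistics.mean(trials)
    return statistics.stdev(trials) / mean * 100

# Three hypothetical grip-strength trials (kg): sample SD 10, mean 50
cv = coefficient_of_variation([40, 50, 60])  # 20.0 (%)
```

Exactly the same CV can arise from fatigue, fear of pain, or unfamiliarity with the task – which is the point of the paragraph above.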

But surely some people do fake! Yes – but it’s not a health or medical matter. It’s just not helpful to work out whether someone is or isn’t faking. What happens if you do somehow detect ‘faking’? Confront the person? Take their health care away? Tell them to pull themselves together?

It’s more helpful to think about what factors might be initiating and then maintaining this behaviour – then start to work on these variables to promote change.

But I’ve seen even eminent researchers use the term ‘exaggerated illness behaviour’. Yes, well, even eminent researchers can be mistaken! All that we can observe is that this person behaves in this way at this time in this setting, and the person attributes the behaviour to pain (or illness). We can only identify all the possible factors that are contributing to the maintenance of the behaviour, elicit from the person the not so good things about the situation they find themselves in (and acknowledge the good things about their situation), and help increase the importance they place on making changes, and support their confidence to start to make changes. This might mean leveraging off contingencies (reducing compensation, withdrawing spouse support) if these things are maintaining the behaviour – but it may also mean simply resolving ambivalence about the positives of moving forward.

Malingering? Faking? Exaggerating? When someone can tell me why yellow is better than blue, or find a measure of the ‘chocolateness’ of chocolate and the banana-ness of a banana, perhaps we may have found an objective pain measure. Until then, don’t ask me to work out whether someone is faking it, just ask me to help them move forward.

Oh, and just for fun – how many different words are used to suggest that someone is ‘faking’?

– functional overlay

– supratentorial factors

– a ‘genuine’ man (as opposed to a fake one, or one that is faking)

– adopting the ‘sick role’ (if someone believes they are sick/unwell, what can we expect? How many people do we see ‘adopting the well role’?)

– demonstration of non-organic signs (now that one’s actually interesting, as Gordon Waddell states very clearly in his book ‘The Back Pain Revolution’ (2nd Ed), he never intended the term to mean anything other than to suggest the person was experiencing increased psychological distress – and NEVER to be used as a means of detecting faking!)

This paper by Michael Sullivan is a little philosophical, but at the same time illustrates the points I’ve tried to make above.
This reference is from the 2001 version of the New Zealand ‘Yellow Flags’ document on acute low back pain management.

It’s actually quite hard to come up with good (quality, evidence-based) references on ‘malingering’. By far the majority of articles I located using Google, searching on the terms ‘malingering pain behaviour’, suggested that somehow ‘medical people’ or ‘psychologists’ or ‘psychiatrists’ using special tests can identify malingerers. Someone please show me the ‘special test’!! This section of the IASP Core Curriculum should put to rest this sad aspect of the management of people experiencing persistent pain. Detection of malingering is best left up to private investigators, leaving health care providers to the really difficult work of helping people recognise that change is possible, desirable and important.

For my most recent post on why people might report differently on self-report than in physical performance tests – click here