What do people want from pain management?


The short answer is often “take my pain away” – and we’d be foolish to ignore the impact of pain intensity on distress and disability. At the same time there’s more than enough research showing that if treatment only emphasises pain intensity (1) it may not be achievable for many, especially if we take into account the small effect sizes on pain intensity from exercise, medications and psychological therapies; and (2) even if pain is reduced, it may not translate into improvements in daily life.

The slightly more complex answer lies behind the desire to “take my pain away.” We need to be less superficial in our responses to this simple answer – and take a hard look at what people believe pain represents to them, and what they want to be able to do if pain is reduced.

A paper in the current issue of Pain piqued my interest as the authors explored what people with ongoing pain chose as treatments when given the choice. The paper itself is a systematic review of research papers using discrete choice experiments to determine preferences of people with pain when deciding on treatment.

Discrete choice experiments assume that treatments can be described by their important features, such as where therapy is administered, how often, the target outcome, adverse effects and so on. The approach also assumes that people make choices based on their personal weighting or the value they place on those features. As the experiment progresses, participants are asked to weight each attribute and choose their preferences as they gradually narrow the number of choices. (This open access paper outlines DCE in health in a little more detail – click, or you can take a look at this YouTube video summarising DCE – click).
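If you’re code-minded, here’s a tiny sketch of the utility-weighting logic that sits underneath a DCE. This is entirely my own illustration with made-up attribute weights and treatment profiles (nothing from the paper): each option is described by its attribute levels, each attribute gets a weight, and the probability of choosing one option over another follows from the difference in total utility. In a real DCE the weights run the other way – they’re estimated from many observed choices, typically with a conditional logit model – but running the logic forwards shows the idea.

```python
import math

# Hypothetical attribute weights for one respondent (invented for illustration):
# positive = valued, negative = avoided.
weights = {
    "pain_reduction": 1.2,       # utility per unit of expected pain relief
    "adverse_event_risk": -2.0,  # disutility per unit of risk
    "daily_life_capacity": 1.5,  # utility of being able to do valued activities
    "out_of_pocket_cost": -0.8,  # disutility per unit of cost
}

# Two hypothetical treatment profiles the respondent is asked to choose between.
option_a = {"pain_reduction": 0.6, "adverse_event_risk": 0.3,
            "daily_life_capacity": 0.4, "out_of_pocket_cost": 1.0}
option_b = {"pain_reduction": 0.3, "adverse_event_risk": 0.1,
            "daily_life_capacity": 0.8, "out_of_pocket_cost": 0.5}

def utility(option):
    """Linear additive utility: sum of attribute level times attribute weight."""
    return sum(weights[name] * level for name, level in option.items())

u_a, u_b = utility(option_a), utility(option_b)
# Logit choice rule: probability of choosing A over B given the utility difference.
p_choose_a = math.exp(u_a) / (math.exp(u_a) + math.exp(u_b))
print(f"Utility A = {u_a:.2f}, utility B = {u_b:.2f}, P(choose A) = {p_choose_a:.2f}")
```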

Now there are some issues I have about this approach, because it also assumes that people make logical choices, that they have freedom to choose independently of other influences (like medico-legal requirements or cultural factors), and it also assumes that people make decisions in the same way that economic modeling finds – and I’m not so sure of that! Having said this, the methodology does shed some light on what people might value provided these assumptions hold true.

Following a systematic search of the databases, the authors identified 51 studies published between 2004 and 2021, with a total of 4065 participants. Most of the studies looked at low back pain and/or osteoarthritis (high prevalence = lots of participants = easy to access). When analysing the attributes participants were asked to choose from, the authors identified the following (not all listed):

  • Capacity to realise daily life activities – walking, domestic activities, social activities, activities of daily living, difficulties doing daily tasks etc
  • Risk of adverse events – side effects, cardiovascular events, upper gastrointestinal problems etc
  • Effectiveness on pain reduction – maximum pain intensity, improvement in pain, pain intensity, reduction in pain etc
  • Out of pocket costs – direct payment, premium reduction, cost etc
  • Treatment frequency – schedule, frequency, time
  • Onset of treatment efficacy – waiting time for effect, time before able to exercise
  • Design – individual, group, supervised
  • Travel time
  • Relapse risk
  • Duration of effectiveness

What did they find?

Unsurprisingly, they identified that reduced pain was highly desired, and again, unsurprisingly, they found that the risk of adverse events was pretty darned important. What might be surprising is that the capacity to realise daily life activities was the third most frequently rated attribute! In other words, while pain reduction and avoiding harmful effects were important, the capacity to do what matters is absolutely crucial!

Something I found rather interesting, though, is located deep in the manuscript: neither psychological interventions nor manual therapy have been investigated with this methodology. Now that is odd. And something that sorely needs to be examined because, at least in New Zealand, ‘psychology’ for pain is (almost) obligatory for pain programmes, at least those provided under the auspices of our national compensation organisation. What this means is, we don’t know whether people would choose psychological approaches over other forms of treatment for pain… and isn’t it time we did?

The authors point out that IMMPACT (Initiative on Methods, Measurement, and Pain Assessment in Clinical Trials) recommends six core outcomes when evaluating the effectiveness of treatments for chronic pain. These are pain, physical function, emotional functioning, participant ratings of global improvement and satisfaction with treatment, adverse events, and participant disposition. Interestingly, there’s no specific mention of enhanced capacity to do daily life – it’s assumed, I suppose, that improved physical and emotional functioning translate to improved daily life, but they’re not a direct equivalent (it’s an assumption, right?). Given the differences found between what people do in a treatment setting and what they actually do in their own life contexts, maybe this is something we should pay far more attention to.

I also note that the attributes don’t include the need to adopt lifelong changes in routines, choices, activities and participation. Things like exercise, for example, along with medications, often need to be carried out over long periods of time – years, even. And research rarely manages to follow people over long periods because it’s very expensive and people drop out. And yet – this is exactly what people with pain must do.

Sensibly, the authors also point out that people at different life ages and stages may make different choices. If I’m nearing the end of my life, I might be more willing to ‘take the risk’ of an adverse event over the need to make long-lasting changes to my daily routine – the quick fix beckons! At the same time, I’m intrigued that something clinicians consistently complain about – the desire people have for ‘quick fixes’ or immediate results – ranked relatively low on the frequency table, at about a third of the ranking frequency. It’s the hope that treatment will enable people to do what matters in their life that seems so important! Who would have guessed…

Now my question is: do currently popular treatments (at least in New Zealand) like exercise and ‘psychological therapies’ have a useful impact on what people with pain rate so highly? Do they actually translate into enhanced capacity to engage in what matters to individuals? If they do – how is this measured? Does a ‘disability’ measure capture what’s important? Does a ‘quality of life’ measure do that well? When I value being able to do some things that really matter to me, but don’t matter to my partner or my next-door-neighbour, are we measuring these individual differences? And in what contexts? I might be happy to compromise on my ability to walk quickly over rough ground in the weekend, but what about my willingness to compromise on my walking at work? How about my ability to sit? What if I’m OK sitting with a soft cushion under my butt at home, but can’t carry that thing around with me to work or the movies or the restaurant or church?

Daily life activities are THE area of expertise of occupational therapists. If being able to do daily life is what people want, why oh why are so few occupational therapists included in pain programmes – even at a tertiary level provider here in my home city? Come on, let’s get real about what occupational therapists know about! (end of rant!).

Zhu, M., Dong, D., Lo, H. H., Wong, S. Y., Mo, P. K., & Sit, R. W. (2022). Patient preferences in the treatment of chronic musculoskeletal pain: a systematic review of discrete choice experiments. Pain, 164(4), 675-689. https://doi.org/10.1097/j.pain.0000000000002775

N of 1 studies – great examples


It’s true that ‘unconventional’ studies of any kind don’t get published as readily as conventional RCTs, even if those RCTs are under-powered, have errors in their construction and don’t tell us much of anything. Grrr. Publishing studies from my PhD has been fraught because I chose a form of grounded theory that doesn’t conform to the conventional constructivist or Straussian approach. What, then, are we to do?

Two things strike me: first, we always need to select a research method that gives us the best answer to our research question, not something that will ‘get published’ easily. There are many research questions and RCTs simply don’t answer them all. A quantitative method doesn’t lend itself to ‘why’ questions and inevitably requires assumptions about the factors thought to be relevant, the measurement strategy, and the underlying theory explaining what’s going on. This doesn’t really help us when we have a new field of study to look at, where there is no clear theoretical explanation, and where measures don’t measure what’s relevant. Hence we draw on different designs like mixed methods and qualitative approaches. From a pragmatic perspective, the numbers needed for an RCT are much greater than most clinicians can find unless they’re working in a large research setting (and have a bit of funding!). Nevertheless, ‘pilot’ studies using RCT methods do get published even when they don’t have huge explanatory power, partly because they’re familiar to the reviewers.

The second thing that strikes me is: we need to have good exemplars. These give us a template of sorts to learn how to conduct good research, how to communicate why a particular ‘unconventional’ method is the best way to answer the question, and how to write the results/findings in a way that is compelling.

I’ve written before about the failure of much research in human behaviour and experience to recognise that the ergodicity assumption is violated in grouped statistics. This means we can deeply question the results as they apply to the person we see in the clinic. Ergodicity implies that all people in a group will ultimately follow the same trajectory, develop in the same way over the same time, respond to treatment in the same way and follow the same processes. But clinicians know that some people respond very quickly to a component in a programme, while others don’t.
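To make that concrete, here’s a little simulation of my own (all the numbers are invented, and it’s a sketch rather than anything definitive): the group-average pain trajectory declines smoothly, yet a fair number of the simulated individuals don’t improve at all – the mean describes nobody in particular.

```python
import numpy as np

rng = np.random.default_rng(42)
n_people, n_weeks = 50, 12
weeks = np.arange(n_weeks)

# Each simulated person gets their own rate of change: some improve quickly,
# some slowly, some get worse (values invented purely for illustration).
individual_slopes = rng.normal(loc=-0.3, scale=0.4, size=n_people)
noise = rng.normal(scale=0.8, size=(n_people, n_weeks))

# Pain (0-10): a person-specific trend plus week-to-week noise, clipped to the scale.
pain = np.clip(7 + individual_slopes[:, None] * weeks[None, :] + noise, 0, 10)

group_mean = pain.mean(axis=0)
print("Group mean pain by week:", np.round(group_mean, 1))

# The group curve declines steadily - but how many individuals actually followed it?
improved = int((pain[:, -1] < pain[:, 0]).sum())
print(f"{improved}/{n_people} individuals ended lower than they started")
```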

I recently found this example from Tarko (2005), cited in Lowie & Verspoor (2019).

OK, ’nuff said. Ergodicity matters.

Choosing the right research strategy begins with having a good research question, and most clinicians have a very good research question: what is the best treatment I can offer this person presenting in this way at this time? The follow-up question is: is this treatment helping? Or, to be more precise, which components of my treatment are helping?

It’s this question that N=1 or single case experimental designs are intended to answer, and they do it very well.

Here are some great examples of published studies using intensive repeated measures – and we need more of these!

Lydon-Staley, D. M., Zurn, P., & Bassett, D. S. (2020). Within-person variability in curiosity during daily life and associations with well-being. Journal of Personality, 88(4), 625-641. https://doi.org/10.1111/jopy.12515

I included this one because it’s not about pain! And yet it sheds light on something important in pain management. Curiosity is about being intrigued by novel, unfamiliar situations. Curiosity doesn’t flourish when a person is anxious, but does when people are wanting to increase their knowledge and skills, and it’s associated with greater well-being. So it’s something clinicians might want to foster – especially for someone who has become afraid to rely on their body and body sensations. In this study, people were asked to complete a daily diary and do some internet browsing (yay! my favourite thing to do!). After some fairly complex statistical analysis (described in good detail in this paper), the results from 167 people who completed 21 days of daily diary measures and a one-off set of measures showed that being consistently curious is associated with feeling good – AND that doing physical movement practices might enhance curiosity via improving mood. Now that’s worth knowing.

Mun, C. J., Thummala, K., Davis, M. C., Karoly, P., Tennen, H., & Zautra, A. J. (2017). Predictors and social consequences of daily pain expectancy among adults with chronic pain. Pain, 158(7), 1224-1233. http://dx.doi.org/10.1097/j.pain.0000000000000903

Now this study is a big one – 231 people in study one, and 220 people in study two. Cutting to the chase, these researchers found that people who expected high pain in the evening experienced greater pain the next day, even when controlling for current pain intensity. The study also found that morning pain predicted next afternoon social enjoyment but not social stress. And what this means is…. clinicians need to promote joy/fun/positive affect, and to help people reduce their expectations that their pain will inevitably increase or ‘be bad’ – it’s anticipation that seems to influence pain intensity and avoidance. These study designs allow researchers to tease apart the factors contributing to experiences over time. We need more of them!

Hollander, M. D., de Jong, J., Onghena, P., & Vlaeyen, J. W. S. (2020). Generalization of exposure in vivo in Complex Regional Pain Syndrome type I. Behaviour Research and Therapy, 124. https://doi.org/10.1016/j.brat.2019.103511

And from a large study to a much smaller one with – wait for it – 8 participants! That’s more like the numbers we see in clinic, right? This study examined whether it’s more fruitful to expose people to many activities they’ve previously avoided, or instead, to limit the number of activities each person was exposed to. This is SUCH an important component of therapy where people have avoided doing things that bother them, because they anticipate either that their pain will reach intolerable levels (or interfere with other important things like sleep) or that they’ll do harm to themselves. Why? Because doing things in one safe space is not life. We do lots of activities in lots of different spaces, most of them unpredictable, and we don’t have a ‘safe person’ to rely on. It’s perhaps one of the reasons exercise carried out in a gym might not transfer into greater reductions in disability in daily life – and why involving occupational therapists in pain management as ‘knowledge translation experts’ is such a good thing.

Caneiro, J. P., Smith, A., Rabey, M., Moseley, G. L., & O’Sullivan, P. (2017). Process of Change in Pain-Related Fear: Clinical Insights From a Single Case Report of Persistent Back Pain Managed With Cognitive Functional Therapy. Journal of Orthopaedic & Sports Physical Therapy, 47(9), 637-651. https://doi.org/10.2519/jospt.2017.7371

Lucky last – a single case study exploring the process of change experienced by one person undergoing cognitive functional therapy. While recent meta-analyses suggest CFT is ‘no better’ than any other treatment for people with persistent pain, what meta-analyses can’t show is those for whom it’s highly effective. Why? Because individual responses don’t show up in meta-analyses, and the mean or even the confidence intervals don’t show those people who do extremely well – or those who don’t do well at all. And yet as clinicians, we deal with each individual.

Now I chose these four studies because they are all published in highly respected and ‘highly ranked’ journals. I don’t care a fig about the supposed rank of a journal, but there’s no doubt that getting into one of these journals requires research of a very good standard. And somehow these ones snuck through!

Am I suggesting that RCTs shouldn’t feature in research? No – but I do think a much more careful analysis of these is needed, so we can understand the golden question: what works for whom, and when? And to answer these questions we need far more detailed analysis. Oh – and evidence-based healthcare has always been a synthesis of THREE elements – research, yes, plus the clinician’s experience AND the person’s preferences and values. ALL THREE, not just ‘research’ – and within research, not just RCTs.

Lowie, W. M., & Verspoor, M. H. (2019). Individual Differences and the Ergodicity Problem. Language Learning, 69, 184-206. https://doi.org/10.1111/lang.12324

Tarko, V. (2005, December 29). What is ergodicity? Individual behavior and ensembles. Softpedia News. Retrieved from https://news.softpedia.com/news/What-is-ergodicity-15686.shtml

“N-of-1” research – A clinically relevant research strategy!


I’ve been banging on about single case experimental research designs (SCED) ever since I studied with Prof Neville Blampied at the University of Canterbury. Prof Blampied (now retired) was enthusiastic about this approach because it allows clinicians to scientifically test whether an intervention has an effect in an individual – but he took it further with a very cool graphical analysis that allows multiple cases to be studied and plotted using the modified Brinley Plot (Blampied, 2017), and I’ll be discussing it later in this series. Suffice to say, I love this approach to research because it allows clinicians to study what happens when the group of participants is quite unique, so RCTs can’t readily be conducted. For example, people living with CRPS!
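As a teaser for that later post, here’s roughly what the idea looks like in code – a bare-bones sketch with invented scores (Blampied’s modified version adds refinements such as reliable-change criteria that I’m not reproducing here). Each person’s pre-treatment score is plotted against their post-treatment score, with the diagonal marking ‘no change’, so every dot is an individual rather than a group mean.

```python
import numpy as np
import matplotlib.pyplot as plt

# Invented pre/post disability scores for eight individual clients (illustration only).
pre = np.array([42, 55, 61, 38, 70, 48, 66, 52])
post = np.array([30, 50, 40, 39, 52, 31, 64, 45])

fig, ax = plt.subplots()
ax.scatter(pre, post)

# Diagonal reference line: any point on it means "no change from pre to post".
lims = [25, 75]
ax.plot(lims, lims, linestyle="--", label="no change (pre = post)")
ax.set_xlim(lims)
ax.set_ylim(lims)
ax.set_xlabel("Pre-treatment disability score")
ax.set_ylabel("Post-treatment disability score")
ax.set_title("Sketch of a Brinley-style pre/post plot")
ax.legend()
plt.show()

# Points below the diagonal improved (lower disability after treatment);
# points on or above it did not - each dot is one person, not a group average.
```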

Krasny-Pacini & Evans (2018) make the case that SCED are useful when:

1. Evaluating the efficacy of a current intervention for one particular patient in daily clinical practice, to provide the best treatment based on evidence rather than clinical impressions;
2. Conducting research in a clinical rehabilitation setting (outside a research team) with a single patient or a few patients;
3. Piloting a novel intervention, or the application/modification of a known intervention to an atypical case or a different condition/type of patient from the one the intervention was originally designed for;
4. Investigating which part of an intervention package is effective;
5. Working with rare conditions or unusual targets of intervention, for which there would never be enough patients for a group study;
6. When it is impossible to obtain a homogeneous sample of patients for a group study;
7. When time is limited (e.g. a study needing to be completed within 8 months, say for a masters degree) or funding doesn’t allow recruitment of a group.

So let’s think of how we might go about doing a single case experiment in the clinic.

First step, we need to think hard about what we want to measure. It’s not likely you’ll find an already-developed measure that is tailored to both the person and the treatment you want to use. There are key characteristics for this measure that you’ll need to consider (these come from the SCRIBE guidelines – see Tate, et al., 2016). You’ll want target behaviours that are “relevant to the behaviour in question and that best match the intervention as well as accurate in their measurement”, “specific, observable and replicable”, and for which “inter-observer agreement on the target behaviour is needed”.

You’ll also want to think of the burden on the person completing the measures, because mostly these will be carried out intensively over a day/week or even a therapy session.

Some examples, drawn from the Krasny-Pacini & Evans (2018) paper include:

  • the number of steps a person does in a day
  • time it takes to get dressed
  • VAS for pain
  • self-rated confidence and satisfaction with an activity
  • Goal attainment scale (patient-specific goals rated on a scale between -2 and +2) – this link takes you to a manual for using GAS [click]
  • the time a person heads to bed, and the time they wake up and get out of bed

You can choose when to do the measurements, but because one of our aims is to generalise the learning, I think it’s useful to ask the person to complete these daily.

You’ll also need to include a control measure – these are measures that aren’t expected to change as a result of your therapy but are affected by the problem and help to demonstrate that progress is about the therapy and not just natural progression or regression to the mean, or attention etc. For example, if you’re looking at helping someone develop a regular bedtime and wakeup time, you might want to measure the time they have breakfast, or the number of steps they do in a day.

Generalisation measures are really important in rehabilitation because, after all, we hope that what we do in our therapy will have an effect on daily life outside of therapy! These measures should assess the intervention’s effect on ‘untrained’ tasks – for example, we could measure self-rated confidence and satisfaction with driving or walking if we’ve been focusing on activity management (pacing). We’d hope that by using pacing and planning, the person would feel more confident to drive places because they have more energy and less pain. It’s not as necessary to take generalisation measures as often as the target behaviour, although that can be an option; alternatively you could measure pre and post – and of course, at follow-up.

Procedural data are measures that show when a person implements the intervention, and these show the relationship between the intervention and the target we hope to influence. So, if we’ve used something like a mindfulness exercise before bed, we hope the intervention might reduce worry and the person will wake feeling refreshed, so we’d monitor (a) that they’ve done the mindfulness that night; (b) that they feel less worried in the morning; and (c) that they wake feeling refreshed. All of these can be measured using a simple yes/no (for the mindfulness), and a 0 – 10 numeric rating scale with appropriate anchors (for less worry, and feeling refreshed).

If you’re starting to think what you could measure – try one of these yourself! Start by deciding what you’d like to change, for example, feeling less worried. Decide on the intervention, for example using a mindfulness activity at night. Add in a measure of ‘feeling refreshed’. Keep a notepad by your bed and each night, record whether you did the mindfulness activity, then in the morning record your level of worry 0 = not at all worried, 10 = extremely worried; and record your feeling of refreshment 0 = not at all refreshed, 10 = incredibly refreshed.

If you want to, you can set up a Google Form, and graph your results for each day. At the end of each day you could include a note about how stressful your day has been as another measurement to add to the mix.
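If spreadsheets aren’t your thing, here’s a rough sketch of how the same diary could be tabulated and graphed in Python. The numbers and column names are entirely made up – they just follow the measures described above: worry and refreshment ratings plotted day by day, with the days the mindfulness exercise was actually done shaded in as the procedural data.

```python
import pandas as pd
import matplotlib.pyplot as plt

# A week of invented diary entries: did the mindfulness exercise (1 = yes),
# morning worry (0-10), and feeling refreshed on waking (0-10).
diary = pd.DataFrame({
    "date": pd.date_range("2023-02-01", periods=7, freq="D"),
    "mindfulness_done": [0, 0, 1, 1, 0, 1, 1],
    "worry_0_10": [7, 6, 4, 3, 6, 3, 2],
    "refreshed_0_10": [3, 4, 6, 7, 4, 7, 8],
})

fig, ax = plt.subplots()
ax.plot(diary["date"], diary["worry_0_10"], marker="o", label="Morning worry")
ax.plot(diary["date"], diary["refreshed_0_10"], marker="s", label="Feeling refreshed")

# Shade the days on which the mindfulness exercise was done (the procedural data).
for _, row in diary.iterrows():
    if row["mindfulness_done"]:
        ax.axvspan(row["date"], row["date"] + pd.Timedelta(hours=23), alpha=0.15)

ax.set_ylabel("0-10 rating")
ax.legend()
fig.autofmt_xdate()
plt.show()
```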

For patients, using text messaging is really helpful – if you have a clinic SMS service, you could use this to send the text messages to your client and they can text back. Many of the SMS services can automatically record a client’s response, and this makes it easy to monitor their progress (and yours if you want to try it out!).

There are some other designs you can use – and remember, you’ll usually want to record a baseline period where you don’t use the intervention. As a start, do this for at least a week/seven days; you’re looking to establish any patterns so that when you introduce the intervention you can distinguish between random variation across a week and change that occurs in response to your therapy.
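Once you have a baseline and an intervention phase, even very simple numbers help the eyeballing. Here’s a minimal sketch (invented ratings again) comparing the two phases and counting how many intervention days sit below the best baseline day – a simple non-overlap idea often used in visual analysis of single case data.

```python
import numpy as np

# Invented daily worry ratings (0-10): one week of baseline with no intervention,
# then two weeks using the nightly mindfulness exercise.
baseline = np.array([7, 6, 7, 5, 6, 7, 6])
intervention = np.array([6, 5, 5, 4, 4, 3, 4, 3, 4, 3, 3, 2, 3, 3])

print(f"Baseline mean {baseline.mean():.1f} (SD {baseline.std(ddof=1):.1f})")
print(f"Intervention mean {intervention.mean():.1f} (SD {intervention.std(ddof=1):.1f})")

# Non-overlap check: what proportion of intervention days are better (lower worry)
# than the best day recorded during baseline?
proportion_below = (intervention < baseline.min()).mean()
print(f"{proportion_below:.0%} of intervention days beat the best baseline day")
```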

Have a go – and let me know how it works for you!

Blampied, N. M. (2017). Analyzing Therapeutic Change Using Modified Brinley Plots: History, Construction, and Interpretation. Behavior Therapy, 48(1), 115-127. https://doi.org/10.1016/j.beth.2016.09.002

Krasny-Pacini, A., & Evans, J. (2018). Single-case experimental designs to assess intervention effectiveness in rehabilitation: A practical guide. Annals of Physical & Rehabilitation Medicine, 61(3), 164-179. https://doi.org/10.1016/j.rehab.2017.12.002

Tate, R. L., Perdices, M., Rosenkoetter, U., McDonald, S., Togher, L., Shadish, W., Horner, R., Kratochwill, T., Barlow, D. H., Kazdin, A., Sampson, M., Shamseer, L., & Vohra, S. (2016). The Single-Case Reporting Guideline In BEhavioural Interventions (SCRIBE) 2016: Explanation and elaboration. Archives of Scientific Psychology, 4(1), 10-31. https://doi.org/10.1037/arc0000027

Course open for pre-enrolments!


Pain Concepts for Occupational Therapists course is now open for pre-enrolments! Click here for more information and to enrol [click]

Please note: I am not GST registered, so there is no GST on the fee for this course.

I will issue tax receipts on request – please email me if you need one.

There are two payment options: Paypal and Stripe (your credit card). If you want a separate invoice because your work is paying for you, please email me direct. There will be an additional NZ$20 administration fee for this.

Remember: this course is limited to 20 participants at a time so you get the best learning experience, so be in quick!

If a rose is a rose by any other name, how should we study treatment processes in pain management & rehabilitation?


A new instalment in my series about intensive longitudinal studies, aka ecological momentary assessment (and a host of other names for methods used to study daily life in real time in the real world).

Daily life is the focus of occupational therapy – doing what needs to be done, or what a person wants to do, in everyday life. It’s complex because, unlike a laboratory (or a large, well-controlled randomised controlled trial), daily life is messy and there is no way to control all the interacting factors that influence why a person does what they do. A technical term for the processes involved is microtemporality, or the relationships between factors over the short term – hours or days.

For example, let’s take the effect of a cup of coffee on my alertness when writing each day. I get up in the morning, feeling sluggish and not very coherent. I make that first delicious cup of coffee, slurp it down while I read the news headlines, and about 20 minutes later I start feeling a lot perkier and get cracking on my writing. Over the morning, my pep drops and I grab another cup, or go for a brief walk, or catch up with a friend, and once again I feel energised.

If I wanted to see the effect of coffee on alertness I could do a RCT, making the conditions standard for all participants, controlling for the hours of sleep they had, giving them all a standard dose of caffeine and a standard cognitive test. Provided I have chosen people at random, so the chance of being in either the control group (who got the Devil’s drink, decaffeinated pseudo-coffee) or the experimental group was a toss of the coin, and provided we assume that anyone who has coffee will respond in the same way, and the tests were all equally valid and reliable, and the testing context is something like the world participants will be in, the results ought to tell us two things: (1) we can safely reject the null hypothesis (that there is no difference between decaffeinated coffee and real coffee on alertness) and (2) we can generalise from the results to what happens in the real world.
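For the statistically curious, that group-comparison logic looks something like this in code – a toy simulation with invented scores and group sizes, nothing more: randomise people to decaf or real coffee, test alertness once, and run a t-test against the null hypothesis of no difference.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Invented alertness test scores: the caffeinated group scores a little higher on average.
decaf = rng.normal(loc=50, scale=10, size=60)      # control: the Devil's pseudo-coffee
caffeine = rng.normal(loc=55, scale=10, size=60)   # experimental: the real thing

t_stat, p_value = stats.ttest_ind(caffeine, decaf)
print(f"Mean difference = {caffeine.mean() - decaf.mean():.1f} points, "
      f"t = {t_stat:.2f}, p = {p_value:.3f}")

# A small p-value lets us reject the null hypothesis of "no difference" for the groups,
# but it says nothing about where any one individual sits within that spread.
```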

Now of course, this is how most of our research is carried out (or the ‘trustworthy’ research we rely on) – but what it doesn’t tell us as occupational therapists is whether this person in front of me will be in the very top or bottom of the bell curve in their response, and whether this will have any impact on what they need to do today.

For this unique person, we might choose another method, because we’re dealing only with this one person, not the rest of the population, and we’re interested in the real world impact of coffee on this individual’s feelings of alertness. We can choose a single case experimental design, where we ask the person to rate their alertness four or five times every day while they go about their usual daily life. We do this for long enough to see any patterns in their level of alertness ratings, and to be satisfied that we’re observing their ‘normal’. During this time we don’t ask them to change their coffee drinking habits, but we do ask them to record their intake.

Then we get nasty: we give them the Devil’s decaf instead of the real deliciousness, but we do this without them knowing! So it looks just the same as the real thing, comes in the same container with the same labelling, and we hope it has the same delicious flavour. We ask them to carry on drinking as normal, and rating their alertness levels four or five times every day, and we do this for another two weeks. The only things we need to watch carefully for are that they don’t suspect a thing, and that their daily life doesn’t change (that’s why we do a baseline first).

Just because we’re a bit obsessed, and because we’re interested in the real world impact, we sneakily switch out the rubbish decaf and replace it with the real thing – again without the person knowing – and we get them to carry on recording. If we’re really obsessed, we can switch the real thing out after two weeks, and replace with the pseudo coffee, and rinse and repeat.

Now in this example we’re only recording two things: the self-reported level of alertness, and whether it’s the real coffee or not (but the person doesn’t suspect a thing, so doesn’t know we’ve been so incredibly devious).

We can then draw up some cool graphs to show the level of alertness changes over the course of each day, and with and without the real coffee. Just by eyeballing the graphs we can probably tell what’s going on…
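Here’s a sketch of what those graphs might look like if you wanted to mock them up (all numbers invented): daily mean alertness plotted across the four phases – real coffee, decaf, real coffee, decaf – with dashed lines marking each sneaky switch.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(7)

# Invented daily mean alertness ratings (0-10) across four two-week phases:
# real coffee (baseline), decaf, real coffee again, decaf again.
phases = ["real", "decaf", "real", "decaf"]
days_per_phase = 14
phase_means = {"real": 7.0, "decaf": 4.5}

alertness = np.concatenate(
    [rng.normal(phase_means[p], 0.8, days_per_phase) for p in phases]
)
days = np.arange(len(alertness))

fig, ax = plt.subplots()
ax.plot(days, alertness, marker="o")
for i in range(1, len(phases)):
    ax.axvline(i * days_per_phase - 0.5, linestyle="--")  # mark each phase change
ax.set_xlabel("Day")
ax.set_ylabel("Mean daily alertness (0-10)")
ax.set_title("Sketch of an A-B-A-B (withdrawal) design plot")
plt.show()
```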

Usually in pain management and rehabilitation we’re investigating the impact of more than one factor on something else. For example, we’re interested in pain intensity and sleep, or worry and pain intensity and sleep. This makes the statistics a bit more complex, because the relationships might not be as direct as coffee on alertness! For example, is it pain intensity that influences how much worrying a person does, and does the worry directly affect sleep? Or is it having a night of rotten sleep that directly influences worrying and then pain intensity increases?
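One simple (and admittedly crude) starting point when you have several daily measures is to look at lagged correlations – does last night’s sleep line up with today’s pain more strongly than yesterday’s pain lines up with today’s sleep? A sketch with invented data, where I’ve deliberately built in a ‘poor sleep feeds next-day pain’ effect:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
n_days = 60

# Invented daily series in which poor sleep feeds into next-day pain.
sleep = rng.normal(6, 1.5, n_days)                             # sleep quality, higher = better
pain = 7 - 0.5 * np.roll(sleep, 1) + rng.normal(0, 1, n_days)  # today's pain from last night's sleep
pain[0] = np.nan  # the first day has no "previous night" in the series

df = pd.DataFrame({"sleep": sleep, "pain": pain})

# Does last night's sleep relate to today's pain? (sleep shifted forward one day)
sleep_to_pain = df["pain"].corr(df["sleep"].shift(1))
# Or does yesterday's pain relate to today's sleep rating? (pain shifted forward one day)
pain_to_sleep = df["sleep"].corr(df["pain"].shift(1))

print(f"r(last night's sleep, today's pain) = {sleep_to_pain:.2f}")
print(f"r(yesterday's pain, today's sleep)  = {pain_to_sleep:.2f}")
```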

To begin with, however, occupational therapists could spend some time considering single case experimental designs with a very simple strategy such as I’ve described above. It’s not easy, because we rarely ‘administer’ an intervention that doesn’t have lingering effects. For example, we can’t make someone forget something we’ve told them. This means we can’t substitute ‘real’ advice with ‘fake’ advice like we can with coffee and decaf. The ‘real’ advice will likely hang around in the person’s memory, as will the ‘fake’ advice, so they’ll influence how much the person believes and then acts on that information. There are strategies to get around this, such as multiple baseline designs (see the Kazdin (2019) and Kratochwill et al. (2012) articles for their suggestions as to what this looks like), and for a rehabilitation-oriented paper, Krasny-Pacini & Evans (2018) is a great resource.

If you’re intrigued by this way of systematically doing research with individuals but wonder if it’s been used in pain management – fear not! Some of the most influential researchers in the game have used this approach, and I’ve included a list below – it’s not exhaustive…

Next post I’ll look at some practical ways to introduce single case intensive longitudinal design into your practice. BTW It’s not just for occupational therapists – the paper by Ruissen et al., (2022) looks at physical activity and psychological processes, so everyone is invited to this party!

Selected Pain Rehab SCED studies (from oldest to most recent)

Vlaeyen, J. W., de Jong, J., Geilen, M., Heuts, P. H., & van Breukelen, G. (2001). Graded exposure in vivo in the treatment of pain-related fear: a replicated single-case experimental design in four patients with chronic low back pain. Behaviour Research & Therapy., 39(2), 151-166.

Asenlof, P., Denison, E., & Lindberg, P. (2005). Individually tailored treatment targeting motor behavior, cognition, and disability: 2 experimental single-case studies of patients with recurrent and persistent musculoskeletal pain in primary health care. Physical Therapy, 85(10), 1061-1077.

de Jong, J. R., Vlaeyen, J. W., Onghena, P., Cuypers, C., den Hollander, M., & Ruijgrok, J. (2005). Reduction of pain-related fear in complex regional pain syndrome type I: the application of graded exposure in vivo. Pain, 116(3), 264-275. https://doi.org/10.1016/j.pain.2005.04.019

de Jong, J. R., Vlaeyen, J. W. S., Onghena, P., Goossens, M. E. J. B., Geilen, M., & Mulder, H. (2005). Fear of Movement/(Re)injury in Chronic Low Back Pain: Education or Exposure In Vivo as Mediator to Fear Reduction? Clinical Journal of Pain Special Topic Series: Cognitive Behavioral Treatment for Chronic Pain January/February, 21(1), 9-17.

Onghena, P., & Edgington, E. S. (2005). Customization of pain treatments: single-case design and analysis. Clinical Journal of Pain, 21(1), 56-68.

Lundervold, D. A., Talley, C., & Buermann, M. (2006). Effect of Behavioral Activation Treatment on fibromyalgia-related pain anxiety cognition. International Journal of Behavioral Consultation and Therapy, 2(1), 73-84.

Flink, I. K., Nicholas, M. K., Boersma, K., & Linton, S. J. (2009). Reducing the threat value of chronic pain: A preliminary replicated single-case study of interoceptive exposure versus distraction in six individuals with chronic back pain. Behaviour Research and Therapy, 47(8), 721-728. https://doi.org/10.1016/j.brat.2009.05.003

Schemer, L., Vlaeyen, J. W., Doerr, J. M., Skoluda, N., Nater, U. M., Rief, W., & Glombiewski, J. A. (2018). Treatment processes during exposure and cognitive-behavioral therapy for chronic back pain: A single-case experimental design with multiple baselines. Behaviour Research and Therapy, 108, 58-67. https://doi.org/10.1016/j.brat.2018.07.002

Caneiro, J. P., Smith, A., Linton, S. J., Moseley, G. L., & O’Sullivan, P. (2019). How does change unfold? An evaluation of the process of change in four people with chronic low back pain and high pain-related fear managed with Cognitive Functional Therapy: A replicated single-case experimental design study. Behaviour Research and Therapy, 117, 28-39. https://doi.org/10.1016/j.brat.2019.02.007

Svanberg, M., Johansson, A. C., & Boersma, K. (2019). Does validation and alliance during the multimodal investigation affect patients’ acceptance of chronic pain? An experimental single case study. Scandinavian Journal of Pain, 19(1), 73-82.

Simons, L. E., Vlaeyen, J. W. S., Declercq, L., Smith, A. M., Beebe, J., Hogan, M., Li, E., Kronman, C. A., Mahmud, F., Corey, J. R., Sieberg, C. B., & Ploski, C. (2020). Avoid or engage? Outcomes of graded exposure in youth with chronic pain using a sequential replicated single-case randomized design. Pain, 161(3), 520-531.

Hollander, M. D., de Jong, J., Onghena, P., & Vlaeyen, J. W. S. (2020). Generalization of exposure in vivo in Complex Regional Pain Syndrome type I. Behaviour Research and Therapy, 124. https://doi.org/10.1016/j.brat.2019.103511

de Raaij, E. J., Wittink, H., Maissan, J. F., Twisk, J., & Ostelo, R. (2022). Illness perceptions; exploring mediators and/or moderators in disabling persistent low back pain. Multiple baseline single-case experimental design. BMC Musculoskeletal Disorders, 23(1), 140. https://doi.org/10.1186/s12891-022-05031-3

References

Kazdin, A. E. (2019). Single-case experimental designs. Evaluating interventions in research and clinical practice. Behav Res Ther, 117, 3-17. https://doi.org/10.1016/j.brat.2018.11.015

Krasny-Pacini, A., & Evans, J. (2018). Single-case experimental designs to assess intervention effectiveness in rehabilitation: A practical guide. Annals of Physical & Rehabilitation Medicine, 61(3), 164-179. https://doi.org/10.1016/j.rehab.2017.12.002

Kratochwill, T. R., Hitchcock, J. H., Horner, R. H., Levin, J. R., Odom, S. L., Rindskopf, D. M., & Shadish, W. R. (2012). Single-Case Intervention Research Design Standards. Remedial and Special Education, 34(1), 26-38. https://doi.org/10.1177/0741932512452794

Ruissen, G. R., Zumbo, B. D., Rhodes, R. E., Puterman, E., & Beauchamp, M. R. (2022). Analysis of dynamic psychological processes to understand and promote physical activity behaviour using intensive longitudinal methods: a primer. Health Psychology Review, 16(4), 492-525. https://doi.org/10.1080/17437199.2021.1987953

Pain concepts for practice: Occupational therapists


Registration opens 11 February 2023, click here for more details – click

Numbers limited to 20 to ensure a great learning experience.

Fundamental concepts for clinical practice including pain neurobiology, assessment, formulation and therapy.

What’s the relationship between pain intensity and functional limitations?


This question comes up from time to time as some commentators strive to “find the cause and fix the problem at all cost.” The argument is that if pain was gone, the person would simply return to their old life just as they were. And for what it’s worth, there’s certainly a relationship between pain intensity and disability, and pain intensity and distress – but it’s not simple.

One of the earliest papers I read when I was beginning my pain management career is one by Waddell, Main, Morris, Di Paola & Gray (1984). Gordon Waddell was an orthopaedic surgeon with an interest in low back pain – and an equal interest in what people do when they’re sore. In collaboration with Chris Main and others, he examined 200 people referred from family doctors for low back pain, and analysed psychological questionnaires administered to this same group. The process this team used to establish the results was rigorous by any standard, but especially rigorous at the time: they carried out pilot interviews and exams on 182 people, then carried out a further analysis of impairment and disability on a different group of 160 people, conducted this study with 200 people, and further cross-checked with a second group of 120 people.

What did the team find? Well, putting aside (for now*) the judgements about “inappropriate responses” to examinations and “magnified illness behaviour” they found that people who were highly distressed demonstrated more of these “inappropriate” and “magnified” behaviours. Makes sense to me as it did to Waddell and colleagues – their analysis was “They may develop as a largely unconscious and socially productive ‘cry for help’ but, unfortunately, in the absence of due help they may, in themselves, add to disability and become counterproductive.”** The table below (from p. 212 of this paper) shows that physical impairment was the most significant contributor to disability.

But hold on a minute! In the prestigious Volvo Award winning paper, Waddell (1987) then shows a wonderful graphic that encapsulates just how complicated this relationship is. In it, he shows that “objective physical impairment” (remember this is in back pain) has a correlation of just r=0.27 with pain, and r=0.54 with disability, while the relationship between pain and disability was only r=0.44.

In other words, if pain and disability were directly related, there would need to be a 1:1 relationship between pain intensity and functional limitations. There isn’t – so “other things” intrude on, or influence, the relationship between pain and disability. Again in this paper, Waddell shows that there is little difference in pain intensity between people who go and see a health professional for low back pain, and those who don’t (and seeking healthcare is a pain-related behaviour, or illness behaviour) – because what we do about pain depends a great deal on what we think is going on, and on what we think a health professional can do for us.
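A quick bit of arithmetic makes the point even starker: squaring a correlation gives the proportion of variance two measures share. The calculation below is mine; the correlations are Waddell’s reported figures.

```python
# Shared variance (r squared) for the correlations Waddell (1987) reports.
correlations = {
    "impairment vs pain": 0.27,
    "impairment vs disability": 0.54,
    "pain vs disability": 0.44,
}
for pair, r in correlations.items():
    print(f"{pair}: r = {r:.2f}, shared variance = {r ** 2:.0%}")

# pain vs disability: r = 0.44 means pain accounts for only about 19% of the
# variance in disability - the other ~80% comes from those "other things".
```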

Now because these papers are old, they’ll likely be discounted, so I dipped into the enormous literature on pain and disability. I thought I’d ask whether having successful surgery that removed pain led to a “return to normal.” Bade et al. (2010) found that in knee replacement surgery “Compared to healthy older adults, patients performed significantly worse at all times for all measures (P<.05), except for single-limb stance time at 6 months (P>.05). One month postoperatively, patients experienced significant losses from preoperative levels in all outcomes. Patients recovered to preoperative levels by 6 months postoperatively on all measures, except knee flexion range of motion, but still exhibited the same extent of limitation they did prior to surgery.” So that’s a study using boring old functional assessments and disability measures: what if the person was getting surgery so they could do something they enjoy, perhaps golf? Jackson et al. (2009) found that only 57% of golfers returned to golf after total knee arthroplasty, with 81% golfing as often, or more often, than before their surgery – but only 14% walked the course after surgery. And these were keen golfers with no pain after their knee replacements!

Kovacs et al. (2004) also found that “Clinically relevant improvements in pain may lead to almost unnoticeable changes in disability and quality of life. Therefore, these variables should be assessed separately when evaluating the effect of any form of treatment for low back pain.” The two important tables showing how correlations changed over time are below. On day 1, a 10% increase in VAS (ie pain intensity) increases disability by only 3.3%, and quality of life by 2.65%. On day 15, a 10% increase in VAS increases disability by 4.99% and quality of life by 3.8%.

Now I’m not reporting a large number of studies because – well, there are a LOT of them. Suffice to say that while there is a relationship between pain intensity and disability, it is not straightforward, and simply reducing pain does not mean a person will return to what they love doing, even golf! I’ve chosen older studies because it’s kinda helpful to look at older research to show that these ideas are not new. This poor relationship between pain intensity and function is something we should know already. We should have been taught this in our training. So catch up with the literature please!!


The factors that influence disability are many, and they’re not just biological. They include fears (of reinjury, of pain flare-ups), and they include other people’s responses (advice from health professionals, workplace requirements, family responses). These factors are real, and they mean that even once there is an effective treatment for forms of persistent pain (and we’ll be waiting a while for these), rehabilitation from a whole person perspective is crucial. In fact, in the golfing study, all the physical measures (strength, ROM etc) were fine – so it’s not about physical fitness, nor about pain intensity, it’s about people being people. So we also need to be people working with other people.

*We cannot detect malingering in people with pain because we have no objective measure of pain. Psychometric measures don’t measure malingering (see Tuck, N. L., Johnson, M. H., & Bean, D. J. (2019). You’d Better Believe It: The Conceptual and Practical Challenges of Assessing Malingering in Patients With Chronic Pain. Journal of Pain, 20(2), 133-145. https://doi.org/10.1016/j.jpain.2018.07.002), and neither can we.

**For what it’s worth, if anyone suggests the “Waddell signs” can demonstrate who is malingering – go read Waddell’s own words, where he states unequivocally that these are indications only of psychological distress.

Bade, M. J., Kohrt, W. M., & Stevens-Lapsley, J. E. (2010). Outcomes before and after total knee arthroplasty compared to healthy adults. Journal of Orthopaedic Sports Physical Therapy, 40(9), 559-567. https://doi.org/10.2519/jospt.2010.3317

Jackson, J. D., Smith, J., Shah, J. P., Wisniewski, S. J., & Dahm, D. L. (2009). Golf after total knee arthroplasty: do patients return to walking the course? American Journal of Sports Medicine, 37(11), 2201-2204. https://doi.org/10.1177/0363546509339009

Kovacs, F. M., Abraira, V., Zamora, J., Teresa Gil del Real, M., Llobera, J., Fernández, C., & Group, t. K.-A. P. (2004). Correlation Between Pain, Disability, and Quality of Life in Patients With Common Low Back Pain. Spine, 29(2), 206-210. https://doi.org/10.1097/01.Brs.0000107235.47465.08

Waddell, G., Main, C. J., Morris, E. W., Di Paola, M., & Gray, I. C. (1984). Chronic Low-Back Pain, Psychologic Distress, and Illness Behavior. Spine, 9(2), 209-213.

Waddell, G. (1987). 1987 Volvo Award in Clinical Sciences: a new clinical model for the treatment of low-back pain. Spine, 12(7), 632-644.

New year, new you! 10 Steps to Change Your Life!


Are you setting goals for this year? Did you decide to get fit? Eat healthier? Spend more time with your family? Be more mindful? Read on for my famous 10 steps to change your life!

Bah, humbug!

Reflect for a moment on what you’ve just read. Head to Google and do a search using the terms “New Year” and see what you come up with. My search page showed, amongst all the horrific news of car smashes and events for the holiday season, topics like “New Year Bootcamp: Get rid of your debt”, “cook something new every week”, “read more books”, “create a cleaning schedule you’ll stick to”…

Ever wonder why we do this? Every single year?

First, we buy into the idea that our life right now isn’t good enough. There are improvements we can [read ‘should’] make.

Then we decide what “good” looks like. Better finances, healthier diet, less time on devices, cleaner and tidier house…whatever.

We then read all the things we should do – apparently, improving body, mind and soul is good for… the soul.

The popular “experts” then tell us to use a planner, tick off daily fitness goals, and tackle small actions frequently.

Betcha like anything most of us will fail. Even if we begin with the best of intentions.

This year, I’m not doing “goals” – I’ve bought into the over-use of SMART goals for too long, and I’m rejecting them. Why? Because life begins to look like a whole bunch of tick boxes, things to do, keeping the “eye on the prize” at the end. But when is “the end”? Is it a set of “yes! I’ve done it” achievements? Little celebrations? Or do we feel coerced into setting yet another goal? Can goals prevent us from being present to the intrinsic nature of daily life? I think so, at least sometimes. A goal focus can take us away from appreciating what we have right now, while also detracting from the process of going through each day. We can lose the joy of running, for example, if we’re only looking to the finish line. We can forget the pleasure of fishing in beautiful natural surroundings if we’re only looking to hook a fish!

So, as a start to this year, I’m sitting still. I’m noticing my Monday morning routine as I slurp my coffee and sit at my computer to write my blog. I’m making a choice to be present with my thoughts and ponderings. I’m looking back at the blog posts I’ve made since 2007 – all 1262 of them! – and feeling proud of my accomplishment. I’m revisiting my “why”, or the values that underpin my writing. I’m acknowledging that I’ve chosen to put my voice out there, whether others read what I write or not (FWIW readership is low compared with the heady days of 2008 and 2009!). These choices aren’t made in a weird pseudo-spiritual, mindful sort of way – they’re just a nod to my habits and the underlying reasons for doing what I do.

I’ve been pondering the drive clinicians have to set goals with patients, and to record achievements – as if these exist outside of the person’s context and all the other influences on what a person can and does do. There are even posts berating patients for not “doing the work” even after the explanations and rationales are presented, as if the only factor involved in doing something is whether there is a good enough reason for it to be done. This attitude is especially pertinent when a person lives with persistent pain, and is embroiled in a compensation system with expectations for recovery.

I suppose I’m looking for more attention to be paid to strengths people demonstrate as they live with persistent pain. More awareness of the complexity of living with what persistent pain entails (see this post for more). And for us as clinicians to be more content with what is, despite limitations and uncertainty, ambiguity, frustration and limited ‘power’ to make changes happen.

Contentment is at the heart of “fulfillment in life” (Cordaro, et al., 2016). It’s an emotion with connotations of peace, life satisfaction, and, again according to Cordaro and colleagues, “a perception of completeness in the present moment.” In English, contentment invokes a sense of “having enough” and a sense of acceptance, whether the situation is desirable or undesirable (Cordaro, et al, 2016, p.224). Contentment, in contrast to happiness, is considered a low arousal state: when we feel content we experience reduced heart rate and skin conductance, and contentment is associated with serotonergic activity, while happiness activates higher arousal states, including dopaminergic responses (Dustin et al., 2019). The table below gives some interesting comparisons between the “reward” and the “contentment” states in humans – take it with a grain of salt, but it makes for useful pondering.

When we think about helping people with persistent pain, how often do we consider contentment as a long-term outcome? To be content that, despite all the hard work of the person, their healthcare team, their family and their colleagues, this person has achieved what they can. Do we even have this conversation with the person? Giving them the right to call it quits on constantly striving for more.

How can we develop contentment for ourselves and for the people we work with? Should we guide people towards activities that foster contentment? These will likely be the leisure activities that take time, that involve giving without a focus on receiving, that calm people, that invoke nurturing (plants, animals, people), and probably those that involve moderate intensity movement practices (Wild & Woodward, 2019). I hope we’ll draw on occupational therapy research and practice, because these activities will likely be long-term practices for daily life contentment, and daily life is our occupational therapy focus.

For ourselves, I suspect fostering contentment will be more difficult. Our jobs, often, depend on finding out what is wrong and setting goals for a future state, not ideal for those wanting to be OK with what is. We often work in highly stressful and demanding contexts with numerous insults to our moral ideals and values. We debate ideas and approaches to our work with vigour. We make judgements about our own performance and that of others. We often find our expectations aren’t fulfilled and that we can’t do what we think/know would be better.

I’ll leave you with a series of statements about contentment compared with other states that can be related to contentment (Cordaro et al., 2016, p.229). It helps clarify, perhaps, what we might do for ourselves in this new year. Happy 2023 everyone!

Cordaro, D. T., Brackett, M., Glass, L., & Anderson, C. L. (2016). Contentment: Perceived Completeness across Cultures and Traditions. Review of General Psychology, 20(3), 221-235. https://doi.org/10.1037/gpr0000082

Dustin, D. L., Zajchowski, C. A. B., & Schwab, K. A. (2019). The biochemistry behind human behavior: Implications for leisure sciences and services. Leisure Sciences, 41(6), 542-549. https://doi.org/10.1080/01490400.2019.1597793

Lustig, R. (2017). The hacking of the American mind: The science behind the corporate takeover of our bodies and brains. New York, NY: Avery.

Wild, K., & Woodward, A. (2019). Why are cyclists the happiest commuters? Health, pleasure and the e-bike. Journal of Transport & Health, 14. https://doi.org/10.1016/j.jth.2019.05.008

Persistent pain and movement practices


Here I go, stepping into “the bio” to write about movement. Oh dear, what am I doing?

Movement practices of various kinds are part and parcel of pain management. In fact, to read some of the material in social media-land, exercise is the be-all and end-all of pain management, maybe with a dash of psychology. Can we please stop doing this?

I’ve said it often: for many forms of persistent pain, especially the most common forms – nonspecific chronic low back pain, fibromyalgia, and osteoarthritic pain – movement is a good thing, but the effect sizes are small for both pain intensity and disability (eg Hayden, et al., 2021). I’ve reproduced the authors’ conclusions below:

We found moderate‐certainty evidence that exercise is probably effective for treatment of chronic low back pain compared to no treatment, usual care or placebo for pain. The observed treatment effect for the exercise compared to no treatment, usual care or placebo comparisons is small for functional limitations, not meeting our threshold for minimal clinically important difference. We also found exercise to have improved pain (low‐certainty evidence) and functional limitations outcomes (moderate‐certainty evidence) compared to other conservative treatments; however, these effects were small and not clinically important when considering all comparisons together. Subgroup analysis suggested that exercise treatment is probably more effective than advice or education alone, or electrotherapy, but with no differences observed for manual therapy treatments.

So for chronic low back pain, short-term pain intensity reduction is clinically significant, but neither functional limitations nor pain intensity reductions over the long-term reached clinical significance. Ouch! This means that we must not oversell the usefulness of exercise as a panacea for chronic pain.

Some missing bits in this meta-analysis: how many people carried on doing their exercise programmes? Why did they keep on going if they didn’t experience reduced pain or better function? How many people dropped out from follow-up?

But my biggest question is “Why does increased physical fitness and reduced pain not translate into better function in daily life?” And of course, my next question is “What might improve the daily life outcomes for people with pain?”

I might also ask why there is so much emphasis on exercise as an approach for chronic pain. Why oh why? One reason could be the assumptions made about the reasons people have trouble with daily life activities. A reasonable assumption might be that people are unfit. Another might be that people don’t have confidence to move. But if these assumptions were true, we’d see better results than this. Perhaps we need to be much more sophisticated and begin to explore what really does impact a person’s daily life activities? My plea therefore is that we cease doing RCTs comparing exercises of various forms to placebo, no treatment or usual care. Please. We know movement is a good thing, and with the enormous number of studies carried out, surely we can stop now?!*

Here are some clinical reasoning pointers when employing movement practices. I’m being agnostic with respect to what form of movement practice [insert your favourite here].

  • Find out what the person enjoys doing for movement/exercise. Aim to do this, or build towards doing this. Start low and build up intensity, load and frequency.
  • Find out why the person has stopped doing their movement/exercise practice. If pain has stopped them, be curious about what they think is going on, what they think the pain means, what happens if they experience pain doing their favourite movement practice, and find out how long and how much they’ve done before pain stops them. Then address unhelpful beliefs, re-set the starting point and progress in a gentle graded way.
  • If the person hasn’t ever been a movement/exercise person, be curious about why. Explore this in detail – beliefs about movement, movement practices they’ve tried, time available, cost, all the things that might get in the way of doing a movement practice. You might find it was a high school physical ed. practice that totally put them off – but look beyond “exercise” or “sports” and remember that movement includes walking, dancing, gardening, playing with the dog, fishing, kayaking….

When you’re starting to generate a movement practice programme, for goodness’ sake ask the person when they’re going to find time to do it, and don’t make it too long! Explore when might be the most convenient time, and what might make it easy to do. Use low cost, low-tech practices. Find out what might get in the way of doing the movement practice, and do some problem-solving – anticipate what goes through a person’s mind and, together, come up with counter-arguments. Or better, identify some really important values that might underpin the reason to do what is undoubtedly difficult for this person in their life.

Think about life-long habits and routines. How might this person explore options that could fit into their life as they get older? What might they do if the weather is bad, or they have an addition to the family? How many different movement practices can you and the person think of? And remember, if it’s OK for a person at a gym to do “leg day” one day, and “arm day” another, it’s perfectly fine for someone to do gardening one day, and go for a walk up the hill the next. Don’t be boring! Invite exploration and variety.

Work on translating the movement practices you and the person do in clinic into the daily life movements the person is having trouble doing. This might mean asking the person about their daily life and what’s most difficult for them to do right now. If it’s bending to load/unload the dishwasher, ask them what’s going on, what comes up for them when they do this? Is the problem about physical capability – or is it because it’s at the end of a long day at work, they’re tired and haven’t been sleeping and they’re worrying about how the pain in their back is going to affect their sleep tonight? If it’s the latter – guess what, physical exercise isn’t going to change this! So talk about what they can do to help with their sleep, or if that’s not your forte, talk to another team member (occupational therapist, psychologist) about what might help.

Note that as clinicians, we have no right to dictate what a person’s life looks like. This means we can’t judge a person for their choice of movement practice. We also can’t dictate how often or how intense their “workout” should be. It’s going to vary, depending on all the things this person in front of you values most. And we must respect this – don’t be judgemental, their values may be very different from yours, and this is perfectly OK. Just help them explore the good – and not so good – of their choices.

Finally, don’t be afraid to have fun with movement! Play a little. If disc golf is the person’s thing – go try it out! If jive dance is their thing, maybe it’s time you gave that a go. If they like hiking to a quiet spot to do a little bird photography, go with them and carry your own camera gear. If their life is so busy that movement practice gets squeezed out, work with them to find ways to get movement snacks into their day. Don’t be boring. And worry a little less about “prescribing” movement, and much more about experiencing your body as a living sensory being – get in the moment and enjoy what your body is able to do. That is really what we’re encouraging in movement practices for chronic pain.

*A couple of other guesses for why exercise gets seen as The Best Thing – it’s “cheap” in comparison with other options, people can do it reasonably easily after therapy, there are LOTS of physiotherapists and others who offer this, it appeals to our “simple” (but wrong) beliefs about pain, psychological approaches are more expensive (though don’t offer better outcomes), daily life occupational therapy approaches are really hard to conduct as RCTs….

Hayden JA, Ellis J, Ogilvie R, Malmivaara A, van Tulder MW. Exercise therapy for chronic low back pain. Cochrane Database of Systematic Reviews 2021, Issue 9. Art. No.: CD009790. DOI: 10.1002/14651858.CD009790.pub2. Accessed 18 December 2022.

The joy of having many data points


Researchers and clinicians are drawn to studies with many participants – especially randomised controlled trials, where participants are randomly divided into two groups and one gets “the real thing” while the other does not. The joy comes from knowing that results from these kinds of studies suggest that, all things being equal, the differences between the groups are “real” and not just down to chance.

When we come to analyse the graphs from these kinds of studies, what we hope to see are two nice bell-shaped curves, each with a distinct peak (the arithmetic mean) and long tails either side – and a clear separation between the mean of the experimental group and the mean of the control group.

It should look a bit like this: [figure omitted – two bell-shaped curves with clearly separated means]
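To make the idea concrete, here’s a purely illustrative sketch – the group sizes, the 0–10 scale and the size of the effect are all invented – simulating an experimental and a control group and checking whether the difference between their means looks “real” rather than a chance fluctuation:

import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Invented numbers: pain scores (0-10) for a control and an experimental group,
# with the experimental group's true mean one point lower.
control = rng.normal(loc=6.0, scale=1.5, size=200)
experimental = rng.normal(loc=5.0, scale=1.5, size=200)

t, p = stats.ttest_ind(experimental, control)
print(f"control mean {control.mean():.2f}, experimental mean {experimental.mean():.2f}")
print(f"t = {t:.2f}, p = {p:.4f}")  # a clear separation shows up as a tiny p-value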

Now one of the problems in doing research is that we can’t always guarantee a large sample – for example, it’s difficult to find enough people with a relatively rare problem like complex regional pain syndrome to randomly split into groups and iron out major differences between them. And this kind of research design presumes the principle of ergodicity – roughly, the assumption that what holds for the group average also holds for each individual over time – here for more information from Wikipedia, or here for a more detailed examination of generalising from groups to individuals.

This research design also struggles to deal with distributions that don’t conform to the lovely bell curve – things like bimodal distributions, or skewed distributions. And if we draw only on the mean – we don’t get to see these delightful peaks and troughs – or the people at either end of the curves.

The more variables we add to the analysis, the more complex the statistics needed – so in the case of pain research, we often have to simplify the research question and do complex maths to “normalise” the results, and ultimately we get research that doesn’t look the slightest bit like the people we see in clinical practice. No wonder our clinical results don’t look nearly as nice as the ones in the research studies!

Now I don’t mind statistics at all, but I do mind research papers that don’t declare the assumptions behind their analyses. Many papers assume the reader knows these assumptions – unlike qualitative research, where the authors’ philosophical assumptions are openly stated, and where epistemology and ontology are considered part of the research design.

So why might lots of data points be cool?

Most of us working in a clinic will be seeing an individual: one person, with all their unique history, attributes, vulnerabilities, preferences and values. When we extrapolate the findings from RCTs in particular, and apply them to this unique person, we risk failing to acknowledge that we’re violating the principle of ergodicity, and that our person may be one of those falling at the tails of that bell curve – or worse, in the middle of a bimodal distribution. Given that most pain problems, particularly persistent pain, are multifactorial, applying a single “solution”, no matter how many studies show a positive effect, may not cut it.

For years I’ve been pointing out the value, both in research and in clinical practice, of single case experimental designs. There are loads of reasons for using this approach, and it’s a method with a long history. Essentially, the person serves as their own control: lots of measurements are taken before a treatment is introduced, the treatment is then applied, and changes in the measurements are closely monitored. If there’s a change in the expected direction, we can test whether it was the treatment by withdrawing said treatment and closely monitoring any changes in the measurements. Of course, there are challenges to using this approach – we have to be able to withdraw the treatment, and that doesn’t work if it’s something like “information”. But there are ways around this – and the method of intensive longitudinal repeated measures is becoming a rich source of information about change processes.
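To show the shape of the data such a design produces, here’s a minimal sketch – the phase lengths, the measure and all the numbers are invented – of a baseline–treatment–withdrawal (A-B-A) series for one person, summarised by phase:

import numpy as np
import pandas as pd

rng = np.random.default_rng(7)

# Invented daily scores across three phases: baseline (A), treatment (B), withdrawal (A)
scores = np.concatenate([
    rng.normal(7.0, 0.8, 14),   # baseline: measurements taken before treatment starts
    rng.normal(5.0, 0.8, 14),   # treatment introduced: scores drop
    rng.normal(6.5, 0.8, 14),   # treatment withdrawn: scores partly revert
])
phase = ["baseline"] * 14 + ["treatment"] * 14 + ["withdrawal"] * 14
data = pd.DataFrame({"day": range(1, 43), "phase": phase, "score": scores})

# If the score tracks the introduction and withdrawal of the treatment,
# that strengthens the case that the treatment (not just time) did the work.
print(data.groupby("phase", sort=False)["score"].agg(["mean", "std"]).round(2))

In practice you’d also plot the whole series and look at trend and overlap between phases rather than relying on phase means alone, but the logic is the same.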

Change processes are the processes that mediate the final outcome. In other words, when we do a treatment, either the treatment directly causes the end outcome – e.g. give someone a raised toilet seat and they can get off the toilet because it’s now at a good height for them – or it works via some other process – e.g. the raised toilet seat gives the person the confidence to get on and off, so it’s not the toilet seat per se but the enhanced confidence that mediates the outcome.
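One simple (and simplified) way to check for this kind of mediation is a pair of regressions: does the treatment shift the proposed mediator, and does the mediator predict the outcome once the treatment is accounted for? A rough sketch, with invented data and made-up variable names:

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 200

# Invented data: a binary treatment, a mediator (confidence) that responds to it,
# and an outcome driven mostly by the mediator.
treatment = rng.integers(0, 2, n).astype(float)
confidence = 2.0 * treatment + rng.normal(0, 1, n)
outcome = 1.5 * confidence + 0.5 * treatment + rng.normal(0, 1, n)

# Path a: does the treatment shift the mediator?
a = sm.OLS(confidence, sm.add_constant(treatment)).fit().params[1]
# Path b: does the mediator predict the outcome, holding the treatment constant?
exog = sm.add_constant(np.column_stack([treatment, confidence]))
b = sm.OLS(outcome, exog).fit().params[2]

print(f"indirect (mediated) effect a*b = {a * b:.2f}")  # close to 2.0 * 1.5 = 3.0

Real mediation analysis needs more care than this (confounding, temporal ordering, confidence intervals for the indirect effect), but the underlying idea is just these two paths.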

Change processes matter because once we’ve identified them, we can develop ways to work with them more effectively. We can also measure the progress a person makes on more than one change process, and refine what we do in our treatments in response. The more data points we collect from that one person, the more we can track their trajectory – and the better we can understand what’s going on in their world to influence their responses.

Technology for repeated measures in real time has become much smarter and far less obtrusive than it used to be. We can still employ some of the simpler techniques – a pen and paper diary still has its uses! – but then we have to rely on the person remembering to fill it in. Passive data collection using wearable technology is something many of us use to track fitness, diet, sleep, travel, heart rate variability and so on. Set the parameters, and as long as you’re wearing the gadget, your data is captured.

Before anyone leaps in to tell me the gadgets are prone to measurement error, believe me, I know! For example, monitoring sleep using a phone (or even a smartwatch) doesn’t measure sleep depth, it measures movement (and records snoring LOL). However – and this is important – it is more likely to get used than anything requiring me to do something extra in my day. And we can integrate both passive and active data collection using similar technologies. For example, it’s not difficult to send an SMS (a plain text message) at random times during the day to ask someone a brief and simple question.
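The scheduling half of that idea is easy to sketch. Here’s an illustrative example – the waking window, the number of prompts and the question are all my own made-up choices, and actually delivering the message would go through whatever SMS gateway or app you happen to use:

import random
from datetime import date, datetime, time, timedelta

def random_prompt_times(day, n_prompts=3, wake=time(8, 0), bed=time(21, 0)):
    """Pick n random prompt times within the person's waking hours."""
    start = datetime.combine(day, wake)
    minutes_awake = int((datetime.combine(day, bed) - start).total_seconds() // 60)
    offsets = sorted(random.sample(range(minutes_awake), n_prompts))
    return [start + timedelta(minutes=m) for m in offsets]

for prompt in random_prompt_times(date.today()):
    # In a real setup, this is where the SMS would actually be sent
    print(prompt.strftime("%H:%M"), "-> 'How intense is your pain right now, 0-10?'")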

Where these repeated measures approaches get a bit gnarly is in analysing the data – but even this doesn’t mean it can’t be done. The analyses require a good understanding of what is being measured (and why), and of how best to use complex statistical analyses to understand how one factor (variable) might influence another.

The advantages of using intensive measures in clinic lie with understanding how, for example, one day of additional activity (measured using the step counter combined with GPS tracking) might directly influence mood the next day (or pain, or energy levels or whatever). We still need to apply critical thinking to uncover the causal mechanisms (is it plausible for factor X to directly cause a change in factor Y?) and to check whether the results are stable over time (or just a chance fluctuation). Another advantage is that we can quickly step in to experiment with an intervention – and watch what happens. For example, if we think being very active on one day has an effect on mood the following day, we can test this out: try experimenting with a day of lots of activity, and monitor what happens the next day, or the converse, do very little and monitor what happens with mood the following day. Rinse and repeat until we’re certain that for this person, activity level has an effect on mood.
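As a flavour of what that within-person, day-to-day analysis can look like, here’s a rough sketch of the lag-1 question above – does yesterday’s activity relate to today’s mood? The daily data are simulated (the dependence is built in by construction), and a real analysis would need to handle trends, autocorrelation and stability over time:

import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
days = pd.date_range("2022-01-01", periods=60, freq="D")

# Simulated daily step counts and a mood score that partly depends on the
# previous day's steps.
steps = rng.normal(7000, 2500, 60).clip(1000)
mood = 5 + 0.0003 * np.roll(steps, 1) + rng.normal(0, 1, 60)

daily = pd.DataFrame({"steps": steps, "mood": mood}, index=days)
daily["steps_yesterday"] = daily["steps"].shift(1)
lagged = daily.dropna()

# Simple lag-1 association between yesterday's activity and today's mood
print(round(lagged["steps_yesterday"].corr(lagged["mood"]), 2))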

And the study that made me think about all this? It’s this one by Whibley, Williams, Clauw, Sliwinski and Kratz (2022) – click

If we want to develop excellent, clinically relevant, research-based ways to understand what might be going on for the one person in front of us – rather than for the large mixed group of people included in a randomised controlled trial – we could take inspiration from intensive, repeated “micro-longitudinal” research strategies as models for clinic-based research.

Whibley, D., Williams, D. A., Clauw, D. J., Sliwinski, M. J., & Kratz, A. L. (2022). Within-day rhythms of pain and cognitive function in people with and without fibromyalgia: synchronous or syncopated? Pain, 163(3), 474-482. https://doi.org/10.1097/j.pain.0000000000002370