single case experimental design

N of 1 studies – great examples

It’s true that ‘unconventional’ studies of any kind don’t get published as readily as conventional RCTs, even when those RCTs are under-powered, have errors in their construction, and don’t tell us much of anything. Grrr. Publishing studies from my PhD has been fraught because I chose a form of grounded theory that doesn’t conform to the conventional constructivist or Straussian approach. What, then, are we to do?

Two things strike me: first, we always need to select a research method that gives us the best answer to our research question, not something that will ‘get published’ easily. There are many research questions, and RCTs simply don’t answer them all. A quantitative method doesn’t lend itself to ‘why’ questions, and it inevitably requires assumptions about the factors thought to be relevant, the measurement strategy, and the underlying theory explaining what’s going on. This doesn’t really help us when we have a new field of study to look at, where there is no clear theoretical explanation and where measures don’t measure what’s relevant. Hence drawing on different designs like mixed methods and qualitative approaches. From a pragmatic perspective, the numbers needed for an RCT are much greater than most clinicians can recruit unless they’re working in a large research setting (and have a bit of funding!). Nevertheless, ‘pilot’ studies using RCT methods do get published even when they don’t have huge explanatory power, partly because they’re familiar to the reviewers.

The second thing that strikes me is: we need to have good exemplars. These give us a template of sorts to learn how to conduct good research, how to communicate why a particular ‘unconventional’ method is the best way to answer the question, and how to write the results/findings in a way that is compelling.

I’ve written before about the failure of much research in human behaviour and experience to recognise that the ergodic theorem is violated in grouped statistics. This means we can deeply question the results as they apply to the person we see in the clinic. Ergodicity implies that all people in a group will ultimately follow the same trajectory: develop in the same way over the same time, respond to treatment in the same way, and follow the same processes. But clinicians know that some people respond very quickly to a component in a programme, while others don’t.
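To make this concrete, here’s a small simulation (Python, with invented numbers – the responder types and scores are purely illustrative) in which the group’s mean pain score declines smoothly week by week, yet no individual actually follows that average trajectory:

```python
import random

random.seed(1)
WEEKS = 10

def trajectory(kind):
    """Simulate one person's weekly pain score (0-10) under treatment."""
    score = 8.0
    history = []
    for week in range(WEEKS):
        if kind == "fast":    # responds quickly, then plateaus
            score = max(2.0, score - 1.5)
        elif kind == "slow":  # small, steady gains
            score = max(2.0, score - 0.4)
        else:                 # non-responder: drifts around baseline
            score = min(10.0, max(0.0, score + random.uniform(-0.3, 0.3)))
        history.append(score)
    return history

# A clinic-sized 'group' with a mix of responder types
people = (
    [trajectory("fast") for _ in range(5)]
    + [trajectory("slow") for _ in range(5)]
    + [trajectory("none") for _ in range(5)]
)

# The group mean declines steadily over the weeks...
group_mean = [sum(p[w] for p in people) / len(people) for w in range(WEEKS)]
print([round(m, 1) for m in group_mean])

# ...but the individuals look nothing like the mean:
print([round(s, 1) for s in people[0]])   # a fast responder
print([round(s, 1) for s in people[-1]])  # a non-responder
```

The ‘average patient’ described by the group curve simply doesn’t exist in this sample – which is the ergodicity problem in miniature.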

I recently found this example from Tarko (2005), cited in Lowie & Verspoor (2019).

OK, ’nuff said. Ergodicity matters.

Choosing the right research strategy begins with having a good research question, and most clinicians have a very good research question: what is the best treatment I can offer this person, presenting in this way, at this time? The follow-up question is: is this treatment helping? Or, to be more precise, which components of my treatment are helping?

It’s this question that N=1 or single case experimental designs are intended to answer, and they do it very well.

Here are some great examples of published studies using intensive repeated measures – and we need more of these!

Lydon-Staley, D. M., Zurn, P., & Bassett, D. S. (2020). Within-person variability in curiosity during daily life and associations with well-being. Journal of Personality, 88(4), 625-641.

I included this one because it’s not about pain! And yet it sheds light on something important in pain management. Curiosity is about being intrigued by novel, unfamiliar situations. Curiosity doesn’t flourish when a person is anxious, but does when people want to increase their knowledge and skills, and it’s associated with greater well-being. So it’s something clinicians might want to foster – especially for someone who has become afraid to rely on their body and body sensations. In this study, people were asked to complete a daily diary and do some internet browsing (yay! my favourite thing to do!). After some fairly complex statistical analysis (described in good detail in this paper), the results from 167 people who completed 21 days of daily diaries plus a one-off set of measures showed that being consistently curious is associated with feeling good – AND that doing physical movement practices might enhance curiosity via improving mood. Now that’s worth knowing.

Mun, C. J., Thummala, K., Davis, M. C., Karoly, P., Tennen, H., & Zautra, A. J. (2017). Predictors and social consequences of daily pain expectancy among adults with chronic pain. Pain, 158(7), 1224-1233.

Now this study is a big one – 231 people in study one, and 220 people in study two. Cutting to the chase, these researchers found that people who expected high pain in the evening experienced greater pain the next day, even when controlling for current pain intensity. The study also found that morning pain predicted next afternoon social enjoyment but not social stress. And what this means is: clinicians need to promote joy/fun/positive affect, and to help people reduce their expectations that their pain will inevitably increase or ‘be bad’ – it’s anticipation that seems to influence pain intensity and avoidance. These study designs allow researchers to tease apart the factors contributing to experiences over time. We need more of them!

Hollander, M. D., de Jong, J., Onghena, P., & Vlaeyen, J. W. S. (2020). Generalization of exposure in vivo in Complex Regional Pain Syndrome type I. Behaviour Research and Therapy, 124.

And from a large study to a much smaller one with – wait for it – 8 participants! That’s more like the numbers we see in clinic, right? This study examined whether it’s more fruitful to expose people to many activities they’ve previously avoided, or instead, to limit the number of activities each person is exposed to. This is SUCH an important component of therapy where people have avoided doing things that bother them, because they anticipate either that their pain will rise to intolerable levels (or interfere with other important things like sleep) or that they’ll do harm to themselves. Why? Because doing things in one safe space is not life. We do lots of activities in lots of different spaces, and most of them are unpredictable and we don’t have a ‘safe person’ to rely on. It’s perhaps one of the reasons exercise carried out in a gym might not transfer into greater reductions in disability in daily life – and why involving occupational therapists in pain management as ‘knowledge translation experts’ is such a good thing.

Caneiro, J. P., Smith, A., Rabey, M., Moseley, G. L., & O’Sullivan, P. (2017). Process of Change in Pain-Related Fear: Clinical Insights From a Single Case Report of Persistent Back Pain Managed With Cognitive Functional Therapy. Journal of Orthopaedic & Sports Physical Therapy, 47(9), 637-651.

Lucky last – a single case study exploring the process of change experienced by one person undergoing cognitive functional therapy. While recent meta-analyses suggest CFT is ‘no better’ than any other treatment for people with persistent pain, what meta-analyses can’t show is those people for whom it’s highly effective. Why? Because individual responses don’t show up in meta-analyses, and the mean or even the confidence intervals don’t show those people who do extremely well – or those who don’t do well at all. And yet as clinicians, we deal with each individual.

Now I chose these four studies because they are all published in highly respected and ‘highly ranked’ journals. I don’t care a fig about the supposed rank of a journal, but there’s no doubt that getting into one of these journals requires research of a very good standard. And somehow these ones snuck through!

Am I suggesting that RCTs shouldn’t feature in research? No – but I do think a much more careful analysis of these is needed, so we can understand the golden question: what works for whom, and when? And to answer these questions we need far more detailed analysis. Oh – and evidence-based healthcare has always been a synthesis of THREE elements – research, yes, clinician’s experience AND the person’s preferences and values. ALL THREE, not just ‘research’ – and within research, not just RCTs.

Lowie, W. M., & Verspoor, M. H. (2019). Individual Differences and the Ergodicity Problem. Language Learning, 69, 184-206.

Tarko, V. (2005, December 29). What is ergodicity? Individual behavior and ensembles. Softpedia News. Retrieved from What-is-ergodicity-15686.shtml

If a rose is a rose by any other name, how should we study treatment processes in pain management & rehabilitation?

A new instalment in my series about intensive longitudinal studies, aka ecological momentary assessment (and a host of other names for methods used to study daily life in real time in the real world).

Daily life is the focus of occupational therapy – doing what needs to be done, or what a person wants to do, in everyday life. It’s complex because unlike a laboratory (or a large, well-controlled randomised controlled trial) daily life is messy and there is no way to control all the interacting factors that influence why a person does what they do. A technical term for the processes involved is microtemporality, or the relationships between factors in the short term, like hours or days.

For example, let’s take the effect of a cup of coffee on my alertness when writing each day. I get up in the morning, feeling sluggish and not very coherent. I make that first delicious cup of coffee, slurp it down while I read the news headlines, and about 20 minutes later I start feeling a lot perkier and get cracking on my writing. Over the morning, my pep drops and I grab another cup, go for a brief walk, or catch up with a friend, and once again I feel energised.

If I wanted to see the effect of coffee on alertness I could do an RCT: make the conditions standard for all participants, control for the hours of sleep they had, and give them all a standard dose of caffeine and a standard cognitive test. Provided I have allocated people at random – so the chance of being in the control group (who got the Devil’s drink, decaffeinated pseudo-coffee) or the experimental group was a toss of the coin – and provided we assume that anyone who has coffee will respond in the same way, that the tests were all equally valid and reliable, and that the testing context is something like the world participants will be in, the results ought to tell us two things: (1) whether we can safely reject the null hypothesis (that there is no difference between decaffeinated coffee and real coffee on alertness), and (2) whether we can generalise from the results to what happens in the real world.
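The group-comparison logic boils down to a standardised difference between group means. A minimal sketch (Python standard library only; the alertness scores are invented for illustration):

```python
import math
import statistics

# Hypothetical post-test alertness scores (0-100) for the two groups
coffee = [72, 68, 75, 80, 66, 74, 71, 78, 69, 73]
decaf = [61, 64, 58, 70, 63, 59, 66, 60, 65, 62]

def welch_t(a, b):
    """Welch's t statistic: mean difference scaled by its standard error."""
    va, vb = statistics.variance(a), statistics.variance(b)
    return (statistics.mean(a) - statistics.mean(b)) / math.sqrt(va / len(a) + vb / len(b))

diff = statistics.mean(coffee) - statistics.mean(decaf)
t = welch_t(coffee, decaf)
print(f"mean difference = {diff:.1f}, t = {t:.2f}")
# A large t lets us reject the null hypothesis for the GROUP -
# it says nothing about where any one person sits in the distribution.
```

Note what the result is silent about: the individual sitting in front of you, which is exactly the gap the next paragraphs pick up.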

Now of course, this is how most of our research is carried out (or the ‘trustworthy’ research we rely on) – but what it doesn’t tell us as occupational therapists is whether this person in front of me will be in the very top or bottom of the bell curve in their response, and whether this will have any impact on what they need to do today.

For this unique person, we might choose another method, because we’re dealing only with this one person, not the rest of the population, and we’re interested in the real world impact of coffee on this individual’s feelings of alertness. We can choose a single case experimental design, where we ask the person to rate their alertness four or five times every day while they go about their usual daily life. We do this for long enough to see patterns in their alertness ratings, and to be satisfied that we’re observing their ‘normal’. During this time we don’t ask them to change their coffee drinking habits, but we do ask them to record their intake.

Then we get nasty: we give them the Devil’s decaf instead of the real deliciousness, but we do this without them knowing! It looks just the same as the real thing, comes in the same container with the same labelling, and (we hope) has the same delicious flavour. We ask them to carry on drinking as normal, rating their alertness levels four or five times every day, and we do this for another two weeks. The only things we need to watch carefully for are that they don’t suspect a thing, and that their daily life doesn’t change (that’s why we do a baseline first).

Just because we’re a bit obsessed, and because we’re interested in the real world impact, we sneakily switch out the rubbish decaf and replace it with the real thing – again without the person knowing – and we get them to carry on recording. If we’re really obsessed, we can switch the real thing out after two weeks, replace it with the pseudo-coffee, and rinse and repeat.

Now in this example we’re only recording two things: the self-reported level of alertness, and whether it’s the real coffee or not (but the person doesn’t suspect a thing, so doesn’t know we’ve been so incredibly devious).

We can then draw up some cool graphs to show the level of alertness changes over the course of each day, and with and without the real coffee. Just by eyeballing the graphs we can probably tell what’s going on…
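That eyeballing step can be approximated in code by comparing phase means across the A-B-A-B sequence. Here’s a sketch with made-up daily ratings (A = real coffee, B = covert decaf; all numbers are hypothetical):

```python
import statistics

# Invented mean daily alertness ratings (0-10), one value per day,
# across four two-week phases: A = real coffee, B = covert decaf
phases = {
    "A1 (real)": [7, 8, 7, 7, 8, 7, 8, 7, 7, 8, 7, 7, 8, 7],
    "B1 (decaf)": [5, 5, 4, 5, 5, 4, 5, 5, 5, 4, 5, 5, 4, 5],
    "A2 (real)": [7, 7, 8, 7, 7, 8, 7, 8, 7, 7, 8, 7, 7, 8],
    "B2 (decaf)": [5, 4, 5, 5, 4, 5, 5, 5, 4, 5, 5, 4, 5, 5],
}

# A crude text 'graph': one bar of stars per phase mean
means = {}
for name, ratings in phases.items():
    m = statistics.mean(ratings)
    means[name] = m
    print(f"{name:11s} {'*' * round(m)} {m:.1f}")

# The level drops on withdrawal (A -> B) and recovers on
# reinstatement (B -> A): the replicated reversal is what
# gives the design its inferential strength.
```

With data this clean, no inferential statistics are needed to see the pattern – which is the point of visual analysis in SCED.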

Usually in pain management and rehabilitation we’re investigating the impact of more than one factor on something else. For example, we’re interested in pain intensity and sleep, or worry and pain intensity and sleep. This makes the statistics a bit more complex, because the relationships might not be as direct as coffee on alertness! For example, is it pain intensity that influences how much worrying a person does, and does the worry directly affect sleep? Or is it having a night of rotten sleep that directly influences worrying and then pain intensity increases?
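One way to get a feel for these lagged relationships is to compute simple same-day and next-day correlations from diary data. A minimal sketch in Python (all numbers invented; a real analysis would use multilevel or time-series models, as the papers below describe):

```python
import statistics

# Hypothetical two weeks of daily diary data
worry = [3, 5, 4, 6, 7, 5, 4, 6, 7, 8, 5, 4, 3, 5]  # evening worry (0-10)
sleep = [7, 6, 6, 5, 4, 6, 6, 5, 4, 3, 6, 7, 7, 6]  # that night's sleep quality (0-10)

def corr(x, y):
    """Pearson correlation between two equal-length series."""
    mx, my = statistics.mean(x), statistics.mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

# Same-day association: evening worry vs that night's sleep
same_day = corr(worry, sleep)
# Lag-1 association: last night's sleep vs the NEXT evening's worry
lagged = corr(sleep[:-1], worry[1:])

print(f"worry -> tonight's sleep: r = {same_day:.2f}")
print(f"last night's sleep -> next-day worry: r = {lagged:.2f}")
```

Comparing the two lags is what lets researchers start asking which direction the influence runs – worry disrupting sleep, or poor sleep feeding worry.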

To begin with, however, occupational therapists could spend some time considering single case experimental designs with a very simple strategy such as I’ve described above. It’s not easy, because we rarely ‘administer’ an intervention that doesn’t have lingering effects. For example, we can’t make someone forget something we’ve told them. This means we can’t substitute ‘real’ advice with ‘fake’ advice like we can with coffee and decaf. The ‘real’ advice will likely hang around in the person’s memory, as will the ‘fake’ advice, so they’ll influence how much the person believes and then acts on that information. There are strategies to get around this, such as multiple baseline designs (see Kazdin (2019) and Kratochwill et al. (2012) for their suggestions as to what this looks like), and for a rehabilitation-oriented paper, Krasny-Pacini & Evans (2018) is a great resource.

If you’re intrigued by this way of systematically doing research with individuals but wonder if it’s been used in pain management – fear not! Some of the most influential researchers in the game have used this approach, and I’ve included a list below – it’s not exhaustive…

Next post I’ll look at some practical ways to introduce single case intensive longitudinal design into your practice. BTW, it’s not just for occupational therapists – the paper by Ruissen et al. (2022) looks at physical activity and psychological processes, so everyone is invited to this party!

Selected Pain Rehab SCED studies (from oldest to most recent)

Vlaeyen, J. W., de Jong, J., Geilen, M., Heuts, P. H., & van Breukelen, G. (2001). Graded exposure in vivo in the treatment of pain-related fear: a replicated single-case experimental design in four patients with chronic low back pain. Behaviour Research and Therapy, 39(2), 151-166.

Asenlof, P., Denison, E., & Lindberg, P. (2005). Individually tailored treatment targeting motor behavior, cognition, and disability: 2 experimental single-case studies of patients with recurrent and persistent musculoskeletal pain in primary health care. Physical Therapy, 85(10), 1061-1077.

de Jong, J. R., Vlaeyen, J. W., Onghena, P., Cuypers, C., den Hollander, M., & Ruijgrok, J. (2005). Reduction of pain-related fear in complex regional pain syndrome type I: the application of graded exposure in vivo. Pain, 116(3), 264-275.

de Jong, J. R., Vlaeyen, J. W. S., Onghena, P., Goossens, M. E. J. B., Geilen, M., & Mulder, H. (2005). Fear of Movement/(Re)injury in Chronic Low Back Pain: Education or Exposure In Vivo as Mediator to Fear Reduction? Clinical Journal of Pain, 21(1), 9-17.

Onghena, P., & Edgington, E. S. (2005). Customization of pain treatments: single-case design and analysis. Clinical Journal of Pain, 21(1), 56-68.

Lundervold, D. A., Talley, C., & Buermann, M. (2006). Effect of Behavioral Activation Treatment on fibromyalgia-related pain anxiety cognition. International Journal of Behavioral Consultation and Therapy, 2(1), 73-84.

Flink, I. K., Nicholas, M. K., Boersma, K., & Linton, S. J. (2009). Reducing the threat value of chronic pain: A preliminary replicated single-case study of interoceptive exposure versus distraction in six individuals with chronic back pain. Behaviour Research and Therapy, 47(8), 721-728.

Schemer, L., Vlaeyen, J. W., Doerr, J. M., Skoluda, N., Nater, U. M., Rief, W., & Glombiewski, J. A. (2018). Treatment processes during exposure and cognitive-behavioral therapy for chronic back pain: A single-case experimental design with multiple baselines. Behaviour Research and Therapy, 108, 58-67.

Caneiro, J. P., Smith, A., Linton, S. J., Moseley, G. L., & O’Sullivan, P. (2019). How does change unfold? An evaluation of the process of change in four people with chronic low back pain and high pain-related fear managed with Cognitive Functional Therapy: A replicated single-case experimental design study. Behaviour Research and Therapy, 117, 28-39.

Svanberg, M., Johansson, A. C., & Boersma, K. (2019). Does validation and alliance during the multimodal investigation affect patients’ acceptance of chronic pain? An experimental single case study. Scandinavian Journal of Pain, 19(1), 73-82.

Simons, L. E., Vlaeyen, J. W. S., Declercq, L., Smith, A. M., Beebe, J., Hogan, M., Li, E., Kronman, C. A., Mahmud, F., Corey, J. R., Sieberg, C. B., & Ploski, C. (2020). Avoid or engage? Outcomes of graded exposure in youth with chronic pain using a sequential replicated single-case randomized design. Pain, 161(3), 520-531.

Hollander, M. D., de Jong, J., Onghena, P., & Vlaeyen, J. W. S. (2020). Generalization of exposure in vivo in Complex Regional Pain Syndrome type I. Behaviour Research and Therapy, 124.

de Raaij, E. J., Wittink, H., Maissan, J. F., Twisk, J., & Ostelo, R. (2022). Illness perceptions; exploring mediators and/or moderators in disabling persistent low back pain. Multiple baseline single-case experimental design. BMC Musculoskeletal Disorders, 23(1), 140.


Kazdin, A. E. (2019). Single-case experimental designs. Evaluating interventions in research and clinical practice. Behaviour Research and Therapy, 117, 3-17.

Krasny-Pacini, A., & Evans, J. (2018). Single-case experimental designs to assess intervention effectiveness in rehabilitation: A practical guide. Annals of Physical & Rehabilitation Medicine, 61(3), 164-179.

Kratochwill, T. R., Hitchcock, J. H., Horner, R. H., Levin, J. R., Odom, S. L., Rindskopf, D. M., & Shadish, W. R. (2012). Single-Case Intervention Research Design Standards. Remedial and Special Education, 34(1), 26-38.

Ruissen, G. R., Zumbo, B. D., Rhodes, R. E., Puterman, E., & Beauchamp, M. R. (2022). Analysis of dynamic psychological processes to understand and promote physical activity behaviour using intensive longitudinal methods: a primer. Health Psychology Review, 16(4), 492-525.