A new instalment in my series about intensive longitudinal studies, aka ecological momentary assessment (and a host of other names for methods used to study daily life in real time in the real world).
Daily life is the focus of occupational therapy – doing what needs to be done, or what a person wants to do, in everyday life. It’s complex because, unlike a laboratory (or a large, well-controlled randomised controlled trial), daily life is messy and there is no way to control all the interacting factors that influence why a person does what they do. A technical term for the processes involved is microtemporality: the relationships between factors over the short term, like hours or days.
For example, let’s take the effect of a cup of coffee on my alertness when writing each day. I get up in the morning, feeling sluggish and not very coherent. I make that first delicious cup of coffee, slurp it down while I read the news headlines, and about 20 minutes later I start feeling a lot perkier and get cracking on my writing. Over the morning, my pep drops and I grab another cup, or go for a brief walk, or catch up with a friend, and once again I feel energised.
If I wanted to see the effect of coffee on alertness I could do an RCT, making the conditions standard for all participants, controlling for the hours of sleep they had, and giving them all a standard dose of caffeine and a standard cognitive test. Provided I have chosen people at random, so that the chance of being in either the control group (who got the Devil’s drink, decaffeinated pseudo-coffee) or the experimental group was a toss of the coin, and provided we assume that anyone who has coffee will respond in the same way, that the tests were all equally valid and reliable, and that the testing context is something like the world participants will be in, the results ought to tell us two things: (1) whether we can safely reject the null hypothesis (that there is no difference between decaffeinated coffee and real coffee on alertness), and (2) a result we can generalise to what happens in the real world.
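(If you like seeing the logic laid bare, here’s a tiny sketch of the group-level comparison an RCT boils down to. The numbers are entirely made up and the code is purely illustrative.)

```python
# A minimal sketch of the RCT logic, with simulated numbers standing in for real data.
# Two groups, one alertness score each, and a test of the null hypothesis that
# real coffee and decaf make no difference at the group level.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical alertness scores (0-100) for 30 people per group
decaf_group = rng.normal(loc=55, scale=10, size=30)   # control: pseudo-coffee
coffee_group = rng.normal(loc=62, scale=10, size=30)  # experimental: the real thing

t_stat, p_value = stats.ttest_ind(coffee_group, decaf_group)
print(f"Mean difference: {coffee_group.mean() - decaf_group.mean():.1f} points")
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
# A small p tells us about the average person - not about the individual in front of us.
```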
Now of course, this is how most of our research is carried out (or at least the ‘trustworthy’ research we rely on) – but what it doesn’t tell us as occupational therapists is whether the person in front of us will sit at the very top or bottom of the bell curve in their response, and whether this will have any impact on what they need to do today.
For this unique person, we might choose another method, because we’re dealing only with this one person, not the rest of the population, and we’re interested in the real-world impact of coffee on this individual’s feelings of alertness. We can choose a single-case experimental design, where we ask the person to rate their alertness four or five times every day while they go about their usual daily life. We do this for long enough to see any patterns in their alertness ratings, and to be satisfied that we’re observing their ‘normal’. During this time we don’t ask them to change their coffee-drinking habits, but we do ask them to record their intake.
Then we get nasty: we give them the Devil’s decaf instead of the real deliciousness, but we do this without them knowing! It looks just the same as the real thing, comes in the same container with the same labelling, and we hope it has the same delicious flavour. We ask them to carry on drinking as normal, and rating their alertness levels four or five times every day, and we do this for another two weeks. The only things we need to watch carefully for are that they don’t suspect a thing, and that their daily life doesn’t change (that’s why we do a baseline first).
Just because we’re a bit obsessed, and because we’re interested in the real-world impact, we sneakily switch out the rubbish decaf and replace it with the real thing – again without the person knowing – and we get them to carry on recording. If we’re really obsessed, we can switch the real thing out after two weeks, replace it with the pseudo-coffee, and rinse and repeat.
Now in this example we’re only recording two things: the self-reported level of alertness, and whether it’s the real coffee or not (but the person doesn’t suspect a thing, so doesn’t know we’ve been so incredibly devious).
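For the data-minded, here’s a rough sketch of what those recordings might look like as a simple dataset. The column names and values are made up; the point is just the shape of it: one row per prompt, with a phase label only the researcher can decode.

```python
# A hypothetical layout for the ABAB coffee experiment - long format, one row per prompt.
# Phase labels: A1 = baseline (real coffee), B1 = decaf swap, A2 = real coffee again,
# B2 = second decaf swap. The person only sees the rating prompts, not the phase.
import pandas as pd

records = pd.DataFrame({
    "date":       ["2024-03-01"] * 4,
    "prompt":     ["wake", "mid-morning", "afternoon", "evening"],
    "alertness":  [3, 8, 6, 4],          # self-rating, 0-10
    "cups_today": [0, 1, 2, 2],          # self-recorded intake so far
    "phase":      ["A1"] * 4,            # known only to the devious researcher
})
print(records)
```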
We can then draw up some cool graphs to show the level of alertness changes over the course of each day, and with and without the real coffee. Just by eyeballing the graphs we can probably tell what’s going on…
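And if you fancy making those graphs yourself, here’s one way it could look. The ratings below are simulated, just to show the shape of the ‘eyeball test’ across the ABAB phases.

```python
# A rough sketch of the eyeball test: mean daily alertness across ABAB phases,
# with simulated numbers standing in for the real ratings.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
phases = ["A1 (coffee)"] * 14 + ["B1 (decaf)"] * 14 + ["A2 (coffee)"] * 14 + ["B2 (decaf)"] * 14
means = {"A1 (coffee)": 6.5, "B1 (decaf)": 4.5, "A2 (coffee)": 6.5, "B2 (decaf)": 4.5}
daily_alertness = [rng.normal(means[p], 0.8) for p in phases]

plt.plot(range(1, len(phases) + 1), daily_alertness, marker="o")
for boundary in (14.5, 28.5, 42.5):       # dashed lines at each phase change
    plt.axvline(boundary, linestyle="--", color="grey")
plt.xlabel("Day")
plt.ylabel("Mean daily alertness (0-10)")
plt.title("Alertness across ABAB phases (simulated)")
plt.show()
```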

Usually in pain management and rehabilitation we’re investigating the impact of more than one factor on something else. For example, we’re interested in pain intensity and sleep, or worry and pain intensity and sleep. This makes the statistics a bit more complex, because the relationships might not be as direct as coffee on alertness! For example, is it pain intensity that influences how much worrying a person does, and does the worry then directly affect sleep? Or is it having a night of rotten sleep that directly influences worrying, which then increases pain intensity?
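One simple way to start untangling that ordering is to look at same-day and lagged (day-to-day) relationships. Here’s a minimal sketch of the idea with made-up diary data; a proper analysis would use the kinds of multilevel and dynamic models described in Ruissen et al. (2022), not plain correlations.

```python
# A minimal sketch of same-day and lagged relationships in a daily diary.
# All numbers are simulated purely for illustration.
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)
n_days = 28
sleep = rng.normal(6, 1, n_days)                      # hours slept the night before day t
worry = 5 - 0.4 * sleep + rng.normal(0, 0.5, n_days)  # simulated: poorer sleep -> more worry
pain = 3 + 0.6 * worry + rng.normal(0, 0.5, n_days)   # simulated: more worry -> more pain
diary = pd.DataFrame({"sleep": sleep, "worry": worry, "pain": pain})

# A rough first look at the possible orderings
print("sleep -> worry (same day):   ", round(diary["worry"].corr(diary["sleep"]), 2))
print("worry -> pain (same day):    ", round(diary["pain"].corr(diary["worry"]), 2))
print("pain -> next night's sleep:  ", round(diary["sleep"].shift(-1).corr(diary["pain"]), 2))
```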
To begin with, however, occupational therapists could spend some time considering single-case experimental designs with a very simple strategy such as the one I’ve described above. It’s not easy, because we rarely ‘administer’ an intervention that doesn’t have lingering effects. For example, we can’t make someone forget something we’ve told them. This means we can’t substitute ‘real’ advice with ‘fake’ advice the way we can with coffee and decaf. The ‘real’ advice will likely hang around in the person’s memory, as will the ‘fake’ advice, so both will influence how much the person believes and then acts on that information. There are strategies to get around this, such as multiple baseline designs (see Kazdin (2019) and Kratochwill et al. (2012) for their suggestions as to what this looks like), and for a rehabilitation-oriented paper, Krasny-Pacini & Evans (2018) is a great resource.
If you’re intrigued by this way of systematically doing research with individuals but wonder if it’s been used in pain management – fear not! Some of the most influential researchers in the game have used this approach, and I’ve included a list below – it’s not exhaustive…
Next post I’ll look at some practical ways to introduce single-case intensive longitudinal designs into your practice. BTW, it’s not just for occupational therapists – the paper by Ruissen et al. (2022) looks at physical activity and psychological processes, so everyone is invited to this party!
Selected Pain Rehab SCED studies (from oldest to most recent)
Vlaeyen, J. W., de Jong, J., Geilen, M., Heuts, P. H., & van Breukelen, G. (2001). Graded exposure in vivo in the treatment of pain-related fear: a replicated single-case experimental design in four patients with chronic low back pain. Behaviour Research and Therapy, 39(2), 151-166.
Asenlof, P., Denison, E., & Lindberg, P. (2005). Individually tailored treatment targeting motor behavior, cognition, and disability: 2 experimental single-case studies of patients with recurrent and persistent musculoskeletal pain in primary health care. Physical Therapy, 85(10), 1061-1077.
de Jong, J. R., Vlaeyen, J. W., Onghena, P., Cuypers, C., den Hollander, M., & Ruijgrok, J. (2005). Reduction of pain-related fear in complex regional pain syndrome type I: the application of graded exposure in vivo. Pain, 116(3), 264-275. https://doi.org/10.1016/j.pain.2005.04.019
de Jong, J. R., Vlaeyen, J. W. S., Onghena, P., Goossens, M. E. J. B., Geilen, M., & Mulder, H. (2005). Fear of movement/(re)injury in chronic low back pain: Education or exposure in vivo as mediator to fear reduction? Clinical Journal of Pain, 21(1), 9-17.
Onghena, P., & Edgington, E. S. (2005). Customization of pain treatments: single-case design and analysis. Clinical Journal of Pain, 21(1), 56-68.
Lundervold, D. A., Talley, C., & Buermann, M. (2006). Effect of Behavioral Activation Treatment on fibromyalgia-related pain anxiety cognition. International Journal of Behavioral Consultation and Therapy, 2(1), 73-84.
Flink, I. K., Nicholas, M. K., Boersma, K., & Linton, S. J. (2009). Reducing the threat value of chronic pain: A preliminary replicated single-case study of interoceptive exposure versus distraction in six individuals with chronic back pain. Behaviour Research and Therapy, 47(8), 721-728. https://doi.org/10.1016/j.brat.2009.05.003
Schemer, L., Vlaeyen, J. W., Doerr, J. M., Skoluda, N., Nater, U. M., Rief, W., & Glombiewski, J. A. (2018). Treatment processes during exposure and cognitive-behavioral therapy for chronic back pain: A single-case experimental design with multiple baselines. Behaviour Research and Therapy, 108, 58-67. https://doi.org/10.1016/j.brat.2018.07.002
Caneiro, J. P., Smith, A., Linton, S. J., Moseley, G. L., & O’Sullivan, P. (2019). How does change unfold? An evaluation of the process of change in four people with chronic low back pain and high pain-related fear managed with Cognitive Functional Therapy: A replicated single-case experimental design study. Behaviour Research and Therapy, 117, 28-39. https://doi.org/10.1016/j.brat.2019.02.007
Svanberg, M., Johansson, A. C., & Boersma, K. (2019). Does validation and alliance during the multimodal investigation affect patients’ acceptance of chronic pain? An experimental single case study. Scandinavian Journal of Pain, 19(1), 73-82.
Simons, L. E., Vlaeyen, J. W. S., Declercq, L., Smith, A. M., Beebe, J., Hogan, M., Li, E., Kronman, C. A., Mahmud, F., Corey, J. R., Sieberg, C. B., & Ploski, C. (2020). Avoid or engage? Outcomes of graded exposure in youth with chronic pain using a sequential replicated single-case randomized design. Pain, 161(3), 520-531.
den Hollander, M., de Jong, J., Onghena, P., & Vlaeyen, J. W. S. (2020). Generalization of exposure in vivo in Complex Regional Pain Syndrome type I. Behaviour Research and Therapy, 124. https://doi.org/10.1016/j.brat.2019.103511
de Raaij, E. J., Wittink, H., Maissan, J. F., Twisk, J., & Ostelo, R. (2022). Illness perceptions; exploring mediators and/or moderators in disabling persistent low back pain. Multiple baseline single-case experimental design. BMC Musculoskeletal Disorders, 23(1), 140. https://doi.org/10.1186/s12891-022-05031-3
References
Kazdin, A. E. (2019). Single-case experimental designs. Evaluating interventions in research and clinical practice. Behaviour Research and Therapy, 117, 3-17. https://doi.org/10.1016/j.brat.2018.11.015
Krasny-Pacini, A., & Evans, J. (2018). Single-case experimental designs to assess intervention effectiveness in rehabilitation: A practical guide. Annals of Physical and Rehabilitation Medicine, 61(3), 164-179. https://doi.org/10.1016/j.rehab.2017.12.002
Kratochwill, T. R., Hitchcock, J. H., Horner, R. H., Levin, J. R., Odom, S. L., Rindskopf, D. M., & Shadish, W. R. (2012). Single-Case Intervention Research Design Standards. Remedial and Special Education, 34(1), 26-38. https://doi.org/10.1177/0741932512452794
Ruissen, G. R., Zumbo, B. D., Rhodes, R. E., Puterman, E., & Beauchamp, M. R. (2022). Analysis of dynamic psychological processes to understand and promote physical activity behaviour using intensive longitudinal methods: a primer. Health Psychology Review, 16(4), 492-525. https://doi.org/10.1080/17437199.2021.1987953