The positive power of what we say during treatment


Expectations are among the most important predictors of response to treatment, especially in the case of treatments for pain. A person’s belief or expectation that a treatment will reduce their pain is thought to be part of the response to placebo – and indeed, part of the response to almost any treatment. Much of the research into expectancies has been carried out in experimental models where healthy people are given a painful stimulus, then provided with some sort of treatment along with a verbal (or written) instruction intended to generate a positive belief in the effectiveness of that treatment. The people we see in a clinical setting, however, are in quite a different situation – they experience pain sufficiently disruptive to their sense of well-being that they’ve sought treatment, they may not know what the pain problem is, they may have other health conditions affecting their well-being, and for some, their pain may be chronic or persistent. Do expectations have a clinically relevant effect on their pain?

Luckily for us, a recent meta-analysis published in Pain (Peerdeman, van Laarhoven, Keij, Vase, Rovers, Peters & Evers, 2016) means the hard work of crunching through the published research has been completed for us! And given 15,955 studies were retrieved in the initial pass through the databases, we can be very relieved indeed (although only 30 met the inclusion criteria…).

What are expectations?

Before I swing into the results, it’s important to take a look at what expectations are and how they might relate to outcomes. According to Kirsch (1985), response expectancies are expectancies of the occurrence of nonvolitional responses (ie responses we don’t deliberately produce) as a result of certain behaviours, or specific stimuli. Kirsch points out that nonvolitional responses act as reinforcement for voluntary behaviour, so that by experiencing a nonvolitional response such as relief, joy, reduced anxiety and so on, people are likely to engage in behaviours associated with that experience again. For example, if someone is feeling worried about their low back pain, just by having a treatment they expect will help and subsequently feeling relieved, they’re likely to return for that treatment again.

How are expectations created?

Some expectations are generated within a culture – we expect, for example, to see a health professional to relieve our ill health. In general, simply by seeing a health professional, in our developed culture, we expect to feel relieved – maybe because someone knows what is going on, can give a name to what we’re experiencing, can take control and give direction to whatever should happen next. This is one reason we might no longer feel that toothache as soon as we step into the dentist’s waiting room!

Peerdeman and colleagues outline three main interventions known to enhance positive expectations for treatment: verbal suggestion “You’ll feel so much better after I do this…”; conditioning “If I give you this treatment and reduce the painful stimulation I’ve been giving you, when you next receive this treatment you’ll have learned to experience relief” (not that you’d actually SAY this to anyone!); and mental imagery “Imagine all the wonderful things you’ll be able to do once this treatment is over”.

I think you’d agree that both verbal suggestion and mental imagery are processes commonly used in our clinics, and probably conditioning occurs without us even being aware that we’re doing this.

How well does it work for people with acute pain?

As I mentioned above, expectations are used in experimental designs where healthy people are poked and zapped to elicit pain – and hopefully our clinical population are not being deliberately poked and zapped! But in clinical samples, thanks to the review by Peerdeman and co, we can see some quite impressive effect sizes from all three forms of expectancy induction – g = 0.67 (95% CI 0.49-0.86). That means a good deal of support from the pooled results of 27 studies to suggest that intentionally creating the expectation that pain will reduce actually does reduce pain!
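For readers wondering what g = 0.67 means in practice: it’s Hedges’ g, a standardised mean difference (roughly, the gap between group means in pooled standard-deviation units, with a small-sample correction). A quick sketch with made-up pain ratings – my invented numbers, not Peerdeman’s data – shows how a g of about this size arises:

```python
import math

def hedges_g(mean_t, sd_t, n_t, mean_c, sd_c, n_c):
    """Standardised mean difference (Hedges' g) between two groups.

    Cohen's d divides the mean difference by the pooled SD; the J factor
    corrects small-sample bias, giving Hedges' g -- the statistic pooled
    in meta-analyses like Peerdeman et al.'s.
    """
    pooled_sd = math.sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2)
                          / (n_t + n_c - 2))
    d = (mean_c - mean_t) / pooled_sd      # lower pain in treated group -> positive g
    j = 1 - 3 / (4 * (n_t + n_c) - 9)      # small-sample correction
    return j * d

# Hypothetical 0-10 pain ratings: expectation-induction group vs control
g = hedges_g(mean_t=3.8, sd_t=1.5, n_t=30, mean_c=4.8, sd_c=1.5, n_c=30)
```

With both groups’ SD at 1.5, a one-point difference on a 0-10 pain scale comes out at g ≈ 0.66 – roughly the moderate-to-large effect the review reports.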

And now for chronic pain

Ahhh, well… here the results are not so good. Only small effects were found on chronic pain, which is not really unexpected – chronic pain has, by definition, been around far longer than acute pain, so multiple reinforcement pathways have developed, along with pervasive and ongoing experiences of failed treatments where either neutral or negative effects have been experienced.

What does this mean for us as clinicians?

Probably it means that we can give people who are about to undergo a painful procedure (finger pricking for diabetes, dressing changes for ulcers, getting a flu jab) a positive expectation that they’ll feel better once it’s over, because the strongest effect was obtained for people undergoing a painful procedure who received a positive verbal suggestion that the procedure would help.

Chronic pain? Not quite so wonderful – but from this study I think we should learn that expectations are a powerful force in our treatments, both individually, with the person sitting in front of us, and socioculturally – we have an expectation that treatments will help, and that’s nothing to sniff at. Perhaps our next steps are to learn how to generate this effect without inducing reliance or dependence on US, and to help the person recognise that they have generated it themselves. Now that’s power to the people!

 

Kirsch, I. (1985). Response expectancy as a determinant of experience and behavior. American Psychologist, 40(11), 1189-1202.

Peerdeman, K. J., van Laarhoven, A. I. M., Keij, S. M., Vase, L., Rovers, M. M., Peters, M. L., & Evers, A. W. M. (2016). Relieving patients’ pain with expectation interventions: A meta-analysis. Pain, 157(6), 1179-1191.


The wonderful, mysterious placebo


I think one of the most curious phenomena we know about is the placebo – also known as the “meaning response” (Moerman, 2002). A seemingly innocent and inactive “thing” is administered, and the person receiving it miraculously gets an effect. It can’t be due to an active substance, because often the “thing” is a sugar pill or pretend treatment, yet the effects include pain reduction, improved movement (in Parkinson’s), reduced nausea, and better mood, amongst others. These mysterious effects aren’t only positive ones: people can also experience negative effects such as nausea, fatigue, headache, rashes and so on. This is the “nocebo” effect.

Some people argue that there is no such thing as a meaning response/placebo – that it’s a temporary phenomenon that quickly fades, and that people who experience placebo are imagining it, or are saying what they think is wanted. But imaging studies, particularly fMRI, show distinct changes of activity in areas of the brain – and some of these changes can be reversed after an opioid antagonist is administered.

What is it then? How does it work? What are the implications?

One hypothesis is that placebo is a “learned” phenomenon, based on the expectation an individual holds for “something” to happen. We develop expectations because we’re human, and these have cultural and individual origins. For example, the colour of a pill can influence its effect – but this differs depending on the country in which it’s administered. In another example, people who receive fake acupuncture can respond – but only if they are familiar with acupuncture as a treatment.

Why is this? Well, it could be that all the trappings of treatment – the ritual of seeing a special person in a special place, with special certificates on the wall, getting a special piece of paper to take to another special place, to be given a special bottle with special pills in it – can set our brains up to expect a special effect. And this is enhanced when the person giving us the special piece of paper says it’s going to have a significant effect on us.

Something that can enhance this effect is if a real effect occurs. For example, if a person is given real sedatives, they will become sedated. If they take the sedative for a week or so, they will have learned that this pill leads to sedation. If they’re then given a pill that looks exactly like the real one, they can experience the same level of sedation.

We know that intermittent reinforcement, that is, occasionally getting the result we want, is the most powerful learning schedule we have. Just think of the gambler’s high – occasionally winning a lottery leads to always buying a ticket because “I’m lucky”. It can lead to always choosing the same lottery numbers, wearing a lucky lottery hat, buying from a lucky lottery store and so on.

In treatment, it means that if, on occasion, the treatment provides a “real” effect, the learning effect is likely to be incredibly powerful. This is what Au Yeung, Colagiuri, Lovibond and Colloca (2014) tested in a recent experiment.
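The conditioning idea above can be made concrete with a toy associative-learning model. Below is a minimal Rescorla-Wagner-style sketch (my illustration, not the model used by Au Yeung and colleagues): expected relief is nudged toward each trial’s outcome by a prediction-error update, so a continuous schedule drives expectation to a higher level than a partial one. (Capturing the longer-lasting analgesia seen under partial reinforcement would need a richer model of extinction.)

```python
def rescorla_wagner(outcomes, alpha=0.3):
    """Track expected relief V across trials (outcome 1 = relief delivered)."""
    v, history = 0.0, []
    for outcome in outcomes:
        v += alpha * (outcome - v)   # prediction-error update toward the outcome
        history.append(v)
    return history

# Relief paired with every "treatment" trial vs only half of them
continuous = rescorla_wagner([1] * 20)
partial = rescorla_wagner([1, 0] * 10)
```

After 20 trials, the continuous schedule pushes expectation close to its ceiling, while the alternating schedule settles at a lower level – consistent with the finding that continuous pairing produced the stronger (if shorter-lived) placebo analgesia.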

Partial reinforcement, extinction and placebo analgesia

Au Yeung and colleagues decided to use TENS, or transcutaneous electrical nerve stimulation, as their treatment. All 69 participants (undergraduate students) were told that TENS involves “passing an electrical current through the skin”, with no mention of how this might affect their pain. Two groups were assigned to the experimental conditions, while a third was the control group, who received no further information and were told that the TENS was a machine to measure skin conductance by passing a current through the skin, and that they would feel a “slight sensation”.

The two experimental groups were given more information about TENS, including a handout saying how good TENS is, and included references to articles about TENS. They were also told by the researchers that TENS “can reduce pain by inhibiting the pain signals that travel up your arm and into your brain.”

Now the trick was that no TENS was actually given to the participants, instead an electrical stimulation device was used to give a very slight pulse that was only just felt by the participants.

The pain stimulus was given by an electric shock calibrated for each individual to the point that the individual reported a level that was “definitely painful but tolerable”.

All participants were given a shock, then asked to rate the pain intensity.  On placebo trials, the “TENS” was given during the stimulation (but actually was a mild electrical stimulation).

Now comes the fun part – participants in the experimental groups had a hidden reduction in painful stimulation during either ALL of the placebo trials (when the “TENS” was administered along with the shocks) or only SOME of the placebo trials.

Finally, at the end of the experiment, all the participants were asked whether they knew the real nature of the study, and how much they knew about the placebo used in the experiment.

  1. The findings showed that the control group didn’t experience any placebo responding – so the fake TENS has no intrinsic effect.
  2. The group with “TENS” administered with every shock reported pain reduction with the “TENS” administration.
  3. The group with intermittent “TENS” also showed pain reduction – placebo analgesia was maintained even when the “TENS” wasn’t given.

What this means

Well, this study shows that placebo analgesia can be achieved with only intermittent placebo administration, but that the effect wasn’t as strong as when the placebo was given every time. BUT the placebo analgesia lasted longer when it was given intermittently.

And what THIS means is that clinically, if a “true” effect is achieved every now and then, the likelihood is that people who are supported to believe in its effectiveness will experience a long-lived placebo analgesia. Longer-lived than for those who always get a good “real” result. And this is something worth thinking about.

Pain reduction treatments are not very effective for many people with chronic pain. Some of the effects can be quite hit and miss. If we can learn how to harness this effect, we might be able to help people extend the effects of treatment while not having to have quite as much of the “active ingredient”. This could be useful especially in medication management.

On the other hand, it also suggests that for some people, the effect of “every now and then” getting a good result might lead to their ongoing belief in the usefulness of a treatment that is largely a hit and miss affair. That’s great for the clinician getting paid to give the treatment, but not so good if the aim is to help the person live well without depending on getting treatment from someone else.

Au Yeung, S. T., Colagiuri, B., Lovibond, P. F., & Colloca, L. (2014). Partial reinforcement, extinction, and placebo analgesia. Pain. PMID: 24602997

Moerman, D. (2002). Meaning, Medicine, and the “Placebo Effect”. Cambridge University Press: Cambridge.


What do we do about placebo?


Body in Mind recently featured a piece on the ‘Moral Dilemma of Offering a Known Placebo’ in which Neil O’Connell talks about how the ‘placebo effect … in part rests on the effects of expectation, belief in the treatment and possibly a re-evaluation by the patient of their symptoms’. He was referring to treatments like acupuncture, electrotherapy and so on, and calls them ‘magic kisses’ because they work in a similar fashion to the ‘Mummy will kiss it better’ treatment I’ve given to my kids when they were younger.  The dilemma lies in the fact that placebo is simply an inert, inactive ‘intervention’ given as if it was active – inevitably requiring deception on the part of the practitioner, and what this can do to things like trust and informed choice by the patient.

So much of what we do as clinicians, particularly physical and occupational therapies, has a limited evidence base.  At the same time, some of the ‘active ingredients’ that have been identified in ‘placebo’ are the very things we are taught to develop – like active listening, instilling positive expectations, helping people re-evaluate their situation.  It’s incredibly difficult to disentangle the ‘active’ from the inactive components of the treatment.

Like Neil, I have concerns about encouraging, even inadvertently, any belief in a mystical, magical ingredient – chi anyone?  I also have concerns about any intervention that suggests the need for an ongoing relationship with a clinician – six-weekly ‘adjustments’ sir?  Or interventions that leave the power (or locus of control to be pedantic) with a gadget or device or substance that someone else needs to operate – three monthly infusions madam?

Dan Moerman’s view of health interventions suggests that every treatment inevitably contains culturally-based elements.  These are the result of an interaction between the person seeking treatment, the social environment in which they live, the treatment setting, the ritual associated with the treatment process, and the interpersonal relationship with the practitioner – everything we do in any healthcare encounter will influence the ‘healing’ or ‘meaning response’ of the patient.

Along with the placebo effect (meaning response), we sometimes forget the nocebo effect – the ill effects that people develop as a result of receiving an inactive treatment.  Take a look at any of the randomised, double-blinded, placebo controlled studies, and in a good one, you’ll see listed all the side effects that people developed when receiving the active treatment – and if you look carefully, you’ll also see the side effects that people developed when receiving the inactive treatment.  It’s entirely likely that along with our very effective, evidence-based treatments, some people will also either fail to respond, or will develop side effects that negate the positive effects of the intervention.

So what on earth do we do about this placebo thing?

Putting my ‘patient’ hat on for a moment, and remember that we are ALL patients at some point in our lives, I know that I want honesty from the practitioner I’m seeing.  I want to choose whether I have a certain treatment – or not.  And I want to know my options.  I’d like to be told about both the side effects and the hopefully positive effects of the treatment.  I want to know the evidence-base for the treatments I get (and if I don’t get told, you can bet, like many of our patients, I’ll be onto the internet and into the journals quick as a flash to find out!)

I don’t want to have a long-term relationship with a practitioner who will want to see me every six weeks or three months, and I don’t want chi (or woo).  I’m not into magic, superstition or intuition.

I’m inevitably going to bring all my socially-shaped judgements and beliefs and prejudices into the treatment setting, and I know this is going to influence the outcome.

I’m likely to decide on a particular practitioner on the basis of word of mouth (reputation), what I’ve read from the literature (call that advertising if you will), and I’ll probably decide to return (or not) depending on his or her interpersonal skills – and because I too am influenced by the superficial – I’ll probably be influenced by the decor in his or her rooms and the cost of the treatment!

You see, we’re all influenced by these meaning responses.

So… what to do about placebo? I, like Neil, hope that as the evidence accumulates, I will throw out the treatments that don’t have solid support from well-constructed RCTs. I will be mindful of my reputation and hope to have one that means I’m recognised for my adherence to an evidence-based approach to pain management. I also hope I’ll always focus on helping people to help themselves, so I don’t inadvertently foster dependence on me. I won’t be incorporating woo, chi or crystals (at least, not on the basis of current evidence!). I won’t be using gadgets except as part of helping someone develop their own skills. If I ask someone to come back after a bit of a break, it will be only to review how they’re going with their own goals, and to help them re-jig their plan for the future.

And I will try to recognise that some people will come to see me and will not ‘get better’ – not because of my approach, but because they have come into treatment with their own beliefs and expectations, and their own ‘meaning response’ might interfere with what I’m doing. Above all, I hope I’ll be honest about what I’m doing and be prepared to change my approach on the basis of science.

That darned placebo – whatever do we do about it? Learn more I hope!

Moerman, D. E. (2002). The meaning response and the ethics of avoiding placebos. Evaluation & the Health Professions, 25(4), 399-409. PMID: 12449083
Moerman, D. (2003). Doctors and patients: The role of clinicians in the placebo effect. Advances in Mind-Body Medicine, 19(1), 14-22.
Moerman, D. (2003). “Placebo” versus “meaning”: The case for a change in our use of language. Prevention & Treatment, 6(1).

Hypnosis: Response expectancies?


Let’s explore the proposed mechanisms in hypnosis as I wander through the subject this week.
According to some researchers, response expectancies, or ‘the expectation of one’s own non-volitional reactions to situational cues’ are thought to play a major part in both hypnosis and placebo responding. Let’s translate that: a person’s belief that they will respond to something may lead to them actually responding. Possibly the original ‘mind over matter’!

Both hypnosis and placebo (or meaning response – see Dan Moerman for more details on this!) are complex effects that are not yet really understood, except to confound most RCTs and to provide food for thought for philosophers, psychologists and lay people alike. In this paper, response expectancies were experimentally examined to see whether they have a mediating effect on ‘suggested’ or placebo analgesia. The methodology is a wonderful design developed by Baron & Kenny, where separate sets of mediator analyses are performed in which the no-treatment control condition is contrasted in turn with each of the treatments. Performing the analyses in this way isolates the mediator function of response expectancies in each treatment. Three regression equations were estimated to identify, in turn, the strength of the relationship between each variable. For more of the maths, go to this paper!
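The Baron & Kenny procedure itself is easy to sketch. The snippet below uses synthetic data (invented numbers, not Milling’s) to run the three regressions: treatment predicting pain (the total effect), treatment predicting the expectancy mediator, and pain on both together. Mediation shows up as the direct effect of treatment shrinking once expectancy is controlled for.

```python
import numpy as np

# Synthetic example: does expected relief mediate the effect of a
# placebo treatment on pain? (Made-up effect sizes for illustration.)
rng = np.random.default_rng(0)
n = 200
treatment = rng.integers(0, 2, n).astype(float)       # 0 = control, 1 = placebo
expectancy = 1.5 * treatment + rng.normal(0, 1, n)    # mediator: expected relief
pain = -1.0 * expectancy - 0.2 * treatment + rng.normal(0, 1, n)

def ols(y, *predictors):
    """Least-squares regression coefficients, intercept first."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

c_total = ols(pain, treatment)[1]                      # eq. 1: treatment -> pain
a_path = ols(expectancy, treatment)[1]                 # eq. 2: treatment -> expectancy
_, b_path, c_direct = ols(pain, expectancy, treatment) # eq. 3: both -> pain

# Mediation: the direct effect is much smaller than the total effect
print(round(c_total, 2), round(c_direct, 2))
```

In the real analyses, the mediated (a×b) path is then tested formally rather than just eyeballed; the point here is only the logic of the three equations.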

What did they do?
A group of students were recruited to take part in a study examining the effectiveness of ‘an experimental topical anaesthetic’. They were not informed initially that hypnosis was part of the study, to minimise the chance that they would inadvertently over-report their levels of pain to ‘give room’ to pain reduction with hypnosis.

Pain intensity was measured using an 11-point graphical scale (0 = no pain, 10 = pain as intense as one can imagine), with ratings taken every 20 seconds while their finger was in a pain stimulator for one minute (cruel people, these psychology professors!).

Pain expectancy was measured using the same 11-point scale. The score was taken immediately after the baseline pain rating and indicated the level of pain participants expected if they put their finger in the device without any intervention. During the experiment, participants made the same rating immediately after they had experienced a pain control intervention (but without putting a finger in the stimulator), indicating what they expected the pain would be like while using the intervention they had just experienced.

The Carleton University Responsiveness to Suggestion Scale (CURSS) was used to establish how much each participant responded to the suggestions contained in the scale. This measures three types of suggestibility:
‘Objective suggestibility’: what participants think an observer would have seen them do in response to each suggestion.
‘Subjective suggestibility’: participants’ internal experience of each suggestion.
‘Involuntariness’: the extent to which participants experienced each suggestion as occurring automatically and without a feeling of effort.

The torture instrument? This was a Forgione-Barber Strain Gauge Pain Stimulator. Details shall remain secret; suffice to say that it squashes the fingertip so that it hurts (ouch! remind me not to be a psychology student).

There were two phases to the experiment: the preparation phase, during which participants were given information about pain management, and offered the chance to try a pain intervention without actually submitting to the finger device. They were then asked to rate what they thought the pain would have been like had they actually had the device applied to their finger while using that coping strategy.

During the intervention phase, participants were administered the intervention while actually submitting to the pain device, and making intensity ratings.

The interventions

Hypnotic analgesia – in the preparation phase, the participants listened to a tape recording presenting information about hypnosis (correcting myths), using the actual hypnotic induction from the CURSS, information about hypnotic analgesia, and an opportunity to experience a short ‘glove anaesthesia’.  In the intervention phase, participants had the same glove anaesthesia hypnosis, but actually got the pain.

In the ‘imaginative analgesia’ condition, participants experienced the same glove analgesia suggestion used in the hypnotic analgesia condition, but without the hypnotic induction or information designed to correct misconceptions about hypnosis. The glove anaesthesia was represented as a ‘guided imagery’. In the preparation phase, participants were asked to ‘use your imagination’ to experience the glove analgesia, while in the intervention phase, the induction was given ‘live’, and you guessed it, the pain was live too.

The placebo condition involved an inert solution presented as an experimental, local, topical anaesthetic, but was actually oil of thyme, and iodine, presented in a brown bottle and labelled ‘Trivaricaine: Approved for Research Purposes Only’. During the preparation phase, participants were told about the effects of analgesia, with a lot of showmanship to demonstrate how ‘powerful’ the liquid was. During the intervention phase, participants had this liquid applied, received the pain, and rated their pain.

The final group were given a ‘no treatment’ condition – they waited the same length of time after the initial rating of pain expectancy as the other three groups, then received the same pain, and were asked to re-rate their pain.

I won’t detail the rest of the methodology, but it followed a random assignment format to the four conditions, with equal numbers of males and females to each condition. Strategies were undertaken to minimise the potential for demand bias, ‘hold-back’ effect, and of course, participants were free to withdraw if they chose to.

Results
As usual, I’m not going to go through all the statistical detail – it’s well-documented in the paper, and you should read it if you want to really scrutinise the quality of this study.

As hoped, there was an effect from giving people pain, and the scores differed depending on the pain interventions given – scores did change, with the no-treatment group reporting more intense pain than those in the placebo group, and those in either of the hypnotic groups.

The object of this experiment, however, wasn’t to establish whether hypnosis had an effect – it was to examine the effect of expectancy on pain intensity, and to look at suggestibility.

First up, suggestibility ‘subjective and involuntariness dimensions of hypnotic suggestibility moderated the effect of the hypnotic analgesia treatment’ – what this means is that aspects of suggestibility do influence how effective hypnotic analgesia can be.

Then expectancies: ‘the effect of each of the three treatment conditions on pain intensity was partially mediated by response expectancies. The extent of mediation by response expectancies appeared to be greater in the placebo condition than in the hypnotic and imaginative analgesia conditions.’

What does this mean? The expectation that pain will be reduced influenced each of the treatments implemented – and most strongly the placebo treatment.

Now that’s interesting!
Something about the beliefs that people place not just on the ‘active ingredients’ but on the showmanship, ritual, and ‘hype’ involved in a treatment has an effect on how much pain relief a person achieves. Doesn’t that make you think about advertising for pharmaceuticals, the interpersonal skills involved in treatment – and how careful we need to be when discussing treatment options? However well-intentioned, I don’t think we can realistically offer impartial advice on different treatment options because we’re human – so how can a patient not be influenced by our (inadvertent) enthusiasm?

Milling, L. (2009). Response expectancies: A psychological mechanism of suggested and placebo analgesia. Contemporary Hypnosis, 26(2), 93-110. DOI: 10.1002/ch.379

More about acupuncture: press needles as a placebo


Slightly tangential to my normal topics, I located this article today on a placebo procedure that may work for acupuncture.
Many people will be aware that in acupuncture, it’s really difficult to truly conduct a double-blind trial where both the person receiving and the person giving the treatment are unaware of which is the ‘active’ treatment. In fact, an ongoing criticism of many studies, such as those examined in Cochrane reviews (and the recent post I made about Ernst’s review of 32 Cochrane reviews), is that in giving the ‘placebo’ treatment, the comparison is not really between acupuncture and placebo acupuncture; it is instead a comparison of acupuncture with ‘something else’, and in doing this, much of the ‘active’ component of acupuncture is lost.

This paper, written by researchers from Kyushu University and Fukuoka University, is in two parts. In Part One, ‘to evaluate the applicability and efficacy of the press needles, 90 participants who had never been treated using acupuncture were randomly assigned to receive either the press needle (n=45) or a placebo (n=45)’. This part of the study determined whether participants thought the needles penetrated their skin, and whether the intervention was in any way effective. The participants all had chronic low back pain, and the findings showed that there was no significant difference in the perception of penetration, while the press needles reduced the subjective evaluation of LBP compared with the placebo (P<0.05).
Just to clarify the two interventions: press needles are a device that looks like the one pictured below, while the placebo has just the needle removed, so it looks exactly the same.
[Image: press needle]

Part Two looked at ‘the mechanism for the analgesic effect of the press needles on LBP’. Before the press needle was inserted, an anaesthetic patch (lidocaine) was applied for 30 minutes to block the peripheral nerve fibres around the acupoint site. Two groups were compared: one treated with the press needles after local anaesthesia, and a second treated with the press needles without anaesthesia. The findings showed that LBP was reduced significantly more in the press needle group than in the local anaesthesia group (P<0.05), suggesting that one potential action is via the peripheral nerve fibres around the acupoint site.

Of course, those who practice acupuncture suggest that it’s not simply the action of the acupuncture at the site of insertion but also the context of the treatment (the ch’i and balancing yin/yang and unblocking the flow of ch’i) – suggesting that unless you’re a ‘real’ acupuncturist you can’t replicate the ‘real’ action of acupuncture with all the nonspecific effects of the consultation and so on. Hmmm, if the practitioner is blind to whether or not the press needle has a needle, and carries out all the rest of the consultation as normal, perhaps these arguments will no longer hold.

I really do look forward to further studies using this device, so we can progress toward a methodologically sound way to establish whether acupuncture has any effect apart from those ‘nonspecific’ components of the consultation. If it does – then we can have the discussion about whether this intervention can be included as part of ongoing self management, or whether it should be something completed before self management is commenced.

One thing I’m always reminded of in being a scientist: I may need to revisit my opinion on whether an approach should be supported or not, depending on the cumulative evidence available. Dogmatic beliefs simply don’t belong in health practice.

Miyazaki, S., Hagihara, A., Kanda, R., Mukaino, Y., & Nobutomo, K. (2009). Applicability of press needles to a double-blind trial: A randomized, double-blind, placebo-controlled trial. The Clinical Journal of Pain, 25(5), 438-444. PMID: 19454879

Ernst, E. (2009). Acupuncture: What Does the Most Reliable Evidence Tell Us? Journal of Pain and Symptom Management, 37 (4), 709-714 DOI: 10.1016/j.jpainsymman.2008.04.009

‘Placebo’ response in osteoarthritis – what does it mean in practice?


One of the most maligned – but to me most fascinating – aspects of health care is the human response to placebo.

Placebo is an inert substance – or at least, a substance that is objectively without specific activity for the condition being treated. Dan Moerman has written about the so-called ‘placebo response’ and suggests that it should be called the ‘meaning response’ because humans attribute meaning to the interaction between a health care provider and a patient, and he argues that it is this meaning that influences the response in the patient. This definition is helpful for moving the source of ‘action’ in a placebo away from the inert substance and on to the interaction between the treatment provider and the person in whom the response occurs.

There are loads of reasons for researchers to be bothered by the meaning response – all of these randomised controlled trials in which one arm of the research uses placebo are confounded by the numbers of people who respond positively despite receiving the placebo dose (oh and yes, they can also respond negatively – the ‘nocebo’ effect that might include ‘side effects’ like nausea or headache). The gold standard in research is to compare the results of giving nothing with the results from giving an active ‘something’ – so to have the ‘nothing’ produce an effect is pretty awkward.

This paper by Doherty and Dieppe reviews the evidence for placebo response in randomised controlled trials in treatments for osteoarthritis. The authors specifically looked for trials with no treatment control groups, and to identify possible determinants of the size of any such effects. They reviewed 198 trials that met the inclusion criteria – 16,364 patients who had received placebo and 1,167 patients who had received active treatments of many types including pharmacological, nonpharmacological and invasive treatments.

The good news? Well, receiving placebo produced a positive response for pain reduction (effect size 0.51, 95% CI 0.46 to 0.55). Oh darn, that wasn’t what was being tested? If the patients received nothing (ie the research didn’t include a placebo group, but instead had a non-treatment comparison group), the effect size was almost zero (ES 0.03, 95% CI -0.13 to 0.18). So giving nothing at all meant nothing changed. The effect size was even higher in head-to-head comparisons between placebo and no treatment (ES 0.77, 95% CI 0.65 to 0.89). So giving something that doesn’t have an active ingredient works better than giving nothing at all. Curious.
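For readers unfamiliar with effect sizes: the figures above are standardised mean differences (Cohen’s d), the difference between two group means divided by the pooled standard deviation, so they can be compared across trials that used different pain scales. A minimal sketch of the calculation – the pain scores below are invented for illustration, not taken from the Doherty and Dieppe data:

```python
import math

def cohens_d(mean_a, sd_a, n_a, mean_b, sd_b, n_b):
    """Standardised mean difference between two groups, using the pooled SD."""
    pooled_sd = math.sqrt(((n_a - 1) * sd_a**2 + (n_b - 1) * sd_b**2)
                          / (n_a + n_b - 2))
    return (mean_a - mean_b) / pooled_sd

# Hypothetical 0-10 pain ratings: untreated group vs placebo group
d = cohens_d(mean_a=6.0, sd_a=2.0, n_a=100,   # mean pain with no treatment
             mean_b=5.0, sd_b=2.0, n_b=100)   # mean pain on placebo
print(round(d, 2))  # 0.5 - a "medium" effect, in the region of the 0.51 reported
```

By the usual rough convention, d around 0.2 is a small effect, 0.5 medium, and 0.8 large – which is what makes the 0.77 for placebo versus no treatment so striking.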

The authors go on to identify significant independent effects for the outcome of pain relief – these include that the higher the treatment effect size, the higher the placebo effect; the higher the pain was initially, the greater the effect size in the placebo group; the larger the study, the higher the placebo effect; and finally, how the treatment was delivered made a difference as well – repeated needling had the highest placebo effect.

The scary thing for me is knowing that there is a systematic bias, demonstrated by a skewed distribution of the effect size, suggesting that ‘trials with smaller placebo effects are more likely to be published, perhaps because it is easier to demonstrate the superiority of a treatment when this occurs’. But this effect size is influenced by the size of the study – so be aware!

Back to what this means in practice. We need to review what may be going on when a person receives a placebo (and, for that matter, an active treatment). There is a good deal of research on the factors that are known to influence the ‘meaning response’ in people. Dan Moerman’s book ‘Meaning, Medicine and the “Placebo Effect”’ reads well, and describes some of the known factors – such as mode of delivery (injections are more likely to produce a greater effect than a pill); colour of pill (pink pills are responded to as stimulating, while blue pills are responded to as sedating); number of pills (more pills = greater effect); and the bits I’m most interested in: response expectancy, conditioning, and the relationship between the person and the treatment provider.

Response expectancy is the expectation a person holds about what the treatment will do. Being told what a treatment is meant to achieve influences outcome – so we know that if we infuse our instructions about going for a walk with the idea that not only does it help with mobility and fitness, but it makes you feel good – then participants are more likely to report feeling good after a walk than those who didn’t get that instruction. Simply knowing that a treatment is being given has an effect – whether that treatment is placebo or not.

In conditioning, the response is associated with an aspect of the treatment – for example, in this paper, the authors describe people being given a green liquid to drink along with an immunosuppressant. After four doses given this way over a week, the participants were then given four doses of a placebo and the green liquid over a week – and lo and behold, the same immunosuppression response occurred.

Where the treatment occurs is also highly significant. Pleasant surroundings seem to improve recovery, a quiet tranquil setting aids relaxation, and a messy office does not inspire confidence (hence I don’t see people in my office!).

And the final ingredient is the relationship between the person receiving the treatment and the person giving the treatment. From the words used, the attention given, the ‘therapeutic ritual’, even the amount of time spent with the person – these all influence the response in the patient.

What does it mean?

Doherty and Dieppe suggest that because placebo affects ‘the illness’ rather than ‘the disease’ (ie pain rather than x-rays), the value of it is primarily for patient symptoms and distress.  I’m not surprised – these are the targets of many interventions for painful conditions such as low back pain, headache and so on.  They, somewhat surprisingly, suggest that ‘we should be more aware of the power of the placebo response to relieve pain and suffering in people with OA, and that we should learn how to use it better’. My surprise is that it’s not the placebo that we want to use – after all, placebo is inert – it’s the meaning response we want to influence in patients, so they can maximise the positive benefits from the interactions they have with us as treatment providers.

The recommendations Doherty and Dieppe suggest don’t sound over the top – being calm, unhurried, focusing on the patient, having a comfortable environment, explaining what is going on in terms the patient understands – isn’t this basic human compassion? This is, as Doherty and Dieppe acknowledge, a treatment.  (Isn’t this so true of so-called complementary health consultations? Maybe that’s why they have such a profound effect on some people).   Perhaps if we valued time and caring (and made sure these were never compromised irrespective of the demands to ‘increase throughput’), our patients might benefit as much as we would?

Doherty, M., & Dieppe, P. (2009). The “placebo” response in osteoarthritis and its implications for clinical practice. Osteoarthritis and Cartilage. DOI: 10.1016/j.joca.2009.03.023

Placebo and social observational learning


One of the greatest enigmas in health is the human response to placebo. A placebo is an inert substance or treatment – yet humans can respond with physiological changes as if the substance were active. For years, some unscrupulous medical practitioners have used this response in people experiencing chronic pain as evidence that their pain is ‘all in the head’, or that their problem is ‘psychosomatic’, while other, even less scrupulous, snake oil merchants have used it as a way to sell things like crystals, colour therapy and even coloured lotions for the ‘healing’ of pain and other assorted symptoms.

Colloca and Benedetti are two of the most respected researchers into the phenomenon of human response to placebo. They have used a wide range of experimental methodologies to investigate placebo, and this one is yet another to add to their extensive repertoire.

In this study, they hoped to investigate the effect of learning through observing someone experience placebo analgesia, as compared with first-hand experience and with verbal suggestion alone. The premise is that placebo analgesia is influenced partly by expectancy, partly by conditioning, partly by reinforcement – and, in this experiment, by social observational learning.

Social observational learning is where an individual learns through watching another person – ‘vicarious learning’. In this experiment, the participants were asked to sit beside a person who had been trained to simulate the experimental session. This person ‘always rated as painful the stimuli paired to red light and as non-painful the stimuli paired to green light. In this way, he simulated an analgesic benefit following the presentation of the green light.’ After observing this, the participant underwent his or her own experimental session.

In the second condition, the person was conditioned using an electric shock paired with the red light, and was told a ‘sub-threshold’ electric shock would be delivered paired with a green light. In fact, an electric shock was never paired with the green light at all, leading to a conditioned response in which the green light produced an analgesic effect. As the authors state: ‘It is important to stress that the stimulus intensity was surreptitiously reduced, so that the subjects believed that the green light anticipated analgesic effects’. This is a standard conditioning process used in Colloca and Benedetti’s placebo experiments.

The final condition was one in which the participants were told that the green light would be paired with an analgesic just before the shock was delivered – the subjects were told ‘that a green light would anticipate a stimulus that was made analgesic by delivering a sub-threshold electrical shock on their middle finger. Conversely, a red light would anticipate the deactivation of this electrode and thus a painful stimulation on the dorsum of the hand. Actually, all the stimuli were set to go off at the same time as the light.’

What were the results? Quite startling, actually! The subjects who had observed the analgesic effect in the demonstrator rated the green stimuli consistently less painful than the red stimuli – and every single green stimulus was rated lower than the red. This effect came simply from watching someone else apparently receive analgesia, when in fact no analgesic was being delivered.

The experiential group, those that went through the conditioning procedure themselves, also reported reduced pain when the electric shock was paired with a green light. And finally those who were given a verbal instruction that they would experience analgesia paired with the green light also reported lower pain, but this dropped off fairly quickly after the initial instruction.

So there you have it – somehow by watching someone else obtain an effect, these participants developed a strong and sustained analgesic effect. What is it they were seeing? We’re not sure yet – but Colloca and Benedetti suggest that empathy has something to do with it, because there was a relationship between empathy and the response as measured on the Empathic Concern subscale of the Interpersonal Reactivity Index, a measure often used to investigate trait empathy. This wasn’t demonstrated for other subscales of the IRI.

What can we learn from this? Well, firstly it’s important to recognise that this is an experimental situation in a lab with volunteers – all female – who may not be like you or me! But findings like this can suggest that when we observe someone else reporting and behaving as if a treatment provides good results, we are likely to have a similar effect, provided of course we’re high in empathy. Similarly, but not quite as strongly, we respond to being conditioned ourselves to experience analgesia through a placebo.

Maybe an experiment like this will see the end of celebrity endorsement of magnetic underlays for the bed?!

Colloca, L., & Benedetti, F. (2009). Placebo analgesia induced by social observational learning. Pain. DOI: 10.1016/j.pain.2009.01.033

Links and placebo


I’m going to ‘cheat’ a little today – I want to link to someone else’s blog and an article I’ve previously posted on to discuss placebo.

It’s a vexed question for people working in pain management – the ‘meaning response’, or the meaning the patient and clinician place on the health care interaction and the context in which it occurs, is at the heart of so much of pain relief that we can’t ignore it. But at the same time, we know very little about the ‘meaning response’, and placebo is incredibly difficult to study, especially in the field of persistent pain.

I did discuss this article a couple of days ago, but I think the questions it raises are worth considering again.

This blog post by Jake Young, who writes at ‘pure pedantry’ (a name I admire!!), discusses his response to the article by Tilburt and colleagues: ‘I don’t think I would be comfortable deceiving my patient under any circumstances. I see my role as a future physician partly as a healer but also as an educator. Patients — particularly patients with intractable chronic illnesses — want to understand what is happening to them. I almost feel like in deceiving them, I would be denying them that small measure of control — that small measure of dignity — that is vital to feeling like a complete person, even in the face of a life destroying illness.’

One person’s response to this comment was: ‘In sum, placebos are unpredictable. They cannot be effectively prescribed in any rational manner. One placebo “effect” is the relationship between doctor and patient—this should always be used anyway. Prescribing a pill, elixir, etc and giving false information as to its effectiveness is unethical.’

Another said ‘oh please. how many medications do we have where we do not really and truly understand how they work? Just because you have a handful of acute pharmacological effects to report doesn’t mean you really understand the critical mechanism. How the hell does Ritalin work? How about Prozac, which takes many weeks for efficacy?’

While yet another said ‘The placebo effect may very well depend on generating a false belief in the patient. If lying works, why is this necessarily unethical? It should be subject to cost/benefit analysis like any other therapy…”

Hmmmm.

Given that we don’t know how many modalities in pain management ‘work’, we could be accused of using ‘placebo’ in many situations – how does CBT work? How does exposure therapy work?

But neither of these is an inert treatment, which is what a placebo is. And neither involves deception.

And it’s the latter that really bothers me the most.

If I intentionally choose not to disclose what I’m doing and why, how can I be assured I’m providing patients with ‘informed consent’?  Don’t patients (ie, you and me when we seek health care…) have the right to know their options, and choose?

There is an attitude amongst some providers that says ‘the ends justify the means’. Do you want that for yourself as a consumer?

I don’t know how my antidepressant medication works – but I know I’m taking a specific medication, and I’ve read the evidence that says it has a good effect, and my options were explained to me very clearly by my GP and specialist.

Deliberately providing me with an inert substance in the name of an active substance without informing me is deception, and if I ever found out, my trust in my health care provider would be betrayed. And I wonder whether one of the ‘active ingredients’ in the meaning response is actually trust.

Do you really want to abuse that? It could be you next…

Placebo debates go on…


It’s been a long time since I posted on placebo, but it’s a topic I keep returning to whenever I think about the complexities of carrying out randomised controlled trials on pain management. I’ve recently joined an on-line group called SomaSimple in which some of the most interesting debates I’ve seen in a while have been raging on… one of which is about placebo.

And it was while I was on there that I was led to this site, which actually sells REAL small, inert, side-effect-free sugar pills of the kind often used in drug research as the control condition (otherwise called ‘placebo’).

One of my favourite sites is Ben Goldacre’s Bad Science site. He’s one of my favourite writers, and has been featured on BBC Radio talking about placebo. He’s written about it here and here’s the link to ‘listen again’ from BBC Radio 4. There are two parts to the discussion, and you can listen to part two here.

Another writer I particularly enjoy with regard to placebo is Dan Moerman. He’s written a book called ‘Meaning, Medicine and the “Placebo Effect”’. It’s been out since 2002, but the concepts haven’t dated one bit. He describes the placebo itself as ‘inert’ – which it is – and states that, strictly speaking, there can be no ‘placebo effect’, because an inert substance can’t produce an effect. The term therefore needs to be redefined, and he suggests ‘meaning response’ – something I won’t get into in depth here, because he takes a whole book to discuss it, and I’m not about to attempt it. But at the heart of his thesis is that meaning is ascribed to the relationship and interaction between someone seeking help and the person giving it. This meaning is culturally-bound, susceptible to change over time, and profoundly involved in the ‘healing’ process, which is more than simply recovering from illness or injury. Worth thinking about really – he also adds that, given we can’t eliminate the meaning response in any healthcare situation, we should learn as much about it as we can and use it effectively, rather than pretending it simply doesn’t exist.

A BMJ article just published discusses a recent survey of 1200 practising internists and rheumatologists in the United States. To quote a brief excerpt from the article, ‘Investigators measured physicians’ self reported behaviours and attitudes concerning the use of placebo treatments, including measures of whether they would use or had recommended a “placebo treatment,” their ethical judgments about the practice, what they recommended as placebo treatments, and how they typically communicate with patients about the practice.’

The results? Just over half of the 1200 invited physicians responded (679), and ‘about half of the surveyed internists and rheumatologists reported prescribing placebo treatments on a regular basis’; most of them (399, or 62%) believed the practice to be ethically permissible. There are heaps of Rapid Responses to this article, well worth reading if you’re interested in the ethics of this sort of ‘treatment’. It’s certainly at odds with the American Pain Society position statement on the use of placebo, published in 2005.

Thoughts anyone?

Bad, bad science and why learning about real science is important


I had to chuckle a lot to myself this morning when I went over to Ben Goldacre’s site Bad Science and read through the article on the fad of Brain Gym. Thankfully my kids have mainly managed to avoid this – but oh! what a lot of twaddle dressed up in pseudoscience!

Basically, for those who haven’t been exposed to Brain Gym, it’s a series of exercises intended to integrate neural circuitry so that kids learn more easily. A lot of the exercises are fun, they certainly make you think about coordination, and they make people laugh – great stuff! Where they fall over is in the use of pseudoscientific claims about the mechanisms involved…

Now Ben makes some really good points about how easy it is for both lay people and people with a degree of sophistication and knowledge to be bluffed by statements that throw in a few polysyllabic words…

He reports on some experiments discussed in the “March 2008 edition of the Journal of Cognitive Neuroscience, which elegantly show that people will buy into bogus explanations much more readily when they are dressed up with a few technical words from the world of neuroscience.”

Here is one of their scenarios. Experiments have shown that people are quite bad at estimating the knowledge of others: if we know the answer to a piece of trivia, we overestimate the extent to which other people will know that answer too. A “without neuroscience” explanation for this phenomenon was: “The researchers claim that this [overestimation] happens because subjects have trouble switching their point of view to consider what someone else might know, mistakenly projecting their own knowledge on to others.” (This happened to be a “good” explanation.)

A “with neuroscience” explanation – and a cruddy one too – was this: “Brain scans indicate that this [overestimation] happens because of the frontal lobe brain circuitry known to be involved in self-knowledge. Subjects make more mistakes when they have to judge the knowledge of others. People are much better at judging what they themselves know.” The neuroscience information is irrelevant to the logic of the explanation.

The subjects were from three groups: everyday people, neuroscience students, and neuroscience academics. All three groups judged good explanations as more satisfying than bad ones, but the two non-expert groups judged the explanations with logically irrelevant neurosciencey information to be more satisfying than the explanations without. What’s more, the bogus neuroscience information had a particularly strong effect on people’s judgments of bad explanations. As quacks are well aware, adding scientific-sounding but conceptually uninformative information makes it harder to spot a dodgy explanation.

Go on over to the post, and see for yourself – and then think about some of the pseudoscience involved in chronic pain management… I think many of the explanations for ‘believing’ in Brain Gym apply to therapists adhering to ‘NEW’ ‘IMPROVED’ treatments for things like CRPS or back pain. Let’s hear what you think…