On the evidence for decisions about the use of therapeutic interventions

You might have seen a theme emerging this week in my posts – clinical reasoning, the evidence base for treatments, the balance between science and art … I came across a rather weighty document today, in which Professor Sir Michael David Rawlins presents the Harveian Oration, delivered before the Fellows of the Royal College of Physicians of London on Thursday 16 October 2008. Despite the rather grand titles, the discussion (published in full as a PDF) is both a well-articulated explanation of levels of evidence and judgement, and quite an easy read.

In his paper, he describes the development and elevation of the RCT to the ‘pinnacle’ of evidence. He also describes the limitations of the RCT – particularly relevant in the area of nonpharmaceutical therapies for pain management. These limitations are circumstances in which it may be inappropriate either to conceive or to undertake RCTs – for example, where the number of people with the disorder is very low. He also argues that when effect sizes are very large, or where previous studies have demonstrated good effects, testing against the null hypothesis is meaningless and findings should be put into context – especially when comparing two equally helpful treatments. He goes further into Bayesian probabilities for determining when a trial should be halted, or as part of developing RCT methodology – something that statistics-hating occupational therapists may shudder at! Yes, there are numbers involved in working out whether something ‘works’ or not.

Other considerations are the generalisability of results, the assessment of benefit (not simply ‘does it work?’, but ‘is it worth it?’), the assessment of harms, and the cost of carrying out RCTs – especially when the intervention is a method or technique rather than a pharmacological product that can be sold!

Sir Michael moves on to discuss observational studies. He identifies five types of observational study:
– historical controlled trials
– non-randomised, contemporaneous controlled trials
– case-control studies
– before-and-after designs
– case series and case reports.
and discusses the pros and cons of each type. There is much to be learned from these studies despite the limits on drawing generalisations from them – the main concern is the lack of control over systematic bias, which limits how comfortable we can be about applying the findings to patients who are dissimilar from those included in these studies.

The final type of study Sir Michael discusses is qualitative research. This is an area dear to my heart, not because I dislike statistical studies, but simply because I think qualitative research provides rich pickings for exploring and describing phenomena as part of developing an explanatory theory. It’s the theory that gets systematically tested, but most research ignores the process of developing the initial theory. And theory has to start from somewhere – either direct observation, or building on the efforts of others.
Sir Michael agrees, and adds that qualitative research can also provide insights into social values and patient preferences – things that can’t readily be reduced to numbers. Yet qualitative research is often either omitted from levels-of-evidence hierarchies altogether, or placed at the very bottom. In my mind, qualitative research is complementary to, and an integral part of, research enquiry. We need both.

Sir Michael’s concluding comments make this point:

“Experiment, observation and mathematics – individually and collectively – have a crucial role to play in providing the evidential basis for modern therapeutics. Arguments about the relative importance of each are an unnecessary distraction. Hierarchies of evidence should be replaced by accepting – indeed embracing – a diversity of approaches.”

He goes on to say that this doesn’t mean replacing the ‘gold standard’ RCT with Bayesian probabilities (to which most of us will say Amen!), but he does urge us all to continue refining the methods we use to make decisions, and to recognise that judgements on therapies are based on more than simple ‘effect’ – in fact, they’re based on the integration of a range of scientific methodologies, with a hefty dash of social realism.
As Sir Michael puts it:

‘Such judgements relate to the extent to which each of the components of the evidence base is ‘fit for purpose’. Is it reliable? Does it appear to be generalisable? Do the intervention’s benefits outweigh its harms?’

I guess something that is completely missing from Sir Michael’s paper is the fact that many therapists fail to systematically consider the range of alternatives available. If a therapy falls outside his or her clinical interest, scope or specialty, or it’s simply ‘not done here’, or the clinician isn’t an avid reader of the literature (or doesn’t or can’t critique it), then decisions may well be made on the basis of history, anecdote or even habit. Let’s hope that therapists of any persuasion are taught throughout their undergraduate training to understand scientific method, to routinely critique the literature, and to develop sophistication in clinical reasoning, rather than relying on introspective ‘reflective journalling’ as the best way to develop and refine skill.

On the evidence for decisions about
the use of therapeutic interventions
Professor Sir Michael David Rawlins
Royal College of Physicians
11 St Andrews Place, London NW1 4LE
