A recurring theme in comments from people I’ve seen clinically is that certain health care encounters have not been especially helpful. Some feel belittled, some patronised, some bamboozled, some dismissed – and yet most surveys of health care satisfaction report high ratings (Jenkinson, Coulter, Bruster, Richards & Chandola, 2002). What is an effective way to measure how well we do what we do?
One method is to look at repeat customers – but in chronic pain management this could indicate that something is wrong! After all, if we’re working to help people with a chronic condition to self-manage their pain, ‘coming back for more’ would simply mean an ever-growing caseload. Chronic conditions, by definition, are not going to ‘come right’.
A better method is to look at outcomes – both pen-and-paper outcomes (questionnaire responses) and ‘real life’ outcomes such as attendances at other health care providers (Emergency Departments, GP consultations and so on), and things like the ability to return to work. Intermediate outcomes such as improved range of movement, greater strength, better cardiovascular fitness and so on are also useful to collect (although no-one has yet developed a robust set of repeatable physical functioning measures that can be carried out without undue fuss).
There have also been some well-conducted observational studies of health care encounters, but these take a lot of time, can be intrusive for both patient and clinician, and there is always a suggestion that clinicians and patients change their behaviour when they know they are being observed.
A final method is to – yes! Ask the person.
In this interesting paper, Stomski and colleagues summarise the state of play in ways to measure chronic pain consultation quality. They make a couple of interesting points about the ways in which clinical consultations can be improved to obtain better outcomes, points that may make some of us ponder about our own approaches:
- Basing pain intensity assessment on people’s subjective experience, especially as health professionals frequently underestimate pain intensity.
- Health professionals reflecting the validity of people’s chronic pain experience.
- Addressing psychosocial factors that perpetuate and exacerbate chronic pain, particularly catastrophizing, fear-avoidance beliefs, anxiety, depression, and social isolation.
- Affective components of the therapeutic relationship.
- Health professionals establishing collaborative relationships by eliciting people’s preferences, providing information about their condition and available treatment options, and involving them in decisions about their care.
The reviewing team used criteria based on the Medical Outcomes Trust’s eight attributes for assessing measures, expanded here into more specific criteria: a conceptual and measurement model; content validity; construct validity; internal consistency; test-retest reliability; number required for mean score; responsiveness; interpretability; response burden; administrative burden; and conceptual and linguistic adaptations.
Cutting to the numbers: over 3,000 papers were identified, 88 full-text studies were considered for inclusion, and 58 potential measures were identified, but only four met the criteria for inclusion. They were the Treatment Helpfulness Questionnaire (THQ), the Trust in Physician Scale (TIP), the Picker Musculoskeletal Disorder Questionnaire (PMSDQ), and the Modified Perceived Involvement in Care Scale (MPICS).
As far as I can see, none of these look specifically at non-medical clinical encounters.
Now for a set of measures intended to reflect what patients think of clinical encounters, there was one rather glaring omission – none of the questionnaires had elicited participants’ opinions about the relevance of the items during development of the instrument. An odd omission, to my mind, and the authors of this paper make the point: “the content validity of most of the included measures needs to be reassessed by incorporating the target population’s views and this should be prioritized because assessments of reliability and feasibility are largely inconsequential if the measure’s content validity has not been adequately established.”
Exactly! They also say that further qualitative studies need to be undertaken to ‘articulate in detail’ the processes underpinning interactions during consultations. This is exactly the kind of situation in which qualitative studies give us really important information – generating theory that can then be tested in a more typical quantitative way.
These authors also point out that when developing a measure of any kind, it’s vital to consider how it’s going to be used. For ‘formative’ measures, i.e. those that give clinicians some indication of their own performance, it’s important to know whether the instrument is stable over time – test-retest reliability. This step was omitted from the development of most of these tools, which have instead looked at internal consistency.
Where does this leave us in terms of looking at our own skills in clinical situations?
Unfortunately, it doesn’t leave us too far advanced. The conclusion from this study is that none of the measures can be used clinically – but they don’t suggest that we all go out and develop some new questionnaires! Instead, further studies looking at specific aspects of these existing tools need to be carried out. It is vital that we learn more about how our behaviour in a consultation affects the lives of the people we hope to help.
Maybe more video recordings of clinical encounters can be made, with in-depth analysis of the communication patterns that occur during a consultation. Maybe further discussions with patients about their experiences of consultations – what worked, what didn’t work. Much more discussion with patients about the content of the existing questionnaires – do they reflect what patients see as important in the encounter? And finally, how well do patient expectations and clinical outcomes match? If a clinician is viewed as doing all the right things by patients, do the patients they see do better, and on which outcomes?
I’m a reluctant qualitative researcher. Not because I don’t like qualitative research, because I do, but because it is so labour-intensive, and I do wonder about how readily qualitative research can be generalised. At the same time, I recognise the value of qualitative research in generating the data that can then be used to develop quality theory. Once we have really good quality theoretical models, then we’re able to begin good hypothesis testing. And it’s at the hypothesis testing phase that quantitative methodology really comes into its own.
I don’t think we’re at the point where hypothesis testing can be conducted when it comes to the health care interaction for people with chronic pain. Let’s instead focus on exploring and describing the experiences of both clinician and patient in consultations about chronic pain management. Let’s take a closer look at health encounters for people with chronic pain. We might learn something useful about ourselves and how we work.
Stomski, N., Mackintosh, S., & Stanley, M. (2010). Patient self-report measures of chronic pain consultation measures: A systematic review. The Clinical Journal of Pain, 26(3), 235-243. DOI: 10.1097/AJP.0b013e3181c84e76