How do we make decisions about treatment? What errors are we likely to make and can we counter those errors?
These are really important questions to ask ourselves as clinicians if we want to avoid leaping into decisions that won’t stand up to scrutiny. Unfortunately it does mean we need to learn a bit about our human fallibility – oh, and something about cognitive psychology. And the latter means reading some fairly intense material! Thankfully the paper I’m discussing today helps to unpack some of the cognitive psychology literature into a form that I can make sense of…
This is a paper by Abraham Schwab, who is based in the Philosophy Department of Brooklyn College. That in itself is interesting – philosophy being about reasoning…
Anyway, he has summarised some of the material that has an influence on how we make decisions in complex situations – and there is no doubt that sifting through the information we collect during an assessment is a complex situation, confounded by the fact that these are real people with problems that will affect their real lives. And emotions surely influence our decisions – think about the effectiveness of advertising if you don’t believe me!
The first part of this paper defends the use of cognitive psychological research findings in medicine. (From now on I’m going to call this ‘health care’ rather than medicine, because I think the findings apply to all health care decision making.) A lot of cognitive psychology is based on laboratory findings using fairly abstract and somewhat trivial decisions, and in settings that are quite different from the complexity of the real world. Schwab argues that while laboratory experiments may be construed differently by various participants, in the real world it’s hard to misconstrue a decision about ‘a treatment or lifestyle that should be started, continued, stopped and so on…there is little room for misunderstanding that a decision must be made about the treatment or lifestyle in question’.
Likewise, he argues that although laboratory experiments are precise, perhaps less-than-ideal settings for decision-making, it is precisely because errors can be identified in these controlled settings that we need to learn about them – health decisions are messier, more complex, and more difficult, so any errors made in a controlled setting are much more likely to occur in a real-life setting.
His final justification addresses incentives. Laboratory experiments lack them, and the decisions involved are probably not especially ‘important’, so the findings may not transfer directly to the real world. But some researchers have found that incentives actually decrease the quality of judgements in real-world settings – and given the incentives health care providers have to make good decisions (yes, it does matter if pain increases, the person decides you’ve made the wrong decision, or you harm them), it’s likely more errors are made in real life than in a laboratory.
So, it’s a good thing to look at laboratory findings in cognitive psychology and think about their implications for the real clinical world. At the same time, Schwab recognises that more field work does need to be done – I’d agree with that too!
What are some of the biases we know about?
The first one is the effect of imagining why an hypothesis might be true, or explaining the reason for a decision. Hmmmm, what happens is that offering an explanation ‘unjustifiably increases confidence because it changes the individual’s perception of the problem, his or her interpretation of relevant evidence, and the search for additional information about the problem’. This means that if a person is ambivalent about a decision, the process of justifying any particular choice in itself reinforces to the person that they are making a good decision.
We as health providers fall into this trap when we explain to a patient why we are carrying out a certain procedure, and patients fall into this trap when they are asked to make a decision and then explain why. The more we explain our reasons, the more firmly entrenched our confidence in our own decision becomes. We stop looking for alternative options, we selectively seek confirmatory evidence, and we tend to interpret the information we do have in a way that favours our decision.
Another bias arising from explanation influences the actions people take. People asked to imagine the good effects of making a decision tend to act on that decision more than those who are simply told about their options. This can be a good thing – perhaps if we ask our patients to imagine a life where they don’t boom and bust, where they feel relaxed and calm and they sleep well, they may be more likely to do what we suggest! We can also ask a person to explain how they might fail – and this seems to enhance success, provided we don’t ask them to make a prediction about whether they will or will not fail… Somehow making a prediction about failure makes it more likely, while explaining why something will be successful (without making a prediction about whether it will happen) also enhances the chances of action being taken.
I’ll post more from this paper tomorrow – today I’ll leave you with this thought: how can you use these last two biases to help the people you’re working with take action more successfully? Can you integrate these into your approach to patients?
Schwab, A. (2008). Putting cognitive psychology to work: Improving decision-making in the medical encounter. Social Science & Medicine. DOI: 10.1016/j.socscimed.2008.09.005