Decision making and cognitive psychology iii



OK, I said yesterday that I’d discuss debiasing, and I didn’t – so I will today!

Firstly, researchers have identified that ‘experts’ are typically overconfident about their decisions (Henrion & Fischhoff, 1986).

One solution has been to ‘motivate’ clinicians to be accountable for their decisions, for example by providing them with a total capped budget for treating all the patients in their area. The reasoning is that poor decisions will be less likely if errors carry a cost. Schwab identifies three problems with this argument –
1. Methods that increase cognitive effort are useful only when the original decisions were made in a superficial way.
2. Accountability can actually exacerbate biases when judgments are based on the wrong information or when the judgment easiest to rationalize is biased.
3. Accountability has no effect when the biases result from inadequate training in how to make decisions; simply ‘trying harder’ won’t teach clinicians how to do this effectively.

Another possible solution has been to provide clinicians with feedback on their accuracy. It’s evident from many studies that good feedback given regularly helps improve the whole process – but how often do clinicians get this? Schwab says ‘when patients get better, it may or may not have been caused by the intervention. ‘Hindsight bias’ – the view that what has already happened was inevitable and, if they had taken the time, they would have predicted it all along – may limit the clarity of any … feedback.’

A further solution would be to train clinicians in how to make decisions. Although health professional training includes an awful lot of information and knowledge, specific instruction in how to make decisions (and especially about the cognitive biases I’ve been discussing over the past few days) is not typically a strength. One study by Gambara and Leon (2002) showed that training specifically in decision-making ‘improved the breadth of considerations included in the decision as well as the orderliness of the strategies employed. This training increased the number of alternatives considered, which may help clinicians avoid missed diagnoses’. Another study, by Arkes and Harkness (1980: 574), also addresses the possibility of misdiagnosis when they recommend that experts like physicians ‘note a diagnosis and all the symptoms observed that led to this diagnosis. Keeping these records avoids distortion of the symptoms presented through memory and also provides an easy reminder of the facts of a case, which may prove useful to combine with other feedback.’
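Arkes and Harkness’s suggestion is essentially a record-keeping discipline. As a purely hypothetical illustration (the field names and structure are mine, not theirs, and this isn’t software from the paper), such a record might look something like this:

```python
# A minimal sketch of the kind of record Arkes and Harkness recommend: the
# diagnosis together with every symptom that led to it, captured at the time
# rather than reconstructed later from memory. Field names are illustrative only.

from dataclasses import dataclass, field
from datetime import date
from typing import List

@dataclass
class DiagnosisRecord:
    patient_id: str
    diagnosis: str
    supporting_symptoms: List[str]           # everything observed that led to the diagnosis
    recorded_on: date = field(default_factory=date.today)

record = DiagnosisRecord(
    patient_id="example-001",
    diagnosis="hypothetical diagnosis",
    supporting_symptoms=["symptom A", "symptom B"],
)
# Revisiting the record later gives an undistorted reminder of the facts of the
# case, which can then be combined with outcome feedback.
```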

Something that I think won’t be readily accepted by clinicians is this rather robust finding cited in Schwab’s paper: ‘Since 1954, every non-ambiguous study that has compared the reliability of clinical and actuarial predictions has supported Meehl’s conclusion [that actuarial models outperform clinician judgment]’ (Bishop & Trout, 2002: S198). What this means is that by using data such as (in pain management, anyway) gender, age and responses on psychometric measures, more accurate decisions about patient care can be made than by relying on clinician ‘intuition’ or ‘clinical reasoning’. I can hear the chorus of ‘yes, but’ from here!!!
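To make the idea concrete, here is a minimal sketch of what an actuarial (statistical prediction) rule might look like when written down as code. The predictors, weights and cut-off are hypothetical illustrations only – they are not taken from Schwab, Meehl or any validated pain-management model; in practice the weights would be estimated from outcome data. The point is simply that the same fixed formula is applied to every patient:

```python
# A hypothetical statistical prediction rule: a fixed, additive combination of
# routinely collected data. Predictors and weights are invented for illustration.

def predict_poor_outcome_risk(age, is_female, depression_score, catastrophising_score):
    """Return an illustrative risk score for a poor treatment outcome."""
    score = (
        0.02 * age
        + 0.10 * (1 if is_female else 0)
        + 0.05 * depression_score
        + 0.07 * catastrophising_score
    )
    return score


# The actuarial approach applies the same rule to every patient:
patients = [
    {"age": 45, "is_female": True, "depression_score": 18, "catastrophising_score": 30},
    {"age": 62, "is_female": False, "depression_score": 9, "catastrophising_score": 12},
]

for p in patients:
    risk = predict_poor_outcome_risk(**p)
    # A pre-specified cut-off replaces case-by-case intuition.
    print(p, "flag for intensive programme" if risk > 3.0 else "standard pathway")
```

Part of what makes a rule like this outperform expert judgment is precisely that it cannot be selectively overridden – which is also why the provisos described next tend to undo its benefit.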

In fact, Schwab finds that wherever attempts have been made to introduce statistical prediction rules, a proviso has typically been added, looking something like this: ‘[decision rules] should be used to augment or supplement, not replace or supplant, such individuals’ decision-making’. As he points out, ‘If the rule is more accurate than expert judgment, allowing the expert to ignore the rule short-circuits the rule’s effectiveness.’

What seems to be hard to give up is the ‘honour’ or ‘esteem’ placed upon the professional’s judgement – I have to quote this chunk from Schwab’s paper in full because he puts it so well…

‘Intuitively, it may seem that the individual with experience is the best judge of what to do, but this intuition must be supported by empirical evidence and be subject to rebuttal in light of contradictory empirical evidence. There is a temptation to challenge robust evidence in favor of statistical prediction rules because they automate decision-making and so appear to undermine the value of human decision-making. And yet, if the aim is to produce the best results, the value of human decision-making is realized when the decision is made to use statistical prediction rules, avoid the vagaries of expert judgment, and produce the best results.’

[Emphasis is mine]

I’ve summarised several of Schwab’s suggested ways of ‘debiasing’, based on what we know about the predictable and robust errors people make when sifting through information to reach decisions. Cognitive psychology has been able to identify the sorts of errors humans make – but integrating this knowledge into practical ways to help clinicians avoid systematic error isn’t easy. It can be done, however: at Burwood Pain Management Centre the clinical decision-making process has changed very little in over 13 years. I can’t say it’s more accurate, but at least these steps are known to reduce some of the typical biases encountered in health care.

These are the methods that have been put in place to support less biased decision making:
1. Patients with complex persistent pain problems are seen by three separate clinicians
2. These clinicians assess three different but overlapping domains of information – biomedical, functional and psychosocial
3. Patients also complete psychometric questionnaires, which are reviewed (along with past history from the clinical records) before the patients are seen

These three steps help with ‘triangulation’ of information – consistencies and discrepancies between each clinician’s findings build a much better picture of an individual when the findings are integrated than when they are held separately by each clinician.

4. A semi-structured interview is used, consistent for every patient

This ensures that all domains are assessed equally for every patient irrespective of presentation, reducing the potential for omitting important but not immediately notable information. A semi-structured approach also allows more detailed investigation of areas where it may be needed, as identified perhaps in the psychometric data (e.g. depression, beliefs about the problem, avoidance).

5. A clinical meeting is held at the end of the assessment process, to discuss findings
6. A specific order is used to present information – psychometrics are reviewed first, then medical, functional and psychosocial information are presented in that order
7. Management options are only detailed after all the information is presented by all the members of the team

This process ensures that each team member can present uninterrupted (no hierarchy!). Material presented first and last is often more readily recalled, so the usually least-known psychosocial material is given these two slots. Teams tend to share common information rather than new information, so having each clinician summarise their own findings lets team members hear about domains of knowledge they wouldn’t usually hear about. And holding off management decisions until all the information is available reduces the opportunity to settle prematurely on a diagnosis and then find it difficult to shift from it.
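For what it’s worth, the meeting structure in steps 5–7 can be written down almost like a checklist. The sketch below is purely illustrative – the domain labels come from this post, but the function and data structure are hypothetical, not software we actually use at the centre:

```python
# A minimal sketch of the team meeting as a checklist: a fixed presentation
# order, and no management discussion until every domain has been heard.

PRESENTATION_ORDER = [
    "psychometrics",   # reviewed first
    "biomedical",
    "functional",
    "psychosocial",    # the least familiar material also gets a prominent slot
]

def run_team_meeting(findings):
    """findings maps each domain name to that clinician's summarised findings."""
    # 1. Each domain is presented uninterrupted, in the fixed order.
    for domain in PRESENTATION_ORDER:
        if domain not in findings:
            raise ValueError(f"Missing {domain} findings - meeting cannot proceed")
        print(f"{domain}: {findings[domain]}")

    # 2. Only once every domain has been heard are management options discussed,
    #    reducing the pull towards a premature diagnosis.
    print("All domains presented - now discuss management options.")
```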

These relatively simple steps, which put a structure in place to support effective decision-making, can be carried out even within ‘virtual’ teams. It does mean training the clinicians in those teams to learn new ways of presenting their findings and relating to each other. And it does take time – but surely a little extra time spent making an unbiased decision will be more helpful for our patients than using error-prone ‘heuristics’, ‘rules of thumb’ or ‘shortcuts’ that may not produce good decisions in the end.

If you’ve enjoyed this post, and want to read more, you can! Either click on the RSS feed button above, or bookmark this site and come back. I post most working days during the week, and I love comments (even if you disagree with me – provided you’re respectful). Let me know you’re out there and leave a note!

Arkes, H. R., & Harkness, A. R. (1980). Effect of making a diagnosis on subsequent recognition of symptoms. Journal of Experimental Psychology, 6, 568–575.

Bishop, M. A., & Trout, J. D. (2002). 50 years of successful predictive modeling should be enough: Lessons for philosophy of science. Philosophy of Science, 69(3 Supplement), S197–S208.

Gambara, H., & Leon, O. (2002). Training and pre-decisional bias in a multiattribute task. Psicothema, 14, 233–238.

Henrion, M., & Fischhoff, B. (1986). Assessing uncertainty in physical constants. American Journal of Physics, 54, 791–798.

Meehl, P. E. (1954). Clinical versus statistical prediction: A theoretical analysis and a review of the evidence. Minneapolis, MN: University of Minnesota Press.

Schwab, A. (2008). Putting cognitive psychology to work: Improving decision-making in the medical encounter. Social Science & Medicine. DOI: 10.1016/j.socscimed.2008.09.005
