
Where does theory fit with practice?


When I was an undergraduate, thinking about what postgraduate study I wanted to do, I wavered between enrolling in a Science Masters or an Arts Masters. It made absolutely no difference in terms of the papers I could study – they were the same for either degree – but it did make a difference to the degree I would graduate with. I decided on science, despite people saying ‘but therapy is just as much an art as it is a science’! Why? Loads of reasons, but several really spring to mind:

  1. Science emphasises the importance of, and reliance on, empirical observation and theory
  2. Science doesn’t rely on ‘intuition’ or ‘special insights’ into people and how they tick
  3. The scientific method supplies the tools I want to use to understand and investigate treatments that work, because I want to ensure the patients I see get the very best, most effective input

This belief that science is critical to patient care is supported by many commentators – quoting from the paper I mentioned yesterday, ‘evidence based practice advocates that every rehabilitation and health professional should have an interest in delivering the best possible services to his or her clients, based whenever possible on the best clinical practices available from the research evidence.’ (Chwalisz & Chan, 2008)

Where does theory fit with this?

Dunn and Elliott (2008) state ‘a theory is a collection of coherent, related ideas derived from what is already known about some phenomenon in order to explain some existing behavior or to predict the occurrence of future behavior. Any theory, then, is used to establish causality and, in effect, to explicate what sequence of events led to what particular outcome or set of results.’

They go on to say that many theories are born of clinical observations, come from the laboratory, or arise simply from reading other pieces of literature. Some of these settings are far removed from the often confusing and uncontrolled environment of the ‘real world’ – a laboratory doesn’t look a lot like a group pain management programme! But both settings provide opportunities for observing empirical phenomena – regular, stable features that occur and call for explanation.

I’ve blogged before about abductive reasoning: this is reasoning from descriptions of patterns to plausible explanations… moving from an ‘effect’ to a potential causal mechanism.  It’s these possible causal mechanisms that go on to be developed into models or theories that can then be tested.  The problem I see is that there are many semi-formed theories that are being tested, and fewer really sound descriptions of stable clinical phenomena.

Anyway, back to theory.  Dunn and Elliott list nine advantages of a good theory:

  1. simplicity – straightforward, with few special assumptions (no appeals to ‘energetic forces’!)
  2. consistency with what is already known – it can break new ground, but it should also fit with other knowledge, e.g. a theory of phantom limb pain should ‘fit’ with medical knowledge about tissue healing, as well as with other psychological knowledge
  3. empirical integration – it can borrow from empirical information from other domains of knowledge – eg occupational therapy theory can and does borrow from psychology theory, from cognitive psychology and even computer science
  4. organising and communicating findings – theories provide frameworks for organising what we observe, and they need to be readily understood by other professionals working in a similar field
  5. being general, not overly specific, in scope – so the theory fits with more than one type of health condition
  6. shared, not owned – theories are public, living ideas within communities, open to criticism, extension and revision
  7. guiding and directing subsequent research – a good theory will generate questions, and these questions will add to and open up areas for further investigation
  8. being highly practical – Kurt Lewin, the social psychologist, made the point that ‘an effective theory remains useful as long as it predicts and explains relevant behavior’.  A good theory generates testable questions that can lead a researcher in unexpected directions to work with practical problems.
  9. open to adjustment and change – theories are meant to be tested against other theories to find out which gives the best, broadest, simplest and most accurate explanation.  This means they will be revised frequently, and even put aside if the evidence simply doesn’t fit.

What does this mean for your clinical practice and mine?

It means we need to keep our eyes open for patterns that have not yet been fully described or fully explained. This means we need to be aware of our cognitive biases (see my previous posts on this!), and we need to observe with our ears, eyes and hearts open.  We need to record accurately.  And every now and then we need to step back from our daily practice to take a look at what we’re actually seeing.

If we find an interesting pattern (for example, my finding in my Master’s thesis that many people with chronic pain who are seeking work are socially anxious), we need to investigate it.  I don’t mean leaping in with an explanation: I mean taking some time to find out more about social anxiety and people with disabilities seeking work.  Does this observation hold for all people looking for a job change? Does it hold for all people with chronic pain? Does it only hold for those who have to change careers, or does it also apply to people who are only changing jobs? A small sketch of that kind of subgroup check follows below.
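
Purely as an illustration – the scores and group labels here are invented placeholders, not data from my thesis – this is one way a clinician with a little spreadsheet data might check whether an observed pattern holds across two subgroups:

```python
# Hypothetical sketch: does an observed pattern (e.g. higher social anxiety
# among work-seekers with chronic pain) hold across subgroups?
# All scores below are invented purely to illustrate the comparison.
from scipy import stats

# Social anxiety scores for two hypothetical subgroups of work-seekers.
changing_careers = [42, 55, 48, 61, 50, 58, 46]
changing_jobs_only = [38, 41, 35, 44, 40, 37, 43]

# An independent-samples t-test asks whether the two group means differ
# more than chance alone would suggest.
t_stat, p_value = stats.ttest_ind(changing_careers, changing_jobs_only)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```

A check like this doesn’t explain anything by itself, but it helps establish whether there is actually a stable pattern worth explaining.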

Once we’ve got a good handle on what it is we’re looking at, then we might be ready to come up with a tentative explanation or model or theory that can generate useful hypotheses.

How does this work within clinical practice?

Hopefully all of us record our observations, take notes, and use questionnaires or other measures.  Hopefully we also collate these observations so we can look at grouped data in different ways – using exploratory data analysis – while keeping our biases in mind, and see what new relationships emerge between the factors we observe.  Then we can start to wonder and ponder and pose interesting questions about how and why. A sketch of what that collation and exploration might look like follows below.
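
As a hedged sketch only – the file name and column names (clinic_observations.csv, pain_intensity, sleep_quality, social_anxiety, work_status) are hypothetical placeholders rather than any particular instrument – this is the kind of simple exploratory look a clinician could take at collated observations:

```python
# A minimal exploratory data analysis over collated clinical observations.
# The CSV and its columns are hypothetical placeholders - substitute
# whatever measures your service actually records.
import pandas as pd

# One row per patient, one column per measure collected.
data = pd.read_csv("clinic_observations.csv")

# Descriptive summaries: how is each measure distributed?
print(data[["pain_intensity", "sleep_quality", "social_anxiety"]].describe())

# Grouped views: do scores differ by work status?
print(data.groupby("work_status")["social_anxiety"].agg(["mean", "std", "count"]))

# A correlation matrix is a quick way to notice unexpected relationships
# worth describing carefully before theorising about them.
print(data[["pain_intensity", "sleep_quality", "social_anxiety"]].corr())
```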

And then we can start to consider a model or theory and try to organise our information around it, to see how well it fits.

And at the same time, we can draw on existing theories and models (eg the biopsychosocial model) to shape our clinical practice – remembering that none of us can state, with hand on heart, that we have ‘the answer’.  Remembering too, that the theories we rely on today should be constantly questioned.  Theories that don’t fit with what we see, especially if what we see occurs regularly, probably need to be revised.  It might not be ‘the patient’ who doesn’t fit or who is an anomaly, it may well be our theory or model.  Do you and I have the courage to say ‘I don’t think I know what’s going on?’ And then carry on finding out what might be?

Dunn, D.S., & Elliott, T.R. (2008). The place and promise of theory in rehabilitation psychology research. Rehabilitation Psychology, 53(3), 254-267. DOI: 10.1037/a0012962

‘it’s taken over my life’…


Each time I spend time listening to someone who is really finding it hard to cope with his or her pain, I hear the unspoken cry that pain has taken over everything. It can be heartbreaking to hear someone talk about their troubled sleep, poor concentration, difficult relationships, losing their job and ending up feeling out of control and at the mercy of the grim slave-driver we call chronic pain. The impact of pain can be all-pervasive, and it can be hard to work out what the key problems are.

To help break the areas down a little, I’ve been quite arbitrary really. I’m going to explore functional limitations in terms of the following:
1. Movement changes such as mobility (walking), manual handling, personal activities of daily living
2. Disability – participation in usual activities and roles such as grocery shopping, household management, parenting, relationships/intimacy/communication
3. Sleep – because it is such a common problem in pain
4. Work disability – mainly because this is such a complex area
5. Quality of life measures

The following two areas are ones I’ll discuss in a day or so – they’re associated with disability because they mediate the relationship between the pain experience and disability… as I mentioned yesterday, they’re the ‘suffering’ component of Loeser’s ‘rings’ model.
6. Affective impact – things like anxiety, fear, mood, anger that are influenced by thoughts and beliefs about pain and directly influence behaviour
7. Beliefs and attitudes – these mediate behaviour often through mood, but can also directly influence behaviour (especially treatment seeking)

There are so many other areas that could be included as well, but these are some that I think are important.
Before I discuss specific instruments, I want to spend yet more time looking at who and how – and the factors that may influence the usefulness of any assessment measure.

Who should assess these areas? Well, perhaps it’s not so much who ‘should’ as how these areas can be assessed in a clinical setting.

Most clinicians working in pain management (doctors, psychologists, occupational therapists, physiotherapists, nurses, social workers – have I missed anyone?) will want to know about these areas of disability but will interpret findings in slightly different ways, and perhaps assess by focusing on different aspects of these areas.

As I pointed out yesterday, there are many confounding factors when we start to look at pain assessment, and these need to be borne in mind throughout the assessment process.

How can the functional impact of pain be assessed?

  • Self-report, e.g. interview or questionnaires – the limitations of these approaches include threats to reliability and validity, as well as ‘motivation’ or expectancies
  • Observation, either in a ‘natural’ setting such as home or work, or a clinical setting
  • Functional testing, again either in a ‘natural’ setting such as home or work, or a clinical setting – functional testing can include naturalistic procedures such as the AMPS assessment; formal, structured testing such as the 6-minute walk test, the sock test, or even certain functional capacity tests; or clinical testing such as manual muscle testing, range of movement, or even Waddell’s signs

All self-report measures, whether they’re verbal questions, interviews or pen-and-paper instruments, are subject to the problem that they capture only the individual’s own perception of the degree of interference they attribute to pain. The accuracy of this perception can be called into question, especially if the person hasn’t carried out a particular activity recently – but in the end, it is the person’s perception of their abilities.

All measures need to be evaluated in terms of their reliability and validity – how much can we depend on this measure to (1) assess current status, (2) contribute to a useful diagnosis (or formulation), (3) provide a basis for treatment decisions, and (4) evaluate or measure function over time (Dworkin & Sherman, 2001).

Reliability refers to how consistently a measure performs over time, across people, and across clinicians.

Validity refers to how well a test actually measures what it claims to measure.  The best way to determine validity is to compare the test against a ‘gold standard’ – of course, in pain and functional performance this is not easy, because there is no gold standard!  The closest we can come is a comparison between, for example, a self-report on a pen-and-paper test in the clinic and a naturalistic observation in the person’s home or workplace – when they’re not aware of being observed. A rough sketch of how such checks might be computed follows below.
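
As a hedged sketch only – the scores below are invented and the variable names are placeholders, not any published instrument – here is one simple way test-retest reliability and a rough agreement-based validity check could be computed:

```python
# Hypothetical sketch of reliability and validity checks for a measure.
# All scores are invented placeholders purely for illustration.
import numpy as np

# Test-retest reliability: the same people, the same measure, two occasions.
score_time1 = np.array([12, 18, 25, 9, 30, 22, 15])
score_time2 = np.array([14, 17, 27, 10, 28, 21, 16])
test_retest_r = np.corrcoef(score_time1, score_time2)[0, 1]
print(f"Test-retest reliability (Pearson r): {test_retest_r:.2f}")

# A rough validity check in the absence of a gold standard: how well does a
# clinic pen-and-paper self-report agree with a naturalistic observation of
# the same activity at home?
clinic_self_report = np.array([3, 5, 2, 4, 6, 1, 5])
home_observation = np.array([2, 5, 3, 4, 5, 2, 4])
agreement_r = np.corrcoef(clinic_self_report, home_observation)[0, 1]
print(f"Clinic vs home agreement (Pearson r): {agreement_r:.2f}")
```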

Probably one of the best chapters discussing these aspects of pain assessment is Chapter 32, by Dworkin & Sherman, in the 2nd edition of the Handbook of Pain Assessment (2001; D.C. Turk & R. Melzack, Eds.), The Guilford Press.

Importantly for clinicians working in New Zealand, or outside North America and the UK, the reference group against which the client’s performance is being compared needs to be reasonably similar to the population the client comes from.  Unfortunately, there are very few assessment instruments with normative data derived from a New Zealand or Australasian population – and we simply don’t know whether people seeking treatment in New Zealand are the same on many dimensions as those in North America.

I’m also interested in how well any instrument – whether pen-and-paper, observation or performance-based – translates into the everyday context of the person.  This is a critical aspect of pain assessment validity that hasn’t really been examined well.  For example, the predictive validity (which is what I’m talking about) of functional capacity tests such as the Isernhagen, Blankenship or other systems has never been satisfactorily established, despite the extensive reliance on these tests by insurers.

Observation is almost always included in disability assessment. The main problems with observation are:
– there are relatively few formal observation assessments available for routine clinical use
– they do take time to carry out
– maintaining inter-rater reliability over time can be difficult (while people may initially maintain a high level of integrity with the original assessment process, it’s common to ‘drift’ over time, and ‘recalibration’ is rarely carried out)

While it’s tempting to think that observation, and even functional testing, is more ‘objective’ than self report, it’s also important to consider that these are tests of what a person will do rather than what a person can do (performance rather than capacity). As a result, these tests can’t be considered infallible or completely reliable indicators of actual performance in another setting or over a different time period.

Influences on observation or performance-based assessments include:
– the person’s beliefs about the purpose of the test
– the person’s beliefs about his or her pain (for example, the meaning of it such as hurt = harm, and whether they believe they can cope with fluctuations of intensity)
– the time of day, previous activities
– past experience of the testing process

And of course, all the usual validity and reliability issues.
More on this tomorrow – in the meantime, you really can’t go far past the 2nd edition of the Handbook of Pain Assessment (2001; D.C. Turk & R. Melzack, Eds.), The Guilford Press.

Here’s a review of the book when the 2nd Edition was published. And it’s still relevant.