On the challenges of ‘doing research’


As a clinician and an educator I need to be aware of, and ‘do’, research. It’s not easy, even when I’m enrolled in a PhD programme! The problem is that so much ‘research’ is not finding out new stuff; it’s gathering existing information into a format that allows ‘gaps’ in knowledge to be identified and then prioritised. And this isn’t sexy, isn’t visible, doesn’t result in publications, and is often undervalued. A bit like teaching/educating itself!

I don’t know what it’s like in the rest of the world, but here in NZ there are an awful lot of hoops to jump through before you can even begin collecting data, and this is pretty off-putting. And much of our research has to be short-term, fixed end-point work that fits within the duration of a funding contract. With the push for research to contribute directly to something pragmatic, the emphasis isn’t on philosophical or even basic science research, it’s on ‘what is happening now and what can be done about it’. Short, sharp, and applied.

So all the background work, which is a really integral part of research, is essentially invisible.

In putting together a PhD proposal, I feel like I’m breaking some new ground. And a whole lot of what I’m doing just isn’t going to be visible in my completed thesis – yet it’s incredibly important. I work primarily amongst scientists who like the hypothetico-deductive model – and I’m certainly not against that in any way. I just think there is a whole lot that happens before a hypothesis can be generated that is absolutely essential, but ignored. And this isn’t easy for scientists of the H-D persuasion. What do I mean?

Well – where did any hypothesis come from?

If we’re honest, it probably came from some sort of observation where someone saw a pattern or phenomenon that was ‘interesting’. It had to be more than a single occurrence, because that could just be a ‘blip’ in the time-space continuum (ok, perhaps not there, but a ‘statistical blip’). It only becomes interesting when it happens often enough to be identified as a pattern. Pattern detection itself is influenced by past learning and models – or, to bring the analogy back to research, by theory and past research. But it’s not entirely from past learning – new patterns are detected as people spend time immersed in data (otherwise known as ‘information’).

[Image: findtheman.jpg – a ‘find the hidden man’ optical illusion]

(it’s only after you’ve spent some time immersed in this picture that you’ll pick up the hidden man!)

The pattern recognition phase is not mentioned once in the H-D scientific method – yet it’s fundamental to original research (technically it generates what is called abductive reasoning, or inference to the best explanation, but more on that in later posts!). And Ethics Committees and funding agencies seem to have trouble coping with non-H-D studies.

Exploring data to find patterns raises questions. Oh no – questions are messy and inconvenient. Questions like ‘I wonder why…’ – and all the ‘w’ questions (who, what, when, where, why – and how, which is a complementary ‘w’ word in this!) – are about thoroughly describing a phenomenon or pattern. It’s only once we have a good description of it that we can start to think about mechanisms to explain the ‘w’ questions.

But think back to your ‘research methods’ course…

How much time was spent on these topics:

– wondering and pondering

– graphing and describing

– philosophy and history

and how much was on:

– experimental design

– statistical analysis

– qualitative vs quantitative

In other words, I think many therapists may learn ‘techniques’ for research, but not an awful lot about why it needs to be done. And therapists learn a lot about confirmatory data analysis, such as hypothesis testing and statistical analysis, but not a lot about the messy end of making sense of raw data.

Yet – most of us deal with messy raw data every day. It’s the bread and butter of therapy. What we don’t do is systematically collect that data, then explore it so we can identify patterns.

Let’s think of some examples:

– there ‘seem to be’ a number of people who come to pain management with an erroneous understanding of what ‘pacing’ is (ie pacing equals ‘never allowing my pain to go above a certain level’)

– ‘some people’ attending pain management have trouble limiting their activity level rather than being deactivated

– ‘it looks like’ referrals come at different rates from different specialties

– ‘many’ people attend ‘many’ different pain management programmes, but are referred for more therapy

– people being referred to pain management ‘seem to have a lot of’ assessment but ‘less’ treatment

Each phrase inside quote marks is an unquantified phenomenon, and an opportunity for collecting information and then exploring it. Once the information has been explored, it’s possible to generate some competing explanations for the phenomenon.

For example, if we can identify that 70% of people with a pain duration of more than 18 months have had 8 pain assessments but only 2 pain management programmes, and these people are funded by compensation funders; while the remaining 30% of people with a pain duration of more than 18 months have had 2 pain assessments, no pain management programmes, and are funded through general health funding, we can start to develop some explanatory hypotheses that can be systematically studied.
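
To make that concrete, here’s a minimal sketch (in Python with pandas) of what ‘systematically collecting, then exploring’ that referral information might look like. Everything in it is hypothetical – the records, the column names, and the figures are invented purely to illustrate the exploratory step, and aren’t drawn from any real service data.

```python
# Hypothetical sketch of exploratory data analysis: tabulating
# assessments, programmes, and funder for people referred to pain
# management. All records below are invented for illustration.
import pandas as pd

# Invented records: pain duration (months), number of pain
# assessments, number of pain management programmes, and funder.
records = pd.DataFrame({
    "pain_duration_months": [24, 36, 20, 48, 19, 30, 22, 40, 25, 21],
    "n_assessments":        [8,  9,  7,  8,  2,  8,  2,  8,  8,  2],
    "n_programmes":         [2,  2,  2,  3,  0,  2,  0,  2,  2,  0],
    "funder": ["compensation", "compensation", "compensation",
               "compensation", "general", "compensation", "general",
               "compensation", "compensation", "general"],
})

# Keep only people with pain for more than 18 months, then simply
# describe the pattern by funder -- no hypothesis is being tested.
chronic = records[records["pain_duration_months"] > 18]
summary = chronic.groupby("funder").agg(
    people=("funder", "size"),
    median_assessments=("n_assessments", "median"),
    median_programmes=("n_programmes", "median"),
)
summary["pct_of_sample"] = (100 * summary["people"] / len(chronic)).round()
print(summary)
```

The output is only a description of the pattern – but a description good enough that competing explanations (funder incentives, referral pathways, case complexity, or something else entirely) can be generated and then studied systematically.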

But this is messy, ‘nonproductive’ research. It doesn’t come up with ‘answers’. It doesn’t ‘explain’ things – it asks questions. It needs to lead to ‘more research’, it needs continuity of investigation so it doesn’t get dropped, and it is real.

In my PhD study, I’m looking to explore the ways people who cope in the community with chronic disease processes achieve important activities in their lives despite pain. I’m basing this on the observation (supported by limited formal data) that there are more people who report pain and disability than are seeking treatment from pain management. I’m not sure what coping strategies I may find out about. I’m not sure how these people developed their skills. I don’t know whether the skills they use are the same as those developed in formal pain management programmes. I don’t have any hypotheses; I don’t have a control group (because I’m not ‘testing’ anything); I don’t want random sampling, because I want to look at the widest range of coping skills there might be; and I don’t want to exclude people because they are ‘using high doses of opiates’, because this might be a coping strategy that works well for some people. And yet I’ve been asked whether I have any and all of these included or excluded in my study.

I’m using grounded theory – but grounded theory isn’t well known in the H-D world. It’s a well-recognised research strategy, but doesn’t adhere to the common precepts of the H-D model.

Part of the appeal of ‘doing research’ for me is not knowing what I may find out. The H-D model deliberately forces a researcher to decide before a study what he or she wants to find out. It’s certainly necessary for learning about ‘reality’ and ‘truth’, but in itself it’s not the only part of ‘doing research’.

This is one of quite a few posts I’ll be writing on science and therapy – keep watching!

Why is it important for therapists? As I said before, therapists work right inside ‘messy’ information all day, every day. Learning to tolerate and manage this data allows us to notice important phenomena – perhaps things that haven’t yet been explored in depth, and that might generate new ideas and priorities for the confirmatory part of research.

If you’re keen on learning about abductive reasoning, I thank Associate Prof Brian Haig at Canterbury University for stimulating my thinking and exposing me to this line of science. Some of his writing includes:
Haig, B. D. (2007). Grounded theory. In N. J. Salkind (Ed.), Encyclopedia of Measurement and Statistics, Vol. 1 (pp. 418-420). Thousand Oaks: Sage.

Haig, B. D. (2006). Consistency tests establish empirical generalizations. Nature, 442, 632.

Haig, B. D. (2006). Qualitative research: A realist perspective. Proceedings of the 2006 Joint Australian and New Zealand Psychological Societies’ Conference, pp 150-154.

Haig, B. D. (2005). Exploratory factor analysis, theory generation, and scientific method. Multivariate Behavioral Research, 40, 303-329.


6 comments

  1. Thanks for posting this. I am currently teaching an introduction to Research Methods (http://res300.wordpress.com) class for final-year undergrads, and I am going to point them to this for some interesting thoughts as they begin to think about and discover what ‘research’ is. I particularly appreciate your comments highlighting that not enough time is spent on wondering what needs to be done and why. I will try to remember this, as I totally agree, but it is so easy to get sucked into teaching the ‘techniques’. Thanks again.

    1. Hi Clare
      Thanks so much for mentioning me to your students – I do hope they make their way here. I’m just re-starting my PhD which had to be put aside for a year while I recovered from my postconcussion syndrome. Now it’s sandwiched in between full time work, being responsible for my 15 year old daughter, taking care of my household and my fun activities (photography and belly dance). I love science and methodology and philosophy of science – it’s a wonderful area to consider when you’re doing any clinical work, or are a consumer of any research. Keep in touch – and I’ll dip in and out as I can.
