The subject of heterogeneity (mixed-up-ness) in systematic reviews is tricky. A bit like ‘significance’, you can think about it as both a clinical and a statistical concept, and, in the same way, you can get results that aren’t always concordant.
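As one illustration of the statistical side of the concept, heterogeneity across studies is often summarised with Cochran's Q and Higgins' I² statistic. The sketch below uses entirely made-up effect sizes and variances, not data from any real review:

```python
from statistics import fsum

def heterogeneity(effects, variances):
    """Cochran's Q and Higgins' I-squared for a set of study results.

    effects   : per-study effect estimates (e.g. log odds ratios)
    variances : per-study variances of those estimates
    """
    weights = [1 / v for v in variances]  # inverse-variance weights
    pooled = fsum(w * e for w, e in zip(weights, effects)) / fsum(weights)
    q = fsum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))
    df = len(effects) - 1
    # I-squared: proportion of variability beyond chance, floored at 0
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return q, i2

# Four invented studies, purely for illustration
q, i2 = heterogeneity([0.5, 0.2, 0.8, 0.1], [0.04, 0.05, 0.06, 0.03])
```

As a commonly quoted rule of thumb, I² values around 25%, 50% and 75% are read as low, moderate and high heterogeneity, though (as above) the statistical number should always sit alongside the clinical judgement about whether the studies belong together at all.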
So now to go back to one of the big questions from the first blog of this series – ‘How are you even supposed to tell if a qualitative paper is any good when there are no power calculations, blinding or difficult stats?’ Hopefully, if you’ve been reading through each blog, you might have begun to realise that there are different and valid ways to perform qualitative research. It therefore follows that there are different indicators of quality.
I know that’s a tricky question, and may make you think of cream pouring on apple crumble, discussions about chemotherapy, or episodes of Octonauts depending on exactly what frame of mind you’re in and background you have.
Within a research setting, however, how do we decide that something has been researched so much, with folk repeatedly finding no or minimal effect, that we should just give it up? It doesn’t work (enough). This is a key decision, and it relies on a mixture of elements.
I’ve been thinking about this for quite a long time now, and this seems like a good time. I’ve spoken about this any number of times with students in clinic, and with doctors in training. The thing is, as soon as they hear me mention Terry Pratchett, I get the judgement. Or, to be more fair, I get the social profiling. The disguise I wear as a moderately competent 46-year-old paediatrician with greying (OK, grey) hair and comfort in my surroundings slips away and there he is – the awkward, spotty 14-year-old. Actually, I didn’t come to Terry Pratchett until my university years, but that’s almost too trivial to mention. The main point is that the stereotype is probably fair. This was an author with tremendous reach but with a core readership of the awkward, the geeky, the clever but, well, those with limited life skills.
What effect do you as a researcher have on your work? Perhaps the nice, neat, medical school answer is ‘we try to minimise how we influence research’. Certainly, quantitative techniques such as randomisation, blinding and objective measurements of results aim to reduce the potential for the researcher to influence the results of a study. However, in all research we have considerable influence on the results we get. Within qualitative research this concept is even more challenging, as the researcher is both a tool used to carry out the research, and one used to measure the result.
The UK Government set up an independent group to advise on strategies to improve the health outcomes of children and young people (from before birth to age 25 years) in January 2012. Its role is to challenge the outcomes seen in England and advise on where strategies should concentrate to improve them.
It’s a Government document. It’s not David Walliams. But if you read our blog on medical management – and why engaging in this matters – then take a half hour, cup of tea, three (yes, three – you can tell your Mum I said it was OK) bourbon biscuits and have a read of the overview. You might even get so inspired you want to read the detailed reports, or seek to be involved in making things better yourself: tweet your thoughts with #cypoutcomesreport2015
– Bob Phillips
(Declaration of interests: I have three children whose outcomes I wish to be good, work in the NHS in England and am a member of the CYPHOF group.)
Imagine looking at a problem from different perspectives – perhaps the problem of why there are never any clean coffee cups on the ward.*
You might choose to count the coffee cups, monitor their usage, record where they are found at different times of day, or even ask members of staff about why they think there is a shortage. Using different methods to attempt to understand a problem is termed methodological triangulation.
(*Note – this is a somewhat unethical piece of work. There is no uncertainty in this situation. The cups are always in the doctors’ office. But please bear with me for the sake of this blog…)
‘Course everyone can spot a child with autism. It’s there in the MRCPCH textbooks right? Something about a lack of speech and gaze avoidance and repetitive behaviour. That must be pretty amenable to a spot diagnosis.
This is me being a little provocative because hopefully very few, if any, paediatricians think like this. Hopefully we all know that the condition exists on a spectrum (autistic spectrum disorders, right?) and that often a diagnosis can be challenging.
With the publication and debate around Shape of Training (a UK-based review of how training the medical workforce will be revised for a new era of health care) there is a fair bit of … conversation … about a number of things. Some of these things include the question about how a ‘medical’ service is to be delivered with fewer doctors.
So, medical school taught us all about the rules of sampling in research – generally more is better, and if you want to be more accurate, do a power calculation (although sometimes this may be akin to picking a number out of the air). And we all know that randomisation is good practice too – right?
Wrong. These principles hold true for lots of quantitative research, where you are going to use your results to calculate test statistics, and to answer questions about causation or relationships using numbers. However, remember back to the introduction to this series – in qualitative research, the questions, methods, reasoning and results are different to what we’ve been ‘brought up on’! This is also the case in sampling.
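For contrast with the quantitative habits described above, the ‘textbook’ power calculation for comparing two group means can be sketched with the standard normal approximation. The effect size, alpha and power below are illustrative assumptions only, not a recommendation for any particular study:

```python
from math import ceil
from statistics import NormalDist

def n_per_group(effect_size, alpha=0.05, power=0.8):
    """Approximate sample size per arm for a two-sample comparison of
    means, using the normal approximation (no t-distribution correction).

    effect_size : standardised difference between groups (Cohen's d)
    """
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided test
    z_beta = NormalDist().inv_cdf(power)
    return ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

# A 'medium' standardised effect (d = 0.5), 5% alpha, 80% power
n = n_per_group(0.5)
```

Note how much the answer hinges on the assumed effect size – which is exactly the ‘picking a number out of the air’ problem, and part of why these rules simply don’t transfer to qualitative sampling.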