8 Dec, 13 | by Bob Phillips
Evidence-based medicine – EBM – is a framework for thinking. It’s a process. It’s a method. It’s taking the most unbiased, patient-oriented, clinically relevant research, combining that with the wishes and opinions of the child/young person and family before you, and integrating your own skills, expertise and resources to co-produce the most appropriate decision for them at that point in time.
It is transparent. It is explicit. If you ‘do’ EBM then though folk may disagree, they’ll be able to understand the thinking that they disagree with.
It is artistic. Without the ‘arts’ of communication, understanding, empathy and team working EBM may as well be a spreadsheet on an actuary’s hard drive.
4 Dec, 13 | by Bob Phillips
As part of our commitment to the International Committee of Medical Journal Editors, we at the Archives of Disease in Childhood have supported the idea that all trials with a health-related outcome should be registered before they are undertaken, and made registration obligatory for trials published in our journal.
The reason for this is clear – it is to encourage the accurate and truthful reporting of a study, and avoid issues of altered interventions, meandering outcomes and selective outcome reporting (for more on that, see our recent blog here). It’s easy to do this; there are a range of international and freely accessible trial registries on which the information can be logged. It’s even possible to do this for systematic reviews – for exactly the same reasons.
1 Dec, 13 | by Bob Phillips
The Hideout has been promoting the voice of young people who have been subject to abuse, in ways that young people can engage with and understand; it’s a branch of Women’s Aid, a UK charity that has been active in domestic violence support and prevention for over 30 years.
18 Nov, 13 | by Bob Phillips
Do you know a young person who would want to blog to a worldwide group of children’s and young people’s clinicians? Run the Twitter account of an international journal for a day?
Next Friday, 22 November 2013, is Children’s Takeover Day 2013 in the UK, and we at the Archives of Disease in Childhood (despite the fusty name) would like to offer our social media channels to amplify the voices of some young people: to tell our readers what research, clinical innovation or approach they think should be pursued, and how.
To offer yourself up, or support someone in doing so, get them to tweet @ADC_BMJ and we can then DM to work something out, or send an email to info.adc[ at ]bmj.com with the subject ‘Takeover Day’.
We look forward to hearing from you.
2 Nov, 13 | by Bob Phillips
Outcome reporting bias: cherry-picking the best results
When planning an RCT, the choice of primary outcome is crucial. This is an integral part of the research question, and forms the basis of the sample size calculation. Secondary outcomes are also chosen, to give a wider indication of the effects of interventions, generate new hypotheses, and contribute to meta-analyses.
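To make the link between the primary outcome and the sample size calculation concrete, here is a minimal sketch of the standard normal-approximation formula for comparing two proportions (the function name and the effect sizes are illustrative, not taken from this post):

```python
from math import ceil
from statistics import NormalDist

def per_group_sample_size(p1, p2, alpha=0.05, power=0.80):
    """Approximate participants needed per arm to detect a difference
    between two proportions (normal approximation, two-sided test)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)           # ~0.84 for 80% power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

# Detecting an improvement in response rate from 60% to 75%:
n = per_group_sample_size(0.60, 0.75)  # 150 per arm
```

The point follows directly: the whole calculation hangs on the prespecified primary outcome, so switching that outcome after the results are in undermines the very basis on which the trial was sized.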
In many trials, outcomes are selectively reported, on the basis of the results, or the reported primary outcome differs from the one specified at the trial outset (again, often changed on the basis of the results). These practices lead to outcome reporting bias.
Selective outcome reporting renders the study report a biased reflection of the overall trial findings. This misleads the reader, and has a substantial impact on the conclusions of Cochrane reviews, whose meta-analyses then pool the reported (often positive) results while missing the unreported negative ones.
It can be difficult to evaluate outcome reporting bias in an individual trial. It is important to check that all outcomes described in the methods are subsequently accounted for in the results, and to consider whether other outcomes you might have expected to see are missing. It is also useful to check the trial protocol (if available) for outcomes that were measured but not reported. In some conditions, core outcome sets have been agreed (outcomes that should be measured and reported in all trials in that condition), and it may be useful to compare these with what the trial reported.
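The protocol check described above boils down to a set comparison; a trivially small sketch, with invented outcome names purely for illustration:

```python
# Outcomes listed in the registered protocol (hypothetical examples)
protocol_outcomes = {"mortality", "length of stay", "adverse events", "quality of life"}

# Outcomes actually reported in the published paper
reported_outcomes = {"mortality", "length of stay"}

# Anything measured per protocol but absent from the report is a red flag
unreported = protocol_outcomes - reported_outcomes
```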
29 Oct, 13 | by tessadavis
October’s #ADC_JC discussed this paper on bruising patterns in children with physical abuse.
Anyone working with children should be on the lookout for physical abuse – but can the number of bruises and the pattern of bruising actually tell us anything?
We were joined by Professor Alison Kemp, one of the study authors, and I have storified the key discussion points HERE.
Our next #ADC_JC will be some time in November. The paper and date will be announced shortly – keep your eye on our landing page, or follow us on Twitter to find out.
25 Oct, 13 | by Bob Phillips
Performance and detection bias – hiding who got what
Bias can occur if the treatment arm to which a given participant is randomized is known. When reading an RCT report, the term “double-blind” is often not sufficient to help appraise this. We need to know from whom treatment identity was masked, and how.
18 Oct, 13 | by Bob Phillips
Selection bias – some ‘equal groups’ are more equal than others
The groups of participants receiving the interventions should be comparable at baseline; otherwise, confounding variables might give one treatment an apparent advantage over another. If the groups differ for a systematic reason, the study is at risk of selection bias.
Randomization (sequence generation)
The first consideration is whether the treatments were allocated randomly. The best method is to predefine a randomization schedule, in which treatments are allocated by chance alone. Other methods could introduce differences between groups. An example with obvious implications (to highlight the point) would be to allocate preterm boys to one group and girls to another. It might seem reasonable to allocate treatments after participants enrol, by flipping coins or rolling dice, but this can leave group sizes unequal. ‘Quasi-randomization’ usually implies allocation by a rule that is not truly random – such as alternation, date of birth, or hospital record number – which by definition can introduce differences between groups, leading to a particularly high risk of bias.
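As a concrete sketch of a predefined schedule that keeps group sizes balanced, permuted-block randomization can be generated in a few lines (the block size and arm labels here are illustrative, not from the post):

```python
import random

def permuted_block_schedule(n_participants, block_size=4, arms=("A", "B")):
    """Pre-generate an allocation list in shuffled blocks, so that within
    every completed block each arm appears equally often."""
    assert block_size % len(arms) == 0, "block must divide evenly among arms"
    schedule = []
    while len(schedule) < n_participants:
        block = list(arms) * (block_size // len(arms))
        random.shuffle(block)  # chance alone decides the order within a block
        schedule.extend(block)
    return schedule[:n_participants]

schedule = permuted_block_schedule(20)
# Unlike per-patient coin flips, group sizes can never drift apart
# by more than half a block.
```

In practice the schedule is generated before the trial starts and held centrally (or by a pharmacy), so that allocation stays concealed from recruiting clinicians.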
14 Oct, 13 | by Bob Phillips
There’s a really clear and neat idea that researchers do research, which gets published, and clinicians take this up in their practice. We know this isn’t true. But how to translate a study or publication into the clinic, onto the wards or out into the community is tricky.
Prof Trisha Greenhalgh gave a really neat 15-minute video lecture on the theories behind this to the 2013 Nordic Conference on Implementation of Evidence-Based Practice, and it’s available here:
11 Oct, 13 | by Bob Phillips
Another new series of blogs here on the ADC website, from Ian Sinha of the Respiratory Unit, Alder Hey Children’s Hospital, Liverpool, UK, explores the deeper depths of critical appraisal of randomised controlled trials from the perspective of the Cochrane Collaboration’s approach to the issue.
Clinical trials – reading between the lines
Just because a research study is called a “double-blind randomized controlled trial” (RCT) does not mean that description is accurate. And even if it is a double-blind RCT, this does not mean it is scientifically robust.