
Journalology

Guest blog: Improving peer review using peer-reviewed studies #PeerRevWk16

19 Sep, 16 | by aaldcroft

This week is the second ever “Peer Review Week”, and the theme for this year is “Recognition for Review”. Peer Review Week aims to highlight the importance of peer review, a crucial part of the research process. We asked Dr Adrian Barnett of the Queensland University of Technology, a member of our editorial board, to survey articles published in BMJ Open that present research on medical publishing and peer review.


It’s challenging to do peer review well, and current models of peer review in health and medical research are regularly criticised by researchers, who all have personal stories of peer reviewers getting things badly wrong. My own favourite recent example is a reviewer asking us to consider snow in our study of how rainfall affects salmonellosis in sub-tropical Queensland.

If we believe in peer review then we should believe in using peer review to improve peer review, and there are interesting studies that have highlighted problems with the current system. This introspective research is part of the growing field of meta-research, or research on research, which uses research to examine and improve the entire research process. Such research is sorely needed, considering that an estimated 85% of current health and medical research is wasted.

BMJ Open welcomes research on peer review, and there are 54 papers in the category of “Medical publishing and peer review”, including research on peer review as well as other important meta-research issues, such as unpublished studies and how research is reported. The first paper in the category, from 2011, examined reporting guidelines; the most recent, from 2016, looks at the reporting of conflicts of interest.

Can meta-research help when it comes to the difficult problem of recognition for review? To recognise good peer review we need to judge the quality of peer review, which means reviewing the reviewers.

An observational study compared the quality of reviews from reviewers suggested by authors with those from reviewers found by editors. The concern is that author-suggested reviewers may be too friendly, and in extreme cases may even be fake reviewers; the benefit is that they save editors time in finding suitable experts. The study found no difference in the quality of reviews, but author-suggested reviewers were far more likely to recommend publication: 64% of author-suggested reviewers recommended acceptance, compared with just 35% of reviewers found by editors. It is possible that many authors suggest reviewers whose views agree with their own and whose work they have cited. Does this count as rigorous peer review, or would it be better if papers were critically analysed by researchers with a variety of views?

Another observational study compared peer reviewers’ comments on industry-sponsored drug trials with comments on non-industry studies. The industry-sponsored studies attracted fewer comments on poor experimental design and inappropriate statistical analyses; my guess, based on personal experience, is that the industry trials employed more specialist staff because they had bigger budgets.

Both these studies had to spend time and effort reviewing the peer reviewers’ comments, and this extra effort is a key barrier to improving peer review.

Instead of reviewing every review, a solution is to randomly check a sample of reviews. This would allow a reasonable number of reviews to be examined and graded in detail. If peer reviewers realise there’s a chance their work will be checked, then they should provide better reviews. The same idea is used by the tax office, which can’t afford to audit everyone but can increase compliance through random auditing.
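As a minimal sketch of how such a scheme might be implemented (the audit rate, review IDs and function name here are hypothetical, not an existing journal system):

```python
import random

def select_reviews_for_audit(review_ids, audit_rate=0.05, seed=None):
    """Randomly sample a fraction of submitted reviews for detailed grading.

    Every review has the same chance of selection, so reviewers know any
    report they write might be checked -- the same deterrent logic the tax
    office relies on with random audits.
    """
    rng = random.Random(seed)
    sample_size = max(1, round(audit_rate * len(review_ids)))
    return rng.sample(review_ids, sample_size)

# Example: audit 5% of 400 reviews received this quarter.
reviews = [f"review-{i:04d}" for i in range(400)]
to_audit = select_reviews_for_audit(reviews, seed=42)
print(f"{len(to_audit)} reviews selected for grading")
```

The grades from each audited sample could then be logged over time, producing exactly the kind of tracking data described below.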

Another benefit of regular random audits is that it would provide great data for tracking the quality of peer review over time, and allow a journal to ask whether things are getting better, or whether a policy change improved average review quality.

Of course, the random tax audit works because there are severe penalties for those who are caught. A peer review audit would likely have to provide positive incentives instead, which could include a letter of commendation for the best reviews, promotion to the editorial board, or even the well-used incentive of money.

Dr Adrian Barnett is a statistician at the Queensland University of Technology, Brisbane. He works in meta-research, which uses research to analyse how research works, with the aim of making evidence-based recommendations to increase the value of research. @aidybarnett

BMJ Open now publishes cohort profiles

22 Aug, 14 | by Richard Sands, Managing Editor

 

BMJ Open currently publishes articles reporting research results or study protocols. We have now expanded our scope to include cohort profiles, articles that describe major, ongoing research cohorts.

What’s the difference between a protocol, a cohort profile and a research paper?
Detailed information about cohort profiles is in our instructions for authors. In brief, cohort profiles will describe large, collaborative prospective studies that identify a group of participants and follow them for long periods. They will usually be population based, with sufficient funding to ensure their intended lifespan, and the original investigators must welcome wide use of the datasets beyond their own research group.

We will publish cohort profiles to provide information on a cohort’s establishment that goes beyond what can reasonably be described in the methods section of a research paper and to advise other researchers of existing datasets and opportunities for collaboration.

If a study has yet to begin recruiting participants, is still recruiting or is still collecting baseline data, please submit the study protocol. If you have completed baseline recruitment and have at least baseline data to publish, we would consider this a cohort profile as long as the cohort meets our other requirements.

We publish protocols to alert researchers to forthcoming research and to explain how specific research questions will be answered. Research papers are traditional results papers and should address a specific research question. Many cohort studies are conducted at a single institution by a single research group, with no plans to answer further questions; here ‘cohort study’ describes a research method rather than an ongoing resource. We welcome protocol and results papers for these studies but would not consider cohort profiles.

Why publish cohort profiles?
When presented with cohort studies to review, editors, peer reviewers and readers often ask: exactly how were the patients recruited? How representative were they of the wider population? Were the questionnaires used to gather information on diet reliable? Too often, such details are not reported well enough in research papers.

There is a clear advantage to publishing detailed profiles of ongoing cohort studies in an open access journal like BMJ Open: anyone interested can easily access them when planning or appraising studies that arise from those cohorts. We hope to generate an ongoing database for answering many different research questions.

Will cohort profiles be peer-reviewed?
Cohort profiles will be externally peer reviewed as normal, regardless of the cohort’s age or funding status, and article publishing charges will apply as they do for research papers.

Bringing old trials to light in BMJ Open

14 May, 14 | by Richard Sands, Managing Editor

 

Today we have published the first trial prompted by the Restoring Invisible and Abandoned Trials (RIAT) initiative.

Dr Tom Treasure from UCL, with colleagues from the University of Sussex and Imperial College, has brought back from obscurity the results of the ‘CEA Second-Look’ trial.

The study asked the question: in patients who have undergone a potentially curative resection of colorectal cancer, does a ‘second-look’ operation to resect recurrence, prompted by monthly monitoring of carcinoembryonic antigen, confer a survival benefit?

As well as the study’s inherent clinical significance for colorectal surgery, the paper is important in the context of the AllTrials and RIAT initiatives to bring greater transparency to the conduct and reporting of clinical trials.

We are delighted that this paper has been published in BMJ Open and you can read more about the background to the paper’s preparation in an accompanying Analysis piece in The BMJ.

We are enthusiastic supporters of the AllTrials campaign (BMJ was a founder). We encourage submission of so-called negative results, such as this trial of weekly chloroquine therapy for malaria-associated anaemia. These papers may show genuine evidence of absence of an effect, but they may also report trials that were inconclusive (an absence of evidence). Results of trials that had to stop early, perhaps because of recruitment problems or unexpected side-effects, will also be considered. As well as RIAT trials, trials that simply happen to be old are important to publish. We also publish trial protocols and research into trial methods.

For many years The BMJ has campaigned for all trial results to be published, and the creation of BMJ Open in 2010 was intended, in part, to provide a venue for trials that may struggle to be published by journals looking only for definitive, new or positive results.

Unfortunately, we also have to turn away some trials that are submitted to BMJ Open.

Before sending any trial or trial protocol for review we check the registration details. We rigorously follow the International Committee of Medical Journal Editors’ recommendation that trials should be registered prospectively, i.e. before any participants are recruited. Unfortunately, we receive several studies every month that fail this check and are rejected. Doubtless they’ll end up published somewhere, but that is where we set the bar for the ethical and methodological soundness of trial conduct.
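As a minimal illustration of that check (the function and dates below are hypothetical, not our actual submission system):

```python
from datetime import date

def is_prospectively_registered(registered: date, first_enrolment: date) -> bool:
    """A trial counts as prospectively registered only if it was registered
    on or before the date the first participant was enrolled."""
    return registered <= first_enrolment

# A trial registered in March 2013 that enrolled its first participant in
# June 2013 passes; one registered after recruitment began does not.
print(is_prospectively_registered(date(2013, 3, 1), date(2013, 6, 15)))  # True
print(is_prospectively_registered(date(2013, 8, 1), date(2013, 6, 15)))  # False
```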

If you have any questions about whether your study is suitable for BMJ Open, or about trial registration, please contact the editorial office at editorial.bmjopen@bmj.com. We would be delighted to hear from you.

First impact factor announced: 1.583

20 Jun, 13 | by Richard Sands, Managing Editor

 

BMJ Open’s first impact factor has been announced: 1.583. We are delighted to have this further evidence that BMJ Open is considered a journal of credible, valued research.

Does a journal’s impact factor matter?

In short – yes. When we surveyed our authors earlier this year, we asked what improvements we could make to BMJ Open. By far the most frequent response was: get an impact factor. The impact factor is over-interpreted, misinterpreted and almost certainly too influential. For as long as it remains important to authors, though, it will remain something journals must promote.

The question of whether a journal’s impact factor should matter, and how much, has been discussed for years. The San Francisco Declaration on Research Assessment (or DORA) highlights some of the issues.

Unlike many journals, BMJ Open doesn’t attempt to select articles on the strength of their likely citation count. So we’ll never have the highest impact factor in our field, and you won’t find us worrying about that.

In the future our impact factor may go up or it may go down. Our influence over this will go no further than our continued efforts to ensure BMJ Open publishes thoroughly peer-reviewed open access research and to serve all our authors as best we can, so that we can build on our reputation as a reliable publishing choice for researchers.

Article-level impact

The impact factor is an aggregate measure of BMJ Open’s articles and so says nothing about any individual paper. This is why we also publish article-level metrics. Alongside every paper we publish you can see its abstract, HTML and PDF view counts, and citations to the article from elsewhere on the HighWire publishing platform.

Such so-called ‘altmetrics’ are increasingly popular. There’s a wealth of information available about them, and the BMJ Web Development Blog is a great place to find out more and keep up to date.

It is important to remember that research impact extends well beyond an article’s citation rate. This is especially so in clinical research, where impact on public health and clinical care cannot be captured by bibliographic measures.

What is the impact factor?

The impact factor is a journal-level citation metric. It is usually calculated over three years: add up the number of articles a journal published in years 1 and 2, then see how often, on average, those articles were cited in year 3. As BMJ Open only has two years’ worth of citation information, our impact factor was calculated using the number of articles we published in 2011 (which Thomson Reuters counted as 151) and the number of times they were cited in 2012 (239): 239/151 ≈ 1.583.
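Written out, this is the standard two-year impact factor formula, with BMJ Open’s figures substituted (only 2011 articles contribute here):

$$\mathrm{IF}_{2012} \;=\; \frac{\text{citations in 2012 to articles published in 2010–11}}{\text{number of articles published in 2010–11}} \;=\; \frac{239}{151} \;\approx\; 1.583$$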

Most-cited from 2011

The following are the papers we published in 2011 with the most citations in the Thomson Reuters Web of Science in 2012:

Armstrong PK, Dowse GK, Effler PV, et al. Epidemiological study of severe febrile reactions in young children in Western Australia caused by a 2010 trivalent inactivated influenza vaccine. BMJ Open 2011;1:e000016. doi:10.1136/bmjopen-2010-000016 

da Costa BR, Cevallos M, Altman DG, et al. Uses and misuses of the STROBE statement: bibliographic study. BMJ Open 2011;1:e000048. doi:10.1136/bmjopen-2010-000048

Cohen JI, Yates KF, Duong M, et al. Obesity, orbitofrontal structure and function are associated with food choice: a cross-sectional study. BMJ Open 2011;1:e000175. doi:10.1136/bmjopen-2011-000175

Hotchkiss JW, Davies C, Gray L, et al. Trends in adult cardiovascular disease risk factors and their socio-economic patterning in the Scottish population 1995–2008: cross-sectional surveys. BMJ Open 2011;1:e000176. doi:10.1136/bmjopen-2011-000176 

Using the SPIRIT statement to improve trial protocols

18 Jan, 13 | by Richard Sands, Managing Editor


We have updated our instructions for authors to show that we now encourage the use of the SPIRIT statement.

SPIRIT (Standard Protocol Items: Recommendations for Interventional Trials) is ‘an international initiative that aims to improve the quality of clinical trial protocols by defining an evidence-based set of items to address in a protocol’. Its creation was funded by four Canadian health research institutions.

The full statement has been published in the Annals of Internal Medicine, and the explanation and elaboration in The BMJ. BMJ Open’s editor-in-chief, Dr Trish Groves, is a member of the SPIRIT group.

BMJ Open published 17 protocols in 2011 and 79 in 2012, though not all were for interventional trials. We require ethics approval and registration in an ICMJE-approved registry for trial protocols, and encourage registration of systematic review protocols in PROSPERO, led by BMJ Open editorial board member Prof Lesley Stewart.

Publishing protocols provides a valuable service: it allows researchers to publicise ongoing work, hopefully facilitating cooperation and reducing duplicated effort. Making the intended methods fully available may also improve the chances that a study can be replicated. Protocol publication should also ensure that any changes to methods adopted during a trial are reported as such in results papers.

As the SPIRIT authors write, ‘High quality protocols facilitate proper conduct, reporting, and external review of clinical trials.’ We will be encouraging authors to use SPIRIT to help meet these goals.

Open roads and closed sessions

10 May, 11 | by Richard Sands, Managing Editor

 

A recent report on the next steps for increasing open access to UK research concluded that Green OA infrastructure (i.e. repositories) should be encouraged while an economically sustainable transition to Gold OA is worked through. 

‘Heading for the open road: costs and benefits of transitions in scholarly communications’, by CEPA and Mark Ware Consulting, was commissioned by the Research Information Network, the Wellcome Trust and others, and was released a few weeks ago.

The report assesses various ways to increase access to scholarly communications (i.e. journal articles) emanating from researchers based at UK institutions.

The authors produce a benefit-cost ratio for each scenario and discuss the relative risks, including risks to the publishing industry’s business models, acknowledging the value that the industry brings over and above the administration of peer review.

The report concludes that the most cost-effective scenario is the ‘delayed’ one, where articles are made freely available after an embargo period. As publishers retain control of this embargo period, subscription cancellations are thought to be less likely, and the set-up costs for such a transition are seen to be low. The embargo periods, though, would probably be longer than funders prefer, meaning the idea is unlikely to gain traction. The report also assumes that the preferred scenario should be one open to influence by public and funder policies, and the current run of mandates for repository deposit casts further doubt on this model’s success.

So the report concludes as follows.

‘[O]ur view is that the prudent stance for policy-makers seeking to promote access in the current environment is likely to be as follows:

to encourage the use of existing Green infrastructure [i.e. repositories] (whose costs are largely sunk);

but to be cautious about pushing for reductions in embargo periods to the point where the sustainability of the underlying publishing model is put at risk;

in parallel, to work to facilitate a transition to Gold OA (in specific disciplines first) provided that (i) the average level of APCs [article-processing charges] remain at or below £1,995; (ii) the proportion of articles funded through APCs moves broadly in line with global rates; and (iii) mechanisms are in place to ensure that total payments from UK universities and their funders do not rise as a consequence of this transition.’

In a reply posted in various parts of the blogosphere, Green OA advocate Stevan Harnad has branded the second part of the report’s conclusions short-sighted, premature and mistaken. You can download the report, and read his views and the response from RIN’s Michael Jubb, here.

In his response to Harnad, Jubb argues that ‘it is perverse not to recognise that the stakes are high for individual publishers and perhaps for the industry as a whole’. What is at stake was under discussion at the PA–ALPSP journal publishers’ forum on ‘Open access: the next 10 years’. The event was held under the Chatham House rule, so comments can’t be attributed. The approach was cautious, as reflected in the session’s premise (‘Has the time come to turn the threat into an opportunity?’). There was a focus on: getting to grips with the current trajectory of public sector and funding bodies’ open access policies, rather than crystal-ball gazing to predict the state of play in a decade’s time; the rise of Green OA mandates; and the perceived need for a ‘compelling, coherent and above all positive story’ to tell about the value the scholarly publishing industry brings.

A more formal output is promised.