Guest blog: Improving peer review using peer-reviewed studies #PeerRevWk16

This week is the second ever “Peer Review Week”. The theme for this year is “Recognition for Review”. Peer Review Week aims to highlight the role of peer review as a crucial part of the research process. We asked Dr Adrian Barnett, from the Queensland University of Technology and a member of our editorial board, to survey articles published in BMJ Open that present research on medical publishing and peer review.


It’s challenging to do peer review well, and current models of peer review in health and medical research are regularly criticised by researchers, who all have personal stories of peer reviewers getting things badly wrong. My own favourite recent example is a reviewer asking us to consider snow in our study of how rainfall affects salmonellosis in sub-tropical Queensland.

If we believe in peer review, then we should believe in using peer review to improve peer review, and there are interesting studies that have highlighted its problems. This introspective research is part of the growing field of meta-research, or research on research, which uses research to examine and improve the entire research process. Such research is sorely needed, considering that an estimated 85% of current health and medical research is wasted.

BMJ Open welcomes research on peer review, and there are 54 papers in the category of “Medical publishing and peer review”, including research on peer review as well as other important meta-research issues, such as unpublished studies and how research is reported. The first paper in the category, from 2011, examined reporting guidelines, and the most recent, in 2016, looks at the reporting of conflicts of interest.

Can meta-research help when it comes to the difficult problem of recognition for review? To recognise good peer review we need to judge the quality of peer review, which means reviewing the reviewers.

An observational study compared the quality of reviews from reviewers suggested by authors with reviews from reviewers found by editors. The concern is that author-suggested reviewers may be too friendly, and in extreme cases may be fake reviewers. The benefit of author-suggested reviewers is that they save editors time in finding suitable experts. The study found no difference in the quality of reviews, but author-suggested reviewers were far more likely to recommend publication: 64% of author-suggested reviewers recommended acceptance, compared with just 35% of reviewers found by editors. It is possible that many authors suggest reviewers whose views agree with their own and whose work they have cited. Does this count as rigorous peer review, or would it be better if papers were critically analysed by researchers with a variety of views?

Another observational study examined peer reviewers’ comments on drug trials sponsored by industry compared with non-industry trials. The industry-sponsored studies attracted fewer comments on poor experimental design and inappropriate statistical analyses, and my guess, based on personal experience, is that industry trials employ more specialist staff because they have bigger budgets.

Both studies required time and effort to review the peer reviewers’ comments, and this extra effort is a key barrier to improving peer review.

Instead of reviewing every review, a solution is to randomly check a sample of reviews. This would allow a reasonable number of reviews to be examined and graded in detail. If peer reviewers realise there’s a chance their work will be checked, then they should provide better reviews. The same idea is used by the tax office, which can’t afford to audit everyone but can increase compliance through random auditing.
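To make the audit idea concrete, here is a minimal sketch in Python of how a journal might draw the random sample; the 10% audit rate and the review IDs are illustrative assumptions, not any journal’s actual policy.

```python
import random

AUDIT_RATE = 0.10  # assumed audit rate: check a random 10% of reviews

def select_reviews_for_audit(review_ids, audit_rate=AUDIT_RATE, seed=None):
    """Randomly pick a fraction of reviews for detailed grading."""
    rng = random.Random(seed)  # seed used here only to make the example reproducible
    n_audit = max(1, round(len(review_ids) * audit_rate))
    return rng.sample(review_ids, n_audit)

# Hypothetical example: 500 reviews received this year, audit a random 10%
reviews_this_year = [f"review-{i:04d}" for i in range(1, 501)]
to_audit = select_reviews_for_audit(reviews_this_year, seed=2016)
print(f"Auditing {len(to_audit)} of {len(reviews_this_year)} reviews")
```

Because every review has the same chance of being checked, reviewers cannot predict which of their reviews will be graded, and that uncertainty is what creates the incentive to review well.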

Another benefit of regular random audits is that they would provide great data for tracking the quality of peer review over time, allowing a journal to ask whether things are getting better, or whether a policy change improved average review quality.
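As a sketch of how that tracking could work, assuming each audit produces a quality score on a 1 to 5 scale, the yearly averages are simple to compute. The scores below are made up for illustration.

```python
from statistics import mean

# Hypothetical audit records: (year, quality score on a 1-5 scale)
audit_scores = [
    (2014, 3.1), (2014, 2.8), (2015, 3.4), (2015, 3.0),
    (2016, 3.7), (2016, 3.9),  # e.g. after a policy change in 2016
]

def mean_score_by_year(records):
    """Average the audited quality scores within each year."""
    by_year = {}
    for year, score in records:
        by_year.setdefault(year, []).append(score)
    return {year: round(mean(scores), 2) for year, scores in sorted(by_year.items())}

print(mean_score_by_year(audit_scores))
# {2014: 2.95, 2015: 3.2, 2016: 3.8}
```

A rising average would suggest review quality is improving, and comparing averages before and after a policy change gives a simple first check of whether the change helped.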

Of course, the random tax audit works because there are severe penalties for those who are caught. A peer review audit would likely have to provide positive incentives instead, which could include a letter of commendation for the best reviews, promotion to the editorial board, or even the well-used incentive of money.

Dr Adrian Barnett is a statistician at the Queensland University of Technology, Brisbane. He works in meta-research, which uses research to analyse how research works, with the aim of making evidence-based recommendations to increase the value of research. @aidybarnett
