A traditional approach to assuring oneself of the quality of a journal article was to look at the reputation of the journal (via the impact factor, perhaps) and the assurance of peer review. The process of peer review may be poorly understood and shrouded in a cloak of mystery, but papers emerge from it with a silver sheen of respectability. During the coronavirus pandemic of early 2020, with the explosion of papers on pre-print servers (open access locations for studies that have been submitted but not yet accepted), the ‘need’ for peer review was raised again as a totem of validity.
Now there are definitely some good things about peer review; all the Archi reports are peer-reviewed by content experts and through discussions in the Editorial team. Peer review, in trials and in our experience, allows papers to emerge more balanced, clearer and more consistent, and discussing more of the key flaws and limitations of their study. Peer review can rarely improve the science of the study, however; in our area, you can’t go back and do another set of experiments on patients.
What we need to do when it comes to pre-prints is to treat them with a greater degree of caution and hold to the core concepts of critical appraisal. Ask the report all those key questions of bias and reliability, of transferability and patient-focus, and of congruence with the wider literature on the subject area. Bring your medical and clinical expertise to the party; see the outcomes measured with the specs of your patients on. Don’t believe them. But don’t assume they are untrue. Ask, acquire, appraise, and only then apply the findings.