
Writing for Publication

The academic publishing process: A lesson in antifragility

6 Jul, 16 | by Sheree Bekker


Image: Mosaico Trabajos Hércules (M.A.N. Madrid) 02 by Luis García under CC BY-SA 2.0

“Some things benefit from shocks; they thrive and grow when exposed to volatility, randomness, disorder, and stressors and love adventure, risk, and uncertainty. Yet, in spite of the ubiquity of the phenomenon, there is no word for the exact opposite of fragile. Let us call it antifragile.

Antifragility is beyond resilience or robustness. The resilient resists shocks and stays the same; the antifragile gets better”

Antifragile: Things That Gain from Disorder ~ Nassim Nicholas Taleb

Sheree Bekker and Dr Bridie Scott-Parker have teamed up to write this post on their experiences of the academic publishing process – they provide reflections from the point of view of a rookie researcher (SB) and a more experienced researcher (BSP). 

[Sheree Bekker] Congratulations poured in when I published my first academic paper. I had been open about my publishing journey on social media, and had shared each step of the eighteen-month process – rejections and revisions aplenty. The academic publishing process can be daunting for a rookie researcher, and sharing my failures, and then my ultimate success, with my community on social media gave me a place to both vent and celebrate with others who had been through the process many times before.

As most academics do, I have now come to expect rejection. There is nothing unusual about this in academia – research is built on peer review, and on journals with high standards and even higher rejection rates. Rejection is both a rite of passage and a way of life for academics. We are reminded of this often through corridor conversations, mentorship, and our own experiences.

Yet failure is not openly and honestly shared or spoken about in the wider sphere of academia. Sure, we all know that failure is the name of the game, but it is not really spoken about. I remember a negative CV doing the rounds on social media last year – and how radical it seemed at the time that someone was willing to share all their failures (gasp!) in an arena where a career – and, for many academics, self-worth – is tied to wins. Yet wins cannot be achieved without the failures. Go figure.

I am a big advocate for sharing our life’s work on social media, but have often wondered why we only share our ‘wins’. Do the failures speak to a lack of competency? No, I don’t believe so. Declaring a failure for the world to see speaks to shame and vulnerability – and also to courage and commitment. Now, I am not suggesting that all academics share all their failures all of the time – for I am told that you just get to a point where it does not bother you any longer as there are just too many to share – but I do believe that we owe it to emerging academics to, at the very least, open up the conversation a little more. Indeed, a Guardian piece recently reminded us academics: you are going to fail, so learn how to do it better.

The opposite of fragility is not resilience or robustness; it is antifragility: the ability to benefit from stress, errors, and change – the way, say, the mythological Hydra generated two new heads each time one was cut off. Perhaps it is this antifragility that we need to cultivate as emerging researchers, rather than mere stubborn grit: growing and learning from our academic challenges, rather than merely ploughing through them.

[Bridie Scott-Parker] Seven years later, I still recall the daunting – nay, terrifying – experience of submitting my first manuscript for peer review. While the paper from my Honours thesis emerged quite organically over a month or so (hilarious that I thought this took forever to happen), I actually spent more than 6 hours frantically checking that everything was attached correctly, that all the screens were completed, etcetera, then reading the entire PDF generated by the journal’s online submission system, before clicking ‘yes I want to submit this article for peer-review’ (and no I won’t change my mind because I am not allowed to….). Then the dreaded reviews arrived and I was crushed. Clearly I was a complete failure as a researcher, an academic, and as a human, and I should abandon all hope and live in a cave for the remainder of my life! Again, hilarious, as my supervisors said that the comments were pretty good! I could see no good in them, and it took some persuading to convince me that reviews are not personal. Having survived the review-revise-respond-review-revise-respond merry-go-round many, many times since then, I have the benefit of hindsight to see that those reviews were indeed quite favourable. I also have the benefit of understanding that this is a normal part of disseminating findings, and that as researchers, academics, and authors, our skills are strengthened considerably each time we receive constructive feedback.

Please note also that I don’t live in fairyland. Sometimes reviews are one or two sentences along the lines of ‘this is rubbish, go away’ or ‘this is good and could be improved by some minor changes’, without actually providing any guidance regarding what was good, what was less good, and how to improve the manuscript. Such reviews are a waste of time for the reviewer, for the editor who manages the review process, and for the authors who are trying their best to share their research in an engaging and informative manner.

I have found over time that I have become the ‘poster child’ for antifragility. Take all feedback on board – good and bad – and learn from it. Where are my research and writing strengths? Where are my research and writing weaknesses? Don’t be afraid to ask peers and colleagues for unbiased feedback regarding your strengths and weaknesses. This information will only help you in the long run.

Last week I had the wonderful opportunity to serve as a mentor to a group of PhD students as they traversed the steeplechase-like process of preparing and submitting a paper for peer review. While I have no experience or skills in Indonesian diglossia, storm runoff, or the conceptualisation of climate change adaptation, I have antifragility. I shared my own experiences, and the tips and tricks I have discovered to make the writing process that little bit easier. I also shared what reviewers look for, and cautioned against easy ways to ‘annoy’ reviewers and editors (my personal all-time favourite: don’t use any punctuation!). Yes, I completed my doctoral dissertation by peer-reviewed publication, and yes, I have a steady stream of post-doctoral peer-reviewed publications. However, what you don’t see is the many frogs I proverbially had to kiss before the manuscripts turned into princes. My personal best (not my own project, I am pleased to say) was 18 different versions submitted to 9 different journals, and 3 email conversations between myself as corresponding author and the journal editor, before the paper was finally accepted. Each time the paper was revised, and sometimes it was resubmitted to the journal that provided the reviewer feedback (if it had not been outright rejected). Yes, this is frustrating, but the final article (my silk purse) is so much better than the original submission (the sow’s ear). Bear in mind also that revising a manuscript in light of reviewers’ comments – even when you have done so 4 times – does not guarantee that it will be published in that journal.

Again, antifragility is the way to go 🙂

 

Censoring research

23 Jun, 16 | by Barry Pless

I am posting this for all Injury Prevention blog readers who are researchers or interested in research. I do so in part because John Langley is one of the pioneers in our field and was one of the Senior members of our editorial board from IP’s earliest days. But I also do so because the issue that prompted him to write this op-ed for his local Otago paper is by no means restricted to New Zealand. It is a widespread and important issue that has the potential to corrupt research in all manner of ways. IBP 

Read it and think carefully about the implications for your own work. Thanks to John for sharing it and to the ODT for permission to reproduce it. Thankfully Injury Prevention, to the best of my knowledge, has never had to deal with this issue.

 Restrictive publication clauses in health research contracts

I was both pleased and disappointed to read the two ODT articles (20 February, 5 March) on this subject. Public airing of this important issue is long overdue. I was disappointed as I gained the impression that despite several scientists publicly expressing their concerns, the University of Otago Deputy Vice-Chancellor for Research appeared to have none. The Deputy Vice-Chancellor’s apparent lack of concern, however, is consistent with my experiences of research administration at the University.

For 20 years I was the Director of the University of Otago’s Injury Prevention Research Unit. This Unit was entirely dependent on funds from government agencies. Contrary to my expectations, the University of Otago did not pro-actively seek to protect me, or the public, from clauses in draft contracts that placed restrictions on publishing research findings. Protecting researchers increases the chances that they can serve the public interest, and meet the University’s legislated role of being the critic and conscience of society.

I spent a significant amount of time challenging clauses that would allow a government agency to censor a finding they disagreed with, or deny the right to publish work at all. On a couple of occasions during contract negotiations I was reminded by government agencies that they could purchase the outputs they wanted from other Universities or private organisations who would accept the restrictive clauses.

Why would Universities be accepting these clauses? I believe a key factor is the importance of, and competition associated with, generating research income. Collectively, NZ’s eight publicly funded universities derive approximately 15% of their income from research contracts. The success a University has, relative to others, in obtaining external research funds also has a significant role in determining the funding it receives from government through the Performance Based Research Funding scheme. Research income is critical to ensuring high quality research, thereby maintaining and enhancing a university’s reputation and thus attracting students, staff, and further funding. The purchasers of research can use the competition between universities, and researchers, to their advantage in getting restrictive publication clauses accepted.

Private research suppliers can accept restrictive publication clauses as they are typically uninterested in publishing in peer-reviewed journals and have no statutory or moral obligation to serve the public interest. While they would produce a research report for the purchaser, these reports are not frequently published, easy to access, or subject to rigorous quality control, e.g., independent peer review.

It was also my experience that many senior researchers did not care about the restrictive clauses. Why would this be so? Success in attracting research funding is, for most researchers, critical to pursuing their research, producing publications, and to promotion and public recognition.

This behaviour contrasts with the Royal Society of New Zealand’s (RSNZ) Code of Professional Standards and Ethics in Science, Technology, and the Humanities. Section 8.1 of that code states that members of the Society must: “oppose any manipulation of results to meet the perceived needs or requirements of employers, funding agencies, the media or other clients and interested parties whether this be attempted before or after the relevant data have been obtained;”

I accept the right of a purchaser of research to see an advance copy of any paper for publication and to make comments on it. But requiring modification beyond correcting factual errors is unacceptable. Even a purchaser suggesting toning down a phrase here and there and putting in some qualifiers is problematic when that purchaser is concerned to minimize bad publicity that might arise from the paper. It also places the researchers in a bind. Should they comply? If they don’t comply, are they putting future research funding from the purchaser at risk? I suggest they might be. Why deal again with a ‘difficult’ group of researchers when you can purchase the work elsewhere?

The best approach to this issue is transparency. Make the deliberations between researchers and purchaser accessible to all, as in the open review practiced by some scientific journals, so readers can trace the discussion. This approach would not, however, deal with contracts that explicitly prohibit the researchers from publishing at all.

The Health Promotion Agency (HPA), a crown entity charged with promoting healthy lifestyles, recently put out a request for proposals (RFP) to assess whether the reduction in trading hours in Wellington has any impact on alcohol-related harm. The ‘indicative’ contract for this RFP stated: “The Supplier will not publish the results of the Services undertaken pursuant to this Contract”. Only when challenged did the HPA advise that this was a negotiable clause. Potential university researchers interested in bidding for such research should be able to take the ‘indicative’ contract as a reflection of the intent of the purchaser. Irrespective of this, preparing a high quality research proposal involves significant resources, and many researchers would consider it not worth the effort given the uncertainty of their right to publish.

In effect, some government purchasers are getting to decide what findings, if any, the public gets to see from research the public has paid for, either by pressuring some universities to accept restrictive clauses or by buying what they want from private suppliers.

Universities New Zealand, the representative body of New Zealand’s eight universities, and the RSNZ need to enter into discussions with Government with a view to ensuring that government research RFPs do not impose these restrictions. Interference with researchers’ ability to bid for, execute, and publish research compromises the role universities have as critic and conscience of society.

We need an independent audit of government research contracting to determine to what degree restrictive publication practices and the use of private suppliers are undermining the public’s right to be fully informed of the findings of research they have paid for.

Emeritus Prof John Langley

On turning journal articles into blog posts

6 Apr, 16 | by Sheree Bekker


Image: Typing by Sebastien Wiertz under CC BY 2.0

Blogging can be a divisive topic amongst academics. It has been called frivolous, and a distraction from ‘real’ work, by some – whilst others wax lyrical that it is the real work.

Fact is:

Social media and blogs are not just add-ons to academic research, but a simple reflection of the passion underpinning it 

Recently, Injury Prevention Editor-in-Chief Brian Johnston shared how to write a blog post from your journal article in eleven easy steps with the social media editorial team. I have decided to turn this into a pleasingly meta blog post about turning your – yes, your – Injury Prevention papers into posts for this very blog.

Blogging is a genre in and of itself. Today, blogs are as much a part of scholarly discourse as papers, presentations, and corridor conversations. This represents a new manner of sharing your life’s work with others, in a more relaxed and personal (if you like) way. This genre allows you to share more of your personal story behind a piece of research, or to highlight findings that are especially interesting, or merely to share your passion with a community of engaged scholars. Further:

Academically a blog post boosts citations for the core article itself. It advertises your journal article in ways that can get it far more widely read than just pushing the article out into the ether to sink or swim on its own. A post reaches other researchers in your discipline (those who are not digital hermits). And because it’s accessibly written, it travels well, goes overseas, gets re-tweeted and re-liked

On behalf of the social media team, I invite you to contact us if you would like to turn your Injury Prevention paper into a blog post. I will leave it up to you to read the “how to” detailed in the post shared above, and to peruse the tips and tricks given. My advice, however, is much simpler: write the blog post about your paper as if you are explaining it to your mother/a teenager/friends at a party (in other words: plain language!).

The most exciting part of blogging is that it has no rules, unlike academic writing. The most terrifying part of blogging is that it has no rules, unlike academic writing.

We promise we will make this process as painless as possible (we are all injury prevention researchers after all!). We can alleviate common fears such as not knowing where to start by doing an interview style blog post together. We can help you overcome the lack of formality by guiding the structure and content of the blog post. Or you can write freely as you please.

For examples of authors who have already done this, see: here, here, and here.

We are always looking to hear more about your thoughts, views, and experiences as injury prevention researchers.

Contact me via Twitter on @shereebekker, or email me at s [dot] bekker [at] federation [dot] edu [dot] au

More on writing

5 Dec, 15 | by Barry Pless

I am not a fan of Elsevier and thus ambivalent about posting this. But, on balance, it may help some novice authors, and perhaps some more experienced ones as well. Check out this link to the Elsevier Publishing Campus: many PDFs on various aspects of writing and publishing are available to download. Hope it works.

https://www.publishingcampus.elsevier.com/pages/154/Colleges/College-of-Skills-Training/Resources-for-Skills-Training/Quick-Guides-and-Downloads.html

Workshop blog correction

2 Feb, 15 | by Bridie Scott-Parker

My apologies, it seems I need tuition in proof-reading! I mistakenly omitted Dr Ted Miller, Injury Prevention, as one of the Editors who will be leading the discussion at this great workshop.

SAVIR 2015 Workshop

1 Feb, 15 | by Bridie Scott-Parker

The very interesting workshop, Nurturing a Successful Academic/Early Professional Publishing Career, will be held at the SAVIR 2015 conference in New Orleans next month. The workshop will be held from 4.45pm to 6.00pm in the Oak Alley room, Sheraton Hotel.

Why are we holding this workshop? Because academic environments expect early career professionals to publish to advance their careers, yet many university programs provide limited opportunities for students to develop these abilities. The aim of this roundtable is to provide such an opportunity for students and early career professionals in an informal setting. In this event, students and early career professionals will be able to interact closely with editors of leading injury research journals and discuss identifying the right journal for your manuscript, writing informative abstracts, reporting statistical information, and addressing reviewer comments. This session aims to enhance the writing skills of early career injury researchers.

There will be two parallel roundtable sessions covering issues related to scientific manuscript preparation and publication. Discussions will focus on the following topics: writing informative abstracts, how to address reviewer comments, how to identify the right journal for your manuscript, tips and suggestions for overcoming writer’s block, reporting statistical information (do’s and don’ts), and finally some common mistakes made by researchers when publishing.

The roundtables will be limited to a total of 13 and 12 participants respectively, including the discussion leaders. The panel of editors will consist of Dr. Ted Miller, Injury Prevention; Dr. Linda Degutis, Injury Prevention; Dr. Guohua Li, Injury Epidemiology; Dr. Frederick Rivara, JAMA Pediatrics; and Dr. Shrikant Bangdiwala, International Journal of Injury Control and Safety Promotion.

Don’t miss out – register for the workshop now!

 

How to Cite Social Media in Scholarly Writing

22 Feb, 14 | by Barry Pless

I have not posted anything for a week or so, but came across this useful item for those who want to get it right when they reach the references section of a paper they are submitting to this or any other scholarly journal. It is timely because information from social media is increasingly being used in research. But how to cite it? The author is Camille Gamboa at SAGE US.

She writes: “As it seems that social media will only play a bigger role in future research of all disciplines, I took to doing my own research on how Facebook posts, tweets, YouTube videos, etc. should be cited in academic publications. I came across the following table from TeachBytes that I thought would be helpful to share …

The Chicago Manual of Style

Not included in her chart (see below) is what she found out about how to cite social media outlets following the Chicago Manual of Style. Apparently there are not yet any official guides for Facebook, Twitter, and YouTube.

Blog Posts:

Firstname Lastname, “Title of the Blog Post Entry,” Title or Description of the Blog (blog), date posted, URL. *Note – “(blog)” does not need to be included if the word “blog” is already part of the name of the blog. Citations of blog posts are given in the notes and are not included in the bibliography unless they are frequently cited in one paper.
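To make the pattern concrete, here is a hypothetical example (the author, post title, blog name, and URL are invented for illustration): Jane Smith, “Five Myths About Bicycle Helmets,” Safer Streets (blog), February 10, 2014, http://example.com/safer-streets/helmet-myths.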

Emails:

Firstname Lastname, email message to XX, Date. Citations of emails are usually provided in a note and are rarely listed in a bibliography. Email addresses should not be included.
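Again, a hypothetical example (the name and date are invented): Jane Smith, email message to the author, February 12, 2014.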

The chart she mentions is on the website below and applies to MLA and APA publications:

http://connection.sagepub.com/blog/2013/09/17/how-to-cite-social-media-in-scholarly-writing/ 

Nobel prize-winner criticizes elite journals

11 Dec, 13 | by Barry Pless

Writing in the Guardian, Nobel prize winner Randy Schekman raises serious concerns over some journals’ practices and calls on others in the scientific community to take action. “I have published in the big brands, including papers that won me a Nobel prize. But no longer,” he writes. “Just as Wall Street needs to break the hold of bonus culture, so science must break the tyranny of the luxury journals.” Schekman is the editor of eLife, an online journal set up by the Wellcome Trust. Articles submitted to the journal – a competitor to Nature, Cell and Science – are discussed by reviewers who are working scientists and accepted if all agree. The papers are free for anyone to read.

Schekman criticises Nature, Cell and Science for artificially restricting the number of papers they accept, a policy he says stokes demand “like fashion designers who create limited-edition handbags.” He also attacks a widespread metric called an “impact factor”, used by many top-tier journals in their marketing. A journal’s impact factor is a measure of how often its papers are cited, and is used as a proxy for quality. But Schekman said it was a “toxic influence” on science that “introduced a distortion”. He writes: “A paper can become highly cited because it is good science – or because it is eye-catching, provocative, or wrong.”

Daniel Sirkis, a postdoc in Schekman’s lab, said many scientists wasted a lot of time trying to get their work into Cell, Science and Nature. “It’s true I could have a harder time getting my foot in the door of certain elite institutions without papers in these journals during my postdoc, but I don’t think I’d want to do science at a place that had this as one of their most important criteria for hiring anyway,” he told the Guardian.

For more of what was written, go to:

http://www.theguardian.com/science/2013/dec/09/nobel-winner-boycott-science-journals/print

Editor’s comment: I would have liked to see more evidence that these journals, his competitors, are truly guilty of the behaviour he alleges, but I suspect it is true. I do wonder, however, why he did not take this stand before winning the Nobel prize – but that thought may well be sour grapes on my part. I don’t think this is much of a problem in our field, and I am confident that Injury Prevention does not choose papers to ‘stoke demand’.

Beware evidence-based evangelists

10 Dec, 13 | by Barry Pless

A colleague recently sent me a link to this piece in JAMA by RS Braithwaite, MD, MS, that cautions against placing too much weight on some ‘evidence based’ decisions. When the term became popular (was it really 20 years ago?) I often referred to many of its more vocal proponents as evangelists. I still think it is often oversold. Although I realize this piece applies far more to clinical decisions than to those that are population based, I have the impression that when politicians want to avoid adopting a policy they often cover themselves in this sort of jargon and false reasoning. If you think I am way off base, please comment and say so. Incidentally, the colleague was Tom Lang, who replied to my note of appreciation by adding that for clinicians the three most dangerous words are “In my experience…”

EBM’s Six Dangerous Words

 The six most dangerous words in evidence-based medicine (EBM) do not directly cause deaths or adverse events. They do not directly cause medical errors or diminutions in quality of care. However, they may indirectly cause these adverse consequences by leading to false inferences for decision making. Consider the following statements, each of which includes the six most dangerous words:

• There is no evidence to suggest that hospitalizing compared with not hospitalizing patients with acute shortness of breath reduces mortality.

• There is no evidence to suggest that using ambulances compared with taxis to transport people with acute GI bleeds reduces prehospital deaths.

• There is no evidence to suggest that looking both ways before crossing a street compared to not looking both ways reduces pedestrian fatalities.

All of these statements are clearly absurd as foundations for decision making, yet they are technically correct. In each case, these hypotheses have been untested and therefore there is no evidence to suggest otherwise, presuming a definition of “evidence” that requires formal hypothesis testing in an adequately powered study.1 Indeed, as of this writing, “there is no evidence to suggest” appears in MEDLINE 3055 times, nearly as often as “decision analysis” (3140 times), a common framework for using evidence to make decisions. My anecdotal experience suggests that “there is no evidence to suggest” is a mantra for EBM practitioners, in a wide variety of settings. And it is infrequently followed by the clarifying aphorism “absence of evidence is not evidence of absence”2 or discussions of more inclusive definitions of “evidence.”3,4

Deciding not to intervene when “there is no evidence to suggest” the favorability of an intervention makes sense from a decision analytic perspective when the act involves potential harm or large resource commitments.5 However, deciding to intervene when “there is no evidence to suggest” also may make sense, particularly if the intervention does not involve harm or large resource commitments, and especially if benefit is suggested by subjective experience (eg, the qualitative analogue of the Bayesian prior probability).6

Indeed, the fundamental problem with the phrase “there is no evidence to suggest” is that it is ambiguous while seeming precise. For example, it does not distinguish between the vastly different evidentiary bases of US Preventive Services Task Force (USPSTF) grades I, D, or C, each of which may have distinct implications for decision making.7 “There is no evidence to suggest” may mean “this has been proven to have no benefit” (corresponding to USPSTF grade D), which has very different implications than alternative meanings for “there is no evidence to suggest” such as “scientific evidence is inconclusive or insufficient” (corresponding to USPSTF grade I) or “this is a close call, with risks exceeding benefits for some patients but not for others” (corresponding to USPSTF grade C). As a result, these six dangerous words may mask the uncertainty of experts. They even may be used to deny treatments with potential benefit, if they are interpreted as the equivalent of USPSTF grade D (“this has been proven to have no benefit”) but really mean the equivalent of USPSTF grade I (“scientific evidence is inconclusive or insufficient”).

Beyond its ambiguity, “there is no evidence to suggest” creates an artificial frame for the subsequent decision. It may signal to patients, physicians, and other stakeholders that they need to ignore intuition in favor of expertise, and to suppress their cumulative body of conscious experience and unconscious heuristics in favor of objective certainty. Suppressing intuition may be appropriate when the evidence yields robust inferences for decision making, but is inappropriate when the evidence does not yield robust inferences for decision making. Yet “there is no evidence to suggest” is compatible with either scenario. Because decisions are particularly sensitive to patient preferences when the favorability of an intervention is unclear (eg, USPSTF grade C), “there is no evidence to suggest” may inhibit shared decision making and may even be corrosive to patient-centered care.8 Indeed, it is instructive to note that most people make patient-centered decisions every day without high-quality (eg, randomized controlled trial) evidence, and these decisions are not always wrong. Furthermore, foundational papers in the EBM field make it explicitly clear that EBM was never meant to exclude information derived from experience and intuition.4 While some may argue that misuse of this phrase is only a symptom of not having received appropriate training in EBM, my experience with practitioners of EBM across the clinical, educational, research, and policy spectra suggests the contrary.

I suggest that academic physicians and EBM practitioners make a concerted effort to banish this phrase from their professional vocabularies. Instead, they could substitute one of the following 4 phrases, each of which has clearer implications for decision making: (1) “scientific evidence is inconclusive, and we don’t know what is best” (corresponding to USPSTF grade I with an uninformative Bayesian prior); (2) “scientific evidence is inconclusive, but my experience or other knowledge suggests ‘X’” (corresponding to USPSTF grade I with an informative Bayesian prior suggesting “X”); (3) “this has been proven to have no benefit” (corresponding to USPSTF grade D); or (4) “this is a close call, with risks exceeding benefits for some patients but not for others” (corresponding to USPSTF grade C). Each of these four statements would lead to distinct inferences for decision making and could improve clarity of communication with patients.

EBM practitioners should abandon terms that may unintentionally mislead or inhibit patient-centered care. “There is no evidence to suggest” is a persistent culprit. Informed implementation of EBM requires clearly communicating the status of available evidence, rather than ducking behind the shield of six dangerous words.

JAMA. 2013;310(20):2149-2150. doi:10.1001/jama.2013.281996.

Open access: I told you so

19 Oct, 13 | by Barry Pless

I have often inveighed against open access journals, or at least urged readers of this blog to be alert to predatory journals. Recently Retraction Watch posted an item from Science that greatly strengthens my concerns. The posting describes a paper sent to over 300 OA journals and accepted by more than half. The only problem was that the paper was a spoof, carefully designed to make both its scientific and its literary flaws entirely evident. Here is part of what the report says:

Science reporter spoofs hundreds of open access journals with fake papers 


… today, we bring you news of an effort by John Bohannon, of Science magazine, to publish fake papers in more than 300 open access journals. Bohannon, writing as “Ocorrafoo Cobange” of the “Wassee Institute of Medicine” — neither of which exist, of course — explains his process:

The goal was to create a credible but mundane scientific paper, one with such grave errors that a competent peer reviewer should easily identify it as flawed and unpublishable. Submitting identical papers to hundreds of journals would be asking for trouble. But the papers had to be similar enough that the outcomes between journals could be comparable. So I created a scientific version of Mad Libs.

The paper took this form: Molecule X from lichen species Y inhibits the growth of cancer cell Z. To substitute for those variables, I created a database of molecules, lichens, and cancer cell lines and wrote a computer program to generate hundreds of unique papers. Other than those differences, the scientific content of each paper is identical.
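As a rough sketch of the ‘scientific Mad Libs’ generation step Bohannon describes (a minimal illustration in Python; the sample molecules, lichens, and cell lines below are invented, not taken from his actual database or program):

import itertools

# Hypothetical stand-ins for Bohannon's database of molecules,
# lichen species, and cancer cell lines.
molecules = ["usnic acid", "vulpinic acid"]
lichens = ["Usnea longissima", "Letharia vulpina"]
cell_lines = ["HeLa", "MCF-7", "A549"]

# The fixed claim at the core of every spoof paper:
# molecule X from lichen species Y inhibits the growth of cancer cell Z.
TEMPLATE = "{x} from the lichen {y} inhibits the growth of {z} cancer cells."

# Each (X, Y, Z) combination yields a superficially unique paper,
# while the flawed scientific content stays identical.
papers = [TEMPLATE.format(x=x, y=y, z=z)
          for x, y, z in itertools.product(molecules, lichens, cell_lines)]

for claim in papers:
    print(claim)

This only generates the variable claim at the heart of each paper; as the quote above notes, the rest of the scientific content – including the deliberately fatal flaws – was identical across the submissions.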

Bohannon then combed the Directory of Open Access Journals (DOAJ) and Jeffrey Beall’s list of possible predatory publishers, using various filters:

The final list of targets came to 304 open-access publishers: 167 from the DOAJ, 121 from Beall’s list, and 16 that were listed by both.

The results?

By the time Science went to press, 157 of the journals had accepted the paper and 98 had rejected it. Of the remaining 49 journals, 29 seem to be derelict: websites abandoned by their creators. Editors from the other 20 had e-mailed the fictitious corresponding authors stating that the paper was still under review…

Bohannon’s analysis, which goes into far more depth, demonstrates an appalling lack of peer review and quality control at the journals he spoofed. …

Still, we will not be surprised if some traditional publishing advocates use Bohannon’s sting as ammunition to fight wider adoption of open access. That gunpowder may be a bit wet, by the way. Bohannon writes:

Journals published by Elsevier, Wolters Kluwer, and Sage all accepted my bogus paper.

And Retraction Watch readers may recall that it was Applied Mathematics Letters — a non-open-access journal published by Elsevier — that published a string of bizarre papers, including one that was retracted because it made “no sense mathematically” and another whose corresponding author’s email address was “ohm@budweiser.com”.

Retractions, as readers may guess — and perhaps hope — will be forthcoming now that Bohannon’s sting has been revealed. Here’s part of a message the Open Access Scholarly Publishing Association sent its members earlier this week:

In the event that your publishing organization has accepted and published the article, we expect you to follow recognized retraction procedures. If you require any assistance or guidance on retracting the article, the OASPA board will be happy to assist with this. In addition, should it be the case that OASPA members have published the paper, we will prepare a retraction notice/explanation that your organization may choose to use.

We’ll see if this changes the mind of the editor of the Journal of Biochemical and Pharmacological Research, who shrugged when “Cobange” told him the paper was fatally flawed and should be retracted. A correction would do, he said.

Editor’s note: This is funny but scary. I urge readers tempted by one of those time-limited offers to publish at a reduced rate to read this piece – and the original if they can – and then pause and think again.
