Monday, October 7, 2013

Science Unfairly Bashes Open Access Journals For Abysmal Peer Review

John Bohannon's publishing experiment struck a chord with many people when it was itself published in Science last week.

The gist of his experiment was this: He submitted a fake, computer-generated article about cancer-inhibiting compounds found in lichens to 304 Open Access journals to see how many of them would accept the paper.  The anti-cancer compounds from lichens story itself isn't groundbreaking (yes, it's been done), but it's still worthy of a paper in a journal somewhere.

The punchline of Bohannon's experiment was that his fake paper was so obviously flawed that any reviewer "with more than a high-school knowledge of chemistry" should have rejected it.  Surprisingly, 157 of the 304 Open Access journals he targeted accepted it.

At first glance, it's an interesting conclusion, which led to the subtitle driven home by Science:
A spoof paper concocted by Science reveals little or no scrutiny at many open-access journals.
In choosing to emphasize this, many people are going with the assumption that Bohannon's project demonstrates that Open Access journals publish sub-standard research.  That's a false assumption.

What this Science project actually demonstrates is that many publishers pay lip service to peer review in hopes of earning publishing fees.  Partway through, I was willing to bet that similar problems would be found amongst closed access journals as well, and was happy to find David Roos, a biologist at the University of Pennsylvania, quoted near the end of the Science article saying exactly that.

The problems with this study, and with how Science appears to tar and feather Open Access, have been well argued by the Scholarly Publishing and Academic Resources Coalition (SPARC): no control group of subscription-based journals was used, and the selection of journals for the study was stacked towards predatory journals through the use of Beall's List. (Update: On Michael Eisen's blog, it's mentioned that Bohannon and Science didn't actually set out to focus on Open Access journals; practical considerations limited the study to OA journals.)

SPARC's arguments, together with a response from the folks at the Directory of Open Access Journals (which points out Science's misleading quote of an email from the DOAJ), suggest to me that Bohannon's study can only claim to have identified many journals that apply little, if any, quality control in their publication process.

Which means that we can all avoid reading anything from them in the future.

The problem of terrible journals has even become recognized by mainstream media.  Canada's National Post points out that these bogus journals have the potential to pollute biomedical literature, specifically:
...that low-quality studies are accepted into the body of published work that future scientists, doctors and engineers use as a reference.
Scientists aren't getting paid to sift through mounds of trash to find good articles; they're trying to discover new knowledge and do something useful with it.  No one who depends on reading scientific literature should be spending their time doing the quality control that a journal should have done properly in the first place.


Picking the HBOs of Journals


Like many products in the world, science journals exist as brand names, names that mean something to the people consuming that media.  A journal name implies a certain assurance of quality of the papers printed within.

When someone decides to read a paper from a particular journal, they have expectations about whether or not their time will be well spent, so if you see something in Science, Nature, or Cell, the assumption is that the work is novel or significant in some respect.

Like this trio of journal brands, I happen to have a positive bias towards any series put out by HBO after watching Rome, Band of Brothers, and most recently, Game of Thrones.  I feel that my time will be well spent watching something put out by HBO.


Branding Using Open Peer Review 


The most significant issue revealed by Bohannon is that peer review at selected Open Access journals is terrible or non-existent.

In a way, it's not surprising.  There's nearly zero incentive to review articles in the current system.  Academics do not really get credit for reviewing papers, other than a line on a CV associating them with a journal.  No one knows how many articles someone has reviewed.  No one gets paid to review papers.  In fact, since peer review is usually closed, no one can really tell whether a paper has been properly reviewed at all.

Some journals do offer open peer review - BMC Medicine being a notable example - where the names of reviewers are published with the article.  Here's a system where authors can be assured that their work is evaluated fairly, and where reviewers can't hide behind anonymity to unreasonably delay papers or be overly critical of them.

With open review, a reader can decide whether or not to trust the reviewer's assessment of the paper.  Was it done well, or was it rubber-stamped?  Reviewers (and the places that employ them) could count their work as contributions to the scientific community.  Authors could also contact reviewers after the fact and potentially spark up new projects.  Hey, it would even be neat to search PubMed for papers approved by a certain reviewer, something that's currently impossible to do.

But most importantly, knowing who reviewed the paper could lend credibility to the paper itself, especially if the authors aren't well known or established.

If a scientist with a strong personal brand says a paper is good enough for publication, it signals that it might be interesting enough to read, regardless of where it was published.  I would read a paper reviewed by strong reviewers in any journal over a paper with three lame reviews in Science.

Coming back to Bohannon's study: it managed to raise important issues surrounding the publishing industry and how proper peer review is being ignored by many fly-by-night journals.

However, this is overshadowed by the appearance that Open Access has been opportunistically dragged through the mud when more significant problems in the publishing industry ought to have received much more attention.