About two weeks ago Dwight emailed me his paper saying that he'd read a post I'd written last month for the Scientific American guest blog called "What is peer review for?" (you can check out my interview on Skeptically Speaking on this topic as well).
After reading Dwight's paper I want to make sure it gets as much exposure as possible. I can't do it justice because it's so well-written and clear. But before I shuffle you off to read it I wanted to highlight their proposed system and ask you all what you think.
What are the barriers to instantiating their proposed system?
In the SciAm piece I concluded by saying:
But the current system of journals, editors who act as gatekeepers, one to three anonymous peer-reviewers, and so on is an outdated system built before technology provided better, more dynamic alternatives.
Why do scientists – the heralds of exploration and new ideas in our society – settle for such a sub-optimal system that is nearly 350 years old?
We can – we should – do better.
Well, it looks like Kravitz and Baker have put a lot more thought into this problem than I have, and they've come up with an incredibly novel alternative system for peer review.
They nail it. There's almost nothing that I disagree with.
I love this paper.
They succeed here not because of their criticisms – which abound in the sciences – but because they include a viable, creative, intelligent solution that addresses motivation, utility, practicality, and even financing for an alternative model of peer review and scientific publication.
They begin by describing the current peer review system in the context of the neurosciences. They have an amusing graph that highlights the 17 levels of hell that is the peer review process loop. These guys crack me up.
"In the case of a rejection the Authors generally proceed to submit the paper to a different journal, beginning a journal loop bounded only by the number of journals available and the dignity of the Authors."
Before I outline their alternative proposal, I want to highlight some of the costs and problems of the current system that Kravitz and Baker identify.
...what is striking is less the average amount of time [it takes to publish a paper], which is quite long, but more its unpredictability. In total, each paper was under review for an average of 122 days but with a minimum of 31 days and a maximum of 321. The average time between the first submission and acceptance, including time for revisions by the authors was 221 days (range: 31–533)...
...Beyond the costs of actually performing the research and preparing the first draft of the manuscript, it costs the field of neuroscience, and ultimately the funding agencies, approximately $4370 per paper and $9.2 million over the approximately 2100 neuroscience papers published last year. This excludes the substantial expense of the journal subscriptions required to actually read the research the field produces and the unquantifiable cost of the publishing lag (221 days) and the uncertainty incurred by that delay...
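For what it's worth, the quoted figures are internally consistent – the per-paper cost times the paper count works out to the stated total:

$$\$4370 \times 2100 \approx \$9.2\ \text{million}$$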
...Authors are incentivized to highlight the novelty of a result, often to the detriment of linking it with the previous literature or overarching theoretical frameworks. Worse still, the novelty constraint disincentivizes even performing incremental research or replications, as they cost just as much as running novel studies and will likely not be published in high-tier journals.
Okay, so what is their alternative model?
It breaks down like this:
* No more editors as gate-keepers ("Their purpose is to serve the interests of the journal as a business and not the interests of Authors").
* Publication is guaranteed, so no more concern over "novelty".
* Editors instead coordinate the process and ensure that double-blind anonymity is maintained.
* Reviews are passed on to the authors as part of a "pre-reception" process which allows the authors to revise or retract their work before making it publicly available.
* Once public, the post-publication review process begins.
* An elected Editorial Board acts as an initial rating and classification service to put the paper in context.
* Members of the Board are financially incentivized... but the money doesn't go into their pockets, rather it can be put into their own research fund coffers.
* Papers are put into a forum wherein members of that forum can ask questions and offer follow-up suggestions.
* Forums provide a more living, dynamic quality to papers, as well as metrics for each manuscript.
* With better metadata for papers, ads could be more targeted by paper topic (no more ads for PCRs in cog neuro papers, for example).
* Kravitz and Baker note that something like a Facebook page for each paper could serve this purpose.
The Kravitz-Baker system saves money and time wasted by the needless "walking down the impact factor ladder" that usually occurs.
They address a lot more issues beyond what I've described above. They note that a common counter-argument to double-blind review, for example, is that a reviewer can "often guess" who the authors of the paper they're reviewing are because sub-fields are so small. But Kravitz and Baker point out that even though "the identity of the Authors might be guessed by the Reviewers, any ambiguity should act to reduce this bias". This seems so obvious.
The Kravitz-Baker system seems really well thought out, and it's one I'd love to see in place. But I'm worried I'm missing some critical fault here.
Anyone see any glaring issues?
Kravitz, D., & Baker, C. (2011). Toward a New Model of Scientific Publishing: Discussion and a Proposal. Frontiers in Computational Neuroscience, 5. DOI: 10.3389/fncom.2011.00055