darb.ketyov.com

Caveat lector: This blog is where I try out new ideas. I will often be wrong, but that's the point.



26.12.11

A New Model for Scientific Publishing

There's a new paper out in Frontiers in Computational Neuroscience that is relevant to my interests. The paper is by Dwight Kravitz and Chris Baker from the NIMH and is titled "Toward a new model of scientific publishing: discussion and a proposal".

About two weeks ago Dwight emailed me his paper, saying that he'd read a post I'd written last month for the Scientific American guest blog called "What is peer review for?" (you can check out my interview on Skeptically Speaking on this topic as well).

After reading Dwight's paper I want to make sure it gets as much exposure as possible. I can't do it justice here; it's already so well-written and clear. But before I shuffle you off to read it I wanted to highlight their proposed system and ask you all what you think.

What are the barriers to instantiating their proposed system?

In the SciAm piece I concluded by saying:

But the current system of journals, editors who act as gatekeepers, one to three anonymous peer-reviewers, and so on is an outdated system built before technology provided better, more dynamic alternatives.

Why do scientists--the heralds of exploration and new ideas in our society--settle for such a sub-optimal system that is nearly 350 years old?

We can--we should--do better.

Well, it looks like Kravitz and Baker have put a lot more thought into this problem than I have, and they've come up with a remarkably novel alternative to the peer-review system.

They nail it. There's almost nothing that I disagree with.

I love this paper.

They succeed here not because of their criticisms--which abound in the sciences--but because they include a viable, creative, intelligent solution: an alternative model for peer review and scientific publication that addresses motivation, utility, practicality, and even finance.

They begin by describing the current peer review system in the context of the neurosciences. They include an amusing figure that maps the 17 levels of hell that make up the peer-review loop. These guys crack me up.

"In the case of a rejection the Authors generally proceed to submit the paper to a different journal, beginning a journal loop bounded only by the number of journals available and the dignity of the Authors."

[Figure from the paper: the journal submission loop]

Before I outline their alternative proposal, I want to highlight some of the costs and problems of the current system that Kravitz and Baker identify.

...what is striking is less the average amount of time [it takes to publish a paper], which is quite long, but more its unpredictability. In total, each paper was under review for an average of 122 days but with a minimum of 31 days and a maximum of 321. The average time between the first submission and acceptance, including time for revisions by the authors, was 221 days (range: 31–533)...

...Beyond the costs of actually performing the research and preparing the first draft of the manuscript, it costs the field of neuroscience, and ultimately the funding agencies, approximately $4370 per paper and $9.2 million over the approximately 2100 neuroscience papers published last year. This excludes the substantial expense of the journal subscriptions required to actually read the research the field produces and the unquantifiable cost of the publishing lag (221 days) and the uncertainty incurred by that delay...
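
Just to sanity-check those quoted figures, here's a quick back-of-the-envelope in Python (the numbers are theirs; the arithmetic and variable names are just my reading of the quote):

```python
# Back-of-the-envelope check of the quoted costs (figures from
# Kravitz & Baker, 2011; the breakdown is my reading of the quote).
cost_per_paper = 4370      # USD: estimated review cost per paper
papers_per_year = 2100     # approximate number of neuroscience papers counted

total_cost = cost_per_paper * papers_per_year
print(f"${total_cost:,} per year")  # $9,177,000 -- roughly the $9.2M they report
```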

...Authors are incentivized to highlight the novelty of a result, often to the detriment of linking it with the previous literature or overarching theoretical frameworks. Worse still, the novelty constraint disincentivizes even performing incremental research or replications, as they cost just as much as running novel studies and will likely not be published in high-tier journals.

Okay, so what is their alternative model?

[Figure from the paper: schematic of the proposed publication model]

It breaks down like this (a rough code sketch follows the list):

* No more editors as gate-keepers ("Their purpose is to serve the interests of the journal as a business and not the interests of Authors").
* Publication is guaranteed, so no more concern over "novelty".
* Editors instead coordinate the process and ensure that double-blind anonymity is maintained.
* Reviews are passed on to the authors as part of a "pre-reception" process which allows the authors to revise or retract their work before making it publicly available.
* Once public, the post-publication review process begins.
* An elected Editorial Board acts as an initial rating and classification service to put the paper in context.
* Members of the Board are financially incentivized... but the money doesn't go into their pockets; rather, it goes into their own research fund coffers.
* Papers are put into a forum wherein members of that forum can ask questions and offer follow-up suggestions.
* Forums provide a more living, dynamic quality to papers, as well as metrics for each manuscript.
* With better metadata for papers, ads could be more targeted by paper topic (no more ads for PCRs in cog neuro papers, for example).
* Kravitz and Baker note that something like a Facebook page for each paper could serve this purpose.
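
To make the flow concrete, here's a minimal sketch of the pipeline as I read it. All class, method, and state names here are mine, not the paper's:

```python
from dataclasses import dataclass, field
from enum import Enum, auto

class Stage(Enum):
    SUBMITTED = auto()      # double-blind review, coordinated (not gated) by an editor
    PRE_RECEPTION = auto()  # reviews passed to authors, who may revise or retract
    PUBLIC = auto()         # publication guaranteed once the authors proceed
    POST_REVIEW = auto()    # Board rating/classification plus the open forum

@dataclass
class Paper:
    title: str
    stage: Stage = Stage.SUBMITTED
    reviews: list[str] = field(default_factory=list)
    board_rating: float | None = None          # initial rating by the elected Board
    forum_comments: list[str] = field(default_factory=list)

    def receive_reviews(self, reviews: list[str]) -> None:
        # "Pre-reception": the editor forwards reviews; the authors decide
        # whether to revise, retract, or make the paper public.
        self.reviews = reviews
        self.stage = Stage.PRE_RECEPTION

    def publish(self) -> None:
        # No gatekeeping: going public is the authors' call.
        self.stage = Stage.PUBLIC

    def rate(self, rating: float) -> None:
        # The elected Editorial Board rates and classifies the public paper.
        self.board_rating = rating
        self.stage = Stage.POST_REVIEW
```

Note there's no rejection state anywhere in the sketch, which is exactly what kills the journal loop in the first figure.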

The Kravitz-Baker system saves the money and time wasted by the needless "walking down the impact factor ladder" that usually occurs after a rejection.

They address a lot more issues beyond what I've described above. They note that a common counter-argument to double-blind review, for example, is that a reviewer can "often guess" who the authors of the paper they're reviewing are because sub-fields are so small. But Kravitz and Baker point out that even if "the identity of the Authors might be guessed by the Reviewers, any ambiguity should act to reduce this bias". This seems so obvious.

The Kravitz-Baker system seems really well thought out, and it's one I'd love to see in place. But I worry that I'm missing some critical fault here.

Anyone see any glaring issues?

Kravitz, D., & Baker, C. (2011). Toward a new model of scientific publishing: discussion and a proposal. Frontiers in Computational Neuroscience, 5. DOI: 10.3389/fncom.2011.00055

20 comments:

  1. Looks good, Bradley.

Will need to spend some time, though, reading through this in more detail.

    * Publication is guaranteed, so no more concern over "novelty".

    I get the second part, but not the first.

    As I said, I'll need to spend more time on this. Currently knackered.

  2. Yup, blogged about this one as well:
    http://bjoern.brembs.net/comment-n812.html
    and a new, related one:
    http://bjoern.brembs.net/comment-n815.html

  3. Not sure exactly what the editorial board is supposed to do in practice. Give a numerical rating that you can sort your incoming paper announcements by, or what?

    Also not thrilled with the implied idea of having a single place to submit to for an entire field.

  4. @Graham: The authors note that any paper that is ever submitted to a journal is already practically guaranteed publication anyway: >98% of papers that are submitted to any journal are eventually accepted into some journal due to the plethora of journals available with loose publishing standards.

Physics, computer science, and math already use a model like this: arXiv.org. It's a pre-print server where anyone can upload anything. The "good stuff" gets communally floated to the top... peer review is often just a formality.

  5. @Björn: wow, so sorry!

  6. @Jan: Yeah, a one-stop-shop does seem a little off... having a single point-of-failure doesn't seem like the best idea.

    As for the editorial board, I believe they're supposed to act as a curation service of sorts.

Maybe I should wait to comment until after I've read the paper, but this sounds a lot like how PLoS One is already doing things...

    https://blogs.plos.org/plos/2009/07/plos-journals-measuring-impact-where-it-matters/

Love the first diagram. I would point out that one key role of editors is to prevent unnecessary reviewer-requested experiments. Of course the counter-argument is that with editors out of the way, reviewers may be "nicer".

As Jan mentioned, the one-size-fits-all philosophy may work with something like PLoS One, but the editors are certainly going to be overwhelmed. What happens when one of those major "important" papers does slip through the cracks? If the post-publication review process doesn't follow through, things can get ugly.

Regardless, it certainly seems plausible and well thought out.

Hi. I'm a developer on eprints.org, which is free & open-source software for creating repositories of research papers.

I see major technical issues with this, but more fundamentally it still has the paper as the basic unit of scholarly communication, which I don't think is ideal.

Let me throw out a few other reforms:

The length requirement/restriction should be scrapped -- units of scholarly communication should be the length that they need to be. There's no page limit on the web.

Get over PDF. It's simulated A4 paper. A better format would be HTML5 (see http://scholarlyhtml.org/) for many reasons, not least that people should be reading research papers on Kindles (or similar).

Require data to be published to support papers. If the data is too large, require the author to supply the data at cost -- it may be that for a terabyte of data the P&P runs to hundreds of pounds, but that's better than not having it. Anything up to a few gigabytes should be available online -- my colleagues have been looking into using the BitTorrent protocol for this!

Reward reviewers professionally. Academic life (in the UK) involves a bunch of tasks: teaching, researching, publishing research, reviewing research, attending, presenting at, and organising conferences & workshops, making (and winning) bids for funding, and sometimes consultancy. Many of these are not appropriately incentivised in the way people are promoted/hired.

    Reviewers & editors should gain professional credit.

We should abolish "impact factor". It's rubbish. Individual works (I'm trying to avoid saying "papers") should be judged on their merit. Individual citation counts give a better impression, although a free-for-all in publishing could create a situation where items are published just to do the citation version of search-engine optimisation.

We should introduce the idea of a non-positive citation, i.e. one that should not be used in counts. Negative citations are maybe a step too far, but there should be a way to reference a work without improving its standing. See the idea of rel='nofollow' in the SEO field. If you start scoring individual works on their citations, then a work which is used as an example of an anti-pattern could gain an unreasonably high score due to notoriety.

    (continued in next comment due to character limit)

  10. ...

Reviewers & editors should gain professional credit.

    As should people who *only* publish data. Not everyone has the same skills. All research data should be published. Analysis is a separate skill. Tim Berners-Lee suggested that we should do most research like a data mashup.

Right now we disincentivise early publication of interesting results, as it gives the competition a leg up.

All publicly funded research data and discussions must be available to anybody to read, quote, and back up.

arXiv is a good model to look at--the physics community pre-publishes its papers as a matter of course, and then they are reviewed after.

Medicine may be a special case. It's possibly the only field where non-professionals may use the information to make life-and-death decisions, and unreviewed papers could cost lives. That said, there appears to be much glee in reporting any paper which says that chocolate/red wine/social networking is good/bad for you, so that horse may have already sailed.

Oh, and while most research output is becoming electronic, we should consider and plan for two scenarios which are both very likely to happen sooner or later.

The first is a near-complete loss of current technology levels. There must be non-digital archives of at least the cream of our knowledge.

The second is a war between first-world nations. If this happens it's likely that the internet would become deliberately fractured. I can see a scenario where it suddenly turns out nobody has access to the data/papers for some subject, because the servers all happen to be in countries on the other side of whatever dumb war we get ourselves into. This is why I think the freedom to back up & republish should be built into the system.

    Anyway, maybe I should write a blog post myself; this turned into a bit of a TL;DR.

Well, I believe one thing is not actually working very well: post-publication peer review. People hardly have any time to review papers, so why would they review them in a forum after publication? Just look at the comments section of any PLoS paper. Post-publication review looks power-law driven to me: very few papers get a lot of attention, while the vast majority of papers get none. And if there is no robust system to say "this paper is a good paper", the signal-to-noise ratio will become even lower than it is now. The big advantage of the current system, to me, is that it somehow imposes a hierarchy and a selection between papers.
arXiv is often cited as an example of success, but I do not believe in an arXiv for biology, for instance. Physics is a much older and more formalized science than any other, much less prone to bullshit, so the signal-to-noise ratio stays naturally high enough.

Could there also be a means for data (in addition to the publication) to be uploaded to a common repository? That way data can be reused, verified, and further investigated. This has the potential to increase the rate of innovation when data are easily shared.

  13. I have just listened to your interview on Skeptically Speaking.

If you can spare 20 mins or so, I'd like to have a Skype (or whatever) with you.

    This kind of stuff (generally) is pretty much up my street.

  14. Many excellent ideas; a main difficulty is to incentivize readers to comment. Probably not difficult for what will eventually be seen as high-profile papers, but for the vast majority of papers it is an issue.

Dwight Kravitz + Chris Baker (07:42)

    We have just posted a reply to the issues raised here as well as some other issues raised in other forums. The reply is on the Frontiers page for the article (Comment link on bottom of the page):

    http://bit.ly/uVggz9

    Thanks.

Faculty of 1000 is working on something very much along the lines of the original post by Dwight Kravitz and Chris Baker, as well as a number of the ideas posted above: immediate publication, post-publication peer review (of course, what we are best known for), and data-sharing issues related to those articles. We will be formally announcing our plans in the next couple of weeks. We are very keen to work with the research community on the best way to approach these various issues, so I would love to connect with any of you who might be interested in helping us work through them and maybe test out some real live examples (rebecca.lawrence@f1000.com).

  17. Dwight & Chris: Great! Heading over to the comment thread now...
    Rebecca: Very interesting... I'm glad to see F1000 is working on innovations in this domain.

It's kind of a late response to all these blogs, but I thought I'd say it anyway... did anyone actually consider the journal this paper was published in (http://www.frontiersin.org/)? When Kravitz and Baker submitted their manuscript to Frontiers in Computational Neuroscience, it was handled almost exactly as they outlined in their paper. One value-add provided by the community that was not mentioned in their proposal: by disclosing the names of the reviewers and handling editor, the latter were openly rewarded for their time, effort, and the so-often-unseen quality contributions during interactive peer review that ultimately led to a better paper. So maybe we are already much closer to the kind of scholarly publication and communication that Kravitz and Baker proposed than what was described. Great to hear that F1000 is almost on board...

    Replies
Dwight Kravitz + Chris Baker (06:20)

Though we both support Frontiers and think it is a step in the right direction, our proposal went much further than our experience publishing there. Most importantly, none of the comments from the reviews were made available to the community in any form, digested or otherwise. Second, publication wasn't guaranteed, nor was a single round of review, nor was it double-blind; in fact, our colleagues' experience has generally been multiple rounds of review at Frontiers. Third, having one's name published as a reviewer is somewhat scant reward, and tends to engender discomfort, because without the reviews being made public it can be read as an endorsement of the paper, which may or may not be true. Finally, none of the statistics that track the paper's reception are readily available for use in searching and prioritizing the papers published in Frontiers.

Agreed, NItrogenCyclist. I've been talking up the Frontiers system for a while now. I've been both a reviewer and an author with them, and my experiences have been great.
