Friday 15 November 2013

'Scientifically sound': what does that mean in peer review? Cameron Neylon asks...

Cameron Neylon, Director of Advocacy at the Public Library of Science, challenged the audience at The Future of Peer Review seminar.

He suggested that if we're going to be serious about science, we should be serious about applying the tools of science to what we (the publishers) do.

What do we mean when we say 'scientifically sound'? Science works most of the time, so we tend not to question it. Should we review for soundness alone, as opposed to reviewing for both soundness and importance?

How can we construct a review process that is scientifically sound? The first thing you would do in a scientific process is look at the evidence, but Neylon believes the evidence on peer review is almost entirely lacking. There are very few good studies, and those that do exist show frightening results.

We need to ask questions about the costs and the benefits. Rubriq calculated that reviewers spent 15 million hours in 2012 reviewing papers that were ultimately rejected. (This video post illustrates the issues they raise.) That is equivalent to around $900m if you cost reviewers' time. How can we tell this is benefiting science? We need to decide whether we would be better off spending that money and time on doing more research or on improving the process.
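As a rough sanity check on those figures (not from the talk; the hourly rate is simply what the two quoted numbers imply):

```python
# Back-of-the-envelope check of the Rubriq figures quoted above.
# The $/hour rate is not stated in the talk; it is implied by the two numbers.
hours_lost = 15_000_000        # reviewer hours spent on rejected papers, 2012
total_cost_usd = 900_000_000   # quoted cost of that time

implied_hourly_rate = total_cost_usd / hours_lost
print(f"Implied reviewer rate: ${implied_hourly_rate:.0f}/hour")  # -> $60/hour
```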

Neylon asked: what would the science look like if we tried to assess the effectiveness of peer review? There are some hard questions to ask. We would need very large data sets and interventions, including randomised controlled trials. But other methods can be applied if data is available. Obtaining data about the process that you can be confident in is at the heart of the problem.
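As a minimal sketch of what one such analysis might look like (not from the talk: the counts, the outcome measure, and the two review regimes are invented purely for illustration), one could compare an outcome such as post-publication correction rates between papers that passed two different review processes:

```python
import math

def two_proportion_z_test(successes_a, n_a, successes_b, n_b):
    """Two-sided z-test for a difference between two proportions."""
    p_a, p_b = successes_a / n_a, successes_b / n_b
    pooled = (successes_a + successes_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal distribution
    p_value = math.erfc(abs(z) / math.sqrt(2))
    return z, p_value

# Hypothetical counts: corrections issued among papers that passed
# soundness-only review vs. soundness-plus-importance review.
z, p = two_proportion_z_test(successes_a=120, n_a=10_000,
                             successes_b=150, n_b=10_000)
print(f"z = {z:.2f}, p = {p:.3f}")
```

The point of the sketch is Neylon's: a test like this is only as good as the outcome data behind it, which is exactly what is currently missing.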

The obvious thing is to build a publishing system for the web. Disk space is pretty cheap and bandwidth can be managed. Measure what can be measured. Reviewers are pretty good at checking the technical validity of papers; importance is more nebulous. Taking this approach, Neylon believes you end up with something that looks like PLOS One.

The growth curve for PLOS One has been steep as it tackles these issues. Beyond that growth trajectory, 85% of its papers have been cited after two years: well above average for the STM literature. There remains a challenge of delivering that content to the right people at the right time. Technical validity depends on technical checks: PLOS One puts six pages of questions to a submission before it reaches an editor. How much of that could we validate computationally? Where are computers better than people?
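To make the question concrete, here is a minimal sketch of what computational screening might look like. The required sections, the plain-text input, and the checks themselves are assumptions for illustration, not a description of PLOS One's actual pipeline:

```python
# Illustrative automated screening of a submitted manuscript.
# The section list and rules below are invented for this sketch.
REQUIRED_SECTIONS = ["Methods", "Results", "References",
                     "Data Availability", "Competing Interests"]

def screen_manuscript(text: str) -> list[str]:
    """Return a list of problems a computer can flag before human review."""
    problems = []
    for section in REQUIRED_SECTIONS:
        if section.lower() not in text.lower():
            problems.append(f"Missing section: {section}")
    if "p <" in text and "confidence interval" not in text.lower():
        problems.append("Reports p-values without confidence intervals")
    return problems

print(screen_manuscript("Methods ... Results ... References ..."))
```

Checks like these are where computers plausibly beat people: mechanical, exhaustive, and cheap. Judging importance remains the nebulous, human part.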

What has changed since similar talks five years ago? New approaches that were then only being discussed are now happening (e.g. Rubriq). The outside world is much more sceptical about what happens with public funding. According to Neylon, one thing is for sure when it comes to peer review: the sciences need the science.

These notes were compiled from a talk given by Cameron Neylon at ALPSP's The Future of Peer Review seminar (London, November 2013), shared under a CC-BY licence. Previous presentations by Cameron can be found on his SlideShare page.
