Tuesday, 12 November 2013

The peer review landscape – what do researchers think? Adrian Mulligan reflects on Elsevier's own research.


What do researchers think? Peer review is slow: in STM journals the average peer review process, from first submission through to final acceptance, takes five months, and in the social sciences it can stretch to nine. Peer review can also be prone to bias and can hold back true innovation (here he cited a Nature article from October 2003, 'Coping with peer rejection', in which accounts of rejected Nobel-winning discoveries highlight conservatism in science).

There is a small group of people who decide what happens to a manuscript, and they tend to be quite conservative. The process is time-consuming and redundant, and does not improve quality. This view was reflected in The Guardian article by Dr Sylvia McLain, 'Not breaking news: many scientific studies are ultimately proved wrong!' (17 September 2013). Jeffrey Brainard, reporting in The Chronicle in August 2008 ('Incompetence tops list of complaints about peer reviewers'), described how there are too few qualified reviewers, and those there are overworked. More recently, The Scientist article 'Fake paper exposes failed peer review' by Kerry Grens (October 6, 2013) highlighted how peer review may not be good at preventing fraud or plagiarism.

Elsevier undertook a research project to find out what researchers think, from both the author and the reviewer perspective. They surveyed individuals randomly selected from published researchers and received 3,008 responses. Most researchers are satisfied with the current peer review system: 70% in 2013 (1% higher than in 2009 and 5% higher than in 2007). Satisfaction is higher among chemists and mathematicians, and lower among computer scientists and social scientists (including arts, humanities, psychology and economics). Chinese researchers are the most satisfied, and there is no difference by age.

Most believe that peer review improves scientific communication. Almost three quarters agreed that the peer review process on unsuccessful submissions improved the article. The number of researchers who had gone through multiple submissions is relatively low (29% submitted to another journal; articles were submitted an average of 1.6 times before acceptance). Few believe peer review is holding back science, but that proportion is growing: 19% agreed in 1997, 21% in 2009, and 27% in 2013.

Pressure on reviewers is increasing, both in terms of time and a lack of incentives. Some reviewers lack the specialist knowledge required, and it could be argued that too many poor-quality papers are sent for review.

Mulligan observed that, at a national level, a country's contribution of submissions should roughly match its contribution of reviews. China publishes far fewer papers relative to the reviews it provides, and the reverse is true for the US. He noted that Chinese researchers are more likely to accept an invitation to review.

Over a third of those who responded believe peer review could be improved. Reviewers need more guidance, researchers are less willing to volunteer their time to conduct reviews, and concerns that the system is biased, or that it should be made fully anonymous, need to be addressed. Another challenge for the industry is whether or not peer review can genuinely detect fraud and plagiarism.

More people are shifting to open peer review; however, the preference in North America is for more traditional peer review (single blind and double blind). So what is Elsevier doing? They are reducing the number of redundant reviews by transferring them from one journal to the next. They are recognising reviewers' contributions, rewarding reviewers with certificates and awards. And they are getting reviewers together to improve speed and quality.

1 comment:

  1. I beg to differ that all forms of peer review are 'conservative'. My research into purported resistance to new scientific ideas brought me to understand that there are at least two sets of intertwined dynamics at play. First are the dynamics of peer review itself, and I looked into the paradigmatic pre-publication journal peer review. Second are editorial readers' evaluations of manuscript content. Based on twenty-five case studies, it would appear that editorial readers are resisting explanations for new scientific ideas - not the ideas per se. I explore some of the scientific explanation imperatives that appear to be used by editorial readers in this preprint: http://hdl.handle.net/10393/31198 Gaudet, J. 2014. How pre-publication journal peer review (re)produces ignorance at scientific and medical journals: a case study. uO Research. Pp. 1-67. Or the PDF directly: http://www.ruor.uottawa.ca/bitstream/10393/31198/5/how_peer_review_reproduces_ignorance_gaudet.pdf.

    Open and post-publication peer review, in contrast to pre-publication peer review, maximizes scientific exchange... similar to dynamics in the twelfth century!