Cameron Neylon, Director of Advocacy at the Public Library of Science, challenged the audience at The Future of Peer Review seminar.
He suggested that if we're going to be serious about science, we should be serious about applying the tools of science to what we (the publishers) do.
What do we mean when we say 'scientifically sound'? Science works most of the time, so we tend not to question it. Should we review for soundness alone, as opposed to reviewing for both soundness and importance?
How can we construct a review process that is scientifically sound? The first thing you would do in a scientific process is to look at the evidence, but Neylon believes the evidence is almost totally lacking for peer review. There are very few good studies. Those that exist show frightening results.
We need to ask questions about the costs and the benefits. Rubriq calculated that 15 million hours of reviewer time were lost in 2012 reviewing papers that were ultimately rejected. (This video post illustrates the issues they raise.) This is equivalent to around $900m if you put a value on reviewers' time. How can we tell this is benefiting science? We need to decide whether we would be better off spending that money and time on doing more research or on improving the process.
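As a quick sanity check (not part of the talk), those two figures imply a valuation of roughly $60 per reviewer hour. A minimal back-of-the-envelope sketch, with the hourly rate derived rather than quoted:

```python
# Back-of-the-envelope check of the Rubriq figures cited above.
# The 15 million hours and $900m come from the talk; the implied
# hourly rate is derived here, not quoted anywhere in the source.
hours_lost = 15_000_000           # reviewer hours spent on papers rejected in 2012
estimated_cost_usd = 900_000_000  # Rubriq's estimate of the value of that time

implied_hourly_rate = estimated_cost_usd / hours_lost
print(f"Implied reviewer rate: ${implied_hourly_rate:.0f}/hour")  # -> $60/hour
```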
Neylon asked what the science to assess the effectiveness of peer review would look like. There are some hard questions to ask. We'd need very large data sets and interventions, including randomised controlled trials, but there are other methods that can be applied if data is available. Getting data about the process that you can be confident in is at the heart of the problem.
The obvious thing is to build a publishing system for the web: disk space is pretty cheap and bandwidth can be managed. Measure what can be measured. Reviewers are pretty good at checking the technical validity of papers; importance is more nebulous. Taking this approach, Neylon believes, you end up with something that looks like PLOS One.
The growth curve for PLOS One is steep as it tackles these issues. In addition to this growth trajectory, 85% of its papers are cited after 2 years: well above average for the STM literature. There remains a challenge in delivering that content to the right people at the right time. Technical validity depends on technical checks: PLOS One has six pages of questions to be answered before a paper goes to an editor. How much could we validate computationally? Where are computers better than people?
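As a purely hypothetical illustration of the kind of technical check that could be automated (this is not PLOS One's actual checklist; the section and statement names below are assumptions made for the example), a manuscript could be screened before it ever reaches an editor:

```python
import re

# Hypothetical pre-editor screening: flag manuscripts missing standard
# sections or statements. The required items below are illustrative
# assumptions, not any journal's real checklist.
REQUIRED_SECTIONS = ["methods", "results", "references"]
REQUIRED_STATEMENTS = ["competing interests", "data availability"]

def technical_check(manuscript_text: str) -> list[str]:
    """Return a list of human-readable issues found in the manuscript."""
    text = manuscript_text.lower()
    issues = []
    for section in REQUIRED_SECTIONS:
        if section not in text:
            issues.append(f"Missing section: {section}")
    for statement in REQUIRED_STATEMENTS:
        if statement not in text:
            issues.append(f"Missing statement: {statement}")
    # Crude check that numbered in-text citations ([1], [2], ...) exist at all.
    cited = set(re.findall(r"\[(\d+)\]", manuscript_text))
    if "references" in text and not cited:
        issues.append("No numbered citations found")
    return issues

if __name__ == "__main__":
    sample = "Introduction... Methods... Results... References [1] ..."
    print(technical_check(sample))  # flags the missing statements
```

Checks like these could complement, rather than replace, the human judgement needed to assess importance.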
What has changed since similar talks 5 years ago? New approaches that were being discussed then are happening now (e.g. Rubriq). The outside world is much more sceptical about what happens with public funding. According to Neylon, one thing is for sure when it comes to peer review: the sciences need the science.
These notes were compiled from a talk given by Cameron Neylon at ALPSP's The Future of Peer Review seminar (London, November 2013) under CC-BY attribution. Previous presentations by Cameron can be found on his SlideShare page.
Wednesday, 13 November 2013
Ulrich Pöschl on advancing post-publication and public peer review
Ulrich Pöschl
Ulrich Pöschl is based at the Max Planck Institute for Chemistry and is professor at the Johannes Gutenberg University in Mainz, Germany.
He initiated interactive open access publishing with public peer review and interactive discussion through the journal Atmospheric Chemistry and Physics and the European Geosciences Union.
In his talk at The Future of Peer Review seminar, he presented a vision built around:
- promotion of scientific and societal progress by open access and collaborative review in a global information commons
- access to high quality scientific publications (more and better information for scientists and society)
- documentation of scientific discussion (evidence of controversial opinions and open questions)
- demonstration of transparency and rationalism (a role model for the political decision process).
Pöschl believes the most important motivation for open access is to improve scientific quality assurance. Why is open access not a threat to peer review? Traditional peer review is fully compatible with open access; the information available to reviewers is strongly enhanced by it; and collaborative and post-publication peer review are fully enabled by it. Predatory open access publishers and hoaxes are a side issue: a transition problem and a red herring (partly caused by the vacuum created by the slow move of traditional publishers).
Pöschl went on to outline a range of problems that affect peer review. Quality assurance can be an issue, with manuscripts and publications often carelessly prepared and faulty; fraud is the tip of the iceberg. Common practice can lead to carelessness, and the consequences can be waste and misallocation of resources.
Editors and referees may have limited capacity and/or competence. Traditional pre-publication review can lead to retardation and loss of information, and traditional discussion can be sparse and subject to late commentaries. Pöschl himself doesn't have time for pure post-publication review (open peer commentary), as he has enough to do with his scientific work.
The dilemma at the heart of peer review is speed versus quality: the conflicting needs of scientific publishing are rapid publication versus thorough review and discussion, and rapid publication is widely pursued. The answer? A two-stage process. Stage 1 involves rapid publication of a discussion paper, public peer review and interactive discussion. Stage 2 comprises review completion and publication of the final paper.
The advantages of interactive open access publishing are that it provides a win for the whole community of authors, referees, editors and readers. The discussion paper is an expression of free speech. Public peer review and interactive discussion provide many benefits; in particular, they foster and document scientific discourse (and save reviewer capacity).
Four stages to interactive open access publishing:
- Pre-publication review and selection
- Public peer review and interactive discussion
- Peer review completion
- Post-publication review and evaluation
These stages need to be combined or integrated with repositories, the living reviews concept, the assessment house concept, ranking systems/tiers and article-level metrics.
At Atmospheric Chemistry and Physics, the rejection rate is as low as 10%. Submission-to-publication time ranges from a minimum of 10 days up to 1 month. The publication charge is 1,000 euros, and the journal carries up to 50% additional comment pages. The achievements of combining these approaches include top speed, impact and visibility, large volume, low rejection rates and low costs. The journal is fully self-financed and sustainable.
Pöschl passionately believes that these stages can be adjusted to other priorities and can therefore work for other disciplines and research communities. Future perspectives to take into account include an efficient and flexible combination of new and traditional forms of review and publication, as well as multiple stages and levels of interactive publishing and commenting.
Labels:
#alpsppeer,
alpsp,
Atmospheric Chemistry and Physics,
Future of Peer Review,
peer review,
Ulrich Pöschl
Tuesday, 12 November 2013
Why is peer review such an enduring factor in the research process? Mark Ware provides an overview
Mark Ware: why is peer review an enduring factor?
In Ware's view, peer review is not broken. It is overwhelmingly supported by researchers, but that doesn't mean it can't be improved. Publishers need to take new developments into account.
Peer review is one part of the publishing cycle as well as the broader research cycle, and it is important. Benefits include improving the quality of published articles (a view that comes from the research community). It provides filters, a seal of approval and a reward system that works for researchers.
However, no human activity is perfect so what are the limitations? They include validity, effectiveness, efficiency, the burden on researchers and fairness. In a world where there's a drive for transparency, we have to take these criticisms seriously. Peer review is ripe for improvement in many areas.
So who's who in peer review?
- Authors (co-authors, contributors) - the definition of authorship has become more formalised in recent years.
- Editors and editorial boards - the editor role is crucial; there is a misconception that it is the reviewers who make the decision, which ignores the fact that peer review is a process. Editors use their judgment. The best examples are a constructive discussion with the aim of making the paper the best it can be.
- Reviewers
- Publishers and editorial staff - these roles are often overlooked by those who claim reviewers do all the work. Editorial independence is one reason why we might want this. Peer review process diagram: the danger is that we think this is the whole system, but actually it is one small part of a publishing process.
- Readers (post-publication review).
Peer review flow chart: just one part of the process
Retractions are booming:
In 1977 there were around 2 retractions per 100,000 publications.
By 2010 this had risen to around 50 per 100,000 publications, a roughly 25-fold increase.
What's driving new approaches?
What are the problems we're trying to solve?
Which problems are we trying to mediate?
What are the opportunities to improve? Fairness and bias, delays, inefficiency, reproducibility, research data and information overload all figure.
Pre-publication innovations include:
- 'Soundness not significance'
- Cascade review
- Portable review
- Open and interactive
- Registered reports.
Post-publication innovation includes:
- Comments and ratings
- Metrics and altmetrics
- Article evaluation and tiering systems (e.g. Frontiers)
- Overlay journals.
Labels:
@mrkwr,
#alpsppeer,
alpsp,
Future of Peer Review,
mark ware,
peer review
The peer review landscape – what do researchers think? Adrian Mulligan reflects on Elsevier's own research.
Adrian Mulligan ponders: what do researchers think?
What do researchers think? Peer review is slow: in STM journals the average peer review time, from when you first submit an article through to final submission, is around 5 months, and in the social sciences it can be up to 9 months. Peer review can be prone to bias and can hold back true innovation (here he cited the Nature article from October 2003, 'Coping with peer rejection', whose accounts of rejected Nobel-winning discoveries highlight conservatism in science).
There is a small group of people who decide what happens to a manuscript, and they tend to be quite conservative. Peer review is also criticised as time consuming, redundant and failing to improve quality, a view reflected in Dr Sylvia McLain's Guardian article 'Not breaking news: many scientific studies are ultimately proved wrong!' (17 September 2013). Jeffrey Brainard's report in The Chronicle in August 2008, 'Incompetence tops list of complaints about peer reviewers', described how there are too few qualified reviewers and those there are overworked. More recently, The Scientist article 'Fake paper exposes failed peer review' by Kerry Grens (6 October 2013) highlighted how peer review may not be good at preventing fraud or plagiarism.
Elsevier undertook a research project to find out what researchers think, from both an author and a reviewer perspective. They surveyed individuals randomly selected from published researchers and received 3,008 responses. Most researchers are satisfied with the current peer review system: 70% in 2013 (1% higher than in 2009 and 5% higher than in 2007). Satisfaction is higher among chemists and mathematicians, and lower among computer scientists and social scientists (including arts, humanities, psychologists and economists). Chinese researchers are the most satisfied, and there is no difference by age.
Most believe that peer review improves scientific communication. Almost three quarters agreed that the peer review process on unsuccessful submissions improved the article. The number of researchers who had gone through multiple submissions is relatively low (29% submitted to another journal; articles were submitted an average of 1.6 times before being accepted). Few believe peer review is holding back science, but the proportion is growing: 19% agreed in 2007, 21% in 2009, and 27% in 2013.
Pressure on reviewers is increasing including time and a lack of incentives. Some reviewers lack the specialist knowledge required and it could be argued that too many poor quality papers are sent for review.
Mulligan observed that, at a national level, a country's contribution of reviews should match its contribution of submissions. China publishes far fewer papers than the number of reviews its researchers perform, while the reverse is true for the US. He noted that Chinese researchers are more likely to accept an invitation to review.
Over a third of those who responded believe peer review could be improved. Suggestions include giving reviewers more guidance, addressing researchers' declining willingness to volunteer their time to conduct reviews, and examining whether the system is biased and whether it needs to be completely anonymous. Another challenge for the industry is whether or not peer review can genuinely detect fraud and plagiarism.
More people are shifting to open peer review; however, the preference in North America is still for more traditional peer review (single blind and double blind). So what is Elsevier doing? They are reducing the number of reviews by transferring reviews from one journal to the next. They are recognising reviewers' contributions, rewarding reviewers with certificates and awards. And they are getting reviewers together to improve speed and quality.
Labels:
#alpsppeer,
Adrian Mulligan,
alpsp,
elsevier,
Future of Peer Review,
peer review,
publishing
Friday, 8 November 2013
Copyright – business or moral right?
Pippa Smart: is copyright a business or moral right?
"Many years ago I wrote a short “how to get published” guide. Now, I’m not going to pretend it was the best guide ever; I’m sure there are plenty of others (in fact I know there are) that are more succinct, more instructive and more useful to authors. But it was my own work, and I was (quietly) pleased with it. It was downloaded from the company website and – I hope – useful to at least one author, somewhere in the world.
Then I discovered that someone had taken it and reproduced it in a journal. I can’t pretend that I wasn’t flattered, but I was a bit annoyed that my name (and my employer’s) was removed. We wrote to ask for a correction – no reply. So, after a sigh and a shrug of the shoulders we moved on and forgot it – after all, nobody was killed, there was just a little bit of injured pride.
Would we have reacted differently, I wonder, if the article had been for sale? Would we have been more concerned if we thought the author had benefitted financially rather than just reputationally? Perhaps.
This came to mind recently when a friend of mine had an article she had published in an open access journal posted on a reprints site, being sold for $5. She was furious. She streamed her angst on the airwaves. She named names and pointed fingers. After a few postings reminding her that the CC-BY licence allowed this reprints company to do exactly what they were doing, she calmed a little – then asked her publisher to demand a take-down. The publisher obliged and the reprints site capitulated.
These examples raise several important points. Copyright protection is there to protect authors, not just to make money for big business. And publishers have a duty to help authors protect their rights. Authors care about their content – and may not understand how copyright can protect them, and when it cannot. Add into this mix different legal obligations and cultural expectations, and we live in a complex IPR world.
I forecast more examples like these (copyright and plagiarism) in the next few years. There will be a greater need for publishers to help (and to educate) authors, and a need for them to understand the wider debates about access and the intersection with legal and moral issues. Interesting times."
Pippa is author of ALPSP's eLearning course International Copyright. Take the new online demo for the course and receive up to an hour of free training.
Pippa Smart is a research communication and publishing consultant with over 20 years' experience, having worked for CAB International, Blackwell and Cambridge University Press, to name a few. She now researches and writes on publishing issues, runs training courses and runs PSP Consulting. She is the editor of the ALPSP Alert and can be contacted at pippa.smart@gmail.com.
Labels:
#alpsp,
alpsp,
copyright,
eLearning,
International Copyright,
Pippa Smart,
PSP Consulting
Thursday, 7 November 2013
Keeping pace with changes in eJournal technology
Tracy Gardner: keeping pace with eJournal technology
SK: What is the main challenge that publishers face in the field?
TG: The pace of change in eJournal technology is fast. This technology has removed the barriers between production, editorial, marketing, sales, customer services and, most importantly, the customers. Renew Training started running business technology courses specifically for publishers around 7 years ago and during all that time the same course has never been delivered twice!
SK: What is driving the pace of change?
TG: Changes in how libraries authenticate their patrons, how they manage reader navigation and the implementation of new search and discovery tools have changed the eJournal landscape dramatically.
SK: Who does this affect?
TG: For those in sales, marketing and customer service it can be hard to understand the business ramifications of how eJournal technology affects the way librarians and researchers find and access content. How does the fact that a library uses a proxy, has only one IP address for its entire institution, or treats its IP addresses as a state secret affect how researchers read your content? Does Shibboleth or Athens solve these issues, or create new ones? What about OpenURLs and working with link resolvers – and what are resource discovery services and tools, and why should you worry about them?
For those in operational or technology roles, the business technology side of eJournals can seem daunting, especially for those new to the industry, and the way the information community works can seem counter to the way many other business sectors operate.
SK: How can you keep pace of these changes?
TG: Educate those in sales, marketing, customer services, product development, editorial, project management and IT in the technologies. These roles are all vital to the delivery of eJournals. You need to clearly position these technologies in the context of the industry issues they aim to solve so your teams understand how they are used throughout the supply chain internally and by librarians through to end users. Understand a) your customers' technical and business requirements, and b) how technology plays a role in discoverability and deploying eJournals.
Tracy Gardner has over 17 years’ experience in marketing and communications and has worked for CatchWord, Ingenta, CABI Publishing and Scholarly Information Strategies. Her career has focussed on improving communication channels between publishers, intermediaries and librarians and she understands the business of scholarly publishing from many different perspectives.
Tracy is co-tutor with Simon Inger on the Understanding eJournal Technology course run by ALPSP in association with Renew Training. If you are flummoxed by any of the above terminology, or if you would like to understand more about how your customers are using business technology to serve their patrons, then come along to the next course on 13 November 2013 in Oxford.
Labels:
alpsp,
ALPSP training,
e-journals,
eJournal,
journal,
Renew Training,
simon inger,
technology,
tracy gardner,
Understanding eJournal Technology