
Tuesday, 20 August 2019

Spotlight on Ripeta - shortlisted for the 2019 ALPSP Awards for Innovation in Publishing

On 12 September we will be announcing the winners of the 2019 ALPSP 
Awards for Innovation in Publishing.  In this series of posts leading up to the Awards ceremony we meet our four finalists and get to know a bit more about them.  
logo Ripeta
First question has to be: Tell us about your company

Ripeta was founded by a team of three science-related experts who joined forces whilst working at Washington University in St. Louis. Leslie, Anthony and Cynthia, like many of their colleagues, were increasingly burdened with the task of improving science without having the resources to do so. They found that much of the curated data they produced at a data center was used but not cited.


They quickly became frustrated and saw the need for a tool and practices to remedy the situation, and the idea for Ripeta was born. The Ripeta software was developed to assess, design, and disseminate practices and measures to improve the reproducibility of science with minimal burden on scientists, starting with the biomedical sciences. In 2017, Ripeta launched its first alpha product.

What is the project that you submitted for the Awards?

Main features and functions of Ripeta

Ripeta is designed for publishers, funders, and researchers. We provide a suite of tools and services to rapidly screen and assess manuscripts for the proper reporting of scientific method components. These tools leverage sophisticated machine-learning and natural language processing algorithms to extract key reproducibility elements from research articles. This, in turn, shortens and improves the publication process while making the methods easily discoverable for future reuse.

Ripeta Software: Our software is built on a reproducibility framework that includes over 100 unique variables, grouped into five categories that highlight important scientific elements and feed into a report:
  1. Study Overview
  2. Data Collection
  3. Analytics
  4. Software and Code
  5. Supporting Material (new)
This report is generated in seconds, providing immediate feedback.
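
To give a feel for how such a report might be assembled, here is a deliberately minimal sketch. Ripeta's actual system uses proprietary machine-learning and NLP models over 100+ variables; the keyword patterns and function names below are purely hypothetical stand-ins that show only the shape of a category-based report:

```python
# Toy stand-in for category-based reproducibility reporting. The real system
# uses trained ML/NLP models; this sketch scans for indicator phrases only.
import re

# Hypothetical indicator patterns, a tiny invented subset per category
CATEGORY_PATTERNS = {
    "Study Overview":      [r"\bhypothes[ei]s\b", r"\bstudy design\b"],
    "Data Collection":     [r"\bdata availability\b", r"\bdata (was|were) collected\b"],
    "Analytics":           [r"\bstatistical analys[ei]s\b", r"\bp[- ]?value\b"],
    "Software and Code":   [r"\bcode availability\b", r"\bgithub\.com\b"],
    "Supporting Material": [r"\bsupplementary\b", r"\bappendix\b"],
}

def reproducibility_report(manuscript_text):
    """Count, per category, how many indicator patterns were detected."""
    text = manuscript_text.lower()
    return {
        category: sum(1 for p in patterns if re.search(p, text))
        for category, patterns in CATEGORY_PATTERNS.items()
    }

print(reproducibility_report(
    "Code is at github.com/example; see the supplementary appendix."
))
```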

image ripeta report
Portfolio Analysis: While many publishers have adopted checklists to evaluate reproducibility reporting criteria, the standard practice is for editors or reviewers to manually assess document adherence to guidelines. This review process is neither quick nor consistent. Ripeta offers a critical addition to the scientific publishing pipeline while significantly reducing the time to review and assess manuscripts. We look across a group of articles based on your criteria and provide an overview, comparisons, and suggested improvements. Do you want to know how your journal or organization is performing overall? Want to know the reporting practices of your grantees? These reports provide insights into reproducibility practices.
graphic Portfolio analysis

Tell us about the team 

Leslie McIntosh, PhD, is the founder and CEO. She has led multi-million-dollar projects building software and services for data sharing and reuse. Leslie focuses on assessing and improving the full research cycle and making the research process reproducible.


Anthony Juehne, MPH, is the Chief Science Officer, with specialist skills in epidemiology and biostatistics. His current work focuses on developing best practices for conducting and reporting clinical research to enhance reproducibility, transparency, and accessibility.



Cynthia R. Hudson Vitale, MLIS is the Chief Information Scientist. She has worked with faculty on projects to facilitate data sharing and interoperability while meeting faculty research data needs throughout the research life-cycle. Her current research seeks to improve research reproducibility, addressing both technical and cultural barriers.

In what ways do you think Ripeta demonstrates innovation?

Research reproducibility is increasingly important in the scholarly communication world, yet researchers, publishers, and funders do not have a streamlined method for assessing the quality and completeness of scientific research.

Ripeta aims to make better science easier by identifying and highlighting the important parts of research that should be transparently presented in a manuscript and other materials. By detecting and predicting reproducibility in scientific research, we provide a “credit report” for scientific publications. Our aim is to improve science and ensure resources are well spent; we offer a pre-peer-review check of a paper as well as a post-publication report.

The Ripeta solution is unique because we have identified the components across scientific fields necessary to responsibly report a scientific process. We are using machine learning and natural language processing to programmatically extract these components from scientific manuscripts and present them in a user-friendly report.

Ripeta helps save time and money, and helps improve reputations.

For publishers, Ripeta offers a quick assessment of a submitted manuscript. Reviewers are hard to find, and each is an expert only in specific areas. Ripeta allows a rapid check of the important elements that should be in a manuscript and presents a report for editors and reviewers. The benefit is measured as the time to complete a pre-review.

For researchers, Ripeta offers a means to improve the manuscript. By rapidly reviewing the manuscript, the report highlights which elements are missing and which have been detected, in a machine-readable form. This gives researchers the opportunity to improve their manuscript before submission. The benefit is measured as the improvement in Ripeta report completeness from first to final submission.

What are your plans for the future?

The Ripeta go-to-market strategy is focused on developing tools for a subscription model compatible with the needs of publishers and funders, where users can assess a single publication. Subscribers will be charged a fee per report, with an optional Ripeta software enterprise edition for robust analytics and bulk manuscript evaluation. We are currently engaging and conducting pilots with multiple publishers, universities, and researchers.

Our long-term goals include developing a suite of tools across the broad spectrum of sciences to understand and measure the key standards and limitations for scientific reproducibility across the research lifecycle and enable an automated approach to their assessment and dissemination.
  • Enable researchers to upload a single manuscript at a time at no charge;
  • Work with preprint services (e.g., bioRxiv), which could charge to have a ripetaReport linked to the preprint; and,
  • Work with large research and development firms, who would purchase enterprise installations for private hosting.
website: https://www.ripeta.com/
twitter: https://twitter.com/ripeta1

See the ALPSP Awards for Innovation in Publishing Finalists lightning sessions at our Annual Conference on 11-13 September, where the winners will be announced.

The ALPSP Awards for Innovation in Publishing 2019 are sponsored by MPS Ltd.







Wednesday, 7 August 2019

Spotlight on Scite - shortlisted for the 2019 ALPSP Awards for Innovation in Publishing

On 12 September we will be announcing the winners of this year's ALPSP Awards for Innovation in Publishing.  In this series of posts, we meet the finalists to learn a little more about each of them.

In this post, we hear from Josh Nicholson, co-founder and CEO of scite.ai

Tell us a little about your company

logo scite
The idea behind scite was first discussed nearly five years ago in response to a paper from Amgen reporting that, of 53 major cancer studies the company tried to validate, it could successfully reproduce only 6 (11%). This paper sparked widespread media coverage and concern, and the problem has since come to be known as the “reproducibility crisis.” While this paper received the most attention, perhaps because the numbers are so dire, it was not the first or only paper to reveal the problem. Indeed, Bayer had reported similar findings in other areas of biomedical research, while non-profit reproducibility initiatives revealed the problem in psychology and other fields, suggesting a systemic issue. This is worrisome, to say the least, because scientific research informs nearly all aspects of our lives, from how you raise your children to the drugs being developed for fatal diseases. If most work is not strong enough to be independently reproduced, we are wasting billions of dollars and affecting millions of lives. scite wants to fix this problem by introducing a system that identifies and promotes reliable research.

We do this by ingesting and analyzing millions of scientific articles, extracting the citation context, and then applying our deep learning models to classify citations as supporting, contradicting, or simply mentioning. In short, scite allows anyone to see whether a scientific article has been supported or contradicted.
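
To make the classification task concrete, here is a deliberately minimal sketch of the task framing only: citation statements in, one of the three labels out. This is not scite's model; scite's production system uses deep learning trained on large labelled corpora, and the tiny training set below is invented for illustration.

```python
# Toy baseline for citation-context classification: text in, one of three
# labels out. Only illustrates the task framing, not scite's actual model.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented examples; a real training set holds many thousands of statements
contexts = [
    "Our results confirm the findings of Smith et al. [12].",
    "In contrast to [7], we observed no such effect.",
    "Chromosome segregation has been studied extensively [3, 4].",
]
labels = ["supporting", "contradicting", "mentioning"]

classifier = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                           LogisticRegression())
classifier.fit(contexts, labels)

print(classifier.predict(["These data contradict the model proposed in [9]."]))
```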

As a funny aside, my co-founder, Yuri Lazebnik, and I first proposed that someone else, such as Thomson Reuters, Elsevier, or NCBI, should implement the approach now used by scite. After some waiting, we realized that if we wanted it to exist we would need to build it ourselves, and here we are, five years later, with over 300M classified citations citing over 20M articles!


Tell us a little about how it works and the team behind it

As mentioned, scite is a tool that allows anyone to see if a scientific paper has been supported or contradicted, by using a deep learning model to perform citation analysis at scale. To do this, we first need to extract citation statements from full-text scientific articles, which in most cases means extracting them from PDFs. To accomplish this, scite relies upon 11 different machine learning models with 20 to 30 features each. This is very challenging, as there are thousands of citation styles and PDFs come in a variety of formats and quality. We’re fortunate to have Patrice Lopez on the team, who has been developing the tooling to accomplish this for over ten years. Once we’ve extracted the citations from the articles, we use a deep learning model to classify them as supporting, contradicting, or mentioning.
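
Before anything can be classified, the citation statements themselves have to be pulled out of the text. The sketch below is a drastically simplified stand-in for that extraction step: it assumes already-clean text and numeric markers like [12], whereas the production pipeline described above copes with PDFs and thousands of citation styles.

```python
# Naive citation-statement extraction: return each sentence containing a
# numeric in-text marker such as [1] or [2, 3]. Real pipelines (multiple
# ML models, per the text above) handle PDFs and many citation styles.
import re

MARKER = re.compile(r"\[\d+(?:\s*,\s*\d+)*\]")

def citation_statements(article_text):
    """Return every sentence that contains at least one [n]-style marker."""
    sentences = re.split(r"(?<=[.!?])\s+", article_text)  # naive splitter
    return [s for s in sentences if MARKER.search(s)]

text = ("Aneuploidy alters division fidelity [1]. We built on prior work. "
        "This agrees with earlier reports [2, 3].")
print(citation_statements(text))
# ['Aneuploidy alters division fidelity [1].', 'This agrees with earlier reports [2, 3].']
```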

To show the utility of the tool, I like to show my PhD research as it is seen with and without the lens of scite. This study looked at the effects of aneuploidy on chromosome mis-segregation, that is, whether adding an extra chromosome to a cell makes it more prone to mistakes during cell division. Our work was published in eLife and was a collaboration between our lab at Virginia Tech, labs in Portugal, and the NIH. It has been cited 40 times to date and viewed roughly 4,000 times. In general, these features are what we as a community look at when assessing a paper: who the authors are, the prestige of the journal it appears in, affiliations, and some metrics like citations and perhaps social media attention (Altmetrics). This information is used to decide if we want to read or cite a paper, if we want to promote an author, join their lab, or give them a grant. These are our proxies of quality. Yet none of them have anything to do with quality. With scite, in just a few clicks, you can see that my work has been independently supported by another lab (i.e. it has a supporting cite). To me, this is something like a superpower for researchers, because without scite one would need to read forty papers to find this information, or consult an expert, and even then they might miss it.
screenshot demonstrating scite



To make scite happen requires a special team and I believe that is what we have created and continue to create at scite. I like to joke that scite is a multinational corporation with offices in Kentucky, Brooklyn, France, Germany, and Connecticut. While true, it is not an entirely accurate representation of the company, just as citation numbers are not an entirely accurate representation of a paper. In fact, scite is a small team of six scientists and developers united not by geography but by a passion to make science more reliable. 

In what ways do you think it demonstrates innovation?

The idea behind scite has been discussed since as early as the 1920s, as a similar system exists in law called Shepardizing (lawyers need to make sure they don’t cite overturned cases, as they will quickly lose their argument that way). However, despite such discussions happening nearly a hundred years ago, and multiple attempts to bring something like scite to fruition, even by juggernauts like Elsevier, it did not happen until scite came to life. scite is innovative in that it unlocks a tremendous wealth of information by successfully pushing the latest developments in technology to their limits. With that said, there is so much that we still need to do, and we’re excited about the future and about working with many stakeholders in the community.

What are your plans for the future?

In the near future, we think anywhere there is scholarly metadata, there is an opportunity for scite to provide value. We are working with publishers to display scite badges, with citation managers to display citation tallies, with submission systems to implement citation screens, and we are in discussions with various pharmaceutical companies to help improve the efficiency of drug development. Moreover, we will start to expand our citation analytics from articles to people, journals, and institutions.

Longer term, we envision scite as the place where people and machines go to identify reliable research and researchers. We have plans to explore micro-publications, so as to offer more rapid feedback into our system; plans to further invest in machine learning to see if we can predict citation patterns as well as promising therapeutics in drug development; and, I think, much more that we can’t even predict right now. The scientific corpus is arguably the most important corpus in the world. It’s a shame that it is easier to text-mine Twitter than cancer research. However, it’s also an opportunity, one which we’re seizing now.

photo Josh Nicholson
Josh Nicholson is co-founder and CEO of scite.ai, a deep learning platform that evaluates the reliability of scientific claims by citation analysis. Previously, he was founder and CEO of the Winnower (acquired 2016) and CEO of Authorea (acquired 2018 by Atypon), two companies aimed at improving how scientists publish and collaborate. He holds a PhD in cell biology from Virginia Tech, where his research focused on the effects of aneuploidy on chromosome segregation in cancer.

Websites
https://scite.ai/
Chrome plugin: https://chrome.google.com/webstore/detail/scite/homifejhmckachdikhkgomachelakohh
Firefox plugin: https://addons.mozilla.org/en-US/firefox/addon/scite/

Twitter: @sciteai

See the ALPSP Awards for Innovation in Publishing Finalists lightning sessions at the ALPSP Conference on 11-13 September. The winners will be announced at the Dinner on 12 September.

The ALPSP Awards for Innovation in Publishing 2019 are sponsored by MPS Ltd.  

Tuesday, 4 June 2019

'Making the case for embracing microPublications: Are they a way forward for scholarly publishing?'

Albert Einstein said: “An academic career, in which a person is forced to produce scientific writings in great amounts, creates a danger of intellectual superficiality”. 

Researchers have been working under the pressures of ‘Publish or Perish’ for decades. The default response is to question the value of the microPublications produced as a result. But what about when microPublications are carefully defined, peer review is stringently completed, and they enable publishers to more efficiently produce the ‘longer story’ research articles with pre-validated research outputs? Are there largely unknown opportunities and values to be gained quickly? Can microPublications enable information to be synthesized and distilled, and integrated into established repositories, to create a more meaningful and greater corpus of knowledge - dare we say, a global knowledgebase?

In this blog we hear from scientific curators with new roles as editors of a microPublication, and from a publisher who encourages this new publishing genre.

Chair: Heather Staines, Head of Partnerships, MIT Knowledge Futures Group.

Contributors:

  • Daniela Raciti, Scientific Curator for Wormbase and Managing Editor, microPublication Biology
  • Karen Yook, Scientific Curator for Wormbase and Managing Editor, microPublication Biology
  • Tracey DePellegrin, Executive Director, Genetics Society of America

Heather Staines: As an historian and former acquiring editor for books, I’ve long thought of articles as short-form publications and have struggled with the ‘less is more’ school of thought. When I started to hear about microPublications a few years back, I was intrigued. I wondered how researchers would define the scope of these postings, how they would be viewed within their respective disciplines, and how they would fit within the larger scholarly communications infrastructure. I was thrilled to be asked to moderate the ALPSP webinar, to get to hear directly from the folks at microPublication Biology and at the Genetics Society of America. Here is a bit of what I’ve learned in preparation for the session.


Question 1: How would you define a microPublication?

microPublication Biology: A microPublication is a peer-reviewed report of findings from a single experiment. A microPublication typically has a single figure and/or results table; the text is brief but includes sufficient relevant background to give the scientific community an understanding of the experiment and the findings, along with sufficient methodological and reagent information, and references, that the experiment can be replicated by others.

Genetics Society of America (GSA): I’ve got to agree with my colleagues on this one. I think one key here is that the findings in microPublication Biology are in fact peer-reviewed. They’re also discoverable, so they’re not lost in the literature. And I love the idea that these are compact yet powerful components scientists can build upon.


Question 2: What was the driving force behind the decision to move forward with microPublications?

microPublication Biology: There are two driving forces. The first is to increase the entry of research findings into the public domain. These findings are of value to the scientific community, they give the authors credit for their work, and publication fulfils the agreement researchers make with funding agencies (and taxpayers) to disseminate their findings. The second is to efficiently incorporate new data into scientific databases, such as WormBase. Scientific databases organize, aggregate and display data in ways that have tremendous value for researchers, greatly facilitating experimentation (increasing efficiency, decreasing cost). Databases are most useful when they are comprehensive; the microPublication platform allows efficient and economical incorporation of information into databases. We hope that in the long term, other scientific publishers will come on board to deposit data from publications directly into the authoritative databases.

GSA: GSA is supportive of microPublications for several reasons. First, incorporating new data into scientific databases is critical. Researchers in our fields depend on model organism databases like WormBase, FlyBase, the Saccharomyces Genome Database (SGD), the Zebrafish Information Network (ZFIN), and others, many of which are supported by the National Human Genome Research Institute (NHGRI) and included in the Alliance of Genome Resources. These databases are critical to understanding the genetic and genomic basis of human biology, health, and disease, and are curated by experts in the field. The microPublication platform helps authors by incorporating their findings into these databases in a way that’s seamless and painless for busy scientists. Second, microPublication Biology reduces the barrier to entry for scientists hoping to freely share their peer-reviewed research in a credible venue. Also, it’s terrific that microPublication provides the opportunity to publish a negative result. Negative results are important, yet too few journals publish them. The bottom line is that microPublication Biology addresses a need in scholarly publishing, serving authors and readers alike by filling a gap existing journals don’t serve.


Question 3: How does the peer review process differ, if at all, from the peer review of longer articles?

microPublication Biology: The peer-review process is similar to that of other journals, with a few distinguishing features. First, since the publication is limited in scope and length, it is simple and quick to review. Second, the publication criteria are straightforward: is the work experimentally sound? Does the data support the conclusion? Is there sufficient information to allow replication? And are the findings of use to the community? The last point goes along with the categorical assignment of the microPublication as a New finding, a Finding not previously shown (an unpublished result in a prior publication), a Negative result, a Replication – successful, a Replication – unsuccessful, or a Commodity validation.

GSA: Because I’m not an editor at microPublication Biology, I can only generalize here. But I will use this opportunity to underscore the importance of high-quality peer review, as well as of editors who are well-respected leaders in the field. One glance at the editorial board of microPublication Biology shows that these scientists are in a position to guide the careful review of, and decisions on, submitted data in their respective fields. I also find the categorical assignments interesting – especially the idea of a successful (or unsuccessful) replication.


Question 4: What do you see as the future for microPublications?

microPublication Biology: Huge! This publishing model will help change how researchers communicate with one another and how a researcher’s accomplishments are evaluated and tracked, and it provides an earlier entry point for budding researchers into scholarly communication. The microPublication venue easily lends itself to expansion into entirely new fields. However, such expansions need to be driven by the field’s scientific community (the group that will submit manuscripts, peer review the manuscripts, and maintain community standards).

GSA: The sky’s the limit. I agree with everything above. At a time when we’re trying to encourage grant review panels and others to evaluate scientists by the data they’re publishing (rather than the impact factor of the journal in which an article appears), venues such as microPublication Biology give researchers a chance to get credit for contributions that might not otherwise be recognized. And that’s progress!

------------------------------------------------------------------------------------

Heather Staines: I’d like to take this opportunity to thank our panellists for taking the time to weigh in on these questions. I hope you will now agree with me that microPublications provide an interesting and useful twist on the traditional journal publication model.

To learn more, please register for the ALPSP webinar: 'Making the case for embracing microPublications: Are they a way forward for scholarly publishing?'

Wednesday 26 June.
16:00-17:00 BST, 11.00-12:00 EDT, 17:00-18:00 CEST, 08:00-09:00 PDT.

The webinar is ideal for: publishing executives, editors, librarians, funders and researchers.



Tuesday, 13 November 2018

Thinking of reviewing as mentoring

In this blog Siân Harris shares her personal experiences of being a peer reviewer for Learned Publishing.


Earlier this year I was contacted by Learned Publishing about reviewing a paper. This was an interesting experience for me because although I had been a researcher and then a commentator on scholarly publishing, including peer review, for many years, this was the first time I had done a review myself.

The paper I was invited to review was about publishing from a region outside the dominant geographies of North America and western Europe. Ensuring that scholarly publishing – and, in particular, the research that it disseminates – is genuinely global is something that I am passionate about (in my day job I work for INASP) so I was very happy to take on this review.

There have been plenty of complaints about peer review being provided freely to publishers and rarely recognized as part of an academic’s job description (it’s also not part of my non-academic job). And some researchers can feel bruised when their papers have been handled insensitively by peer reviewers.

On the other hand, there are powerful arguments for doing peer review in the interests of scholarship. What I’d not heard or realised until I did a review myself was how much doing peer review is, or should be, like mentoring. Since my time as a (chemistry) researcher I have regularly given others feedback about their papers, books and other written work, most recently as an AuthorAID mentor supporting early-career chemistry researchers in Africa and Asia. I also found, as I did the review, that I was very happy to put my name on it, even after recommending major revisions.

As I read the Learned Publishing paper I found I was reading it with that same mentoring lens and I realised there was an opportunity to help the authors not only to get their paper published but also to explain their research more clearly so that it has greater potential to make a difference. I wanted to encourage them to make their paper better — and to suggest what improvements they could make. Crucially, I didn’t feel like I was doing a review for the publisher; I felt I was doing the review for the authors and for the readers.

As I’ve seen with so many papers before, the paper had some really interesting data but the discussion was incomplete and a bit confusing in places; it felt to me a bit like an ill-fitting jacket for the research results. I made positive comments about the data and I made suggestions of things to improve. I hoped at the time that the authors found my feedback useful and constructive and so I was pleased that they responded quickly and positively.

The second version was much better than the first; a much clearer link was made between the data and the discussion, and answers had been given to many of those intriguing questions that had occurred to me in reading the first draft. We could have left it there, but there were still some residual questions that the paper didn’t address, so in the second round I recommended further (minor) revisions.

Quickly, the third version of the paper came back to me. I know it can be frustrating for authors to keep revising manuscripts but the journey of this paper convinced me that it is worth it. The first version had great data that intrigued me and was very relevant to wider publishing conversations, but the discussion lacked both the connection and context to do the data justice. The second version was a reasonable paper but still had gaps between the data and the discussion that undermined the research. But the third version thrilled me because I realised I was reading something that other researchers would be interested in citing, and that could even be included in policy recommendations made in the authors’ country.

Having reflected on this process during this year's Peer Review Week, with its theme of diversity, I am pleased that I read this paper and was able to provide feedback in a way that helped the authors to turn good data into an excellent article. First drafts of papers aren’t always easy to read, especially if the authors are not writing in their native language. Authors can assume that readers will make connections between the results and the conclusions themselves, resulting in some things being inadequately explained. But peer review, and mentoring, can help good research, from anywhere in the world, be communicated more clearly so that it is read, used and can make a difference.

Dr Siân Harris is a Communications Specialist at INASP. 


Monday, 21 August 2017

Navigating the safe passage through the minefield of predatory publishing

Philip J Purnell and Mohamad Mostafa, Knowledge E

logo Knowledge E
Like many of the world’s scholars, young researchers at Al-Nahrain University in Iraq have been told they need to publish research articles in academic journals in order to progress in their careers. Aware of the low acceptance rates and lengthy publication delays of traditional journals, many turn to relatively recently launched open access journals that market their quick turnaround and low publishing charges. Dr. Haider Sabah Kadhim, head of Microbiology at the university, says these young researchers are being duped into paying USD 100–200 on the promise of fast-track publication within one week. He has seen countless academics proudly show their published papers, only to be told they won’t count because the journal isn’t on an approved list. Dr. Haider expressed his worry about the long-term harm this publishing practice could cause to their careers.

The same pressure is felt by the research community across the region. Egyptian Assistant Prof. Hossam El Sayed Donya teaches medical physics at the Faculty of Science, King Abdulaziz University in Saudi Arabia. He sees similar challenges and says that some publishers boast of being indexed in top databases like Web of Science, Scopus, and MEDLINE, of having Journal Impact Factors, and of assigning digital object identifiers (DOIs) to all published articles. Often these promises turn into disappointment when the researcher realises that their article cannot be found in the databases, that the journal doesn’t really have an impact factor, or that the DOI is not deposited in the Crossref database. Prof. Hossam said there is a need for information and guidance, in both English and Arabic, that teaches early career researchers the signs of a good journal or publisher and those to avoid.

The dilemma

Modern scholars are coming under increasing pressure to demonstrate their academic productivity: their output is measured by the number of research papers they have published, and their impact by counting the citations to those articles. Indeed, universities and promotion committees often set targets and thresholds for academic progression based on publications and citations. Universities do this because they also operate in an increasingly competitive space and are themselves responding to pressure to perform well in international university rankings such as Shanghai, Times Higher Education, QS and others, which count publications and citations to the articles published by the entire university faculty.

One enviable mark of a high-quality journal is being indexed in a renowned database such as the Web of Science, Scopus or MEDLINE. An even more elite group of around 10,000 high-impact journals are given Journal Impact Factors, which are listed each year in the Journal Citation Reports; the JIF is calculated as the ratio of the citations a journal receives in a given year to the number of citable items it published in the previous two years (a worked example follows the checklist below). Millions of researchers are incentivised to publish in ‘Impact Factor journals’, and ambitious scholars are easily enticed into sending their manuscripts to journals that prominently display their Impact Factor. The problem here is that many questionable journals state that they are indexed in such databases when they are not. Or they announce their ‘Impact Factor’ even when it has not been provided by Clarivate Analytics (formerly ISI and Thomson Reuters), the owner of the Journal Citation Reports and provider of the Journal Impact Factor. Most young academics don’t realise that many of these claims can easily be checked online:

Is the journal indexed in Web of Science?

Is the journal indexed in Scopus?

Is the journal indexed in Medline?
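
To make the Impact Factor arithmetic mentioned above concrete (using round, invented numbers): a journal whose 2015 and 2016 papers were cited 500 times during 2017, and which published 250 citable items across those two years, would receive a 2017 JIF of 500 / 250 = 2.0.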

Likewise, once an article has been published in a journal, it is easy to check that the publisher has deposited its digital object identifier (DOI) in the Crossref database. Once the DOI has been correctly deposited, the article officially exists: people all over the world can find it through search engines like Google Scholar and Microsoft Academic and, even more importantly, they can accurately cite it, pointing fellow academics to the DOI link. Again, a quick check for DOIs is freely available here:

Is this DOI deposited in the Crossref database?
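
For those who prefer to script this check, the public Crossref REST API offers the same lookup: a registered DOI returns HTTP 200 with its metadata, while an unknown DOI returns 404. A minimal Python sketch follows (the example DOI is made up; substitute the one you want to verify):

```python
# Check whether a DOI has been deposited with Crossref via its public
# REST API (https://api.crossref.org). 200 = registered, 404 = unknown.
import requests

def doi_in_crossref(doi):
    """True if Crossref holds metadata for this DOI."""
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    return resp.status_code == 200

# Hypothetical DOI for illustration only
print(doi_in_crossref("10.1234/example.doi"))
```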

Open Access


The academic community's frustration with rising subscription prices, and the feeling of having to pay twice when research has been funded by public money and then published behind a subscription wall, led to the open access movement, by which the reader pays no charge to access the results. However, once an article is accepted, the author is usually asked to pay an article processing (or publishing) charge (APC), which often costs hundreds of dollars and can easily run into thousands, a fee which in many developing countries is covered by the researchers themselves. Under pressure to publish, and with little guidance on journal choice, some academics are falling prey to unscrupulous publishers who charge APCs but do not provide a professional publishing service; these have been termed ‘predatory publishers’.

Predatory publishing


Such publishers may exaggerate or misrepresent their services by claiming to be based in a traditional publishing hub while hiding their real location, claiming to provide rigorous peer review but publishing far too quickly for that to be possible, or presenting editorial boards of academics who do not know about, or did not agree to, being listed. Defining a publisher or publication as ‘predatory’, however, is no simple matter, and sometimes there is a fine line between acceptable and unacceptable behaviour, e.g. at what point do repeated calls for papers become ‘spam’? Some publishers have found their journals on a predatory-publisher blacklist while at the same time being indexed in one of the prestigious databases assumed to be whitelists. One university librarian, Jeffrey Beall, maintained a list of ‘probable, possible and potential predatory publishers’ from 2008 (it is no longer available); it most recently listed more than 12,000 titles and publishers, each included for questionable publication practice based on a list of over 50 criteria.

There is no universally agreed definition of a predatory journal or publisher; indeed, nor is there a standard for a ‘high-quality journal’. Most people who provide advice on identifying predatory journals start by warning people to watch out for spelling mistakes, typos and grammatical errors on a journal’s website or in its submission instructions. But equating imperfect English with questionable publication ethics, in regions where millions of non-native English speakers are engaged in education and research, is itself an assumption that should be taken in context. Native-level English should not be a prerequisite for publishing quality research in quality journals; there must be other ways to ensure safe submission of manuscripts and evaluation of journals and publishers.

The Committee on Publication Ethics (COPE) celebrates its 20th anniversary this year and now boasts more than 10,000 members. It has produced a code of conduct and a range of guidelines for authors, editors and peer reviewers. Most serious publishers now adhere to the COPE Code of Conduct and guidelines and this is one of the first things authors should check for.

Think, Check, Submit

logo Think Check Submit
Researchers need to be routinely trained in how to conduct a rudimentary evaluation of a journal: what to look for, and the tell-tale signs that should set alarm bells ringing. So, what can the busy researcher do to distinguish good journals from bad?

Several international publishing associations have pooled their resources to create the Think. Check. Submit. campaign, launched during the 2015 meeting of the Association of Learned and Professional Society Publishers (ALPSP). It leads the researcher through three main steps and includes a checklist for researchers to work through before they submit their manuscript to any journal. In the Arab region, we believe that following the Think. Check. Submit. campaign will help the regional research community avoid these pitfalls and publish safely. To view the initiative, click below:

Think. Check. Submit: http://thinkchecksubmit.org/

Think. Check. Submit (Arabic): http://knowledgee.com/thinkchecksubmit-ar/

This post was first published on the KnE Blog on 28 February 2017.


Wednesday, 16 August 2017

Spotlight on Publons - shortlisted for the 2017 ALPSP Awards for Innovation in Publishing



On 14 September we will be announcing the winner of the 2017 ALPSP Awards for Innovation in Publishing, sponsored by MPS Limited, at the 10th Anniversary ALPSP Conference. In this series of posts leading up to the Awards ceremony we meet our six finalists and get to know a bit more about them.

logo Publons

First up, we speak to Andrew Preston, Co-Founder and Managing Director of Publons


Tell us a bit about your company

Publons is the home of peer review. Our cross-publisher platform collects peer review activity, providing researchers with a verified record of all of their reviewing contributions. Publons makes it possible to:

  • Recognise the contributions of reviewers worldwide
  • Improve and expand the reviewer pool through our free online training course
  • Gain novel insights into global peer review and researcher behaviour
  • Find, screen, contract, and motivate reviewers (an important part of an editor's job)

All of these threads feed into our mission to speed up research through the power of peer review.

 

What is the project that you submitted for the Awards?

We submitted the Publons reviewer recognition service. This is essentially what you find on Publons.com and comprises a free offering for researchers and a paid service for publishers.

 

Tell us more about how it works and the team behind it

Researchers sign up to create a free account. Publons then makes it really easy to import, verify, and store a record of every peer review they've performed and every manuscript they've handled as an editor, across any journal in the world. Researchers also have access to personal statistics and a downloadable, verified review record, which provides evidence of their service and standing in their field.

Records can be added regardless of whether the manuscript goes on to be published. Publons works with researchers and journals to set policies about what information can be displayed publicly. We support all review policies, ranging from completely open to double-blind, and we also enable post-publication review.


Partner journals are able to add Publons to their peer review workflow. Every time a researcher reviews for one of our 1,350 partner journals, they can opt in to have the review seamlessly added to their Publons record. Researchers love this, and it makes it really easy for a publisher to provide an enhanced service to reviewers.

 

Why do you think it demonstrates publishing innovation?

Peer review is at the heart of the research ecosystem. We rely on it to validate findings, evaluate their importance, and to flag issues in the publication. When peer review suffers, so does the quality, pace, and public perception of research.

Publons is the first platform to bring some transparency to the peer review process. It took significant technical innovation to develop infrastructure that can handle the peer review policy and editorial management system of virtually any journal in the world. The scale of uptake, with 180,000 reviewers and 1,350 journals from many of the top publishers in the world, highlights the importance of the market need we're filling.


What are your plans for the future?

Publons was recently acquired by Clarivate Analytics. Together, as a trusted neutral player in the market, we believe we can tackle the challenge of bringing trust and efficiency to peer review at scale. We plan to expand our offering to make it even easier for researchers to get recognition for their work and for publishers to have a deeper understanding of researcher workload and expertise.

Right now, though, we are preparing for Peer Review Week, a global event celebrating the essential role that peer review plays in maintaining scientific quality, and we are looking forward to finding out who the winners of the Publons Peer Review Awards are, too.


Publons Co-Founders - Andrew Preston (right) & Daniel Johnston
Photo by: Image Service, Victoria University of Wellington
Andrew Preston is the co-founder and Managing Director of Publons. He was an active researcher in physics, first as a PhD student at Victoria University of Wellington, then as a postdoctoral researcher at Boston University. He founded Publons with a mission to speed up research by improving peer review. Publons recognises reviewers for their work and, with more than 170,000 researchers and over 1,350 partner journals, is the world’s largest peer review platform. Publons was acquired by Clarivate Analytics in June 2017.

Website: www.publons.com

Twitter: @arhpreston @publons  
Facebook: facebook.com/publons
Publons: https://publons.com/author/1/andrew-preston
LinkedIn: http://nz.linkedin.com/in/arhpreston/


See the ALPSP Awards for Innovation in Publishing Finalists lightning sessions at our Annual Conference on 13-15 September, where the winners will be announced. 

The ALPSP Awards for Innovation in Publishing 2017 are sponsored by MPS Ltd.