Friday 18 November 2022

The Evolution and Future of Peer Review

In this guest post, Michael Casp and Anna Jester look back at the ALPSP 2022 Annual Conference and Awards. 


Peer review in the digital age relies heavily on email and web-based peer review platforms to contact reviewers and organize recommendations. In many ways, peer review today simply recreates the pre-internet template of mail, fax, and file cabinet, pasted online. With the current advancements in preprints, social media, and communication platforms, it is possible – even likely – that the model of peer review, and the technology that supports it, have further evolution ahead.

As communication begins to move beyond traditional text formats, so does content. Getting “beyond the PDF” has been a staple at scholarly publishing conferences for years, and as it becomes more commonplace, we are navigating its demands on the peer review process.

But it’s not just about technology. We must also focus on developing and empowering the next generation of reviewers in order to maintain a robust and sustainable reviewer pool. Teaching and mentoring early-career academics can yield a diverse, technology-versed pool of reviewers ready to use the tools we develop.

At the ALPSP 2022 Conference we were treated to many innovative ideas that will have a direct impact on peer review, potentially changing it and hopefully improving it for researchers and publishers, and ultimately providing better value for society.

Beyond Email

Peer review’s reliance on email is a given. That is, unless you’re in China, where the app WeChat has in many ways supplanted email as the default communication system. Western publishers that seek to engage with Chinese researchers may struggle if they only use email. However, Charlesworth presented the ALPSP audience with a possible solution in their new product, Gateway. Gateway uses an API to send journal-related communications to authors, reviewers, and editors via WeChat. This solution allows journals to meet Chinese researchers where they are, rather than trying to pull them into a (let’s face it, antiquated) email system.

In a similar vein, eLife presented the ALPSP audience with Sciety, a new preprint evaluation platform that allows academics to review, curate, and share preprints and reviews with their peers. Preprint servers have also started to become social hubs where researchers connect in more of a real-time environment than traditional publishing tools allow. This system holds the promise of opening up peer review and publishing to a wider user base, allowing more people to curate and review research than ever before. The challenge presented by the scale of preprints is immense, and Sciety has the potential to reorganize how we deal with all this research in a social-focused way.

One more data point to mention: with the pervasive use of social media throughout the world, it is no surprise that academics would have their own version. Despite controversies, ResearchGate has maintained its position as the largest academic social network, connecting about 17 million users. With this many scholars connected, it’s possible we could see something like peer review networks emerge, though that doesn’t seem to be ResearchGate’s focus at the moment.

Beyond the PDF

A decade or so ago, getting “beyond the PDF” was still a new idea being speculated about at conferences. It is now a reality, with authors providing data sets, code, and detailed methods notebooks alongside their traditional manuscripts. As a partner to those authors, we’ve come up with ways to publish this content, but it can present special problems during peer review.

For starters, journals that employ an anonymized review model can find it quite difficult to remove identifying information from complex content like code or reproducible methods. Sometimes an author’s institution might be inextricably linked to this content, making anonymization impossible – or at least impractical.

Other forms of content, like ultra-high-resolution images, can present logistical problems. Industry-standard peer review management systems have hard limits on the size and format of files they can manage. For example, fields like pathology can rely on extremely detailed images of microscope slides, and these multi-gigabyte files are hard to move from author to editor to reviewer. Paleontology research can also require larger-than-usual images, as sharing highly detailed photos of specimens is crucial to the field. Without a more flexible solution, dealing with these kinds of challenges at the peer review stage can require a lot of creativity and patience from everyone involved.

Massive datasets can also present review challenges. Beyond the logistics of moving large files, there are often more basic concerns: is this data organized and labeled in a useful (and reusable) way? Is it actually possible to do a real review of a large dataset in the time reviewers have to give to a paper? FAIR data principles are targeted at answering some of these questions, and services like Dryad and Figshare seek to help by curating and quality-controlling datasets, ensuring they meet basic standards for organization, labeling, and metadata. But these services come with an additional cost that not everyone can bear. And a data review still depends on a reviewer willing to go the extra mile to actually do it.

Moving peer review beyond the PDF is still a work in progress, but many of these problems are solvable as our technology and internet infrastructure improve. Our J&J Editorial staff regularly handle content like videos, podcasts, and datasets. At eJournalPress, our platform is integrated with third parties including Dryad, Code Ocean, and Vimeo. These integrations are an added convenience, though most journals and societies need direct agreements with those third parties for the integrations to be fully utilized. But we often have to work around the peer review system, rather than with it, relying on basic cloud file servers (e.g., Dropbox or OneDrive) instead of more purpose-built technology.

Open/Pre-submission Review

Another decade-old conference trope was the constant talk about new open peer review models. You might recall that people were split on the wisdom of this approach, but the rise of preprints has done a lot to push open peer review and pre-submission review into the limelight.

Organizations like Review Commons are working with eJournalPress to make pre-submission peer review a viable choice for authors by building review transfer relationships with more traditional journals. The Review Commons model is to take preprint submissions and have them peer reviewed. These reviews can then be shared with participating journals if authors choose to submit, and journal editors can use them to evaluate whether or not to publish the work. In data presented at ALPSP 2022, manuscripts that came into journals with reviews rarely needed to be sent out for further review. This has many benefits, saving the editor time in soliciting reviews and giving a journal’s (probably over-taxed) reviewer pool a little break.

Review Commons is currently free for authors, supported by a philanthropic grant. It will be fascinating to see whether they are able to pivot to a sustainable financial model in the future.

We won’t exhaust you with the long list of other open peer review initiatives, but suffice it to say, a lot of smart people are working hard on making this a standard part of research communication.

Developing the Next Generation of Reviewers

None of what we’ve written so far will matter one iota if there aren’t enough people in place to do the actual content reviews. One of the interesting revelations we had while managing journal peer review was the incredible range in review quality: from in-depth, multi-page discussions of every point and nuance of an author’s manuscript to the dreaded “Looks good!” review. Anyone in peer review can tell you that we can (and must!) do a better job training our reviewers. We can offer guideline documents and example reviews, but some people need a more engaging approach to understand and deliver what editors expect.

It would be lovely if reviewing were a required part of the science curriculum. It currently seems to happen in a piecemeal, ad hoc fashion, often driven by people who are willing to just figure it out themselves. A more standardized approach is called for, especially as reviewable content becomes more complex and varied.

One of the best examples we’ve seen of reviewer training was actually a writers’ workshop for researchers wishing to submit to a medical journal. The journal’s editor-in-chief (EIC) led the workshop, asking authors to submit their manuscripts ahead of time to serve as examples. During the workshop, the EIC talked through several of these manuscripts, giving the authors invaluable feedback and what amounted to a free round of review prior to official journal submission.

Though this workshop was ostensibly for authors, it was equally valuable for reviewers. Participants got to watch the EIC go through a paper in real time, ask questions, pose solutions, and talk through the subject matter with someone who had written hundreds and reviewed thousands of manuscripts. This program has always stuck out as a great way to train authors and reviewers, while also building the relationship between the journal and its community. Win-win-win!

Formal peer review training benefits all parties; the Genetics Society of America’s program, for example, also includes direct feedback from journal editors. If you’re thinking of implementing something like this, your organization may wish to conduct a pilot prior to a full rollout. Another great model for peer review training is to pair mentors and mentees, simultaneously providing training and increasing the number of high-quality trained peer reviewers in the field broadly. If your team is willing to study the results of your reviewer training efforts, be sure to submit them to the next International Congress on Peer Review and Scientific Publication so that we can all benefit from your findings.

Demographics and Diversity

Many of the journals and publishers we work with are prioritizing diversity within their communities by making efforts to extend their reach to people who might historically have been left out of the conversation. These organizations are also looking inward to see what their current level of diversity looks like in order to improve it.

Many organizations have begun collecting demographic information about their authors, reviewers, and editors. We recommend a thoughtful approach when embarking on this project, as it can be fraught with pitfalls and unexpected consequences if you don’t get it right. Before your organization embarks on this endeavor, consider best practices for data collection and clearly define the initiative’s goals. Wondering where to start? Do a landscape scan of what other organizations aim to do with this data, and use standardized questions for self-reported diversity data collection.

Fortunately, many people are working on demographics data initiatives, and there are plenty of ideas and support available from our community.

Summary

To put it mildly, there is a lot going on right now. The technology we use has the potential to upend the traditional research communication process, and in some cases (like preprints) it already has. With a host of new data, content, and equity concerns, people involved in the peer review process have more to deal with than ever before. And it’s unlikely that we’re doing enough to equip them with the knowledge and training they need to succeed. But we can do better, and we’re heartened to see the many people in and around our industry who are trying to improve the situation. From our end, eJournalPress is supporting societies and journals as they work to collect and evaluate demographic information and metadata, and J&J Editorial staff are always investigating ways to support journal innovations through a combination of technology and experienced staff.

We often think about peer review in the context of that old Churchill quote about democracy: “It has been said that democracy is the worst form of government, except all of those other forms that have been tried from time to time.” Peer review might not be the best method of scientific evaluation, but it’s the best we’ve got, and who knows, maybe we’ll make something even better. But until then, we’ve got work to do.


Anna Jester, VP of Marketing and Sales, eJournalPress, Wiley Partner Solutions



Michael Casp, Director of Business Development, J&J Editorial, Wiley Partner Solutions

Wiley Partner Solutions was a gold sponsor of the ALPSP Conference and Awards held in Manchester, UK in September 2022. The 2023 ALPSP Conference will be held in Manchester from 13-15 September 2023.


Wiley is one of the world’s largest publishers and a global leader in scientific research and career-connected education. Founded in 1807, Wiley enables discovery, powers education, and shapes workforces. Through its industry-leading content, digital platforms, and knowledge networks, the company delivers on its timeless mission to unlock human potential. Visit us at Wiley.com. Follow us on Facebook, Twitter, LinkedIn and Instagram.

Thursday 10 November 2022

Guest Post: Centering reproducibility and transparency in health science research

By Grainne McNamara, Research Integrity/Publication Ethics Manager at Karger, Silver sponsor of the ALPSP Conference and Awards 2022 

At Karger Publishers, we have over 130 years of experience connecting healthcare practitioners, researchers and patients with the latest research and emerging best practice, covering the whole cycle of knowledge. As a publisher in the health sciences with a broad audience, we have always centered the needs of our readers by tailoring our content to them. Complementing our long history of publishing journals and books for clinicians, researchers, and patients, in 2021 we launched the online blog The Waiting Room. In 2022 the Waiting Room Podcast launched, bringing breaking research to patients, caregivers, and the general public in easy-to-understand, jargon-free formats. Also this year, we made it possible for authors to submit plain language summaries in our journal Skin Appendage Disorders with their articles. These provide interested readers with easy-to-understand descriptions of the latest research in this field of dermatology. In all these developments, we are acutely aware of the great responsibility that comes with communicating health science research to a general audience.

At Karger Publishers, we employ a rigorous peer review process and are transparent about the evaluation criteria for the articles we publish. However, with increasing digitalization come big challenges, as information can be disseminated, but also misinterpreted, at speed. With these challenges come opportunities to do better, and we asked ourselves: how can we do more for our community?

At Karger, we are open for Open. We have embraced the Open Science movement – in thought and action. Since 2021, all our research articles have been published with a data availability statement directing readers to the location of the dataset(s) underlying the findings, and we provide thorough guidance to authors on how and why to share their data. However, we see data availability statements as just the beginning.

Well-established reporting guidelines exist for almost every study type and provide a structure for authors, reviewers, and readers to understand what is, and should be, reported in an article. As such, adherence to community-standard reporting guidelines, such as CONSORT or PRISMA, has been the expectation for all our journals for many years. As a health sciences publisher, we know that case reports are an early but crucial part of evidence-based medical decision making. That is why we have eight specialty Gold Open Access journals dedicated exclusively to communicating case reports. For case reports to be as responsibly influential and effective as possible, clear and transparent reporting, again, is crucial. That is why, as of September 2022, completion of the CARE checklist is a requirement for all submissions to these journals. We believe that by improving the consistency and transparency of case reporting, we underscore the importance of transparency in health sciences research.

We present breaking research findings every day to our growing community and take great care with the trust placed in us as a publisher of health sciences. We recognize that the ultimate guarantee of the reliability of a finding is its reproducibility – that is, the ability to find the same result again and again. We also know that a lack of transparency and methodological detail is a significant barrier to the reproducibility of a result, and that adherence to reporting guidelines can improve the clarity of an article. That is why we recently expanded our guidance to authors on the use of reporting guidelines and endorsed the Materials Design Analysis Reporting (MDAR) Framework as part of every journal’s reproducibility policy. We believe that by centering the reproducibility of methodology and of results in our publications, we are progressing the conversation around reproducible-by-default health sciences research.

We could not make these steps toward reproducibility-by-default without the support of our community of outstanding editors and reviewers. We aim to recognize and support our community of experts in a variety of ways, including providing training for the next generation of peer reviewers as well as interactive discussion webinars on the latest topics in peer review and reproducibility. We also benefit from cross-publisher organizations, such as COPE and ALPSP, that facilitate conversations on best practices in reproducibility.

At Karger, as we expand our portfolio of research communication, we grow, in step, our focus on reliability and trust in those findings. By empowering researchers, institutions, funders, and policy makers to maximize impact in health sciences, we are taking strides toward our goal to help shape a smarter, more equitable future.


Karger was a silver sponsor of the ALPSP Conference and Awards held in Manchester, UK in September 2022. The 2023 ALPSP Conference will be held 13-15 September 2023.

Tuesday 18 October 2022

Guest Blog - SciencePOD


OA implementation lags behind rhetoric
As ALPSP celebrates its 50th anniversary, poor change management hampers OA adoption


The Association of Learned and Professional Society Publishers (ALPSP) celebrated its 50th anniversary during the 2022 Annual Conference. The event, held at the Hotel Mercure Piccadilly in Manchester, will be remembered for its glittering chandeliers. The rhetoric around digital change sparkled just as brightly, but was there substance beyond the shine?

At the time of the conference, most scholarly publishers and learned societies had already pledged to implement digital transformation, shifting toward greater Open Access (OA). Few of the discussions at the conference, however, focused on how, in practical terms, they would manage change along the way. Embedding digital-first processes requires strong leadership to overcome barriers, coupled with widespread transparency around the OA-readiness of each scholarly publisher.



Open Access

Over three days, the scholarly audience attended a series of discussions on the benefits of OA, but these were largely preaching to the converted. The Open Pharma forum, one of the satellite events, demonstrated the value of OA clinical studies, which are well suited for conversion into plain language summaries. These summaries enable pharma companies to communicate the latest research findings outside scholarly circles, mainly to doctors and patients, and are a requirement imposed by the European Medicines Agency (EMA) that science content creation providers, such as SciencePOD, routinely deliver.

During the opening keynote, Peter Cunliffe-Jones (University of Westminster) discussed the role of OA in reducing misinformation for policy-makers, media professionals and fact-checkers, as well as the wider scientific community, while discouraging predatory journals.
Further discussions focused on the need to expand the OA business model, following the recent announcement by the US Office of Science and Technology Policy (OSTP) making OA mandatory for publications derived from publicly funded research. The move offered further validation for the OA model and opens up new opportunities for publishers serving Stateside learned societies or university presses. At the time of writing, 45% of publications by volume are already published OA, according to 2021 Delta Think data.

Change Management

Achieving the cultural agility and processes necessary for meaningful digital change is the biggest challenge faced by our industry. A dedicated session, “A look back at the evolution of publishing focusing on the changes in industry in the past 50 years”, looked at progress so far.

Delivering a smoother OA experience for authors is an issue of change management. Large organisations often struggle to adopt change in the face of inertia, political undercurrents and the ebb and flow of leadership direction. Smaller organisations like societies are more agile, but often resist change because of the associated costs.




Stimulating Innovation

All that said, our industry has proven it is capable of change. The ALPSP Innovation Award nominees demonstrated the innovative initiatives the scholarly industry is piloting, particularly among small- and mid-size organisations. One of the co-winners of the 2022 ALPSP Award for Innovation in Publishing, Charlesworth’s Gateway, uses WeChat social media communication technology to enhance communication between Chinese authors and publishers.

Others are focusing on solving data-sharing issues. GigaByte journal, the other 2022 Award co-winner, caters for rapidly changing fields by publishing updatable datasets and software tools of value to more specialist communities.

Leadership toward company-wide transformation

Despite these promising initiatives, digital transformation must be company-wide, touching every aspect of the scholarly product lifecycle. We need to understand the internal change management process required to move toward a more author-centric offering built on digital technologies.

There was no shortage of expert consultants with change-management knowledge at the conference. Consultants are typically brought in to propose new processes for a more effective digital approach. However, this can cause internal friction when members of staff have already identified the specific, detailed changes required, outside standard change management methods.

Trust
 
In times of change, trust between publishers and their staff is paramount, as is leadership that fosters an agile, adaptable culture open to digital change based on bottom-up suggestions. Transparency is key. During the session entitled “Transparency in OA: Moving out of the Black Box?”, speakers such as Julian Wilson, Head of Sales and Marketing at IOP Publishing, pledged to compile the appropriate data for scrutiny.

This data is difficult to assemble, not because publishers are holding back, but because OA-readiness data is poorly collected overall. So when the US Office of Science and Technology Policy issued its new policy requiring publicly funded research to be made available OA, US customers began asking publishers to identify which parts of research were publicly funded.

Transparency

The difficulties for publishers in providing Plan S-mandated OA-readiness data show that our industry lacks established standards (time to first acceptance, number of reviewers, peer review length, etc.). Publishers need to create norms across the entire industry, allowing authors to make their own comparisons between OA outlets.

Introducing such new metrics would come under change management methodology. Some publishers, like Hindawi, are already making Plan S-requested metrics public, according to Chair Catriona MacCallum. PLOS, represented by its Director of Open Research Solutions, Iain Hrynaszkiewicz, announced at the conference an extended partnership with DataSeer to measure and publish multiple ‘Open Science Indicators’ across the entire PLOS corpus, going back to 2019; this would be ongoing for newly published content and would include rates of data-sharing in repositories, code-sharing and preprint-posting, as well as future plans for protocol-sharing.

Tech-driven publishers such as MDPI and Frontiers, present in the audience, have been metrics-conscious from their inception. They streamline every step of the peer review process for prompt publication and have automated many of their processes, to the extent that they sometimes encounter resistance from scientists themselves. Voices have expressed concern about strict turnaround times for peer review, for example, which some interpreted as pushing reviewers into rushing their work. Yet workflow streamlining and optimisation are a by-product of the digital transformation of our industry.

Sustainable Development Goals

Once change management has been implemented, there is real hope for applying research to global causes, such as the Sustainable Development Goals (SDGs), which were discussed at length during the conference. Christina Brian, Vice President of Business, Economics, Politics and Law Books at Springer Nature, concluded that SDG content is more likely to be published OA, to be highly cited, and to receive more attention, as measured by altmetrics, than other research content.

Moving forward with OA

Although the audience at the ALPSP Annual Conference 2022 no longer needs to be convinced of the benefits of OA publishing, they have yet to fully adopt transparent criteria to measure their OA readiness and to focus on improving the author experience. They must also open up the scholarly research publishing process further, illuminating excellent OA research for the benefit of the wider knowledge economy with the help of author-centric content marketing materials, such as plain language summaries, infographics and author podcast or video interviews, to spread the latest discoveries far and wide.
 
About the Author

Sabine Louët, Founder and CEO, SciencePOD, purveyors of science content for educational, informational and content marketing purposes.


Friday 30 September 2022

Guest Blog Post - Wiley

Research outputs beyond the PDF: Why they matter, and how to get started

Going beyond the default


While the PDF has become the de facto global standard for publishing articles online, today’s publishing tools offer a whole range of ways to publish content that is more flexible, more engaging, and more user-friendly, and that better addresses current publishing standards. It’s time to broaden our scope beyond the PDF! 

Compared to PDF, both HTML and EPUB3 are better formats for accessibility—not just in the technical sense of image descriptions and ARIA roles, but also in the broader sense of allowing the user to resize and reflow text or zoom in on images. 

But there are also many other ways to publish both the outcomes of research and the materials that lead to those outcomes, and many compelling reasons to use them. The three we’ll focus on here are making research data and protocols available to other researchers around the world; engaging your users with more varied and interesting offerings; and translating research for non-scientists and non-specialists. 

Publishing data sets: Why, what, and how

The why of making researchers’ data available, in keeping with FAIR data principles, is well recognized: when the data behind the research is findable, accessible, interoperable, and reusable, replication studies can be more easily carried out, and testing the replicability of published results is key to advancing science and improving research integrity. 

The how may be more challenging. “Publish researchers’ data alongside the articles or books based on that data” seems perfectly straightforward—until you start thinking about all the things “data set” might mean. Depending on the research and the discipline in question, the data behind a published article might be anything from a vast database of testing data or geolocation coordinates to computer code, from a linguistic corpus to a collection of audio recorded interviews, archival photographs, or tweets. (We won’t get into collections of cells, core samples, or water samples.) While all are data sets and deserve FAIR data treatment, each brings a different set of technical challenges to the publication process.

Using a feature that supports multiple publication formats, like Digital Objects on the Literatum publishing platform, lets publishers offer researchers the option, or even impose the requirement, of making their data available on the same platform and at the same level of discoverability as the publications based on it. A wide variety of data types—essentially, anything that exists in a digital format—can be hosted alongside an article or book, assigned a DataCite or Crossref DOI, and linked bidirectionally with the publication and any other data sets or research products. Depositing a DOI makes data sets easier to find and to cite, benefiting the researchers on both ends of the data-reuse transaction.
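As a rough sketch of what this linking looks like in practice, the DataCite REST API accepts a JSON payload for registering a DOI, including relatedIdentifiers that record the link back to the article (the article’s own metadata records the inverse relation, IsSupplementedBy, completing the bidirectional pair). The DOIs, publisher, and URL below are invented placeholders, and a platform integration such as Literatum’s will differ in its details.

```python
import json

def dataset_doi_payload(doi, article_doi, title, publisher, year, url):
    """Build a DataCite REST API payload that registers a dataset DOI
    and links it to the article it supplements."""
    return {
        "data": {
            "type": "dois",
            "attributes": {
                "doi": doi,  # placeholder DOI under your own prefix
                "titles": [{"title": title}],
                "publisher": publisher,
                "publicationYear": year,
                "types": {"resourceTypeGeneral": "Dataset"},
                "url": url,  # landing page for the dataset
                # The article's metadata holds the inverse relation,
                # IsSupplementedBy, so the link runs both ways.
                "relatedIdentifiers": [{
                    "relatedIdentifier": article_doi,
                    "relatedIdentifierType": "DOI",
                    "relationType": "IsSupplementTo",
                }],
                "event": "publish",  # register and make findable
            },
        }
    }

payload = dataset_doi_payload(
    "10.1234/demo.data.1", "10.1234/demo.article.1",
    "Survey data underlying the demo article", "Example Press",
    2022, "https://example.org/datasets/1")
print(json.dumps(payload, indent=2))
```

In a live integration this payload would be POSTed to the DataCite `/dois` endpoint with the repository’s credentials; the point here is simply that the dataset and the article are first-class, mutually linked records.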

Protocols, notebooks, and more

Just as important for replicability as data sets are the protocols used in collecting and analyzing the data—from survey instruments to lab procedures to focus-group guidelines. Publishing research protocols alongside data and findings further encourages replication studies.

A related use case is that of computational notebooks, which are widely used by researchers in many scientific disciplines to carry out, manage, and share their workflows and data analyses but which most publishers can’t yet accommodate as part of the research output. Wiley and Atypon are part of the Sloan Foundation–funded project Notebooks Now!, led by the American Geophysical Union, aimed at developing a standard model for publishing computational notebooks.

Why is this important? As Shelley Stall et al. write,

Providing notebooks as available and curated research outputs would greatly enhance the transparency and reproducibility of research, integrating into computational workflows. The notebooks allow deeper investigations into studies and display of results because they link data and software together dynamically with what are often final figures and plots. [Read the full Notebooks Now! proposal at https://doi.org/10.5281/zenodo.6981363.]

Publishing computational notebooks is just one way that making researchers’ data findable, accessible, interoperable, and reusable helps elevate both integrity and equity across research and publishing.

Access, accessibility, and knowledge translation

Data availability is critical. But publishing “beyond the PDF” isn’t just for data sets!

When we talk about Open Access, or about public access to publicly funded research, we need to consider much more than whether or not a publication is paywalled. It’s important to ask, “Can a member of the public download and read this article?” But we also need to ask, “Will a non-expert reader understand the key findings of this article?” Researchers generally write for other researchers in their field, and the typical editorial process does a good job of facilitating that expert-to-expert conversation—which simply isn’t designed for readers without that specific expertise.

This is where knowledge translation comes into the picture. What alternatives to expert-to-expert academic writing can we provide, in order to make key research findings—for example, from high-quality and up-to-date studies in public health, epidemiology, and occupational safety—both freely available and genuinely useful for non-experts who would benefit from understanding them?

Plain-language summaries, “explainer” blog posts, and static infographics are a great place to start. Translating these into other widely spoken languages takes us further. But why stop there? Publishing platforms like Literatum allow publishers to host audio, video, podcast, and interactive visual content. So consider “explainer” video or podcast interviews where researchers highlight key findings; consider how an interactive graph can help a non-expert understand demographic changes, the spread of a disease, or how languages change over time; consider how an animated map can illustrate economic, weather, or population data across space and time. 

Finally, we need to consider accessibility. Step one, of course, is to make sure that our websites are WCAG compliant. Step two is ensuring that all text content—whether articles, books, blog posts, news updates, or data files—is machine readable, so it can be interpreted by screen readers, and available in HTML or EPUB format (either natively or via a PDF rendering tool such as Atypon’s eReader), so that it’s friendly to those who need large print, reflowable text, and zoomable images. Step three is to work on making non-text content as accessible as possible: well-written descriptions for all non-decorative images, closed captions for all videos, transcripts for audio content … 

All of these elements, too, can have DOIs deposited, which makes them easier to cite and helps direct readers to the version of record.
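As a concrete illustration of step three above, a publisher's quality-assurance pipeline might flag images that lack alternative text before content goes live. Here is a minimal sketch in Python using only the standard library; the HTML fragment and file names are invented for illustration, not drawn from any real platform:

```python
from html.parser import HTMLParser

class AltTextChecker(HTMLParser):
    """Collect <img> tags that are missing an alt attribute."""
    def __init__(self):
        super().__init__()
        self.missing_alt = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attr_map = dict(attrs)
            # Decorative images may legitimately carry alt="", so we only
            # flag images with no alt attribute at all.
            if "alt" not in attr_map:
                self.missing_alt.append(attr_map.get("src", "(no src)"))

# Example article fragment: one compliant image, one that needs fixing.
html = """
<figure><img src="figure1.png" alt="Line graph of population change"></figure>
<figure><img src="figure2.png"></figure>
"""

checker = AltTextChecker()
checker.feed(html)
print(checker.missing_alt)  # -> ['figure2.png']
```

A real workflow would of course run a check like this across whole articles or books and report results back to production staff, but the principle is the same: catch missing text alternatives before readers encounter them.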

Make time for metadata!

Whatever you’re publishing, in whatever format, metadata remains key. The challenge comes in determining what metadata are necessary and appropriate for new types of content. To maximize discovery of non-PDF content, it’s important to accurately identify in the metadata what it is (data set? interview? photo archive?), what the format is (.csv? .mp4? .jpeg? .zip?), what it’s about (few things are more frustrating than thinking you’ve discovered a good study of chess and then finding it’s a study of cheese), and all the ways it’s connected to other pieces of content. Metadata should also include information about how and where to access the content, who created it, and what users can and can’t do with it. An additional part of the metadata equation is deciding which elements are supplementary to the article, and which are effectively on the same level.
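To make the elements above concrete, here is a hypothetical metadata record for a non-PDF content item, sketched as a Python dictionary. All field names, values, and DOIs are illustrative only; they are not drawn from any formal schema such as Crossref's or DataCite's:

```python
# Hypothetical metadata record for a published data set.
# Field names and values are illustrative, not a formal schema.
dataset_metadata = {
    "content_type": "data set",          # what it is
    "file_format": ".csv",               # how it's packaged
    "title": "Longitudinal survey of regional dialect change",
    "subjects": ["sociolinguistics", "dialectology"],  # what it's about
    "creators": ["A. Researcher", "B. Collaborator"],
    "license": "CC BY 4.0",              # what users can and can't do
    "access_url": "https://example.org/data/dialect-survey",
    "relations": {
        # how it connects to other pieces of content
        "supplement_to": "10.0000/example.article.doi",
        "described_by": "10.0000/example.protocol.doi",
    },
    "doi": "10.0000/example.dataset.doi",  # persistent identifier
}

# A quick completeness check before deposit:
required = {"content_type", "file_format", "title", "license", "doi"}
missing = required - dataset_metadata.keys()
print(sorted(missing))  # -> []
```

Note the "relations" element: whether a data set is "supplement_to" an article or stands at the same level as it is exactly the judgment call described above, and the metadata should record whichever decision you make.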

Finally, depositing a DOI (or other appropriate persistent ID) is important for everything you publish. On a practical level, using and maintaining DOI links makes citation easier, directs readers to the version of record, and ensures that whatever element of their work is cited, authors’ citation stats benefit from the use of their work. On a more symbolic level, depositing DOIs for non-article content signals a commitment to treat these content types as they deserve: as part of the published scholarly record.

So now what?

Your publishing platform provider can tell you what non-PDF content types can be hosted on your site and how to get them there, and we can also help you resolve metadata questions and deposit your DOIs.

The more challenging—and more exciting—part is up to you and your contributors: Deciding what content types best suit your contributors, their research, and your audience, and then making it happen. We’re here to help you all the way!


Author Bio


Sylvia Izzo Hunter joined the marketing team at Atypon as Community Manager in 2021 and has been Marketing Manager at Inera since 2018, responsible for content marketing and social media. Prior to shifting to marketing, she worked in editorial, production, and digital publishing at University of Toronto Press. A past SSP Board member (2015-2018), Sylvia has also served on SSP's Communications, Education, and DEIA committees and is a member of the NISO CREC Working Group.


Guest Blog - Morressier

 Today’s workflows are tomorrow’s headaches: Is it time to change?



Trust in research has never been more important. I’ve lost track of the number of think pieces and surveys, meta-analyses and pessimistic opinion pieces about the critical nature of our time and how that trust is slipping away. 

How is that trust built? It's a complex system with many stakeholders whose competing priorities make it a challenge to find common ground. The media wants certainty and stories, while scientists rely on statistics and replicated results drawn from data sets that few in the general population can interpret. But setting aside how science is shared and communicated, we're interested in the infrastructure of disseminating science. What about the workflows that build research integrity into the process itself, and the technologies that keep peer review and content management moving forward?

Today those processes might be part of the problem, but they can lead the way toward a bigger solution. Each time a headline hits the news about a retraction, or fabricated clinical trials, or ethics violations, we have already missed an opportunity to solve the problem at its source in the publishing workflow. And with each headline of that nature, trust in science erodes.

Here’s my vision for the future of research integrity: 

  1. It all starts with early-stage research. By the time a journal article is submitted for publication, years' worth of research have already gone into its conclusions, and that science is hidden away or ephemeral, surviving only as a conference presentation from two years ago. Imagine having a record of early-stage research, one that was already peer reviewed for a conference, then workshopped and validated by the peers in the room. And, thanks to an integrated infrastructure, the record of those review stages is linked to the submitted manuscript. 
  2. It relies on technology to automate the process so human focus can be reapplied to what matters. Organizations should be able to set up the workflow they need with simple technology solutions, then sit back and watch the infrastructure work for them rather than against them. Peer review is too important to waste time sending out and managing manual messages or prompts. Imagine a redesigned workflow infrastructure that never requires leaving the platform. That's the future: freeing up staff time and resources to use workflow data to build pools of new reviewers and authors, or to forecast trends for the discipline. 
  3. It is user-friendly and flexible. As valuable as a streamlined workflow is for organizations like publishers and societies, it's equally valuable for reviewers and authors. It's well known that peer review takes too long, that these time-intensive contributions are not well recognized or rewarded, and that the process is frustratingly technical, with standards and policies that differ for each journal or conference. Peer review should be something researchers line up to do, because it helps move science forward and validates discoveries that could become tomorrow's innovations. Peer review is the foundation on which trust in science is built. But when the process is cumbersome, hard to justify in terms of career growth, and impossible to keep track of across all its stages, it becomes less appealing. Workflows and infrastructure can solve this problem, leaving the path to trust simpler. 
  4. Our values are enforced at every step of the process. Infrastructure is a balancing act, and we have to keep our values close as we build. If we focus solely on efficiency, for example, we might cut publication times but lose some of the rigour of close review. Workflow technology needs to be designed to uphold research integrity. Only by embedding the values and principles of the research community in the infrastructure built to support it can we find the right balance and succeed.

Publishing workflows cannot exist in a vacuum. They need to talk to each other, sharing data and insights that help human reviewers make informed decisions. We need to stop wasting time on tasks that technology can take over, while retaining the most ethical and integrity-rich practices possible. 

Author: Sami Benchekroun, CEO and co-founder of Morressier




Monday 5 September 2022

Guest Blog - Frontiers Public Trust, Societies and Open Science


Public Trust, Societies and Open Science 

In the context of extreme global events, I find myself turning more and more to the possibilities of a collective response. Scientists have made enormous efforts in recent years for deeper and faster collaboration. And scientific research publication bears a profoundly important social responsibility. On both these fronts, society publishers are in the vanguard.

The context for their efforts is stark. Too frequently, in an often poor-quality, binary public debate, public trust in the veracity of science, in its intentions and its cost, falls away. Political accountability grows weaker when we don’t have the science, the trade-offs, and the difficult choices in view.

At Frontiers, we want to help change that. We are a fully open access publisher. We want all science to be open. To us, global, existential threats call for scientific breakthroughs at pace, based on full and immediate access to the latest research.

Now the move to open access is underway across parts of the publishing industry, and I know the appetite for it is building. But in my view, the pace of change does not match the aspirations I sense in society publishers. As the campaign group cOAlition S itself points out, more than half of the two thousand transformative journals enrolled in the Plan S program have missed their annual targets in the move to full open access. Meanwhile, two-thirds of the world's science remains behind a paywall.

Add to this the arrival of transformative agreements – "read and publish" or hybrid deals – which have in our view sown confusion and opacity, and of course, societies are looking hard for certainty and clarity. With a decision to publish open access – and the commitment to deeper and faster scientific collaboration – I believe they can find both. It is possible to find the right fit. It is possible to meet the appetite for open access while protecting, and growing, sustainable income.

At Frontiers, we offer a platform that is industry standard while also being open to a tailored approach to a society’s specific needs. We can extend the brand, dissemination, and financial future of societies. We support societies with guaranteed minimum incomes, when necessary. We are building partnerships and agreements with funding institutions across the world to broaden opportunities to society authors. And we work hard to be financially transparent with our partners, to share our evidence and expectations of sustainable profit. We believe the traditional subscription model leads to excessive costs. It is still the case that the average price of an article in a legacy journal is significantly higher than it is in open access journals.[1]

So, we need to realign expectations. And with flexibility, ambition, and focus, I think commercial and society publishers have an enormous opportunity to drive change that is both good for business, and good for society. As we face down global challenges, open access science can grow our chances of success. And it can help meet public appetite for accountability, transparency, and trust.



[1] It is not transformation if nothing changes (figure 2), Frontiers, 2022



About the Author

Robyn Mugridge

Robyn joined the Open Access publisher Frontiers in 2018. In 2019 she moved into the role of publishing partnerships manager and established the Publishing Partnerships department. Promoted to head of publishing partnerships in 2022, she now focuses on strategic collaborations with societies and associations, supporting them as they engage with their communities and develop their publications by transitioning to open access publishing models.



Further Information

Twitter

Frontiers Publishing Partnerships @FrontPartners

Frontiers @FrontiersIn

Robyn Mugridge @MugsPubs


LinkedIn

Frontiers https://www.linkedin.com/company/frontiers/

Robyn Mugridge https://www.linkedin.com/in/robyn-mugridge-8a461b86/

Guest Blog from ALPSP Conference sponsor - Silverchair

Why Having Independent Partners Matters

We at Silverchair recently announced that we have received a significant growth investment from our new capital partner, Thompson Street Capital Partners (TSCP), to help us continue to scale our business and offer even more valuable products and services to learned and professional society publishers (i.e. the LPSP of ALPSP).

One of the most crucial and desirable aspects of the investment is that it enables Silverchair to remain an independent, non-conflicted partner for society publishers. The feedback from the society publishers in our community has been resoundingly positive, as the investment is designed to increase the breadth of products and services available to them and to attract additional society publishers into our community. Our ultimate goal is to assemble and support a strong, sustainable community of independent publishers who can leverage Silverchair's services and their peers' knowledge and experience to react and thrive together as industry conditions change.

ITHAKA’s Roger C. Schonfeld recently provided his (independent) analysis of the TSCP investment in Silverchair in SSP’s Scholarly Kitchen blog, under the title “Keeping Publishing Infrastructure Independent,” noting that “Silverchair remains vital infrastructure for some 400 scholarly publishers, which can feel a sense of relief that it remains independent.”

But Why Does the Independence of Your Key Partners Matter? 

As our President Will Schweitzer says (a lot), “Our top priority is to support our customers’ top priorities—in everything we do we must help publishers make money, save money, and best achieve their missions.” 

Maintaining independence allows Silverchair to avoid 1) conflict of interest or 2) conflict of priorities with our independent society customers. To expand on these two types of conflict:

1. Many forms of partner conflict of interest are obvious – such as using platforms or services from a publisher that also publishes journals in your field and thus competes for finite authors, manuscripts, OA dollars, and subscription dollars. It is questionable how these partners can fulfill their legal responsibilities to their shareholders and yet also put society interests ahead of their own in the long run. However, there are legal structures and financial constructs that societies can use to try to identify and control for these obvious conflicts, so they can be seen as at least somewhat manageable.

2. A partner's conflict of priorities is less obvious (and more dangerous). An owner can put its own product development priorities above its customers' needs when determining the forward roadmap, or can cut back partner-facing resources such as account management or client services. It can slow the pace of product development in one area in order to refocus resources on other technology platforms (especially if it is a large organization with a variety of platforms). It can gather and use data about your submissions, authors, and registered readers to further its own author recruitment and sales. It can throttle support services in order to free up staff to pursue these other strategies, which can disrupt operations or delay a society's own product development plans. Crucially, these conflicts of priorities are not easy to name and control for in legal or financial terms, so the society may have little recourse if the conflict worsens mid-relationship. (Worse, these examples are all drawn from real experiences society publishers have shared with us.)

This is why Silverchair puts such emphasis on our independence. We serve no corporate parent's or outside funder's strategies. We run an open roadmap so that all of our customers can watch in real time (and have meaningful input into) the priorities and development of our platform. We succeed in the long run only if our society customers succeed in the long run, and so we are laser-focused on making that happen.

Silverchair believes that thriving, independent society publishers are an essential component of an optimal scholarly publishing future, and the lack (or diminishment) of these publishers would be a huge loss for researchers, professionals, and science writ large in society.

Independence matters – for you and your partners.

Want to learn more about our plans? Jake and other members of the Silverchair team are excited to be attending the ALPSP meeting and would love to set up a time to chat. Get in touch: jakez@silverchair.com.

Jake Zarnegar

Chief Business Development Officer

Silverchair

jakez@silverchair.com

 

Friday 19 August 2022

Spotlight on: Impact Services

This year, the judges have selected a shortlist of seven for the ALPSP Awards for Innovation in Publishing.  Each finalist will be invited to showcase their innovation to industry peers on 14 September on the opening day of the ALPSP 2022 Conference in Manchester. The winners will be announced at the Awards Dinner on Thursday 15 September.

In this series, we learn more about each of the finalists. 

Emerald Publishing logo

Tell us about your organization  

Emerald Publishing is one of the world's leading digital-first publishers, commissioning, curating and showcasing research that can make a real difference. We work with thousands of universities and business schools across the world to share knowledge and provoke the kind of debate that leads to positive change. We are a family-founded business, passionate about people and doing things differently. Going beyond the bounds of a traditional publisher, we want to be a facilitator of impact, encouraging equitable, healthy and sustainable research and publishing for all.  

What is the project/product that you submitted for the Awards?  

Impact Services was created in collaboration with academic impact experts, as well as universities and institutions in the UK and Australasia, with the aim of making ‘impact culture’ a daily reality for researchers and increasing the opportunities for research to make a difference. Alongside Emerald’s partners, we’ve created a service unique to the publishing industry which, instead of focusing on the measurement and evaluation of impact, creates a research environment conducive to producing high-quality research that leads to real-world change. 

Tell us a little about how it works and the team behind it  

Impact Services logo
Impact Services is a subscription, cloud-based service, made up of three parts: 

Impact Planner: A comprehensive planning tool that guides researchers through a process to map impact pathways, engage with society and drive meaningful, measurable change. The planner is underpinned by the principles of Impact Literacy, navigating researchers through the processes of identifying need, articulating impact goals, identifying stakeholders and considering barriers to generate an overall plan.  

Impact Skills: Developed and created with our academic partners, Impact Skills is a set of learning materials to build impact competencies across the research ecosystem. The content reflects academic insights into the skills and framework needed for building impact literacy, and specific content has been commissioned from key members of the research community, augmented by existing, relevant content from our online learning platform sister company, Mindtools. Coupled with and accessible from the Impact Planner, it provides a rounded service in support of impact literacy planning and action.   

Impact Healthcheck: In parallel with the more individually focused Planner and Skills aspects, this section focuses on institutional practices and how to build ‘healthy’ approaches to impact. Our academic collaborators have identified how the pressures to deliver impact within the research sector risk unhealthy and non-inclusive practices, and this Healthcheck provides institutions with a diagnostic tool to understand what they are doing well and what areas they might want to focus on.  

Impact Services is based on the research and experience of Dr Julie Bayley and Dr David Phipps, both experts in the field of research impact. Alongside Emerald and Mindtools (an online learning company that is part of the Emerald Group), building the service involved workshops with researchers and research offices, the commissioning of authors working in the research environment and, finally, a digital development team to build the product. Now, Impact Services is maintained by a dedicated team at Emerald with expertise in UX, sales operations, customer operations, marketing, and publishing. We regularly act on feedback from our customers, and from Dr Bayley and Dr Phipps, to ensure the service meets the needs of the research community. 

In what ways do you think it demonstrates innovation?  

The significance and innovation of Impact Services is threefold.  

Firstly, the suite of tools focuses on the development of healthy, literate practices based on the expert knowledge of those who support impact on a daily basis. Where many tools in the sector focus on measurement or capturing evidence of impact, Impact Services uniquely seeks to equip individuals with the means to build their own literacy and address practices of their work environment to maximise the likelihood of impact.   

Secondly, Impact Services represents a significant step change in the publishing sector, taking an end-to-end approach to impact from the inception of a research idea through to actual societal change, rather than solely focusing on better communication and dissemination of research outputs.   

Thirdly, the principles underpinning Impact Services have been embedded across Emerald’s business as a whole. Emerald updated its Impact Manifesto in 2022, is a signatory of DORA, and has since co-funded additional research to understand challenges around impact literacy in the funding application process. Impact Services will continue to evolve and refresh in line with the sector’s need.  

What are your plans for the future?  

We want to continue to support changes in research evaluation, showcase the stories of new approaches, and work to advocate healthy approaches to research impact.   

Through Impact Services, Emerald is supporting the need for a service that helps researchers plan for impact and enables stronger research outcomes. It helps to demystify impact and provides a structured way for research to lead to real change in the world. Emerald has an active roadmap and ensures new features are prioritised based on customer feedback and demand. We have always solicited regular and comprehensive feedback from customers and prospective users, and we want to reward their time and honesty with the right improvements to the service. For example, we are scoping enhancements to the collaborative functionality of the service.  

About the author

Steve Lodge is Head of Services for Emerald Publishing. He has worked for the business for almost ten years, most recently looking at ways in which it can support research staff in developing their impact literacy: the ability to understand, appraise and make decisions on how research resonates with the outside world.
