Monday, 24 April 2023

How do we overcome research misconduct?

Guest post by Sami Benchekroun, CEO and Co-founder, Morressier

As the volume of research output grows larger each year, new forms of research misconduct appear around the edges. While research misconduct represents a small percentage of overall published works, each instance looms much larger under the magnifying glass of public scrutiny. Without public trust, science has no power, and research misconduct is a disease that threatens the validity of our enterprise as a whole. Today, we have treatments for this disease - like retractions - but these treatments can still leave publishers scarred and damaged in the long term.

What we need is a preventative and proactive solution: something to stop research misconduct before it happens. To create that solution, we have to start by understanding why research misconduct happens, and address it at the industry level, not just within our workflows.

Why does research misconduct happen?

Research misconduct happens when a system is under pressure. There are limited resources and a constant call to do things faster. Researchers face immense pressure to publish in order to advance their careers and build their personal reputations. This pressure leaves them vulnerable to paper mills and predatory journals, and perhaps more likely to cut corners by engaging in misconduct themselves, or simply to make mistakes because there's little time to perfect a paper. Within our publishing and peer review workflows, there are issues of scale and pressure to review more papers, faster. When there's less time to review each paper, there's less time to evaluate and identify mistakes or issues that would make a piece of research unsuitable for publication.

On top of the growing pressures on workflows and workloads, the broader business models of scholarly publishing have shifted, adding pressure from the world of open access (OA). OA revenue models have changed the currency of scholarly publishing from the journal volume to the journal article, from a subscription economy to an article-based one. Publishing a greater volume of content seems to be the way publishers choose to ensure ongoing revenue streams.

Add to this overstressed system emerging forms of misconduct. In the last year, we've seen a huge volume of opinions and perspectives on the role of AI-generated content. Is it a tool or a replacement? The role of AI in publishing is being decided now, but as is often the case with technology, use is outpacing regulation.

So what now? 

Today, we treat research misconduct reactively. We retract individual articles - a critical show of transparency that underpins the overall trustworthiness of science, but one that is widely misunderstood by the public.

So what would a more proactive approach really look like? While research integrity can be guarded with improvements to the editorial and peer review process, we need a broader industry effort that addresses some of the underlying pressures on key stakeholders. There is a path to relieving the pressure to publish if we can evolve the criteria for tenure, career progression, and evaluation. There are strategies to make the peer review process more streamlined, so it is less time-consuming and easier to engage with, but also more transparent, so reviewers get the recognition they deserve. A more streamlined editorial process will also support publishers' need to publish more research.

Technology's role is twofold. First, with streamlined processes and enhanced integrity checks that can review manuscripts at scale, the process becomes faster and easier. Second, technology can support the improvement of papers from researchers for whom English is not a first language, broadening our published output to include more contributions from the Global South and other non-English-speaking countries. Here is a solution that would democratize the world of scholarly publishing: helping publishers increase their output while improving equity in the research community.

Risks and priorities for research integrity

We have to balance the need for research integrity with the need to publish research more efficiently. At first glance it might seem as though curing research misconduct has the potential to slow science down, adding more layers of checks, more rigorous reviews, and complicated institutional changes that take time to fully adopt. What's more, prioritizing research integrity is an investment: it's expensive, and it can take a long time to see real progress with point solutions, or with solutions not integrated throughout the publishing infrastructure.

But what do we, as an industry, risk creating if we do not pursue broad, scalable changes to our research integrity infrastructure? An ecosystem that struggles to scale, one that becomes more crowded and loses its focus on quality. Without research integrity interventions, whether they are embedded in our technology or addressed in the transformation of our peer review workflows and institutional pressures, we lose trust. Public trust in science is already at risk.

This is also a critical time for machine learning. If we feed our AI tools fraudulent data, or anything other than the highest quality science, we risk embedding biases in the machine learning process that will become increasingly hard to correct. If we start exploring AI in the scientific review process or publishing program, it has to be with research integrity at the very centre of our focus. We're approaching an inflection point with AI. If developed properly, it can make our systems, like peer review, more accurate, transparent, and trustworthy. But how, when the current system is imperfect and under strain?

We're not the only industry asking this question: recently, leaders across the tech industry, including Steve Wozniak and Elon Musk, signed an open letter calling for a pause on giant AI experiments. In this letter they call for developing AI systems only once we are confident of their effects, and for independent review before new systems are trained. What they propose is effectively peer review for approval to launch new general AIs. For scholarly publishing, misinformation is a massive risk. We risk losing control over information, and even the ability to validate information, if we do not take care about the inputs for machine learning or how we allow AI-generated research to fill our systems. Addressing this challenge will take collaboration not just within our industry but with the experts leading the technology revolution as well.

To close, the solution to research misconduct requires more than changes to our publishing workflows. It requires this industry to look at the bigger picture, the longer term, and start building for the future today. We need to build the technology for all stakeholders in the research integrity ecosystem, from researchers to publishers to readers.  

About the Author

Sami Benchekroun is the Co-Founder and CEO of Morressier, the home of workflows to transform how science is discovered, disseminated and analyzed. He drives Morressier's vision forward and is dedicated to accelerating scientific breakthroughs by making the entire scholarly process, from the first spark on, much more transparent. Sami has over ten years of experience in academic conferences, scholarly communications, and entrepreneurship, and has a background studying management at ESCP Europe. Sami Benchekroun is also a Non-Executive Director of ALPSP.

Morressier is a member of ALPSP. Find out more: https://www.morressier.com/

Friday, 27 January 2023

Mastodon versus Twitter - is this the research community's new digital town square?

Guest post by Lou Peck, The International Bunch.

When Elon Musk bought Twitter "to help humanity", did he envisage a mass migration to other communication channels? Does he even care? It's not like Twitter was not-for-profit before Elon Musk bought it, so why the sudden migration? Maybe it was the final push to kick people into gear and look at what alternatives are out there. Elon refers to Twitter as the 'digital town square', but are the new kids on the block actually the new place to be?

We have been monitoring Twitter in the research ecosystem, and clients have been reaching out, asking us to research and help them learn what the word on the street is. So we thought we would compile some recent research to help you to decide whether you need to include Mastodon or any other social channel in your strategy, as well as Twitter.

What even is Mastodon?

Mastodon takes its name from a large elephant-like mammal of the Mammutidae family that lived from the Miocene epoch until the end of the Pleistocene, around 10,000-11,000 years ago. Living in herds, mastodons roamed North and Central America and were predominantly forest dwelling.

By Dantheman9758 at the English Wikipedia, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=4289640


Mastodon is a non-profit software project from Germany, founded in 2016 by Eugen Rochko, that offers open source microblogging. The core team currently has four members - Eugen, Claire, Felix Hlatky and Dopartwo - and the network has 1.8m monthly active users (up 385% as of 14 November 2022). Mastodon is funded by sponsors, with three packages available to support it, the lowest of which has already sold out. You may notice the logo in blue, but this is being phased out as part of a rebrand to purple. I haven't yet been able to find out why they named their project after an extinct mammal, especially one whose name (coined by French naturalist Georges Cuvier in the early 19th century) translates to 'nipple tooth' because of its distinctively shaped teeth!

Why Mastodon?

The four key positioning statements for Mastodon centre on being decentralized, open source, not for sale and interoperable. Free of a commercial strategy with algorithms showing you what they think you want to digest, and of ads that 'waste your time', Mastodon shows you posts from anyone you follow in chronological order, to 'make your corner of the internet a little more like you'. Choose the community server (AKA instance) that best fits you as an individual (or host your own) and join Mastodon.

Business Insider reports 657% growth in downloads for Mastodon over 12 days (1-12 November 2022).

Whilst authors are using it to share and talk about their work, interestingly, a number of preprint bots have already been set up to follow.

A researcher's perspective

David Brückner has a really interesting thread on Twitter from 30 October 2022, in which he weighs the pros and cons of using Mastodon over Twitter and offers practical tips for #ScienceTwitter users transitioning from Twitter to Mastodon:

  • open-source 
  • decentralized: run on a federation of servers 
  • no ads! 
  • no attention-algorithm: the timeline is chronological! 
  • 500 instead of 140 characters in each "toot" 
  • key features that we need conserved: mentions, hashtags, retweets, likes
David mentions a 500-character limit, but that is only the default. In fact, it can be reconfigured: QOTO.ORG, the STEM-based instance, allows around 65,000 characters. David also mentions in his cons that you can't post videos - but you can post all types of content, so it might be that with such an influx of users some functionality was temporarily reduced, or that the server David uses is restricted.

On 30 October 2022, David asked his Twitter community what they thought through a Twitter poll about Mastodon. Nearly 18% said they would stay on Twitter, over a quarter said they would try it, and a fifth said they would use both. At present, many publishers, libraries and intermediaries are keeping an eye on Mastodon, and if active usage and sign-ups continue, Mastodon should be considered as part of your strategy. We just need to be mindful of its longevity and future-proofing.

David has another well-engaged thread on Twitter listing his top five tips for transitioning over to Mastodon:
  1. Find your friends on #Mastodon
  2. Sync your Twitter and #Mastodon accounts
  3. Choose servers
  4. Follow preprint bots
  5. Mobile apps

How long will, as David puts it, 'the science party' remain on Mastodon? Users seem divided. I found a server I liked the sound of and have created my own profile. I'm finding it a little clunky to navigate, but I'm sure once I get used to it, I'll be more active. I've always found Twitter too noisy; this seems like a much better alternative, where I can actually see the posts of the people I'm interested in following.

Like Scholar Social member Cameron Neylon, I wonder where these social channels sit with altmetrics providers - are they included in their algorithms? You can certainly see preprints and published content being shared with a researcher's community on Mastodon. How is this being translated into impact when the service is not yet set up? Discover the GitHub thread.

With hashtags like #GoodbyeTwitter and #TwitterMigration, is Twitter doomed? Probably not. But what about Twitter Blue ($8 a month) as a revenue stream for Twitter? How will this change the quality of content there? Being payment verified means you will be prioritized, always appearing at the top of comments and searches, while bots and trolls get pushed further down the feeds. Will this help? Time will tell.

Casey Fiesler, an information researcher at the University of Colorado, Boulder, who studies the migration of online communities, has commented to Science.

Mark McCaughrean, an astronomer at the European Space Agency, has also commented to Science.

There is absolutely a shift to Mastodon, but let's take a quick look at what else is out there.

What are the alternatives to Twitter?

  1. Mastodon 
  2. Bluesky (from the cofounder of Twitter, founded in 2019)
  3. Tumblr (recent sign ups include Deadpool's Ryan Reynolds and Wonder Woman's Lynda Carter)
  4. Tribel
  5. cohost 
  6. Log off - literally as it says on the tin - just sign out and log off!
There are many more we could mention, but others worth noting include Reddit - a favourite of mine for researching trends and discussions amongst the research community - and WeChat, for connecting with Chinese researchers. What does this mean for sector-specific channels? It will be interesting to see how Academia.edu, Researcher App, and ResearchGate develop and diversify. And what about ORCID - does this give them a new avenue to explore, and even help with the issue of duplicate accounts?

To include Mastodon in your strategy, or not to include?

Whilst we are in the early stages of trying out new social channels and working out what is right for us, Mastodon is worth keeping an eye on and including in your current strategy. Maybe we will see publishers and intermediaries sponsor services like Mastodon as part of their positioning strategy in an open environment. If anything, I hope these social channels recognize the importance of this industry, and that GitHub tickets around DOIs move up the priority list and are implemented.

Whether I'll be writing an article in a year's time - the demise of Twitter and the rise of Mastodon in the research ecosystem - who knows! Maybe someone will create a server on Mastodon (please....) for the academic publishing community so we ourselves have a place where we can chat and be friends...


The International Bunch is a member of ALPSP. 

Friday, 18 November 2022

The Evolution and Future of Peer Review

In this guest post, Michael Casp and Anna Jester look back at the ALPSP 2022 Annual Conference and Awards. 


Peer review in the Digital Age relies heavily on email communications and web-based peer review platforms to contact reviewers and organize recommendations. In many ways, peer review today is just a recreation of the pre-internet mail, fax, and file cabinet template, but pasted online.  With the current advancements in preprints, social media, and communication platforms, it is possible – even likely – that the model of peer reviewing, and the technology that supports it, have more evolutions to come.

As communication begins to move beyond traditional text formats, so does content. Getting “beyond the PDF” has been a staple at scholarly publishing conferences for years, and as it becomes more commonplace, we are navigating its demands on the peer review process.

But it's not just about technology. We must also focus on developing and empowering the next generation of reviewers in order to maintain a robust and sustainable reviewer pool. This happens through teaching and developing early-career academics, which can offer a technology-versed, diverse pool of reviewers who can continue to utilize the technologies we develop.

At the ALPSP 2022 Conference we were treated to many innovative ideas that will have a direct impact on peer review, potentially changing and hopefully improving it for researchers and publishers, and ultimately providing better value for society.

Beyond Email

Peer review's reliance on email is a given. That is, unless you're in China, where the app WeChat has in many ways supplanted email as the default communication system. Western publishers that seek to engage with Chinese researchers might struggle if they only use email. However, Charlesworth presented the ALPSP audience with a possible solution: their new product, Gateway. Gateway uses an API to allow journal-related communications to be sent to authors, reviewers, and editors via WeChat. This solution allows journals to meet Chinese researchers where they are, rather than trying to pull them into an (let's face it, antiquated) email system.

In another shift beyond email, eLife presented the ALPSP audience with Sciety, a new preprint evaluation platform that allows academics to review, curate, and share preprints and reviews with their peers. Preprint servers have started to become social hubs where researchers connect in more of a real-time environment than traditional publishing tools allow. This system holds the promise of opening up peer review and publishing to a wider user base, allowing more people to curate and review research than ever before. The challenge presented by the scale of preprints is immense, and Sciety has the potential to reorganize how we deal with all this research in a social-focused way.

One more data point to mention: with the pervasive use of social media throughout the world, it is no surprise that academics would have their own version. Despite controversies, ResearchGate has maintained its position as the largest academic social network, connecting about 17 million users. With this many scholars connected, it’s possible we could see something like peer review networks emerge, though that doesn’t seem to be ResearchGate’s focus at the moment.

Beyond the PDF

A decade or so ago, getting “beyond the PDF” was still a new idea being speculated about at conferences. It is now a reality, with authors providing data sets, code, and detailed methods notebooks alongside their traditional manuscripts. As a partner to those authors, we’ve come up with ways to publish this content, but it can present special problems during peer review.

For starters, journals that employ an anonymized review model can find it quite difficult to remove identifying information from complex content like code or reproducible methods. Sometimes an author's institution might be inextricably linked to this content, making anonymization impossible – or at least impractical.

Other forms of content, like ultra-high-resolution images, can present logistical problems. Industry standard peer review management systems have hard limits on the size and format of files they can manage. For example, fields like pathology can rely on extremely detailed images of microscope slides, and these multi-gigabyte files are hard to move from author, to editor, to reviewer. Paleontology research can also require larger-than-usual images, as sharing highly detailed photos of artifacts is crucial to the field. Dealing with these kinds of challenges at the peer review stage can require a lot of creativity and patience for all involved without a more flexible solution.

Massive datasets can also present review challenges. Beyond the logistics of moving large files, there are often more basic concerns: is this data organized and labeled in a useful (and reusable) way? Is it actually possible to do a real review of a large dataset in the time reviewers have to give to a paper? FAIR data principles are targeted at answering some of these questions, and services like Dryad and Figshare seek to help by curating and quality-controlling datasets, ensuring they meet basic standards for organization, labeling, and metadata. But these services come with an additional cost that not everyone can bear. And a data review still depends on a reviewer willing to go the extra mile to actually do it.

Moving peer review beyond the PDF is still a work in progress, but many of these are solvable problems as our technology and internet infrastructure improve. Our J&J Editorial staff regularly handle content like videos, podcasts, and datasets. At eJournalPress, our platform is integrated with third parties including Dryad, Code Ocean, and Vimeo. These integrations are an added convenience, though most journals and societies need direct agreements with the third parties for the integrations to be fully utilized. But we often have to work around the peer review system, rather than with it, relying on basic cloud file servers (e.g., Dropbox or OneDrive) instead of more purpose-built technology.

Open/Pre-submission Review

Another decade-old conference trope was the constant talk about new open peer review models. You might recall that people were split on the wisdom of this approach, but the rise of preprints has done a lot to push open peer review and pre-submission review into the limelight.

Organizations like Review Commons are working with eJournalPress to make pre-submission peer review a viable choice for authors by building review transfer relationships with more traditional journals. The Review Commons model is to take preprint submissions and have them peer reviewed. These reviews can then be shared with participating journals if authors choose to submit. Journal editors can use the existing Review Commons reviews to evaluate whether or not to publish the work. According to data presented at ALPSP 2022, manuscripts that came into journals with reviews rarely needed to be sent out for further review. This has many benefits, saving the editor time in soliciting reviews and giving a journal's (probably over-taxed) reviewer pool a little break.

Review Commons is currently free for authors, being supported by a philanthropic grant. It will be fascinating to see if they are able to pivot towards a sustainable financial model in the future.

We won’t exhaust you with the long list of other open peer review initiatives, but suffice it to say, a lot of smart people are working hard on making this a standard part of research communication.

Developing the Next Generation of Reviewers

None of what we’ve written so far will matter one iota if there aren’t enough people in place to do the actual content reviews. One of the interesting revelations we had while managing journal peer review was the incredible range that exists in review quality. From the in-depth, multi-page discussions of every point and nuance of an author’s manuscript, to the dreaded “Looks good!” review, anyone in peer review can tell you that we can (and must!) do a better job training our reviewers. We can offer guideline documents and example reviews, but some people need a more engaging approach to understand and deliver what editors expect.

It would be lovely if reviewing were a required part of the science curriculum. It currently seems to happen in a piecemeal, ad hoc fashion, often driven by people who are willing to just figure it out themselves. A more standardized approach is called for, especially as reviewable content becomes more complex and varied.

One of the best examples we've seen of reviewer training was actually a writers' workshop for researchers wishing to submit to a medical journal. The editor-in-chief (EIC) of the journal led the workshop, asking authors to submit their manuscripts ahead of time to serve as examples. During the workshop, the EIC talked through several of these manuscripts, giving the authors invaluable feedback and what amounted to a free round of review prior to the official journal submission.

Though this workshop was ostensibly for authors, it was equally valuable for reviewers. Participants got to watch the EIC go through a paper in real time, ask questions, pose solutions, and talk through the subject matter with someone who had written hundreds and reviewed thousands of manuscripts. This program has always stuck out as a great way to train authors and reviewers, while also building the relationship between the journal and its community. Win-win-win!

Formal peer review training benefits all parties; the Genetics Society of America's program, for example, also includes direct feedback from journal editors. If you're thinking of implementing something like this, your organization may wish to conduct a pilot prior to a full rollout. Another great model for peer review training is to pair mentors and mentees, simultaneously providing training and increasing the number of well-trained peer reviewers in the field broadly. If your team is willing to study the results of your reviewer training efforts, be sure to submit them to the next International Congress on Peer Review and Scientific Publication so that we can all benefit from your findings.

Demographics and Diversity

Many of the journals and publishers we work with are prioritizing diversity within their community by making efforts to extend their reach to people who might historically have been left out of the conversation. These organizations are also looking inward to see what their current level of diversity looks like in order to improve it.

Many organizations have begun collecting demographic information regarding their authors, reviewers, and editors. We recommend a thoughtful approach when embarking on this project, as it can be fraught with pitfalls and unexpected consequences if you don’t get it right. Before your organization embarks on this endeavor, please consider best practices regarding data collection and clearly define the initiative’s goals. Wondering where to start? Do a landscape scan of what other organizations aim to do with this data and please use standardized questions for self-reported diversity data collection.

Fortunately, many people are working on demographics data initiatives, and plenty of support and ideas are available from our community.

Summary

To put it mildly, there is a lot going on right now. The technology we use has the potential to upend the traditional research communication process, and in some cases (like preprints) it already has. With a host of new data, content, and equity concerns, people involved in the peer review process have more to deal with than ever before. And it's unlikely that we're doing enough to equip them with the knowledge and training they need to succeed. But we can do better, and we're heartened to see the many people in and around our industry who are trying to improve the situation. From our end, eJournalPress is supporting societies and journals as they work to collect and evaluate demographic information and metadata, and J&J Editorial staff are always investigating ways to support journal innovations through a combination of technology and experienced staff.

We often think about peer review in the context of that old Churchill quote about democracy: “It has been said that democracy is the worst form of government, except all of those other forms that have been tried from time to time.” Peer review might not be the best method of scientific evaluation, but it's the best we've got, and who knows, maybe we'll make something even better. But until then, we've got work to do.


Anna Jester, VP of Marketing and Sales, eJournalPress, Wiley Partner Solutions



Michael Casp, Director of Business Development, J&J Editorial, Wiley Partner Solutions

Wiley Partner Solutions was a gold sponsor of the ALPSP Conference and Awards held in Manchester, UK, in September 2022. The 2023 ALPSP Conference will be held in Manchester from 13-15 September 2023.


Wiley is one of the world’s largest publishers and a global leader in scientific research and career-connected education. Founded in 1807, Wiley enables discovery, powers education, and shapes workforces. Through its industry-leading content, digital platforms, and knowledge networks, the company delivers on its timeless mission to unlock human potential. Visit us at Wiley.com. Follow us on Facebook, Twitter, LinkedIn and Instagram.

Thursday, 10 November 2022

Guest Post: Centering reproducibility and transparency in health science research

By Grainne McNamara, Research Integrity/Publication Ethics Manager at Karger, Silver sponsor of the ALPSP Conference and Awards 2022 

At Karger Publishers, we have over 130 years of experience connecting healthcare practitioners, researchers and patients with the latest research and emerging best practice, covering the whole cycle of knowledge. As a publisher in the health sciences with a broad audience, we have always centered the needs of our readers by tailoring our content to them. Complementing our long history of publishing journals and books for clinicians, researchers, and patients, in 2021 we launched the online blog The Waiting Room. In 2022 the Waiting Room Podcast launched, bringing breaking research to patients, caregivers, and the general public in easy-to-understand, jargon-free formats. Also this year, we made it possible for authors to submit plain language summaries in our journal Skin Appendage Disorders with their articles. These provide interested readers with easy-to-understand descriptions of the latest research in this field of dermatology. In all these developments, we are acutely aware of the great responsibility that comes with communicating health science research to a general audience.

At Karger Publishers, we employ a rigorous peer review process and are transparent about the evaluation criteria for articles that we publish. However, with increasing digitalization come big challenges, as information can be disseminated, but also misinterpreted, at speed. With these challenges come opportunities to do better, and we asked ourselves: how can we do more for our community?

At Karger, we are open for Open. We have embraced the Open Science movement – in thought and action. Since 2021, all our research articles have been published with a data availability statement directing readers to the location of the dataset(s) underlying the findings, and we provide thorough guidance to authors on how and why to share their data. However, we see data availability statements as just the beginning.

Well-established reporting guidelines exist for almost every study type and provide a structure for authors, reviewers, and readers to understand what is, and should be, reported in an article. As such, adherence to community-standard reporting guidelines, such as CONSORT or PRISMA, has been the expectation for all our journals for many years. As a health sciences publisher, we know that case reports are an early-stage but crucial part of evidence-based medical decision making. That is why we have eight specialty Gold Open Access journals dedicated exclusively to communicating case reports. For case reports to be as influential and effective as possible, clear and transparent reporting is, again, crucial. That is why, as of September 2022, completion of the CARE checklist is a requirement for all submissions to these journals. We believe that by improving the consistency and transparency of case reporting, we underscore the importance of transparency in health sciences research.

We present breaking research findings every day to our growing community and take great care with the trust that readers place in us as a publisher of health sciences. We recognize that the ultimate guarantee of the reliability of a finding is its reproducibility - that is, the ability to find the same result again and again. We also know that a lack of transparency and methodological detail is a significant barrier to the reproducibility of a result, and that adherence to reporting guidelines can improve the clarity of an article. That is why we recently expanded our guidance to authors on the use of reporting guidelines and endorsed the Materials Design Analysis Reporting (MDAR) Framework as part of every journal's reproducibility policy. We believe that by centering the reproducibility of methodology and results in our publications, we are progressing the conversation around reproducible-by-default health sciences research.

We could not take these steps towards reproducibility-by-default without the support of our community of outstanding editors and reviewers. We aim to recognize and support our community of experts in a variety of ways, including providing training for the next generation of peer reviewers as well as interactive discussion webinars on the latest topics in peer review and reproducibility. We also benefit from cross-publisher organizations, such as COPE and ALPSP, that facilitate conversations on best practices in reproducibility.

At Karger, as we expand our portfolio of research communication, we grow, in step, our focus on reliability and trust in those findings. By empowering researchers, institutions, funders, and policy makers to maximize impact in health sciences, we are taking strides toward our goal to help shape a smarter, more equitable future.


Karger was a silver sponsor of the ALPSP Conference and Awards held in Manchester, UK, in September 2022. The 2023 ALPSP Conference will be held 13-15 September 2023.

Tuesday, 18 October 2022

Guest Blog - SciencePOD


OA implementation lags behind rhetoric
As ALPSP celebrates its 50th anniversary, poor change management hampers OA adoption


The Association of Learned and Professional Society Publishers (ALPSP) celebrated its 50th anniversary during the 2022 Annual Conference. The event, held at the Hotel Mercure Piccadilly in Manchester, will be remembered for its glittering chandeliers. The rhetoric around digital change sparkled just as brightly, but was there substance beyond the shine?

At the time of the conference, most scholarly publishers and learned societies had already pledged to implement digital transformation, shifting toward greater Open Access (OA). Few of the discussions at the conference, however, focused on how, in practical terms, they would manage change along the way. Embedding digital-first processes requires strong leadership to overcome barriers, coupled with widespread transparency around the OA-readiness of each scholarly publisher.



Open Access

Over three days, the scholarly audience attended a series of discussions on the benefits of OA, but these were largely preaching to the converted. The Open Pharma forum, one of the satellite events, demonstrated the value of OA clinical studies, which are well suited for conversion into plain language summaries. These enable pharma companies to communicate the latest research findings outside scholarly circles, mainly to doctors and patients—a requirement imposed by the European Medicines Agency (EMA), and one that science content creation solutions, such as those proposed by SciencePOD, routinely deliver.
During the opening keynote, Peter Cunliffe-Jones (University of Westminster) discussed the role of OA in reducing misinformation for policy-makers, media professionals and fact-checkers, as well as the wider scientific community, while discouraging predatory journals.
Further discussions focused on the need to expand the OA business model, following the recent announcement by the US Office of Science and Technology Policy (OSTP) making OA mandatory for publications derived from publicly funded research. The move offered further validation of the OA model, and offers further opportunities for publishers serving Stateside learned societies or university presses. At the time of writing, 45% of publications by volume are already published under OA, according to 2021 Delta Think data.

Change Management

Achieving the cultural agility and processes necessary for meaningful digital change is the biggest challenge faced by our industry. A dedicated session, “A look back at the evolution of publishing focusing on the changes in industry in the past 50 years”, looked at progress so far.

Delivering a smoother OA experience for authors is an issue of change management. Large organisations often struggle to adopt change in the face of inertia, political undercurrents and the ebb and flow of leadership directions. Smaller organisations like societies are more agile, but often resist change due to the associated costs.




Stimulating Innovation

All that said, our industry has proven it is capable of change. The ALPSP Innovation Award nominees demonstrated the innovative initiatives the scholarly industry is piloting, particularly among small- and mid-size organisations. One of the co-winners of the 2022 ALPSP Award for Innovation in Publishing, Charlesworth's Gateway, is using WeChat social media communication technology to enhance communication between Chinese authors and publishers.

Others are focusing on solving data-sharing issues. GigaByte journal, the other 2022 Award co-winner, caters for rapidly changing fields by publishing updatable datasets and software tools of value to more specialist communities.

Leadership toward company-wide transformation

Despite these promising initiatives, digital transformation must be company-wide, touching every aspect of the scholarly product lifecycle. We need to understand the internal change management process required to move toward a more author-centric offering built on digital technologies.

There was no shortage of expert consultants with change-management knowledge at the conference. Consultants are typically brought in to propose new processes for a more effective digital approach. However, this can cause internal friction when members of staff have already identified the specific, detailed changes required, outside standard change management methods.

Trust
 
In times of change, trust between publishers and their staff is paramount, as is leadership that fosters an agile, adaptable culture open to digital change based on bottom-up suggestions. Transparency is key. During the session entitled “Transparency in OA: Moving out of the Black Box?”, speakers such as Julian Wilson, Head of Sales and Marketing at IOP Publishing, pledged to compile the appropriate data for scrutiny.

This data is difficult to assemble, not because publishers are holding back, but because of poor overall collection of OA-readiness data for display. So, when the US OSTP issued its new policy for publicly funded research to be made available in OA, US customers began asking publishers to identify which parts of research were publicly funded.

Transparency

The difficulties for publishers in providing Plan S-mandated OA-readiness data show that our industry lacks established standards (time to first acceptance, number of reviewers, peer review length, etc.). Publishers need to create norms across the entire industry, allowing authors to make their own comparisons between OA outlets.

Introducing such new metrics would come under change management methodology. Some publishers, like Hindawi, are already making Plan S-requested metrics public, according to Chair Catriona MacCallum. PLOS, represented by its Director of Open Research Solutions, Iain Hrynaszkiewicz, announced at the conference an extended partnership with DataSeer to measure and publish multiple ‘Open Science Indicators’ across the entire PLOS corpus, going back to 2019; this would be ongoing for newly published content, and would include rates of data-sharing in repositories, code-sharing and preprint-posting, in addition to future plans for protocol-sharing.

Tech-driven publishers, such as MDPI and Frontiers, present in the audience, have been metrics-conscious from their inception. They streamline every step of the peer-review process for prompt publication and have automated many of their processes, to the extent that they sometimes encounter resistance from scientists themselves. Voices have expressed concern about strict turnaround times for peer review, for example, which were interpreted by some as pushing peer reviewers into rushing their work. Yet workflow streamlining and optimisation are a by-product of the digital transformation of our industry.

Sustainable Development Goals

Once change management has been implemented, there is real hope for applying research to global causes, such as the Sustainable Development Goals (SDGs), which were discussed at length during the conference.

Christina Brian, Vice President Business, Economics, Politics, Law Books at Springer Nature, concluded that SDG content is more likely to be published under OA, to be highly cited, and to receive more attention—as measured by altmetrics—than other research content.

Moving forward with OA

Although the audience at the ALPSP Annual Conference 2022 no longer needs to be convinced of the benefits of OA publishing, they have yet to fully adopt transparent criteria to measure their OA readiness and to focus on improving the author experience. They must also open the scholarly research publishing process further, illuminating excellent OA research for the benefit of the wider knowledge economy, with the help of author-centric content marketing materials - plain language summaries, infographics, and author podcast or video interviews - to spread the latest discoveries far and wide.
 
About the Author

Sabine Louët, Founder and CEO, SciencePOD, purveyors of science content for educational, informational and content marketing purposes.


Friday, 30 September 2022

Guest Blog Post - Wiley

Research outputs beyond the PDF: Why they matter, and how to get started

Going beyond the default


While the PDF has become the de facto global standard for publishing articles online, the publishing tools of today offer a whole range of ways to publish content that is more flexible, more engaging, and more user-friendly, and that better addresses current publishing standards. It's time to broaden our scope beyond the PDF!

Compared to PDF, both HTML and EPUB3 are better formats for accessibility—not just in the technical sense of image descriptions and ARIA roles, but also in the broader sense of allowing the user to resize and reflow text or zoom in on images. 

But there are also many other ways to publish both the outcomes of research and the materials that lead to those outcomes, and many compelling reasons to use them. The three we’ll focus on here are making research data and protocols available to other researchers around the world; engaging your users with more varied and interesting offerings; and translating research for non-scientists and non-specialists. 

Publishing data sets: Why, what, and how

The why of making researchers’ data available, in keeping with FAIR data principles, is well recognized: when the data behind the research is findable, accessible, interoperable, and reusable, replication studies can be more easily carried out, and testing the replicability of published results is key to advancing science and improving research integrity. 

The how may be more challenging. “Publish researchers’ data alongside the articles or books based on that data” seems perfectly straightforward—until you start thinking about all the things “data set” might mean. Depending on the research and the discipline in question, the data behind a published article might be anything from a vast database of testing data or geolocation coordinates to computer code, from a linguistic corpus to a collection of audio recorded interviews, archival photographs, or tweets. (We won’t get into collections of cells, core samples, or water samples.) While all are data sets and deserve FAIR data treatment, each brings a different set of technical challenges to the publication process.

Using a feature that supports multiple publication formats, like Digital Objects on the Literatum publishing platform, allows publishers to offer—or even require—researchers to make their data available on the same platform and at the same level of discoverability as the publications based on it. A wide variety of data types—essentially, anything that exists in a digital format—can be hosted alongside an article or book, assigned a DataCite or Crossref DOI, and linked bidirectionally with the publication and any other data sets or research products. Depositing a DOI makes data sets easier to find and to cite, benefiting the researchers on both ends of the data-reuse transaction.
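To make that bidirectional linking concrete, here is a minimal sketch of registering a dataset DOI through DataCite's REST API, with a related-identifier link pointing back at the article. The credentials, DOI strings, titles, and URLs are placeholders, and a platform like Literatum would handle this through its own integration; the sketch only illustrates the shape of the metadata involved.

```python
import requests

# Placeholder repository credentials and endpoint -- not real values.
DATACITE_API = "https://api.datacite.org/dois"
REPO_ID, REPO_PASSWORD = "MEMBER.REPO", "********"

payload = {
    "data": {
        "type": "dois",
        "attributes": {
            "doi": "10.12345/example-dataset",         # hypothetical DOI
            "event": "publish",                        # register and make findable
            "url": "https://example.org/datasets/42",  # landing page for the data set
            "creators": [{"name": "Researcher, A."}],
            "titles": [{"title": "Geolocation data underlying the example article"}],
            "publisher": "Example Society Press",
            "publicationYear": 2022,
            "types": {"resourceTypeGeneral": "Dataset"},
            # The link that ties the data set to the article it supports:
            "relatedIdentifiers": [{
                "relatedIdentifier": "10.12345/example-article",
                "relatedIdentifierType": "DOI",
                "relationType": "IsSupplementTo",
            }],
        },
    }
}

response = requests.post(
    DATACITE_API,
    json=payload,
    auth=(REPO_ID, REPO_PASSWORD),
    headers={"Content-Type": "application/vnd.api+json"},
)
response.raise_for_status()
print("Registered:", response.json()["data"]["id"])
```

The reverse link, from article to data set, would be deposited in the article's own Crossref or DataCite record, completing the bidirectional pairing.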

Protocols, notebooks, and more

Just as important for replicability as data sets are the protocols used in collecting and analyzing the data—from survey instruments to lab procedures to focus-group guidelines. Publishing research protocols alongside data and findings further encourages replication studies.

A related use case is that of computational notebooks, which are widely used by researchers in many scientific disciplines to carry out, manage, and share their workflows and data analyses but which most publishers can’t yet accommodate as part of the research output. Wiley and Atypon are part of the Sloan Foundation–funded project Notebooks Now!, led by the American Geophysical Union, aimed at developing a standard model for publishing computational notebooks.

Why is this important? As Shelley Stall et al. write,

Providing notebooks as available and curated research outputs would greatly enhance the transparency and reproducibility of research, integrating into computational workflows. The notebooks allow deeper investigations into studies and display of results because they link data and software together dynamically with what are often final figures and plots. [Read the full Notebooks Now! proposal at https://doi.org/10.5281/zenodo.6981363.]

Publishing computational notebooks is just one way that making researchers’ data findable, accessible, interoperable, and reusable helps elevate both integrity and equity across research and publishing.

Access, accessibility, and knowledge translation

Data availability is critical. But publishing “beyond the PDF” isn’t just for data sets!

When we talk about Open Access, or about public access to publicly funded research, we need to consider much more than whether or not a publication is paywalled. It’s important to ask, “Can a member of the public download and read this article?” But we also need to ask, “Will a non-expert reader understand the key findings of this article?” Researchers generally write for other researchers in their field, and the typical editorial process does a good job of facilitating that expert-to-expert conversation—which simply isn’t designed for readers without that specific expertise.

This is where knowledge translation comes into the picture. What alternatives to expert-to-expert academic writing can we provide, in order to make key research findings—for example, from high-quality and up-to-date studies in public health, epidemiology, and occupational safety—both freely available and genuinely useful for non-experts who would benefit from understanding them?

Plain-language summaries, “explainer” blog posts, and static infographics are a great place to start. Translating these into other widely spoken languages takes us further. But why stop there? Publishing platforms like Literatum allow publishers to host audio, video, podcast, and interactive visual content. So consider “explainer” video or podcast interviews where researchers highlight key findings; consider how an interactive graph can help a non-expert understand demographic changes, the spread of a disease, or how languages change over time; consider how an animated map can illustrate economic, weather, or population data across space and time. 

Finally, we need to consider accessibility. Step one, of course, is to make sure that our websites are WCAG compliant. Step two is ensuring that all text content—whether articles, books, blog posts, news updates, or data files—is machine readable, so it can be interpreted by screen readers, and available in HTML or EPUB format (either natively or via a PDF rendering tool such as Atypon’s eReader), so that it’s friendly to those who need large print, reflowable text, and zoomable images. Step three is to work on making non-text content as accessible as possible: well-written descriptions for all non-decorative images, closed captions for all videos, transcripts for audio content … 
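As one small, concrete illustration of step three, a publisher can automatically flag images that lack text descriptions before content goes live. The sketch below uses only Python's standard-library HTML parser and is illustrative, not a substitute for a real WCAG audit, which covers far more than alt attributes.

```python
from html.parser import HTMLParser

class MissingAltChecker(HTMLParser):
    """Collect <img> tags that have no alt attribute at all."""

    def __init__(self):
        super().__init__()
        self.flagged = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attr_dict = dict(attrs)
            # Decorative images may legitimately carry alt="", so only images
            # missing the attribute entirely are flagged; adjust to your policy.
            if "alt" not in attr_dict:
                self.flagged.append(attr_dict.get("src", "<unknown source>"))

checker = MissingAltChecker()
checker.feed('<p>Figure 1.</p><img src="fig1.png"><img src="logo.png" alt="Journal logo">')
for src in checker.flagged:
    print("Missing alt text:", src)  # prints: Missing alt text: fig1.png
```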

All of these elements, too, can usefully have DOIs deposited, to make citing them easier and help direct readers to the version of record.

Make time for metadata!

Whatever you’re publishing, in whatever format, metadata remains key. The challenge comes in determining what metadata are necessary and appropriate for new types of content. To maximize discovery of non-PDF content, it’s important to accurately identify in the metadata what it is (data set? interview? photo archive?), what the format is (.csv? .mp4? .jpeg? .zip?), what it’s about (few things are more frustrating than thinking you’ve discovered a good study of chess and then finding it’s a study of cheese), and all the ways it’s connected to other pieces of content. Metadata should also include information about how and where to access the content, who created it, and what users can and can’t do with it. An additional part of the metadata equation is deciding which elements are supplementary to the article, and which are effectively on the same level.
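To illustrate that checklist, here is a sketch of a minimal metadata record for a non-PDF object. Every field name is hypothetical, chosen to mirror the questions in the paragraph above rather than any particular metadata schema.

```python
# A hypothetical metadata record for a data set published alongside an article.
# Field names are illustrative only, not drawn from any specific schema.
dataset_metadata = {
    "content_type": "dataset",                        # what is it?
    "file_format": "text/csv",                        # what format is it in?
    "title": "Survey data on chess club membership",  # about chess, not cheese
    "subjects": ["chess", "recreational sociology"],  # what is it about?
    "related_items": [                                # how it connects to other content
        {"relation": "supplements", "doi": "10.12345/example-article"},
    ],
    "access": {"url": "https://example.org/datasets/42", "status": "open"},
    "creators": ["Researcher, A.", "Researcher, B."],
    "license": "CC-BY-4.0",                           # what users can and can't do
    "is_supplementary": True,                         # or a same-level research output?
}
```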

Finally, depositing a DOI (or other appropriate persistent ID) is important for everything you publish. On a practical level, using and maintaining DOI links makes citation easier, directs readers to the version of record, and ensures that whatever element of their work is cited, authors’ citation stats benefit from the use of their work. On a more symbolic level, depositing DOIs for non-article content signals a commitment to treat these content types as they deserve: as part of the published scholarly record.

So now what?

Your publishing platform provider can tell you what non-PDF content types can be hosted on your site and how to get them there, and we can also help you resolve metadata questions and deposit your DOIs.

The more challenging—and more exciting—part is up to you and your contributors: Deciding what content types best suit your contributors, their research, and your audience, and then making it happen. We’re here to help you all the way!


Author Bio


Sylvia Izzo Hunter joined the marketing team at Atypon as Community Manager in 2021 and has been Marketing Manager at Inera since 2018, responsible for content marketing and social media. Prior to shifting to marketing, she worked in editorial, production, and digital publishing at University of Toronto Press. A past SSP Board member (2015-2018), Sylvia has also served on SSP's Communications, Education, and DEIA committees and is a member of the NISO CREC Working Group.


Guest Blog - Morressier

 Today’s workflows are tomorrow’s headaches: Is it time to change?



Trust in research has never been more important. I've lost track of the number of think pieces, surveys, meta-analyses and pessimistic opinion pieces about the critical nature of our time and how that trust is slipping away.

How is that trust built? It's a complex system with many stakeholders whose competing priorities make it a challenge to find common ground. The media wants certainty and stories, while scientists rely on statistics and on replicating the results of data sets that few in the general population can understand. But setting aside how science is shared and communicated, we're interested in the infrastructure of disseminating science. What about the workflows that build research integrity into the process itself, and the technologies that keep peer review and content management moving forward?

Today those processes might be part of the problem, but they can lead the way toward a bigger solution. Each time a headline hits the news about a retraction, or fabricated clinical trials, or ethics violations, we have already missed an opportunity to solve the problem at its source in the publishing workflow. And with each headline of that nature, trust in science erodes.

Here’s my vision for the future of research integrity: 

  1. It all starts with early-stage research. By the time a journal article is submitted for publication, years' worth of research have already gone into its conclusions, and that science is hidden away or ephemeral - a conference presentation from two years ago, for example. Imagine having a record of early-stage research, one that was already peer reviewed for a conference, then workshopped and validated by the peers in the room. And, thanks to an integrated infrastructure, the record of those review stages is linked to the submitted manuscript.
  2. It relies on technology to automate the process so human focus can be reapplied to important matters. Organizations should be able to set up the workflow they need with simple technology solutions, then sit back and watch the infrastructure they set up work for them, not against them. Peer review is too important to waste time sending out and managing manual messages or prompts; a sketch of that kind of automation follows this list. Imagine a redesigned workflow infrastructure that never requires leaving the platform. That's the future, freeing up the time and resources of staff to use workflow data to build pools of new reviewers and authors, or to forecast trends for the discipline.
  3. It is user-friendly and flexible. As valuable as a streamlined workflow is for organizations like publishers and societies, it's equally valuable for reviewers and authors. It's well known that peer review takes too long, these time-intensive contributions are not well recognized or rewarded, and it's frustratingly technical, with standards and policies that can be different for each journal or conference. Peer review should be something researchers line up to do, because it helps move science forward and validate discoveries that could become tomorrow’s innovations. Peer review is the foundation on which trust in science is built. But when the process is cumbersome, hard to justify in terms of the career growth it promises, and all the processes and stages are impossible to remember, it becomes less appealing. Workflows and infrastructure can solve this problem, leaving the path to trust simpler. 
  4. Our values are enforced at every step of the process. Infrastructure is a balancing act, and we have to keep our values close as we build. If we focus solely on efficiency, for example, then we might cut publication times but lose some of the rigour of close review. Workflow technology needs to be designed to uphold research integrity. Only by embedding the values and principles of the research industry in the infrastructure built to support it can we find the right balance and success.
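As a deliberately simple sketch of the automation mentioned in point 2, the code below shows the kind of reminder logic a workflow platform might run as a scheduled daily job, so staff never chase reviewers by hand. The data model, names, and message wording are invented purely for illustration.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class ReviewAssignment:
    """One reviewer's outstanding assignment (hypothetical data model)."""
    reviewer_email: str
    manuscript_id: str
    due: date
    submitted: bool = False

def reminders_for(assignments, today, lead_days=3):
    """Yield (email, message) pairs for reviews nearing or past their due date."""
    for a in assignments:
        if a.submitted:
            continue  # nothing to chase
        if a.due < today:
            yield a.reviewer_email, f"Your review of {a.manuscript_id} is overdue."
        elif a.due - today <= timedelta(days=lead_days):
            yield a.reviewer_email, f"Your review of {a.manuscript_id} is due {a.due:%d %b}."

# Example run -- in practice the platform would execute this on a schedule
# and send the messages itself, with no manual prompts from staff.
queue = [
    ReviewAssignment("r1@example.org", "MS-1041", due=date(2022, 10, 20)),
    ReviewAssignment("r2@example.org", "MS-1042", due=date(2022, 11, 30), submitted=True),
]
for email, message in reminders_for(queue, today=date(2022, 10, 22)):
    print(email, "->", message)
```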

Publishing workflows cannot exist in a vacuum. They need to talk to each other, share data and insights that help human reviewers make informed decisions. We need to stop wasting time on tasks that technology can take over, but we need to retain the most ethical and integrity-rich practices possible. 

Author: Sami Benchekroun, CEO and co-founder of Morressier