Monday, 24 April 2023

How do we overcome research misconduct?

Guest post by Sami Benchekroun, CEO and Co-founder, Morressier

As the volume of research output grows larger each year, new forms of research misconduct appear around the edges. While research misconduct represents a small percentage of overall published works, each instance looms large under the magnifying glass of public scrutiny. Without public trust, science has no power, and research misconduct is a disease that threatens the validity of our enterprise as a whole. Today, we have treatments for this disease, like retractions, but these treatments can still leave publishers scarred and damaged in the long term.

What we need is a preventative, proactive solution: something that stops research misconduct before it happens. To create that solution, we have to start by understanding why research misconduct happens, and address it at the industry level, not just within our workflows.

Why does research misconduct happen?

Research misconduct happens when a system is under pressure. Resources are limited, and there is a constant call to do things faster. Researchers face immense pressure to publish in order to advance their careers and build their personal reputations. This pressure leaves them vulnerable to paper mills and predatory journals, and perhaps more likely to cut corners by engaging in misconduct themselves, or simply to make a mistake because there's little time to perfect a paper. Within our publishing and peer review workflows, there are issues of scale and a pressure to review more papers, faster. When there's less time to review each paper, there's less time to evaluate and identify mistakes or issues that would make a piece of research unsuitable for publication.

On top of the growing pressures on workflows and workloads, the broader business models of scholarly publishing have shifted, adding further pressure from the world of open access (OA). The revenue models for OA publishing have changed the currency of scholarly publishing from the journal volume to the journal article, from a subscription economy to an article-based one. Publishing a greater volume of content seems to be the way publishers choose to ensure ongoing revenue streams.

Add to this overstressed system emerging forms of misconduct. In the last year, we've seen a huge volume of opinions and perspectives on the role of AI-generated content. Is it a tool or a replacement? The role of AI in publishing is being decided now, but as is often the case with technology, use is outpacing regulation.

So what now? 

Today, we treat research misconduct reactively. We retract individual articles, which is a critical act of transparency that underpins the overall trustworthiness of science, but one that is widely misunderstood by the public.

So what would a more proactive approach really look like? While research integrity can be guarded with improvements to the editorial and peer review process, we need to go further: it has to be a broader industry effort that addresses some of the underlying pressures on key stakeholders. There's a path to relieving the pressure to publish if we can evolve the criteria for tenure, career progression, and evaluation. There are strategies to make the peer review process more streamlined, so it's less time-consuming and easier to engage with, but also more transparent, so reviewers get the recognition they deserve. A more streamlined editorial process will also support the publisher's need to publish more research.

Technology's role is twofold. First, with streamlined processes and enhanced integrity checks that can review manuscripts at scale, the process becomes faster and easier. Second, technology can support the improvement of papers from researchers for whom English is not a first language, perhaps broadening our published output to include more contributions from the Global South and other non-English-speaking countries. Here is a solution that would democratize the world of scholarly publishing: helping publishers increase their output while improving equity in the research community.

Risks and priorities for research integrity

We have to balance the need for research integrity with the need to publish research more efficiently. At first glance it might seem as though curing research misconduct has the potential to slow science down, adding more layers of checks, more rigorous reviews, and complicated institutional changes that take time to fully adopt. Further, prioritizing research integrity is an investment: it's expensive, and it can take a long time to see real progress, especially with point solutions that are not integrated throughout the publishing infrastructure.

But what do we, as an industry, risk creating if we do not pursue broad, scalable changes to our research integrity infrastructure? An ecosystem that struggles to scale, one that becomes more crowded and loses its focus on quality. Without research integrity interventions, whether they are embedded in our technology or addressed in the transformation of our peer review workflows and institutional pressures, we lose trust. Public trust in science is already at risk.

This is also a critical time for machine learning. If we feed our AI tools fraudulent data, or anything other than the highest quality science, we risk embedding biases in the machine learning process that will become increasingly hard to correct. If we start exploring AI in the scientific review process or publishing program, it has to be with research integrity at the very centre of our focus. We're approaching an inflection point with AI. If developed properly, it can be focused on making our systems, like peer review, more accurate, transparent, and trustworthy. But how, when the current system is imperfect and under strain?

We're not the only industry asking this question: recently, leaders across the tech industry, including Steve Wozniak and Elon Musk, signed an open letter calling on the industry to pause giant AI experiments. In this letter they call for developing AI systems only once we are confident of their effects, and for independent review before training new systems. What they propose is effectively peer review for approval to launch new general AIs. For scholarly publishing, misinformation is a massive risk. We risk losing control over information, and even the ability to validate information, if we do not take care with the inputs to machine learning, or with how we allow AI-generated research to fill our systems. Addressing this challenge will take collaboration not just within our industry but also with the experts leading the technology revolution.

To close, the solution to research misconduct requires more than changes to our publishing workflows. It requires this industry to look at the bigger picture and the longer term, and to start building for the future today. We need to build the technology for all stakeholders in the research integrity ecosystem, from researchers to publishers to readers.

About the Author

Sami Benchekroun is the Co-Founder and CEO of Morressier, the home of workflows to transform how science is discovered, disseminated and analyzed. He drives Morressier's vision forward and is dedicated to accelerating scientific breakthroughs by making the entire scholarly process, from the first spark on, much more transparent. Sami has over ten years of experience in academic conferences, scholarly communications, and entrepreneurship, and has a background studying management at ESCP Europe. Sami Benchekroun is also a Non-Executive Director of ALPSP.

Morressier is a member of ALPSP. Find out more: https://www.morressier.com/
