Monday, 20 August 2018

Spotlight on Code Ocean - shortlisted for the 2018 ALPSP Awards for Innovation in Publishing

On 13 September, at the ALPSP Conference, we will be announcing the winners of the 2018 ALPSP Awards for Innovation in Publishing, sponsored by MPS Limited.  In this series of posts leading up to the Awards ceremony, we meet our six finalists and get to know a bit more about them.



In this blog, we speak to Simon Adar about Code Ocean - a cloud-based computational reproducibility platform that provides researchers with an easy way to share, discover, and run code.


Tell us a bit about your company

More and more of today's research includes software code, statistical analysis, and algorithms that are not included in traditional publishing. These are often essential for reproducing an article’s research results and for enabling future research. Code Ocean addresses this major roadblock for researchers, and was incubated at the 2014 Runway Startup Postdoc Program at the Jacobs Technion Cornell Institute.


What is the project/product that you submitted for the Awards?


We submitted the Code Ocean platform codeocean.com. The platform offers researchers and scientists the option to upload a working instance of their code and data along with the associated article, at no cost to the researcher.




Tell us more about how it works and the team behind it 


Code Ocean is an open access platform for code and data, which can all be downloaded without charge. In addition, Code Ocean enables users to execute all published code on the platform, eliminating the need to install software on personal computers. Everything runs in the cloud on CPUs or GPUs, according to the user's needs. We make it easy to change parameters, modify the code, upload data, run things again, and see how the results change.

Code Ocean was founded based on experiences I had during my Ph.D. Part of my research was spent exploiting airborne and spaceborne multispectral and hyperspectral images for the purpose of environmental monitoring. This meant building on previously published works, but as my colleagues and I got deeper into the project, we faced multiple roadblocks, particularly in trying to get other researchers’ code up and running. We found that researchers develop algorithms, software simulations, and analyses in a wide variety of programming languages (and each language often has multiple versions, creating further compatibility issues). Code also depends on different files, packages, scripts, and installers in order to run properly, and getting all these pieces working is a time-consuming and complicated endeavor that takes away from research. I soon discovered that these “reuse” difficulties were part of a wider problem that many call the "Reproducibility Crisis" in science. The realization that certain recently developed technologies, properly used and adapted, could help address this crisis led to the founding of Code Ocean.





We are now a team of over twenty women and men dedicated to working with the scholarly publishing community to make code execution easier for researchers, so they can continue building upon their work and the work of others.



Why do you think it demonstrates publishing innovation?


For the first time, researchers and scientists can upload code and data in any open source programming language (plus MATLAB and Stata) and link executable code with published articles. For many years the scholarly publishing community has been looking to move beyond the PDF, provide interactivity to their users and treat critical artifacts such as code and data as first-class citizens. Code Ocean provides self-contained executable ‘compute capsules’ that include code, data, and the computational environment, and that can be embedded into articles. These compute capsules save researchers time and allow them to iterate on and interact with code, for instance bringing in new datasets, changing variables, and extending existing code. Code Ocean's platform eliminates the need to set up or debug coding environments, allowing researchers to spend more time on their research.

Our current partnerships with the IEEE, F1000, Cambridge University Press, Taylor and Francis, and others demonstrate how embedding executable code within publications provides interactivity, validation, and research transparency. 


What are your plans for the future?


We are continually updating the platform with new features and have a substantial update in the works that will expand functionality for our researchers. One recent development is that researchers can download an entire capsule, comprising its code, data, results, metadata, and a recipe for the complete computational environment. This will allow researchers to use Code Ocean compute capsules outside of the platform on users’ preferred machines.
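
To make the idea concrete, below is a minimal, hypothetical Python sketch of running such a downloaded capsule on a local machine. The folder names, the location of the environment recipe (assumed to be a Dockerfile) and the entry-point script are illustrative assumptions, not Code Ocean's documented export format.

# A minimal sketch of running a downloaded capsule outside the platform.
# The directory layout (code/, data/, environment/, results/) and the
# entry-point name are illustrative assumptions, not an official format.
import subprocess
from pathlib import Path

def run_capsule_locally(capsule_dir: str, image_tag: str = "my-capsule") -> None:
    capsule = Path(capsule_dir)

    # Rebuild the computational environment from the included recipe
    # (assumed here to be a Dockerfile inside an environment/ folder).
    subprocess.run(
        ["docker", "build", "-t", image_tag, str(capsule / "environment")],
        check=True,
    )

    # Run the capsule's code against its data, writing outputs back to a
    # local results/ folder so they can be inspected afterwards.
    subprocess.run(
        [
            "docker", "run", "--rm",
            "-v", f"{capsule / 'code'}:/code:ro",
            "-v", f"{capsule / 'data'}:/data:ro",
            "-v", f"{capsule / 'results'}:/results",
            image_tag,
            "bash", "/code/run",   # assumed entry-point script name
        ],
        check=True,
    )

if __name__ == "__main__":
    run_capsule_locally("downloaded-capsule")

The point is simply that, because the capsule carries its own environment recipe alongside code and data, the same analysis can be rebuilt and re-run anywhere a container runtime is available.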

We are also testing a new peer review workflow with three Nature journals—Nature Methods, Nature Biotechnology and Nature Machine Intelligence— to enable authors to share fully-functional and executable code alongside submitted articles to facilitate peer review.



Simon Adar is the founder and CEO of Code Ocean, which was originally incubated at Cornell Tech. He was a Runway postdoc awardee at the Jacobs Technion-Cornell Institute and holds a PhD from Tel Aviv University in the field of hyperspectral image processing. Simon previously collaborated with the DLR - the German Space Agency - on the European FP7-funded EO-MINERS project to detect environmental changes from airborne and spaceborne sensors.

Website: www.codeocean.com

Tuesday, 14 August 2018

Spotlight on Dimensions - shortlisted for the 2018 ALPSP Awards for Innovation in Publishing

On the 13th of September we will be announcing the winner of the 2018 ALPSP Awards for Innovation in Publishing, sponsored by MPS Limited, at the annual ALPSP Conference.  In this series of posts leading up to the Awards ceremony we meet our six finalists and get to know a bit more about them.

In this blog, we speak to Daniel Hook, MD of Digital Science, and Christian Herzog, CEO of ÜberResearch, about Dimensions.


An Introduction to Digital Science 

Set up in 2010 by a team from Nature, Digital Science was established with the aim of investing in startups that originated in academia and that created software tools to help researchers to do their best research. 

An independent company since the Springer Nature merger of 2015, Digital Science continues to pursue the vision of its founders: working closely with the research community to improve workflows, provide new insights, and develop alternative technologies that can better meet their needs.

Many of our team either come from a research background, supported researchers in their previous roles, or simply have a love of research and science. It is this ethos that keeps us close to the researcher and to the broader scholarly community. 

Today, the Digital Science portfolio has grown to include now-familiar services Altmetric, Figshare, and Symplectic Elements, amongst others. These tools, widely adopted by publishers, institutions and funders around the world, have helped to evolve the way that scholars work and scholarly information flows. 


Dimensions: re-imagining discovery and access to research 

Launched in January 2018 and built in partnership with over 100 research organisations from around the world, Dimensions is a research insights platform that aims to re-imagine discovery and access to research, transforming the way the scholarly community navigates the global research landscape.

In developing Dimensions, we wanted to achieve 3 key things:

  • A more complete view across the research lifecycle, with the focus no longer so much on publications alone
  • A more open approach to content and metrics that puts the power of data back into the hands of the scholarly community 
  • A tool that meets the needs of modern research organisations

Dimensions breaks down barriers to discovery and enhances the visibility of research by connecting over 133 million grants, publications, patents and clinical trials for the first time, enabling users to explore over 4 billion links between them to gather new insight into the multi-dimensional aspects of research activity.




A free version of the platform provides openly-available search across the publication records and their associated metrics, and paid-for versions Dimensions Plus and Dimensions Analytics deliver extended functionality and analytical tools to institutions, funders, publishers and corporate research organisations. 

Additional elements include the dynamic Dimensions badges, which can be easily added to any institutional repository to showcase citations, and a powerful API that makes it possible to query the data however the user chooses. 

A free metrics API is intended to encourage the development of new measures of impact, and Anywhere Access technology enables libraries to provide one-click access to Open Access and licensed content. 
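
As a purely illustrative sketch, a script consuming such an API might look like the following. The endpoint URL, parameters and response fields here are hypothetical placeholders rather than the actual Dimensions or metrics API specification.

# Purely illustrative sketch of querying a publication-metrics API over HTTPS.
# The endpoint, parameters and response shape are hypothetical placeholders.
import requests

API_BASE = "https://api.example.org"   # placeholder, not a real endpoint

def fetch_publication_metrics(doi: str, token: str) -> dict:
    """Look up citation-style metrics for a single publication by DOI."""
    response = requests.get(
        f"{API_BASE}/metrics",
        params={"doi": doi},
        headers={"Authorization": f"Bearer {token}"},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    metrics = fetch_publication_metrics("10.1000/example-doi", token="YOUR_TOKEN")
    print(metrics)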


Behind the scenes


Dimensions is a study in what can be done with open data, unique identifiers, modern machine learning, some friendly publisher-partners, a great set of development partners and a highly dedicated team. It also relies on the technologies that Digital Science has been investing in over the last 8 years with companies such as Altmetric, ReadCube and Figshare.

Built by a distributed and diverse team located mostly across Europe, the technical group is orchestrated by our CTO, Mario Diwersy, from Frankfurt. Much of the team specialises in machine learning and artificial intelligence.

An extensive data enrichment process has ensured that search results are both comprehensive and meaningful. Organisation identification, researcher disambiguation, natural language processing and reference extraction mean Dimensions is able to respond accurately to complex search queries. 

Using Dimensions, users are able to draw out insights that would not previously have been possible to uncover, and trace the research process from ideas to eventual impacts for the first time. 


Just two dedicated staff members were newly hired to work on Dimensions: the rest has come from collaboration between existing teams - pulling together expertise from across the portfolio. 

These contributions included marketing and the development of the Dimensions badges, led by Altmetric; the application development and data science, directed by ÜberResearch; the close partnerships formed by the Digital Science Consulting team and Symplectic that helped to create the development partner programme; and the underpinning publication and infrastructure made possible by ReadCube.

It has been amazing to see the teams evolve and find new ways to work together, sharing different and multifaceted experiences to enable the rapid development and launch of a mature and well-thought-through product that keeps researchers and institutional users as a key focus. 


A 'game-changing' innovation


Until Dimensions, data sets that were highly relevant to one another were often fragmented, or situated in silos, making them difficult to compare. Existing market structures and data monopolies were preventing innovation in a space that needed up-to-date tools.

In building Dimensions, we aimed to trigger an evolution in how researchers think about research information, and to address some key problems that had become apparent through our interactions with scholars, funders, institutions and publishers: 

  • Difficulties in getting access to content
    Connecting researchers to the content that they need has been a poorly solved problem for many years. With Anywhere Access, which provides one-click access to OA and licensed content, Dimensions offers the fastest way yet for a researcher to find the article that they're looking for and to get access to the legal, full text version. Dimensions is also the first freely available publication and citation data source where users can have full visibility of the data that drives the system, and that has been written to meet specific academic use cases.
  • Increasing demands on researchers
    There has been a slow but significant shift in what is asked of a researcher. Now, they must find places for PhD students who wish to continue studying, seek out collaborations, identify grant opportunities and craft responses, help to hire other academics in their institutions and ensure that their research is original, translatable and well marketed. In essence, they must run their own mini-startup around their research. Dimensions gives them the data to start managing all these facets of their research life.

  • An over-emphasis on publications and citations
    Research institutions, funders, publishers, governments and industry have all focussed on bibliometrics to form some kind of measure of the parts of the research enterprise that are relevant to them, and to help inform their future strategies. Of course, the reason for this focus is that publications and citations have been the only reliable data source that has been made available. With Dimensions, we move beyond that limitation and enable users to easily trace the links and analyse connections between content from across the research lifecycle - providing a much more comprehensive view on which to base decisions.
  • Excessive barriers to innovation
    A core aspect of Dimensions is to put decisions and control back in the hands of the research community. In an era where metrics are so easily misused, metrics must be developed and owned by the community based on a data source that can be fully audited. In keeping with this commitment, the full Dimensions API is made freely available to bibliometrics researchers as well as to clients, enabling them to build on the data as they wish.
Crucially, we did not do this alone. Beyond the development partners we have taken the approach of incorporating existing industry standards and data sources, with the aim of furthering a more collaborative scholarly ecosystem. 

This includes publications data from Crossref (along with many publishers who actively supported the project), citation data from the Initiative for Open Citations (I4OC), OA discovery via Unpaywall, and a partnership with ORCID that makes it possible for authors to supplement how they appear in Dimensions and claim publications directly to their ORCID account from within the platform.


Our plans for the future


The launch of Dimensions was only the starting point for a continued joint development with the large group of development partners and users. 

Our aim now is to provide a constantly growing research information database that continues to link elements and content consistently together, providing a data landscape that reflects the complexity of the research process, outcomes and impacts. 

In early August we integrated supplementary data from Figshare to publication pages within the platform, and in the next month we expect to add over 330,000 policy documents, offering valuable insights for social science scholars and those looking to understand the societal impacts of research. 

In the meantime, we hope that the wider research community will grasp the opportunity to provide feedback and build on the infrastructure that exists today, and look forward to seeing where new ideas might lead!


Daniel Hook, MD Digital Science 

Daniel has been CEO of Digital Science since 2015. Holding a PhD in Theoretical Physics from Imperial College London, Daniel was a founder of Symplectic and served as its Managing Director from its foundation in 2003 until 2013, when he moved into a senior management role at Digital Science.


Christian Herzog, CEO ÜberResearch 

Christian is the lead on the Dimensions project at Digital Science and CEO of ÜberResearch, which he co-founded. A medical doctor by training, Christian was also one of the co-founders of Collexis, and later became Vice President Product Management for SciVal.


Websites: 

Wednesday, 25 July 2018

Brand Building the Scholarly Author – So what’s in it for the Publisher?

In this week’s guest post we hear from Jean Dawson, Product Manager at Charlesworth Author Services and a member of ALPSP’s Professional Development Committee, which is behind the ALPSP programme of webinars.


It has never been more topical to develop academic authorship as a ‘brand’; this has been particularly emphasised by educational institutions that now often provide training sessions for their Early Career Researchers (ECRs, including PhD and Master’s students). These courses enable ECRs to understand how to build their online presence and the importance of starting to think of themselves as a ‘brand’ very early in their careers. Professor Stephen Hawking, for example, was not only an academic but also a ‘brand’, as are the TV personalities Dr Alice Roberts and Dr Brian Cox. Although it is clear that not every academic can reach the same pinnacle of ‘brand’ as Professor Stephen Hawking, they can nevertheless be taught how best to present both themselves and their research to the online community effectively. I have attended sessions at universities where ECRs have been tutored on how to write and present their LinkedIn profiles as part of an extended curriculum vitae, how to create online social profiles, reminded that prospective employers can search for them on Facebook, and given guidelines on how much of their research to reveal online (and where, as ‘online’ is a big place) and on which online communities to associate with. All of these are vital life tools that assist with publicising your research at an early stage, job searches, promotions, and publishing contracts.

Advancements in technology platforms provide tools that directly assist academics in building their online profiles, understanding where their work is being read and cited, and building an ‘audience’ for their work. These platforms also encourage authors to be proactive in their research dissemination, rather than just passively waiting for citations. They shout ‘Get out there and sell yourself!’ As Charlie Rapple (co-founder of Kudos) will be pointing out in her presentation in the ‘Brand Building the Scholarly Author’ ALPSP webinar, given an annual research investment of over £1 trillion by funders, and with over 2 million outputs published, it’s simply staggering to note that up to 50% of these outputs will never be discovered. No one reads them, let alone cites them. Funding bodies have woken up to this huge issue and are looking to see activities that drive engagement with research. This puts the ball squarely back into the academics’ court.

So what is the knock-on effect for academic publishers? To paraphrase Cathy Holland (Business Development Manager, Digital Science), the second speaker on the ‘Brand Building the Scholarly Author’ webinar, “Social media has a huge impact on the way information is communicated today… No longer can the publisher or author passively just sit back and let content collect citations.” In the same way as authors now need to become active, publishers also need to be actively engaging online with scholarly research communities. Engaging is not enough, however: publishers must also now measure their social media efforts to successfully assess their reach and effectiveness. As we have seen with the new social-media toolkits that have emerged for authors, technology platforms provide publishers with key measurement and online engagement tools.


Jean Dawson is the Product Manager at Charlesworth Author Services. Jean has over 25 years’ experience in senior management roles in publishing across the industry from academic publishing, start-ups, to publishing service providers. Commencing her career at Oxford University Press, she moved to form part of the Ingram UK Business start-up team. Prior to joining The Charlesworth Group, Jean worked as a consultant providing product development and marketing services to trade and academic publishers. Jean is also an active participant in cross industry membership groups.

The Brand Building the Scholarly Author webinar will take place on Thursday 18th October 2018. To find out more or to book your place visit: https://www.alpsp.org/Webinars/Brand-Building-the-Scholarly-Author/56883



Thursday, 19 July 2018

Business Models for Open Access: How can I run a successful Open Access journal?



In this week's guest blog Martyn Rittman, Ph.D, Publishing Services Manager at MDPI, offers some words of wisdom for developing successful open access journals.


The Directory of Open Access Journals contains over 11,500 journals and more than 3.1 million open access articles. Our indexing database Scilit contains around 20 million freely available articles, mostly open access. Estimates put the amount of open access in the region of 15% to 20% of all published articles. Do these numbers represent a threat to traditional revenue channels, or is it possible to run a healthy business using this model?

MDPI started publishing free online articles in the late 1990s. At first, we were supported by other projects, conferences, grants, and a great deal of voluntary time. In the mid-2000s, along with other publishers, we adopted author-side charges for publication, commonly known as article processing charges (APCs). By separating the journal editors making final acceptance decisions from the publisher, we have been able to maintain a rigorous and objective peer review process alongside gold open access. However, we have spoken to other publishers who have found it difficult to adopt the open access model, don’t feel they have the expertise, or find it difficult to cover their costs. Here, we offer some advice for developing successful open access journals.


Sources of revenue

Assure yourself that revenue streams for open access are available, even in fields with high scepticism towards author-side charges.  An increasing number of national funding agencies and governments have open access mandates, and offer support for the payment of APCs. National agreements with publishers are also emerging. Many non-governmental funding agencies and university libraries have also embraced open access and provide assistance to authors. These include the Wellcome Trust, the European Union, the Bill and Melinda Gates Foundation, and many more. A useful resource to see the amounts paid by universities for open access publication is the OpenAPC platform (https://treemaps.intact-project.org/apcdata/openapc/). Other models include Knowledge Unlatched for humanities and SCOAP3 for high energy physics, where publishers receive a per-article payment out of a central fund collected from funders and libraries. Smaller journals may be able to find a single funding agency, university or society to cover all the costs of the journal, especially in niche fields. In fact, there are increasing opportunities that do not involve directly invoicing authors.

Providing a useful service

Do not assume that open access is enough. Look carefully at the scope of your journal to see whether it offers something unique in the field. This is especially critical for new journals. For many authors, the decision on where to publish is not primarily linked to open access: the scope, editorial board, and reputation of the journal are usually more important. Open access journals should be focused on providing a good service to authors and you can distinguish your journal simply by providing a better alternative to existing journals.

Workflows

Consider new workflows for your journal. There may be an initial cost to making changes in how you run the journal but it will pay off in the long-term. Authors publishing in open access are often looking for a quick decision and publication. This might mean revisiting how editorial decisions are made, and changing expectations among editors, editorial board members and reviewers about how quickly they provide feedback. On the marketing side, you will need to consider how to better reach your target authors, redirecting efforts from potential subscription customers. If you opt for an APC model, handling a larger volume of small payments may require a new approach to invoicing.

There is no magic formula for running an open access journal and much of the work is the same as for traditional journals. Open access journals now exist in all fields using all kinds of editorial and business models. At MDPI, we continue to see growth in the open access market across many research fields. We are convinced of the benefits of universal access through a large, broad readership, allowing ideas to shape those outside of the academy as well as authors from institutions with small subscription budgets. Open access supports the dissemination and sustainability of knowledge and we encourage all publishers to take advantage.

Martyn Rittman, Ph.D. is Publishing Services Manager at MDPI, combining a passion for open access publishing with an interest in new models for publishing and open science. He joined MDPI in 2013 following a research career covering physical chemistry, materials science, instrumentation, and mathematical modelling.


MDPI is headquartered in Basel, Switzerland, with branch offices in China, Spain, and Serbia. It runs over 200 fully open access journals, including some in collaboration with scholarly societies, and in 2017 published over 35,000 peer-reviewed articles. MDPI also provides publisher services through its JAMS software (jams.pub) and offers academic communication tools, including a conference management platform, at sciforum.net.

JAMS website: http://jams.pub
MDPI website: http://www.mdpi.com/publishing_services
Twitter: https://twitter.com/MDPIOpenAccess
Facebook: https://www.facebook.com/MDPIOpenAccessPublishing/


MDPI are proud silver sponsors of the ALPSP Annual Conference and Awards 2018





Friday, 6 July 2018

How to train your author - Is author training a good idea for publishers?

In this week’s guest post, we hear from Dr Gareth Dyke who heads up Charlesworth Knowledge, a new service being launched this year for authors, educational institutions, and publishers.


A huge range of training courses, most often delivered online, are available to help academic researchers improve their writing and publication presentation skills. I've often wondered, however, whether encouraging authors to attend such courses and improve their abilities (albeit incrementally) is actually a good idea for publishers. There is an argument that it is in the interests of the publication house to receive badly written content so that in-house editing and polishing offerings can be recommended, leading to an obvious knock-on increase in revenue, in spite of the editorial headaches involved in reviewing them.

This feedback loop seems self-defeating for publishing houses. Surely once an author has been trained to write better and more effective articles, that individual is less likely to avail themselves of in-house language editing services?

I disagree. I'd argue that the whole author-publisher ecosystem should be viewed pragmatically and is, after all, a question of scale. What does a publisher gain from training authors to write better on their own? In addition to reducing the time spent working over the hundreds of submissions that might come through a system week on week, revenue from providing the training itself is important. Looking longer term: better quality submissions enhance the journal, building its reputation, leading to more citations, raising the impact factor, and driving subscriptions. In this day and age of quick online publications, often open access, amidst competition for your readers' time (even within your own research field), well-written and effective articles that draw readers in and keep them going past just the title and the abstract are a bonus for everybody. It’s also important to build loyalty amongst authors so that they keep submitting their papers to the same journals; this could be because this is where they received their key training, as well as the reputation and quality of the journal playing a role in their decision. This is one good reason why the big academic journal publishing houses offer author training, often as a standalone ‘academy’-type model or as pay-per-view online workshops and seminars; they want to encourage author submission habits with a quality product that feeds into driving up their sales, including institutional subscriptions, and their publisher/journal reputation.

So, yes, author training is a very good thing. Perhaps that's why more and more publishing houses are getting into this area and offering these courses either directly or through third-party training providers. Through many years of providing high-quality language polishing services, Charlesworth Author Services has recognised this growing need to provide high-quality and high-value training to the academic authors who use our services. Author education provided by the unique combination of people with both publishing and reviewing skills at the highest level is the logical next step.

About The Charlesworth Group


With close to 20 years' experience in China, The Charlesworth Group is recognized globally as the trusted partner for sales and marketing representation and consultancy for STM publishers in the Chinese academic market. Charlesworth is also a leading provider of language editing and author services through its Charlesworth Author Services Division.


Dr. Gareth Dyke


Gareth is a prolific scientific author who has published more than 200 articles in peer-reviewed journals over the last 20 years. During his career as an academic, working across multiple universities, he has mentored students of all ages and has developed a wide range of teaching techniques.

Website: www.cwauthors.com
Twitter: @CWGAuthors
Linkedin: Charlesworth Author Services
Facebook: www.facebook.com/CWGauthorservices

  


Wednesday, 20 June 2018

Artificial Intelligence: What It Is, How It Works, and What Publishers Can Do with It


AI was one of the hot topics at last year's ALPSP Conference. In this guest blog, Hong Zhou, Senior Product Manager for Information Discovery and AI at Atypon, gives us the 101 on this transformational development.

Artificial intelligence, or AI, is much more than the latest technology buzzword. According to Gartner, by 2020, AI will positively change the behavior of billions of workers and users. And Tata estimates that the vast majority of those workers will work outside of IT.

But what exactly is AI?

AI is a broad set of technologies that use the computational capabilities of machines to “think” like humans. There are many different types of AI, each of which can be used to solve different problems.

So how can AI be employed by scholarly publishers? Ultimately, any publishing technology should make the research experience more productive, increase content usage, and add value to the publisher’s content. To do that, R&D at Atypon explores ways to help readers discover useful and relevant information more quickly by improving search mechanisms and refining content recommendations.

Making content relevant: Recommender systems

Recommender systems will be familiar to anyone who has received suggestions about what other products to buy before or after making an online purchase. Publishers can use them to target relevant products to individual customers by understanding their online site behavior and interests.
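
As a rough illustration of the underlying idea (and not Atypon's actual implementation), a very small item-based recommender can be built from nothing more than co-occurrence counts over reading histories:

# Minimal item-based collaborative-filtering sketch: recommend articles that
# were frequently read by the same users. Illustrative toy data only.
from collections import defaultdict
from itertools import combinations

# Toy reading histories: user -> set of article IDs
histories = {
    "u1": {"a1", "a2", "a3"},
    "u2": {"a2", "a3", "a4"},
    "u3": {"a1", "a3"},
}

# Count how often each pair of articles co-occurs in a reading history.
co_counts = defaultdict(int)
for articles in histories.values():
    for a, b in combinations(sorted(articles), 2):
        co_counts[(a, b)] += 1
        co_counts[(b, a)] += 1

def recommend(user: str, top_n: int = 3) -> list:
    """Score unseen articles by how often they co-occur with what the user read."""
    seen = histories[user]
    scores = defaultdict(int)
    for read in seen:
        for (a, b), count in co_counts.items():
            if a == read and b not in seen:
                scores[b] += count
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

print(recommend("u3"))  # ['a2', 'a4']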

Anticipating what readers want: Personalized search

AI-driven recommendation technology can be extended to personalize search as well: reading histories can be used to adjust search rankings specifically to each user—and even suggest new queries that may be relevant—with the goal of understanding a user’s intentions even before they search.
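
A hypothetical sketch of the same idea applied to search: re-rank baseline results by boosting documents whose topics overlap with a user's reading history. The scoring scheme below is illustrative only, not a production ranking algorithm.

# Sketch of personalising search: boost results whose topics overlap with
# topics drawn from the user's reading history.
def personalise(results, history_topics, boost=0.5):
    """results: list of (doc_id, base_score, set_of_topics)."""
    reranked = []
    for doc_id, base_score, topics in results:
        overlap = len(topics & history_topics)
        reranked.append((base_score + boost * overlap, doc_id))
    return [doc_id for score, doc_id in sorted(reranked, reverse=True)]

results = [
    ("article-1", 1.2, {"genomics", "statistics"}),
    ("article-2", 1.1, {"machine learning", "genomics"}),
    ("article-3", 1.0, {"oceanography"}),
]
history = {"machine learning", "genomics"}
print(personalise(results, history))  # article-2 rises above article-1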

Faster, easier content classification: Semantic auto-tagging

Content tagging underlies many important website capabilities, such as automating the creation of topic-specific pages and content bundles, and powering search results and content recommendations. But tagging documents and maintaining tag sets can be a daunting undertaking. Auto-taggers powered by intelligent machine learning algorithms tag articles accurately and even identify which tags may not be assigned correctly. They save curators time by letting them concentrate their efforts only on content that’s assigned low “confidence scores” by the auto-tagger, thus making it easier for publishers to implement and manage taxonomies.
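
The sketch below illustrates the general pattern with an off-the-shelf classifier and a confidence threshold; the training data, threshold and model choice are illustrative assumptions, not Atypon's auto-tagger.

# Sketch of auto-tagging with confidence scores: a classifier suggests a tag
# and anything below a confidence threshold is routed to a human curator.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "gene expression in tumour cells",
    "deep neural networks for image recognition",
    "protein folding and molecular dynamics",
    "reinforcement learning agents in games",
]
train_tags = ["biology", "machine-learning", "biology", "machine-learning"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_texts, train_tags)

def auto_tag(text, threshold=0.7):
    probs = model.predict_proba([text])[0]
    best = probs.argmax()
    tag, confidence = model.classes_[best], probs[best]
    needs_review = confidence < threshold  # low-confidence items go to a curator
    return tag, round(float(confidence), 2), needs_review

print(auto_tag("convolutional networks for tumour image analysis"))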

Content enrichment: Natural language processing

Keywords are traditionally extracted or selected manually, but doing it automatically requires a large amount of training data to identify relationships among topics and key phrases. By enabling machines to understand the meaning of content rather than just the individual words, natural language processing (NLP) can extract more valuable information from it. NLP automates key phrase extraction and removes the need to “teach” the engine about the content first. By extracting key phrases from different sections of the content and ranking them based on their importance, NLP ultimately improves content categorization and, by extension, content discovery.
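
As a simplified illustration of automatic key-phrase extraction (real NLP pipelines go much further, with linguistic filtering, section weighting and trained models), candidate phrases can be ranked by TF-IDF against a corpus:

# Sketch of key-phrase extraction: score candidate n-grams by TF-IDF and keep
# the top-ranked phrases for one document. Toy corpus, illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer

corpus = [
    "Hyperspectral imaging enables environmental monitoring from satellites.",
    "Knowledge graphs connect authors, topics and journal articles.",
    "Machine learning improves recommendation of journal articles to readers.",
]

def key_phrases(doc_index, top_n=5):
    vectoriser = TfidfVectorizer(ngram_range=(1, 2), stop_words="english")
    matrix = vectoriser.fit_transform(corpus)
    scores = matrix[doc_index].toarray().ravel()
    terms = vectoriser.get_feature_names_out()
    ranked = sorted(zip(scores, terms), reverse=True)
    return [term for score, term in ranked[:top_n] if score > 0]

print(key_phrases(0))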

Beyond tagging and metadata: Knowledge graphs

A knowledge graph charts all of the possible connections among publication-related information like authors, topics, journals, articles, and even external knowledge databases. Based on these connections, algorithms identify and recommend to researchers the most influential entities, trending topics, and even co-authors and reviewers based on their areas of specialization and the subjects about which they’re writing.
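
Here is a toy sketch of the idea, using a small graph of authors, articles and topics, with a simple centrality measure standing in for "influence" and a two-hop query standing in for co-author suggestion. It is illustrative only.

# Sketch of a small knowledge graph linking authors, articles and topics.
import networkx as nx

g = nx.Graph()
g.add_edge("Author: A. Smith", "Article: Deep tagging", relation="wrote")
g.add_edge("Author: B. Jones", "Article: Deep tagging", relation="wrote")
g.add_edge("Article: Deep tagging", "Topic: machine learning", relation="about")
g.add_edge("Article: Graph search", "Topic: machine learning", relation="about")
g.add_edge("Author: B. Jones", "Article: Graph search", relation="wrote")

# Rank entities by degree centrality as a crude proxy for influence.
for node, score in sorted(nx.degree_centrality(g).items(),
                          key=lambda item: item[1], reverse=True)[:3]:
    print(f"{score:.2f}  {node}")

# Suggest potential co-authors: other authors within two hops of A. Smith.
nearby = nx.single_source_shortest_path_length(g, "Author: A. Smith", cutoff=2)
suggestions = {n for n in nearby if n.startswith("Author:") and n != "Author: A. Smith"}
print(suggestions)  # {'Author: B. Jones'}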

Granular discoverability for text and images: Semantic enrichment

Suppose a researcher wants to interpret many figures associated with a single experiment. Editors have to segment them manually using specialized software—problematic when processing a large number of them. Machine learning can be used to extract sub-figures and captions from compound figures and even separate labels from their associated images, enabling each item to be searched and retrieved individually. Such automation not only reduces the cost of segmentation but also extracts and organizes more valuable information so researchers can search for, compare, and recommend images more precisely and easily.
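
A simplified sketch of the segmentation step is shown below, using basic image processing to find and crop sub-figures from a compound figure. Production systems rely on trained models rather than a plain intensity threshold, and the file names here are placeholders, so treat this as illustrative only.

# Sketch of splitting a compound figure into sub-figures by finding connected
# regions of non-background pixels and cropping their bounding boxes.
import cv2

def extract_subfigures(path, min_area=5000):
    image = cv2.imread(path)
    grey = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    # Treat near-white pixels as background; everything else as figure content.
    _, mask = cv2.threshold(grey, 240, 255, cv2.THRESH_BINARY_INV)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    crops = []
    for contour in contours:
        x, y, w, h = cv2.boundingRect(contour)
        if w * h >= min_area:          # skip small specks and stray labels
            crops.append(image[y:y + h, x:x + w])
    return crops

for i, crop in enumerate(extract_subfigures("compound_figure.png")):
    cv2.imwrite(f"subfigure_{i}.png", crop)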

Search the science, not the text

AI is no longer an aspirational conversation about the future—many of the technologies discussed above are all available today and in use by publishers. By using AI to provide better search results for researchers—and enable publishers to target content more effectively—publishers can deepen researchers’ engagement with their websites, increase the value of their content, and further the pursuit of scientific knowledge by surfacing the information they need more quickly and accurately.


Hong Zhou works on Atypon’s next-generation information discovery technologies. Previously, he was the CTO of Digital Fineprint, a startup that leveraged machine learning algorithms for the insurance industry. He also spent a year designing race car games at Eutechnyx. He holds a PhD in 3D modeling with artificial intelligence algorithms from Aberystwyth University and has published widely on computer science.


Atypon is the proud sponsor of our Awards Dinner at the ALPSP Annual Conference which will take place on 12-14 September this year.


Wednesday, 30 May 2018

Examining Trust and Truth in Scholarly Publishing

In this latest blog, Helen Duriez, from our Professional Development Committee, reflects on how our current webinar series, Trust, Truth and Scholarly Publishing, came together.


Oh, how the world turns. I used to think that Donald Trump running for US president was a fine joke. I used to think there was no way the UK would choose to go it alone when it could be a part of the collective economic might of the European Union. Turns out, the voting public in the US and UK had very different ideas to those of this naïve millennial back in 2016.  

Two years on, it’s become apparent that a large part of the success of these two major political campaigns was their ability to leverage personal belief systems. People are more likely to believe what they read if it aligns with their pre-existing belief system or if it taps into a feeling of existential threat, causing them to disregard evidence to the contrary. Ironically enough, there’s research that backs up this theory, and the concept even has a name – post-truth. You might have heard of it.

Now, what people choose to believe (or not) is tightly interwoven with what we choose to tell them, and how. In scholarly publishing, most of our jobs involve disseminating complex information in one form or another. With research output higher than ever before, there’s a lot of complicated stuff to explain – not just to academics and practitioners, but to the general public as well. Scientists are used to working with ambiguities, although that doesn’t mean they always navigate the rocky terrain of uncertainty safely. And what about lay audiences, who give as much weight to opinions as to facts?

The team at ALPSP felt this topic warranted further exploration, and so a small group of staff and volunteers (I’m one of the latter) have taken it upon ourselves to put together a series of webinars looking at some of the issues and opportunities in scholarly publishing today. Here’s how the series pans out…

Publishing without perishing

In case you missed the first webinar in the Trust, Truth & Scholarly Publishing series, go – sign up and download it. Seriously, do it. Yes, as one of the organisers I may be a little biased, but even knowing what I was about to listen to didn’t stop me from feeling motivated and a little awe-inspired as Richard Horton gave us a passionate, powerful reminder of what early journal publishers set out to achieve, and the obligations we still have to society today. Jason Hoyt follows up with some practical thoughts about how publishers can succeed in a post-truth world.

The reproducibility opportunity

In last week's webinar, now available for download too, and highly recommended,  Catriona Fennell, Rachel Tsui and Chris Chambers explored how the concept of reproducible science represents an opportunity, rather than a threat, when it comes to getting to the truth, the whole truth, and nothing but the truth. The traditional journal publishing model doesn’t have much time for replication studies (not original research, don’t ya know) or registered reports (findings, please!), but things are starting to change…

Public engagement with scholarly research

The process of communicating a new piece of scientific research to the world can sometimes feel a little like a game of Chinese whispers. When the description of a complex concept or process is shortened and reworded in order to reach a new audience, its meaning can change subtly. I’ve seen more than one Twitter spat debating the latest “scientists have found…” health fact, and there are those who have built careers around addressing some of these misrepresentations.

So, what tools can those of us in scholarly communication use to instil trust in our content? In our last webinar we are joined by three industry communicators: Tom Griffin, John Eggleton and Eva Emerson, to find out. You can register here for this final webinar.

For more practical information on the series, including how to get a members’ discount, see here.

Helen Duriez is a Product Manager at Wiley, specialising in digital strategy and planning. With over 12 years’ experience in the publishing industry, Helen has previously worked at the Royal Society, Macmillan and OUP. She gets out of bed for open science and avocado toast.