Tuesday, 13 November 2018

Thinking of reviewing as mentoring

In this blog, Siân Harris shares her personal experiences of being a peer reviewer for Learned Publishing.


Earlier this year I was contacted by Learned Publishing about reviewing a paper. This was an interesting experience for me because although I had been a researcher and then a commentator on scholarly publishing, including peer review, for many years, this was the first time I had done a review myself.

The paper I was invited to review was about publishing from a region outside the dominant geographies of North America and western Europe. Ensuring that scholarly publishing – and, in particular, the research that it disseminates – is genuinely global is something that I am passionate about (in my day job I work for INASP) so I was very happy to take on this review.

There have been plenty of complaints about peer review being provided freely to publishers and rarely recognized as part of an academic’s job description (it’s also not part of my non-academic job). And some researchers can feel bruised when their papers have been handled insensitively by peer reviewers.

On the other hand, there are powerful arguments for doing peer review in the interests of scholarship. What I’d not heard or realised until I did a review myself was how doing peer review is – or should be – a lot like mentoring. Since my time as a (chemistry) researcher I have regularly given others feedback about their papers, books and other written work, most recently as an AuthorAID mentor supporting early-career chemistry researchers in Africa and Asia. I also found, as I did the review, that I was very happy to put my name on it, even after recommending major revisions.

As I read the Learned Publishing paper I found I was reading it with that same mentoring lens and I realised there was an opportunity to help the authors not only to get their paper published but also to explain their research more clearly so that it has greater potential to make a difference. I wanted to encourage them to make their paper better — and to suggest what improvements they could make. Crucially, I didn’t feel like I was doing a review for the publisher; I felt I was doing the review for the authors and for the readers.

As I’ve seen with so many papers before, the paper had some really interesting data but the discussion was incomplete and a bit confusing in places; it felt to me a bit like an ill-fitting jacket for the research results. I made positive comments about the data and suggested things to improve. I hoped that the authors would find my feedback useful and constructive, so I was pleased that they responded quickly and positively.

The second version was much better than the first; a much clearer link was made between the data and the discussion, and answers had been given to many of those intriguing questions that had occurred to me in reading the first draft. We could have left it there, but there were still some residual questions that the paper didn’t address, so in the second round I recommended further (minor) revisions.

Quickly, the third version of the paper came back to me. I know it can be frustrating for authors to keep revising manuscripts but the journey of this paper convinced me that it is worth it. The first version had great data that intrigued me and was very relevant to wider publishing conversations, but the discussion lacked both the connection and context to do the data justice. The second version was a reasonable paper but still had gaps between the data and the discussion that undermined the research. But the third version thrilled me because I realised I was reading something that other researchers would be interested in citing, and that could even be included in policy recommendations made in the authors’ country.

Having reflected on this process during this year's Peer Review Week with its theme of diversity, I am pleased that I read this paper and was able to provide feedback in a way that helped the authors to turn good data into an excellent article. First drafts of papers aren’t always easy to read, especially if the authors are not writing in their native language. Authors can assume that readers will make connections between the results and the conclusions themselves, resulting in some things being inadequately explained. But peer review – and mentoring – can help good research, from anywhere in the world, be communicated more clearly so that it is read, used and can make a difference.

Dr Siân Harris is a Communications Specialist at INASP. 


Friday, 2 November 2018

Why is innovation a challenge for established publishers?


Charles Thiede, CEO of tech startup Zapnito (and former CPO of Nature Publishing Group and CTO of Informa Business Intelligence), explores the theory of the innovator’s dilemma. And what publishers can learn from it.

In a couple of weeks’ time, I’m going to be chairing an event with ALPSP entitled ‘Innovate or Perish? How to think like a startup’. It’s got me thinking about the challenges of innovation within large publishers - challenges I’ve seen from the inside, as well as out.

I asked Zapnito advisor Mark Allin, former CEO of Wiley and a speaker at the event, for his view on how innovative publishing companies are at the moment. On a scale of 1-10, he gave them 3 or 4. That’s not ideal.

So why is this the case? There’s no question that the publishing industry is full of talent and resources. Yet startups are often seen as having the edge when it comes to innovation. The ‘innovator’s dilemma’ - a term coined by Clayton Christensen in 1997 - offers insight into why that is.

The innovator’s dilemma
The problem isn’t necessarily a lack of innovation itself, or of enthusiasm to try new ideas, but the environment in which those new ideas are developed and nurtured.

The value from new innovations isn’t realised immediately. It tends to follow an S-curve. Improving a product takes time and many iterations, and quite often the early iterations provide minimal value. That can prove the sticking point for many businesses.

Ultimately, the primary aim of most established companies is to retain and add value to their existing customer base and current business models. This means new and innovative ideas can be undervalued, because they are applied and tested with existing customers or through existing models, rather than in new markets or through new models.

It also means that if innovative ideas fail to deliver results quickly, they are seen as failing - the ROI is thought to be too low. In this case, management often acts ‘sensibly’, in what they view to be the company’s best fiduciary interests, and rejects continued investment in the innovation.

This is why startups, usually with little or nothing to lose when they enter the market, are so much more successful. They find new markets to apply their innovations, largely by trial and error, at low margins. Their nimbleness and low cost structures allow them to operate sustainably where established companies could not. They don’t have the same responsibilities to, for example, shareholders or existing customers.

At the same time, especially with ‘bootstrapped’ companies, startups must stand on their own two feet. This means that if the initial idea doesn’t work, they can adapt and even pivot their models. We did this at Zapnito early on. In contrast, for an established publisher, the initial idea is often fixed and changing direction means failure.

By finding the right application and market, startups advance rapidly and hit the steep part of the S-curve, eventually entering the more mature markets of the established companies and disrupting them.

What’s the solution?
There is no one way to do innovation. But to me, the most vital change is a change in attitudes. Traditional publishers will need to think outside their traditional business models. Innovation does not need to happen in the context of existing ways of doing business. Too many media companies are organised around delivery models rather than around solutions to a market. That leaves little room for innovation.

There’s also a need to start playing the long game and looking for ways to manage development processes so that it’s okay to change direction, or even to fail.

I also want to challenge the idea of innovation itself. Most people think innovation and invention are synonymous, but innovation does not mean invention. Jeff Bezos did not invent ecommerce. Steve Jobs did not invent smartphones. The innovation happened in the execution of those ideas and in how they were delivered to the market.

There are lots of potential ways for publishers to nurture more innovation within their companies. This could be through mergers and acquisitions (M&A), partnering with disruptive businesses, creating an internal ‘skunkworks’-style structure, or even separating out new innovations into offshoot companies.

These are all ideas I’m looking forward to exploring at the event. Hope to see you there.

The Innovate or Perish? seminar will take place on Thursday 15 November 2018. To find out more or to book your place visit: https://www.alpsp.org/Events/Innovate-or-Perish/60274

Thursday, 18 October 2018

Getting From Word to JATS XML

In this blog, Bill Kasdorf, Principal, Kasdorf & Associates, LLC talks us through a perennial problem and the different approaches to addressing it:


It is a truth universally acknowledged that journal articles need to be in JATS XML but they’re almost always authored in Microsoft Word.

This is not news to anybody reading this. This has been an issue since before JATS existed. Good workflows want XML. So for decades (yes, plural) publishers have been trying to get well structured XML from authors’ manuscripts without having to strip them down to plain text and tag them by hand. (This still happens. I’m not going to include that in my list of strategies because nobody thinks that’s a good idea anymore.)

There are four basic strategies for accomplishing this:
• Dedicated, validating XML editors.
• Editors that emulate or alter MS Word.
• Use Word as-is, converting styles to XML.
• Editors that use Word as-is, with plug-ins.
Here are the pros and cons of these four approaches.


Dedicated, Validating XML Editors


This is the “make the authors do it your way” method. The authors are authoring XML from the get-go. And not just any XML. Not even just any JATS (or whatever XML model). Exactly the specification of JATS that the publisher needs, conforming in every way to the publisher’s or journal’s style guide and technical requirements. This strategy works in controlled authoring situations, like teams developing technical documentation. (They’re probably authoring DITA, not JATS.) Such writers are typically employees of the publisher, and the document structures are exactly the same every day those employees show up to work.

I have never seen this strategy successfully employed in a traditional publishing context, although I have seen it attempted many times. (If anybody knows of a journal publisher doing this successfully, please comment. I’d like to know about it.) This doesn’t work for journals for two main reasons:
1. Authors hate it. They want Word.
2. They have already written the paper before submitting it to the journal. The horse is out of the barn!


Editors that Emulate or Alter MS Word


This always seems like a promising strategy, and it can work when it’s executed well in the right context. The idea is to let authors use Word but make it impossible for them to do things you don’t want them to do (like making a line of body text bold when it should be styled as a heading), either by disabling features in Word like local formatting or by creating a separate application that looks and acts a lot like Word.

I have seen this work in some contexts, but for authoring, I’ve seen it fail more often. The reason is No. 1 above. Despite being a lot like Word, it’s not Word, and authors balk at that. These are often Web-based programs, and authors want to write on a plane or the subway. And there’s always No. 2: most journal articles are written before it’s known which journal is going to publish them.

This strategy can work well, though, after authoring. Copyeditors and production staff can use a structured tool like this more successfully than authors can. We’re seeing these kinds of things proliferate in integrated editorial and production systems like Editoria, developed by the Coko Foundation for the University of California Press, and XEditPro, developed by a vendor, diacriTech.


Use Word As-Is, Converting Styles to XML


This is by far the most common way that Word manuscripts get turned into XML today. A well designed set of paragraph and character styles can be created to express virtually all of the structural components that need to be marked up in JATS for a journal article. This is done with a template, a .dotx file in Word, which, when opened, creates a .docx document with all of the required styles built in. And since modern Word files are XML under the hood, you can work with those files to get the JATS XML you need.
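
To make that concrete, here is a minimal sketch of the approach, assuming a hypothetical template whose style names map onto JATS elements (the style names and mapping below are illustrative, not any publisher’s actual specification). It uses only the Python standard library to open the .docx ZIP archive and read the style name attached to each paragraph:

```python
# Minimal sketch: a .docx file is a ZIP archive whose word/document.xml tags
# each paragraph with a style name. Mapping those style names onto JATS
# elements is the essence of the styles-to-XML approach. The style names and
# mapping here are illustrative assumptions, not any publisher's template.
import zipfile
from xml.etree import ElementTree as ET

W = "{http://schemas.openxmlformats.org/wordprocessingml/2006/main}"

STYLE_TO_JATS = {
    "Title": "article-title",
    "Abstract": "abstract",
    "Head1": "title",   # section heading
}

def paragraphs_with_styles(docx_path):
    """Yield (style_name, text) for each paragraph in the manuscript."""
    with zipfile.ZipFile(docx_path) as z:
        root = ET.fromstring(z.read("word/document.xml"))
    for p in root.iter(W + "p"):
        style_el = p.find(W + "pPr/" + W + "pStyle")
        style = style_el.get(W + "val") if style_el is not None else "Normal"
        text = "".join(t.text or "" for t in p.iter(W + "t"))
        yield style, text

for style, text in paragraphs_with_styles("manuscript.docx"):
    jats_tag = STYLE_TO_JATS.get(style, "p")   # unmapped styles become <p>
    print(f"<{jats_tag}>{text}</{jats_tag}>")
```

Real converters handle far more (tables, citations, local formatting, cleanup), but the core pattern of reading the style and emitting the corresponding element is the same.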

The question is who does the styling, and how well it gets done.

Publishers are sometimes eager to give these templates to their authors so they can either write or, post-authoring, style their manuscripts according to the publisher’s requirements. Good luck with that. The problem is that it’s too easy to do it wrong. Use the wrong style. Use local formatting (see above). Put in other things that need to be cleaned up, like extra spaces and carriage returns. Somebody downstream has to fix these things.

Those people downstream tend to be trained professionals, and it’s usually best just to let them do the styling in the first place. This is how most JATS XML starts out these days: as professionally styled Word files. Many prepress vendors have trained staff take raw Word manuscripts and style them, often augmented by programmatic processing to reduce the manual work. These systems, which the vendors have usually developed in-house, also typically do a “pre-edit,” cleaning up the manuscript of many of those nasty inconsistencies programmatically to save the copyeditor work.

This is also at the heart of what I would consider the best in class of such programs, Inera’s eXtyles. Typically, a person or people on the publisher’s staff are trained to properly style accepted manuscripts; eXtyles provides features that make this easier to do than just using Word’s Styles menu. Then it goes to town, doing lots of processing of the resulting file based on under-the-hood XML. It’s primarily an editorial tool, not just a convert-to-XML tool.


Use Word As-Is, With Plug-Ins


This is not necessarily the same as the previous category, but there’s an overlap: eXtyles is a plug-in for Word, and the resulting styled Word files can just be opened up in Word without the plug-in by a copyeditor or author. But that approach still depends on somebody having styled the manuscript, and subsequent folks not having messed up the styling. It also presents the copyeditor (and then usually the author, who reviews the copyedits) with a manuscript that doesn’t look like the one the author submitted in the first place.

This tends to make authors suspicious—what else might have been changed?—and suspicious authors are more likely to futz. That’s why in those workflows it’s important to use Tracked Changes, though some authors realize that Tracked Changes can be turned on and off by the copyeditor so as not to track every little punctuation correction that’s non-negotiable anyway.

An approach that I have just recently come to appreciate is what Ictect uses. This approach is not dependent on styles. As much as I’ve been an advocate of styles for years, this is actually a good thing. Styles are the result of human judgment and attention. When done by trained professionals, that’s pretty much okay. But on raw author manuscripts—not.

Ictect uses artificial intelligence to derive the XML not from the appearance of the article, which is unreliable, but from the content. Stop and think about that a minute. Whereas authors are sloppy or incompetent in getting the formatting right, they are pretty darn obsessive about getting the content right. That’s their paper.
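
As a toy illustration of what content-based (rather than appearance-based) recognition means, consider how a reference entry can be identified purely by its content pattern, however it happens to be formatted. This is a deliberate simplification with an invented pattern and sample line; Ictect’s actual AI is proprietary and far more sophisticated:

```python
# Toy illustration of content-based tagging: a reference entry is recognised
# by its content pattern (authors, year, journal, volume, pages), not by its
# formatting. The pattern and sample line are invented for illustration.
import re

REFERENCE_PATTERN = re.compile(
    r"(?P<authors>.+?)\s+\((?P<year>\d{4})\)\.\s+"  # Smith, J. (2018).
    r"(?P<title>.+?)\.\s+"                           # Article title.
    r"(?P<journal>.+?),\s+"                          # Journal name,
    r"(?P<volume>\d+),\s+(?P<pages>\d+-\d+)"         # 31, 123-130
)

line = "Smith, J. (2018). On peer review. Learned Publishing, 31, 123-130"
m = REFERENCE_PATTERN.search(line)
if m:
    # Emit (simplified) JATS reference markup from the recognised content
    print(f"<ref><person-group>{m['authors']}</person-group> "
          f"<year>{m['year']}</year> <source>{m['journal']}</source></ref>")
```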

Speaking of which, in addition to not changing the formatting the author submitted, Ictect doesn’t change the content either. The JATS XML is now embedded in that Word file, but you only see that if you’re using the Ictect software. After processing by Ictect, the document is always a Word document and it is always a JATS document. To an author or a copyeditor it just looks like the original Word file. This inspires trust.

I was initially skeptical about this. But it actually works. Given a publisher’s style requirements and a sufficiently representative set of raw author manuscripts, Ictect can be set up to do a shockingly accurate job of generating JATS from raw author manuscripts. In seconds. Nobody plowing through the manuscripts to style them.

There have been tests done by large STM publishers that have demonstrated that Ictect typically produces fully correct, richly tagged JATS for over half of the raw Word manuscript files submitted by authors, and over 90% of manuscripts can be perfected in less than ten minutes by non-technical staff like production editors. The Ictect software highlights the issues and makes it easy for publishing staff to see what the problem is in the Word file and fix it. That’s because the errors aren’t styling errors, they’re content errors. They have to be fixed no matter what.

In case you think this is simplistic or dumbed-down JATS XML, nope. I’m talking about fully expressed, granular JATS, with its metadata header and all the body markup and even granularly tagged references that enable Crossref and PubMed processing. Not just good-enough JATS.

Microsoft Office 365 is not exactly a new kid on the block now, but journal publishers have not made much use of it. As things evolve naturally, more and more authors are going to use Office 365 for peer review, quick editing, corrections and even for full article writing. Since Ictect software creates a richly tagged Word document that can be edited using Office 365, it opens up some interesting workflow automation and collaboration possibilities, especially for large scale publishing.

And if you need consistently styled Word files, no problem. Because you’ve got that rich JATS markup, a styled file can be generated automatically in seconds: for example, in a consistent format for copyediting (I would strongly recommend that), or in a format modeled on the final published article. Authors also really like to see that at an early stage. It’s an unavoidable psychological truism that when an author sees an article in published form she notices things she hadn’t noticed in her manuscript. So you can do both: return the manuscript in its original form, and provide a PDF from the styled Word file to emulate the final layout.

All of the methods I’ve discussed in this blog have a place in the ecosystem, in the right context. I haven’t mentioned a product that I wouldn’t recommend in the right situation. For example, you might initially view Ictect as a competitor of eXtyles and those home-grown programs the prepress vendors use. It’s not. It belongs upstream of them. It’s a way to get really well tagged JATS from raw author manuscripts to facilitate the use of editorial tools, without requiring manual styling. It’s the beginning of an Intelligent Content Workflow. It’s a very interesting development.

Bill Kasdorf is Principal of Kasdorf & Associates, LLC, a consultancy specializing in accessibility, XML/HTML/EPUB modeling, editorial and production workflows, and standards alignment. He is a founding partner of Publishing Technology Partners.

Website: https://pubtechpartners.com/

Twitter: @BillKasdorf


To find out further information on Ictect visit: http://www.ictect.com/ 

or register for one of their free monthly webinars at: http://www.ictect.com/journal-webinars




Monday, 8 October 2018

2018 ALPSP Conference Report - From Adventures in Publishing to #MeToo

In this blog, Alastair Horne, Press Futurist and social media correspondent at this year's ALPSP Conference reports on a packed few days in Windsor hearing from the scholarly publishing community.


This year’s conference once again offered a range of perspectives from across the scholarly publishing ecosystem on the key issues that affect us.

[Photo: Keynote speaker Professor Chris Jackson]
Thursday’s opening keynote was given by Professor Chris Jackson, who shared his own experiences as a researcher who has engaged deeply with the industry, publishing more than 150 articles, acting as editor for three journals, and co-founding the EarthArXiv preprint server. In a wide-ranging talk, Jackson offered some advice for publishers drawn from his experience: to be transparent about APC pricing; to offer strongly reduced APCs to early career researchers in order to build an affinity with new authors; and to be clear about their views on metrics. On open access, though generally enthusiastic, he suggested that Plan S had caused concerns among academics and might create challenges for societies who relied on income from subscription or hybrid journals to fund their other activities.

Open access was, inevitably, a theme that persisted throughout the conference. The panel that followed Jackson’s talk asked how societies and publishers should ‘accelerate the transition’. Kamran Naim shared details of the ‘subscribe to open’ model used by non-profit publisher Annual Reviews, which addressed the twin problems of library policies on ‘donations’ often preventing the support of open access initiatives, and the fact that APCs don’t work for journals that publish invited contributions from scholars, rather than receiving submissions. Their ‘Subscribe to Open’ model, which bears some similarities to Knowledge Unlatched’s, sees libraries receive a discount on their journal subscriptions if they choose to participate in unlocking initiatives: if enough do so, then that volume’s issues of the journal become available through open access; if not, then only subscribing institutions have access. Naim’s fellow panellist Steven Hill, Director of Research at Research England, and architect of the new REF, insisted that the new requirement for open access monographs would not mandate any particular model. His position was strongly challenged, though, by the panel’s third speaker, Goldsmiths Press’ Sarah Kember, who asked why the transition to open access for monographs was happening at all, and called for a deceleration to allow time for more consideration of differences across the sector. Plan S, she suggested, totally disregarded the humanities and monographs, and posed a considerable threat to academic freedom by restricting where researchers could publish.

[Photo: Panel debate on open access]
The following day, a further session considered the impact of open access on library sales, strategies, and solutions, as library directors from Europe and the US shared some insights into their institutions’ recent cancellations of big deals. Wilhelm Widmark, Library Director of Stockholm University, suggested that the Swedish universities’ decision to reject what he described as a ‘good’ proposed deal with Elsevier was because it didn’t offer a sustainable route to full open access; the money saved is being redirected towards fully open access journals. Jean-François Lutz, Head of the Digital Library at the University of Lorraine, and Adrian Alexander, Dean of the Library at the University of Tulsa, added that their own institutions’ decisions to cancel some of their big deal contracts were prompted by budget constraints and unsustainable pricing increases.

Friday’s opening session considered another increasingly hot topic: customer data. Chris Leonard from Emerald shared insights from their work in mapping user journeys in accessing their content, and one key finding – that though a high proportion of people who visit their site discover it through Google, the majority of those people don’t have institutional access and so leave; people who come to the site via library discovery services are far more likely to continue their journey further. Lettie Conrad of Maverick Consulting spoke of the wealth of data available to publishers, both internal – customer service records, sales reports, customer data, market research findings, product testing and user studies – and external – competitor analysis, discovery journeys, and usage analytics. Transforming such data into usable information required strategic thinking and some investment, she suggested, but it wasn’t rocket science. The third panel member, David Hutcheson, told how BMJ had developed a strategy for using data to inform their decisions, drive user engagement and deepen user understanding. Working with consultants and stakeholders to create an overall plan, they started by deepening their understanding of their existing technology and resources and testing them to see what worked. Integrating their different platforms to connect their data, and developing partnerships with suppliers, the BMJ set up a small six-person data team to serve as a specialist centre of excellence, supporting the rest of the business, automating processes and delivering self-service reporting to enable and empower colleagues to make use of the data produced.

The parallel sessions offered the usual dilemma of which to attend, and though there’s too little space to describe them all here, a personal highlight was a fascinating panel on the digital humanities. Peter Berkery of the Association of University Presses, Paul Spence of King’s College London, and Etienne Posthumus of Brill all discussed recent experiments in finding modes of publishing that would support the complex needs of this growing sector. Spence spoke of the need to fix a common terminology for the different types of publications produced, while Berkery talked through four marquee digital projects by university presses: Rotunda at Virginia, Manifold at Minnesota, Fulcrum at Michigan, and .supDigital at Stanford; Posthumus spoke on Brill’s own initiatives in labs and data.

Revenues from rights formed the focus of the day’s final session, sponsored by Publishers’ Licensing Services: Rebecca Cook of Wiley emphasised the need for thorough documentation governing what can be done with content, while Clare Hodder urged publishers to invest in metadata.

[Photo: Code Ocean wins the 2018 ALPSP Award for Innovation in Publishing]
Then, at the evening’s gala dinner, the winners of two prestigious ALPSP Awards were announced: Richard Fisher was honoured for his Contribution to Scholarly Publishing over a long career at Cambridge University Press and in a retirement busier than many people’s main careers; then the cloud-based computational reproducibility platform Code Ocean was named the winner of the ALPSP Award for Innovation in Publishing.

The final day of the conference was dominated by ethical questions. Professor Graham Crow of the University of Edinburgh explored issues in research and publishing ethics, before the closing panel session addressed ‘The #MeToo Era in Academic Publishing: Tackling harassment and the roots of gender bias’. Femi Otitoju of the Challenge Consultancy shared some lessons drawn from thirty years of experience working in this area, emphasising the need to create the right working culture by focusing on positive outcomes rather than problems – having a ‘dignity at work’ policy rather than one on harassment, for instance – and prominently highlighting such policies through posters rather than pages buried on the company intranet. Karen Phillips of SAGE spoke of the need for publishers to learn from each other, while Eric Merkel-Sobotta of De Gruyter emphasised the importance of economic arguments in convincing management of the need to address such problems. Dr Afroditi Pina shared the results of her research into sexual harassment and successful strategies for addressing it: the need to agree appropriate sanctions for unacceptable behaviour, the role that public apologies can play in such sanctions, and the importance of listening un-defensively to those reporting harassment.

[Photo: The Beaumont Estate]

If you would like to hear more about this year's ALPSP Conference, you can find video footage, audio and speaker presentations at:


https://www.alpsp.org/2018-Programme


The ALPSP Conference and Awards 2019 will be held at Beaumont Estate, Old Windsor, UK on 11-13 September. Please save the date!



Wednesday, 5 September 2018

Spotlight on Annotation for Transparent Inquiry - finalist for 2018 ALPSP Awards for Innovation in Publishing

On 13 September we will be announcing the winner of the 2018 ALPSP Awards for Innovation in Publishing, sponsored by MPS Limited, at the annual ALPSP Conference.  In this series of posts leading up to the Awards ceremony we meet our six finalists and get to know a bit more about them.


In this blog, we speak to Nisha Doshi, Senior Digital Development Publisher at Cambridge University Press, Heather Staines, Director of Partnerships at Hypothesis and Sebastian Karcher, Associate Director of the Qualitative Data Repository.


Tell us a bit about your company


One of the things that makes Annotation for Transparent Inquiry unique is that it isn’t the product of one company but the result of a collaboration between three non-profit, mission-driven organizations: Cambridge University Press, Hypothesis and the Qualitative Data Repository (QDR).

Cambridge University Press dates from 1534 and is part of the University of Cambridge; our mission is to unlock people's potential with the best learning and research solutions and we published the first articles that make use of Annotation for Transparent Inquiry (ATI). 

Hypothesis is a non-profit open source technology company, and they provide the annotation tool that powers ATI. 

QDR is a domain repository dedicated to curating, preserving and publishing the data underlying qualitative and multi-method research in the health and social sciences. 


What is the project/product that you submitted for the Awards?


We submitted Annotation for Transparent Inquiry (ATI), which creates a digital overlay on top of content on publisher web pages and connects specific passages of text to author-generated annotations. The ATI annotations include ‘analytic notes’ discussing data generation and analysis, excerpts from data sources, and links to those sources stored in trusted digital repositories. These data sources can be interview transcripts, audio clips, scanned telegrams, maps and so forth – all sorts of different types of material which wouldn’t usually be accessible to the reader. Readers are able to view annotations immediately alongside the main text, removing the need to jump to footnotes or separate appendices.


Tell us more about how it works and the team behind it


A passage of the article is highlighted to indicate there’s an annotation, and the annotations are displayed as a collapsible right-hand panel alongside the content. Each annotation created by the author generates a unique persistent web address for the details and analysis shared with the reader. The passage in the publication is linked to the source material, which is archived by QDR – a trusted digital repository. Readers can also shift into an Activity Page where they can view, search, and filter all of the annotations created on the project. From this page, researchers can explore other portions of the content as well as connected resources.
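
For a sense of what such an annotation can look like under the hood, here is a sketch shaped like the open W3C Web Annotation model, which Hypothesis implements. All field values below are invented for illustration; the real annotations are authored through the Hypothesis tooling and link to QDR-archived sources:

```python
# Illustrative sketch of a W3C Web Annotation payload of the kind that could
# back an ATI annotation. The URLs and text are invented placeholders.
import json

annotation = {
    "@context": "http://www.w3.org/ns/anno.jsonld",
    "type": "Annotation",
    "body": [{
        "type": "TextualBody",
        "purpose": "commenting",
        # The author-generated 'analytic note', with a link to the archived source
        "value": "Analytic note: interview conducted in 2016; transcript "
                 "archived at https://data.example.org/qdr/source-123",
    }],
    "target": {
        "source": "https://www.cambridge.org/core/example-article",
        "selector": {
            "type": "TextQuoteSelector",  # anchors the note to a text passage
            "exact": "the highlighted passage of the article",
        },
    },
}

# Each stored annotation is served at its own persistent web address
print(json.dumps(annotation, indent=2))
```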




ATI has been a collaborative effort both within and between our three organisations. At Cambridge University Press, launch of ATI has involved colleagues in editorial, digital publishing, the Cambridge Core platform team and marketing, to name but a few. 

At Hypothesis, with our dedication to supporting researcher workflow through open annotation, input and technical expertise has come from partnerships, marketing, and product development.

And at QDR, transparency in qualitative research is central to our mission and ATI involves the whole team. QDR is run by active researchers, who conceptualized ATI based on ongoing debates in qualitative methods. The repository’s curators provide advice and support to researchers interested in using QDR.


Why do you think it demonstrates publishing innovation?


Recent years have seen significant advances in transparency and reproducibility for quantitative analyses, but progress has been much slower for the qualitative analyses central to so much research. ATI brings transparency to qualitative research. It allows readers to interrogate qualitative sources in a way that has not hitherto been possible without, for example, travelling to archives or museums to access the original material themselves. It also allows readers to understand authors’ analytic processes in depth, verify their evidence and thus properly evaluate their findings. By utilizing an annotation layer, authors are no longer constrained by word limits and can elaborate on aspects of the project which are important to them, providing rich media and additional links as needed.


What are your plans for the future?


We originally launched ATI in April 2018 with eight articles published by Cambridge University Press, followed by a 9th article published in May. We are now working to integrate ATI with books published by Cambridge, as well as material from other publishers and preprint servers. Although ATI was launched by Cambridge University Press, Hypothesis and QDR, it makes use of open standards and open source technology and the aspiration is that it can go on to be used by different publishers, different annotation tools and/or different data repositories. For example, a further eight articles with annotations on five other publishing platforms are pending publication. The founding partners of ATI are also exploring how best to embed ATI upstream in the research and authoring process.

Lastly (for now), to further promote ATI and explore how authors will conduct research and write with these annotations in mind, QDR launched the “ATI Challenge”. The winners receive an honorarium to help them finalize their manuscript with ATI annotations and, from our point of view, working with those authors in a variety of disciplines and understanding how they want to use ATI will help us further improve workflows, instructions and technology. QDR received more than 80 applications across disciplines in the humanities, social sciences and STM and from across five continents and announced the winning proposals in early August. We believe that the wealth and quality of applications to the ATI challenge shows that Annotation for Transparent Inquiry really does serve a need recognized by qualitative researchers worldwide.




Nisha Doshi is Senior Digital Development Publisher at Cambridge University Press, where she leads the digital publishing team across academic books and journals.

@CambridgeUP
@nishadoshi







Heather Staines is Director of Partnerships at Hypothesis, working with publishers, platforms and technology companies to integrate annotation into their workflow.

@heatherstaines
@hypothes_is





Sebastian Karcher is the Associate Director of the Qualitative Data Repository, where his work focuses on data curation and technological strategy.

@adam42smith
@qdrepository






https://qdr.syr.edu/ati
https://www.cambridge.org/core/services/authors/annotation-for-transparent-inquiry-ati
https://qdr.syr.edu/ati/ati-challenge

The ALPSP Annual Conference and Awards 2018 will be held at the Beaumont Estate, Old Windsor, UK from 12-14 September. #alpsp18

Friday, 31 August 2018

Spotlight on IP Intrusion - shortlisted for the 2018 ALPSP Awards for Innovation in Publishing

On 13 September we will be announcing the winner of the 2018 ALPSP Awards for Innovation in Publishing, sponsored by MPS Limited, at the ALPSP Annual Conference.  In this series of posts leading up to the Awards ceremony we meet our six finalists and get to know a bit more about them.



In this post, we speak to Andrew Pitts, a partner and co-founder of PSI Ltd.



Tell us a bit about your company


PSI is the brainchild of two veterans of the publishing world who, between them, hold over 40 years of STM publishing experience. It was while working for major publishers that they recognised a number of problems facing the industry that could only be overcome through communication, collaboration and innovation. PSI is an independent company. Through our work to enable publishers, membership societies, and libraries to work together securely and confidentially towards common goals, PSI has found itself in a unique position to encourage collaboration across the academic research community.

PSI is the developer of both IPregistry.org and IP-intrusion.org, as well as the STM-endorsed Due Diligence Bureau. With IPregistry.org, publishers and libraries can save time and streamline processes, eliminate errors, improve the reliability of usage metrics and ensure the right content is accessible to the right users. With IP-intrusion.org, publishers, and soon libraries, can join the community-driven fight against cybercrime. With the Due Diligence Bureau, publishers can demonstrate compliance and protect their reputation.

What is the project/product that you submitted for the Awards?


The product we submitted for the innovation award is our IP Intrusion Service database. PSI’s IP Intrusion Service is a community liaison hub where publishers and libraries can exchange information and warn each other about threats via IP-intrusion.org in real time.


Tell us more about how it works and the team behind it


PSI’s IP Intrusion Database is a new tool that will help both publishers and libraries by preventing intrusions into their secure IT systems. This new service will help protect publishers’ copyrighted content, universities’ intellectual property, and researchers’ personal data and identities. With the IP Intrusion Service, the academic research community can work together to fight hackers, spammers, password crackers and scrapers and, most importantly, to combat piracy and spear-phishing attacks. The PSI IP Intrusion Database exposes cyber-crime across the academic research community, with the added benefit of reducing service interruptions resulting from these malicious threats.

The PSI IP Intrusion Database was developed by PSI Ltd in collaboration with the AAP (Association of American Publishers) and a number of major STM publishers including IEEE, AMA, AWS, Taylor and Francis, SAE, Elsevier, ASME and ASTM. The service has been built using software developed by technology partner Adactus Ltd.


Why do you think it demonstrates publishing innovation?


PSI IP Intrusion Database allows publishers and libraries to share detailed information about intrusions so they can protect themselves from future threats. Each day, users of the service receive details of the latest intrusions across the industry, allowing them to block the attacks at source.

[Graphic: IP Intrusion threat notifications]

As well as providing users with a block list, the database also cross-references activity against the 1.5 billion verified IP addresses held within the IPregistry.org for over 60,000 academic institutions, in effect the most comprehensive “white list” of verified institutional data available. This information is used to alert publishers when attacks are being made via a legitimate organisation, allowing them to treat these intrusions differently, for example, by reaching out to the library before blocking access. Libraries can then be asked to review logs to determine the level of intrusion, alert users that credentials have been compromised and determine what may have been stolen. The IP-intrusion steering group will be able to assess all attacks on a regular basis in order to evaluate potential opportunities for group action.
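
To illustrate the triage described above, here is a minimal sketch using only the Python standard library. The institutional ranges and addresses are invented placeholders, not data from the registry:

```python
# Minimal sketch of intrusion triage: check an intruding IP against a
# "white list" of verified institutional ranges before deciding how to
# respond. Ranges and addresses below are illustrative placeholders only.
import ipaddress

# Hypothetical verified ranges (the real registry holds ~1.5 billion IPs)
VERIFIED_RANGES = {
    "Example University": ipaddress.ip_network("192.0.2.0/24"),
    "Sample Institute": ipaddress.ip_network("198.51.100.0/24"),
}

def triage_intrusion(ip_str):
    """Classify an intruding IP: alert the institution, or block outright."""
    ip = ipaddress.ip_address(ip_str)
    for institution, network in VERIFIED_RANGES.items():
        if ip in network:
            # Legitimate organisation: reach out to the library before blocking
            return f"alert {institution}: credentials may be compromised"
    return "add to shared block list"

print(triage_intrusion("192.0.2.57"))   # -> alert Example University ...
print(triage_intrusion("203.0.113.9"))  # -> add to shared block list
```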



Libraries using the service are alerted in real time to breaches involving their IT systems. They will be prompted to investigate the real source IP of the intrusion and to share these details so that other libraries and institutions can block cyber-attacks from these sources before they even happen.

What are your plans for the future?


The PSI IP Intrusion Database offers the academic research community a unique opportunity to work together to fight all forms of cyber-crime. Every intrusion reported within the database will provide the academic community with an opportunity to fight back. Having the IP-intrusion steering group evaluate all attacks on a regular basis will enable the community to take group action much faster. This will give the industry an opportunity to plan ahead and try to stay one step ahead of the cyber-criminals. The consequences of not protecting published content, intellectual property, and personal data do not bear thinking about.


Andrew Pitts is a partner and co-founder of PSI Ltd. He has worked in the field of STM publishing for over 20 years and has worked closely with most of the major STM publishers during that time.

Websites:

Twitter: @PubSolutionsInt, @ip_registry


The ALPSP Annual Conference and Awards 2018 will be held at the Beaumont Estate, Old Windsor, UK from 12-14 September. #alpsp18

Tuesday, 28 August 2018

Spotlight on Kopernio - shortlisted for the 2018 ALPSP Awards for Innovation in Publishing

On 13 September we will be announcing the winner of the 2018 ALPSP Awards for Innovation in Publishing, sponsored by MPS Limited, at the ALPSP Annual Conference.  In this series of posts leading up to the Awards ceremony we meet our six finalists and get to know a bit more about them.


In this blog, we speak to Ben Kaube, co-founder and Managing Director of Kopernio




Tell us a bit about your company. 


Kopernio is the brainchild of Jan Reichelt and myself, the co-founders of Mendeley and Newsflo, respectively. The idea for Kopernio came to me whilst I was writing up my PhD. I was working at home and in cafes and was growing frustrated at having to log into various platforms over and over again in order to access papers.

On the one hand, I could easily play any song I wanted on Spotify; on the other, I found myself with 40 browser tabs open to access all the papers I needed. It occurred to me that this was a wider problem – in fact some 10 million researchers worldwide endeavor to access 2.5 billion journal articles each year. It was then that we started thinking about how much more convenient it would be if there was a tool to access papers with a single click, wherever you are.

Kopernio does just this. It alleviates the hassle researchers currently have “hunting” for journal-article PDFs across the web. Kopernio eliminates the frustrating clicking, link chasing, and waiting on redirects and re-authentication delays that researchers currently face, and allows academics to spend more time focusing on their research. 


What is the project or product you submitted for the awards? 


Kopernio gives researchers one-click, legal access to journal articles anywhere and anytime. Kopernio’s aim is to create the definitive publisher-neutral platform for accessing research for scientific researchers, publishers and institutions worldwide. Through Kopernio, publishers are able to deliver their content, both subscription and OA, direct to researchers wherever they happen to be. Our vision is to dramatically improve and facilitate access to scientific knowledge.


Tell us more about how it works and the team behind it. 


Despite the millions invested by libraries in content and discovery solutions, journal articles are often not conveniently available to researchers at the point of need. As such, researchers are often forced to follow circuitous and time-consuming routes to access journal articles, and often don’t end up with the publisher’s version of record. This problem is compounded by the fact that typical researchers use 20 or more different online platforms each month as part of their literature discovery and access workflows.

Not only do these barriers waste time and cause frustration, they are stifling the pace of scientific innovation. If 10 million researchers spend an hour per year (a very conservative estimate!) trying to navigate clunky paywalls and university login pages just to read a few articles, it equates to 10 million hours (or 416,667 days) per year of wasted time that could be better used in actually conducting research.

Rather than trying to funnel users into “yet another destination site,” Kopernio enhances established workflows and travels with the researcher as they search and discover journal articles across 20,000 online platforms including discovery platforms, repositories and even scholarly collaboration networks.


[Video: Kopernio co-founder and new Managing Director for Clarivate’s Web of Science, Jan Reichelt, discusses how Kopernio helps solve a major pain point for 10 million researchers globally.]

Kopernio integrates with existing institutional authentication systems to surface subscription content at the point of need for the researcher. This is done in a consistent and convenient user experience across many different platforms, and both on- and off-campus.

In April 2018 Kopernio was acquired by Clarivate Analytics, a data and analytics company with a desire to collaborate and solve problems across the research ecosystem.


Why do you think it demonstrates publishing innovation? 


Kopernio automatically detects institutional subscriptions a user already possesses and facilitates one-click access to these, giving convenient access to the publisher’s version of record across 20,000 platforms.
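
As a purely illustrative sketch of what a one-click resolution chain of this general shape might look like, the toy code below tries a series of sources in order and returns the first hit. The resolver functions are hypothetical stand-ins, not Kopernio’s actual (proprietary) logic:

```python
# Toy sketch of a one-click access fallback chain. The resolvers below are
# hypothetical stand-ins; real entitlement detection is far more involved.
from typing import Callable, List, Optional

def via_institutional_subscription(doi: str) -> Optional[str]:
    # Hypothetical: would check the user's institutional entitlements first,
    # returning the publisher's version of record when available.
    return None

def via_open_access_copy(doi: str) -> Optional[str]:
    # Hypothetical: would look for a legal open access copy in a repository.
    return "https://repository.example.org/" + doi + ".pdf"

RESOLVERS: List[Callable[[str], Optional[str]]] = [
    via_institutional_subscription,  # version of record preferred
    via_open_access_copy,            # then a legal OA copy
]

def one_click_pdf(doi: str) -> Optional[str]:
    """Return the first PDF link any resolver can supply, or None."""
    for resolve in RESOLVERS:
        url = resolve(doi)
        if url is not None:
            return url
    return None

print(one_click_pdf("10.1234/example"))  # -> the example repository URL
```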


[Video: Kopernio co-founder and new Managing Director for Clarivate’s Web of Science, Jan Reichelt, on Kopernio’s benefit for publishers.]

This is useful for researchers, who benefit from having professionally typeset, citable articles which are sure to contain all corrections from peer review (which is important for reproducibility). Publishers see increased utilisation of content (both OA and subscription) and can identify and meet new demand for their content. Both libraries and publishers can better understand journal usage and how the content needs of researchers can best be served. 


What are your plans for the future? 

Kopernio complements Clarivate's existing digital product portfolio, which is used by millions of researchers. The scale, reach and unique data at the heart of Web of Science, combined with the Kopernio researcher-facing platform, will allow us to build novel tools which we hope will delight researchers and support them as they work on some of society’s most important problems.

The future will see investment in and scaling of Kopernio, and its integration with other Clarivate products and services. This will enable us to continue building tools which we hope will delight researchers, and to develop the business by building out commercial offerings for publishers and academic institutions.


Ben Kaube is a co-founder and the Managing Director of Kopernio, acquired by Clarivate Analytics in 2018. He obtained his PhD in computational physics from Imperial College London in 2017 and previously co-founded the research analytics company Newsflo.


Website: https://kopernio.com
Twitter: @kopernio @clarivate 

The ALPSP Annual Conference and Awards 2018 will be held at the Beaumont Estate, Old Windsor, UK from 12-14 September. #alpsp18

Friday, 24 August 2018

Spotlight on JPPS - shortlisted for the 2018 ALPSP Awards for Innovation in Publishing

On 13 September, at the ALPSP Conference, we will be announcing the winners of the 2018 ALPSP Awards for Innovation in Publishing, sponsored by MPS Limited.  In this series of posts leading up to the Awards ceremony, we meet our six finalists and get to know a bit more about them.

In this post, we speak to Sioux Cumming of INASP, and Susan Murray of African Journals Online (AJOL) about Journal Publishing Practices and Standards (JPPS).


Tell us a bit about your companies


INASP is an international development organisation with over 25 years’ experience of working with a global network of partners in Africa, Latin America and Asia. Research and knowledge have a crucial role to play in addressing global challenges. Many of these challenges affect the Global South most acutely, but we believe that these challenges will not be addressed without Southern research and knowledge.

To realise this potential, we strengthen research and knowledge systems by addressing issues of power and supporting individuals and institutions to produce, share and use research. Broadly speaking, we have six areas of work: academic publishing; evidence for policy; gender & equity; higher education & learning; information access; and research communication.

African Journals Online (AJOL) was the first JOL platform and has been managed by a South African non-profit organisation of the same name since 2005. AJOL provides a highly visible online library of African-published, peer-reviewed scholarly journals, allowing global access to the research output of the continent. AJOL also works with journal partners throughout Africa to facilitate their capacity building in publishing best practices, and provides various technical services that many journals might not be able to afford or implement on their own. 

What is the project/product that you submitted for the Awards?


Journal Publishing Practices and Standards (JPPS) was established – and is managed – by AJOL and INASP to provide detailed assessment criteria for the quality of publishing practices of Global South journals.
Northern journals dominate global research, leading to an underrepresentation of knowledge from the South. Championing Southern journals is essential for redressing imbalance in the dissemination of global research.
JPPS levels give readers assurance that the journals meet an internationally recognised set of criteria. The detailed feedback from the JPPS assessment helps editors identify ways to improve their publishing practices and standards.

The initiative and its first awards have been widely welcomed by Southern journal editors and have already prompted significant improvements. 

Tell us more about how it works and the team behind it

[Image: JPPS badge]

The JPPS assessment process evaluates journals on the JOLs platforms against 108 internationally accepted criteria. The result is one of six badges that are displayed on the official JPPS site and on the JOL platforms. These badges give guidance and reassurance to researchers as they are choosing which journals to read, cite and publish in.

But JPPS goes further than this: the other output of the assessment process is a detailed, customised report for each journal editor highlighting the areas of journal publishing that could be improved. This feedback is supported by the Handbook for Journal Editors, which gives practical guidance on things like how to run editorial processes, communicate with authors, and improve a peer review system.

Why do you think it demonstrates publishing innovation?


JPPS has several important, unique features. Firstly and crucially, JPPS isn’t just a metric; it’s also a framework to help improve quality. The aim of JPPS is to increase equity in global publishing by recognising and supporting legitimate journals. The detailed reports from JPPS are intended not only to highlight what is missing but also to help journals to improve.

It was developed in consultation with journal editors in Africa; it recognises the contexts that these editors operate in and provides support and guidance appropriate to those contexts. However, although the focus is on the Global South, the standards expected in JPPS are global ones; journals awarded JPPS stars will have publishing standards and processes similar to other journals around the world.

What are your plans for the future?


On the technical side, we are working towards an online form (and database) to streamline the assessment process. This would be a tool that new journals could use in applying to join a JOL platform and also that journals already on the platforms could use in their applications for reassessment.

Extensions to JPPS might also include going beyond the JOLs platforms in partnership with other journal platforms. In addition, we hope to roll out a full online course in journal quality following feedback and refinement from a recent pilot.

AJOL and INASP have been grateful for funding and encouragement from Sida and DFID over many years to support the development of the JOLs platforms and, more recently, the JPPS initiative. We are pleased to have continued support from Sida for this work over the next year but are also keen to discuss other funding opportunities to extend this work. 


Sioux Cumming (left in photo) has worked on and managed INASP’s Journals Online project since 2003 and has helped establish and maintain eight JOLs platforms. She has also been instrumental in bringing international standards and initiatives such as DOIs, eISSNs, the anti-plagiarism software CrossCheck, article-level metrics and Kudos to the journals. In collaboration with African Journals Online, she helped to develop and is implementing the Journal Publishing Practices and Standards (JPPS) framework to help journals improve their publishing quality.


[Photo: Sioux Cumming and Susan Murray]

Susan Murray (right in photo) is Executive Director of African Journals OnLine (AJOL). AJOL is a South African based non-profit organization working toward increased visibility and quality of African-published research journals. AJOL hosts the world’s largest online collection of peer-reviewed, African-published scholarly journals and is a sponsoring member of CrossRef. Ms Murray is also a member of the Advisory Committee of the Directory of Open Access Journals (DOAJ) and a member of the Advisory Board for the Public Knowledge Project’s current study on Open Access Publishing Cooperatives. She has an abiding interest in the role that access to research outputs can play in economic development in low income and emerging economies, as well as the practicalities of attaining this.

Website: www.journalquality.info

Twitter: @INASPinfo @AJOLinfo

The ALPSP Annual Conference and Awards 2018 will be held at the Beaumont Estate, Old Windsor, UK from 12-14 September. #alpsp18