Tuesday, 2 July 2019

The Marketer's Tool Kit: Leveraging social media - three steps to move beyond broadcasting


We spoke to Emma Watkins, Marketing Manager at IOP Publishing and co-tutor on our Effective Journals Marketing training course, about how to get the most out of social media. Here's what she said.


It’s over ten years since Facebook became available to the general public and Twitter was launched (and even longer since the long-forgotten, yet somehow still extant, MySpace began). In that time we’ve seen numerous new networks rise (and fall), and yet for many marketers the social web is still a daunting place to be.

For those companies that aren’t afraid to try, there is an awful lot of value to be found in engaging researchers in the social sphere – here’s how to start.

1. Start listening


Social networks are a great place to find out exactly what the community wants, needs, and thinks of you. Make sure you’re set up to find those conversations – there are tonnes of social media listening services out there which will aggregate content by keyword or product name. Take the time to skim through these regularly, as there can be valuable insight nestled amongst the pictures of people’s breakfasts.

Helpful link: Brandwatch Blog's Top 10 Social Media Monitoring Tools

Top tip: Conference hashtags are the perfect place to start – search for relevant events and keep an eye on your timeline when they are on (for example #alpsp19)

2. Conversations are a two-way thing – but make sure you’re speaking the same language


So you’ve done some great listening, perhaps even followed a conference hashtag or two – what next?

Time to start having some conversations! If you can add value to a blossoming conversation, perhaps with a link to some free (and highly relevant) content, or some advice on a publishing problem, then do it! But make sure you enter the conversation as a human being, not as a brand automaton. Where possible include your name – ASOS do this really well on Facebook.

Helpful link: Harvard Business Review - 50 Companies that get Twitter - and 50 that don't

Top tip: Once you’ve joined a conversation, remember to stay with it – don’t just log off, as people may respond to you.

3. Embrace different forms of content


It’s easy to get stuck on just sharing text content and links, but if you really want to make a splash then you should vary the content you share. Vlogs, infographics, images, podcasts – all of these offer unique ways to get your message across, so make sure you don’t just choose the right channel but also the right content.

Helpful link: Hubspot - 45 Visual content marketing statistics you should know in 2019

Top tip: Audit your current content store (leaflets, blog posts, etc.) to look for new ways to repackage this information for social sharing. You could turn an FAQ page into an infographic, or make a video out of a press release on a product launch.


Emma is a Marketing Manager for IOP Publishing (IOPP), where she oversees the academic marketing strategy for the entire journals portfolio, as well as community websites, B2B products, and the ebooks programme.

The next Effective Journals Marketing course runs on Wednesday 17 July 2019 in London. Further information and booking available on the ALPSP website.

Tuesday, 4 June 2019

'Making the case for embracing microPublications: Are they a way forward for scholarly publishing?'

Albert Einstein said: “An academic career, in which a person is forced to produce scientific writings in great amounts, creates a danger of intellectual superficiality”. 

Researchers have been working under the pressures of ‘publish or perish’ for decades. The default response is to question the value of the microPublications produced as a result. But what about when microPublications are carefully defined, peer review is stringently completed, and they enable publishers to produce the ‘longer story’ research articles more efficiently from pre-validated research outputs? Are there largely unknown opportunities and values to be gained quickly? Can microPublications enable the synthesis and distillation of information, integrating it into established repositories to create a more meaningful and greater corpus of knowledge – dare we say, a global knowledgebase?

In this blog we hear from scientific curators with new roles as editors of microPublication Biology, and from a publisher who encourages this new publishing genre.

Chair: Heather Staines, Head of Partnerships, MIT Knowledge Futures Group.

Contributors:

  • Daniela Raciti, Scientific Curator for WormBase and Managing Editor, microPublication Biology
  • Karen Yook, Scientific Curator for WormBase and Managing Editor, microPublication Biology
  • Tracey DePellegrin, Executive Director, Genetics Society of America

Heather Staines: As an historian and former acquiring editor for books, I’ve long thought of articles as short-form publications and have struggled with the ‘less is more’ school of thought. When I started to hear about microPublications a few years back, I was intrigued. I wondered how researchers would define the scope of these postings, how they would be viewed within their respective disciplines, and how they would fit within the larger scholarly communications infrastructure. I was thrilled to be asked to moderate the ALPSP webinar, to get to hear directly from the folks at microPublication Biology and at the Genetics Society of America. Here is a bit of what I’ve learned in preparation for the session.


Question 1: How would you define a microPublication?

microPublication Biology: A microPublication is a peer-reviewed report of findings from a single experiment. A microPublication typically has a single figure and/or results table; the text is brief but gives sufficient relevant background for the scientific community to understand the experiment and the findings, along with enough methodological and reagent information, and references, for the experiment to be replicated by others.

Genetics Society of America (GSA): I’ve got to agree with my colleagues on this one. I think one key here is that the findings in microPublication Biology are in fact peer-reviewed. They’re also discoverable, so they’re not lost in the literature. And I love the idea that these are compact yet powerful components scientists can build upon.


Question 2: What was the driving force behind the decision to move forward with microPublications?

microPublication Biology: There are two driving forces. The first is to increase the entry of research findings into the public domain. These findings are of value to the scientific community, they give the authors credit for their work, and publication fulfils the agreement researchers make with funding agencies (and taxpayers) to disseminate their findings. The second is to efficiently incorporate new data into scientific databases, such as WormBase. Scientific databases organize, aggregate and display data in ways that have tremendous value for researchers, greatly facilitating experimentation (increasing efficiency, decreasing cost). Databases are most useful when they are comprehensive; the microPublication platform allows efficient and economical incorporation of information into databases. We hope that in the long term, other scientific publishers will come on board to directly deposit data from publications into the authoritative databases.

GSA: GSA is supportive of microPublications for several reasons. First, incorporating new data into scientific databases is critical. Researchers in our fields depend on model organism databases like WormBase, FlyBase, the Saccharomyces Genome Database (SGD), the Zebrafish Information Network (ZFIN), and others, many of which are supported by the National Human Genome Research Institute (NHGRI) and included in the Alliance of Genome Resources. These databases are critical to understanding the genetic and genomic basis of human biology, health, and disease, and are curated by experts in the field. The microPublication platform helps authors by incorporating their findings into these databases in a way that’s seamless and painless for busy scientists. Second, microPublication Biology lowers the barrier to entry for scientists hoping to freely share their peer-reviewed research in a credible venue. Also, it’s terrific that microPublication provides the opportunity to publish a negative result. Negative results are important, yet too few journals publish them. The bottom line is that microPublication Biology addresses a need in scholarly publishing, serving authors and readers alike by filling a gap that existing journals don’t.


Question 3: How does the peer review process differ, if at all, from the peer review of longer articles?

microPublication Biology: The peer-review process is similar to other journals’, with a few distinguishing features. First, since the publication is limited in scope and length, it is simple and quick to review. Second, the publication criteria are straightforward: is the work experimentally sound? Do the data support the conclusion? Is there sufficient information to allow replication? And are the findings of use to the community? The last point goes along with the categorical assignment of the microPublication as a New finding, Finding not previously shown (unpublished result in a prior publication), Negative result, Replication – successful, Replication – unsuccessful, or Commodity validation.

GSA: Because I’m not an editor at microPublication Biology, I can only generalize here. But I will use this opportunity to underscore the importance of high-quality peer review as well as editors who are well-respected leaders in the field. One glance at the editorial board of microPublication Biology shows that these scientists are in a position to guide the careful review and decision on submitted data in their respective fields. I also find the categorical assignments interesting – especially the idea of a successful (or unsuccessful) replication.


Question 4: What do you see as the future for microPublications?

microPublication Biology: Huge! This publishing model will help change how researchers communicate with one another, how a researcher’s accomplishments are evaluated and tracked, and provide an earlier step for budding researchers to be introduced to scholarly communication. The microPublication venue easily lends itself to expansion into entirely new fields. However, such expansions need to be driven by the field’s scientific community (the group that will submit manuscripts, peer review the manuscripts, and maintain community standards).

GSA: The sky’s the limit. I agree with everything above. At a time when we’re trying to encourage grant review panels and others to evaluate scientists by the data they’re publishing (rather than by the impact factor of the journal in which an article appears), venues such as microPublication Biology give researchers a chance to get credit for contributions that might not otherwise be recognized. And that’s progress!

------------------------------------------------------------------------------------

Heather Staines: I’d like to take this opportunity to thank our panellists for taking the time to weigh in on these questions. I hope you will now agree with me that microPublications provide an interesting and useful twist on the traditional journal publication model.

To learn more, please register for the ALPSP webinar: 'Making the case for embracing microPublications: Are they a way forward for scholarly publishing?'

Wednesday 26 June.
16:00-17:00 BST, 11:00-12:00 EDT, 17:00-18:00 CEST, 08:00-09:00 PDT.

The webinar is ideal for: publishing executives, editors, librarians, funders and researchers.



Friday, 8 March 2019

Growing your content’s family tree: Life after primary sale

So much effort in our industry goes into new content: the launch, the debut, the first run. Yet, there is a complex, profitable second life for content after it serves its initial purpose. Often, it is up to publishers to become the guardians of that second act, spawning “children” from the original content that can flourish after primary sale.

Content creators and publishers shape the primary “parent” work in a certain format, for a certain audience, so it can be challenging for them to accept that the world might reinterpret that work in ways they can’t control. The use of child content can be so unpredictable and so detached from the original work that publishers might find it impertinent, trivial, or undermining.

Like a dysfunctional family, some publishers are stricter than others when it comes to sending child content out into the world for fresh creative or commercial endeavors. It’s a balancing act for publishers to protect the value with which they have been entrusted, without stifling the possibility of a productive future. The more inspiring the original work, the more likely that it will yield offspring that flourish beyond the scope of the primary sale. As surprising as the opportunities may appear, reinterpretation of child content can produce immense value to publishers who are open to the concept.

Over the past several years, I have been exposed to companies seeking reuse of creative output of all kinds. Excerpts, charts, and graphs are common, but we also hear about requests for instructional videos, posters, and secondary text created to support website features. These requests are often very difficult to process. I sometimes see bias on the part of content creators and publishers for the primary work to be protected just as it is, cut off from the potential of a second life. Beyond that, the creators of the content can be hard to find, and the intended reuse is hard to describe. When not a lot of money is involved, it’s easy for the trail to go cold.

Taking the widest view of the “permissions” landscape, which my job at Copyright Clearance Center allows me to do, I encourage creators – and custodians of creative works – to embrace the inspiration that others receive from an original work. The inspiration may seem “less than” because it has a different audience, format, or purpose than the original, but that contribution could take the achievement to a new realm.

I don’t mean that creators should abandon control, allowing every proposed reuse. I’m also not implying that creators should not be compensated for their contributions. Rather, creativity should be encouraged as the seed of further achievement. When creating child content will cause no harm to the parent content, why not embrace the experiment of that creative output? Consider the options carefully, but trust that the intrinsic value of the parent content will be amplified by the life of the child content. If the significance is not apparent to you, take that as testament to the power of independent thought.


Unusual second lives of content in mainstream media

Parent content → Child content

• Scene transition cartoons from variety show The Tracey Ullman Show → television series The Simpsons
• AOL’s trademarked email greeting sound, “You’ve got mail” → romantic comedy You’ve Got Mail
• Nashville-area commercials featuring simpleton Ernest P. Worrell → Ernest children’s television show and nine-part movie series
• The Pink Panther cartoon character → Owens Corning building insulation
• Star Wars movie series → scented candles, aquariums and terrariums, a grocery line of fresh fruit, furniture
• Disney theme park rides → The Country Bears and The Pirates of the Caribbean movie series
• Trading cards and sticker packs → B-movies Mars Attacks and The Garbage Pail Kids Movie
• Board games → movies Clue, Battleship and Ouija
• Smartphone apps → movies Angry Birds and The Emoji Movie
• Toys → The Lego Movie series, UglyDolls, Bratz: The Movie, G.I. Joe, Transformers movie series, Toy Story movie series, My Little Pony television series
• Television show theme songs → ringtones for smartphones
• “It’s the Hard Knock Life” from Broadway musical Annie → hip hop track “Hard Knock Life (Ghetto Anthem)” by Jay-Z
• Theme song from television series MacGyver → hip hop track “Put Ya Signs” by Three 6 Mafia
• Windows 98 chimes and tones → hip hop track “Windows Media Player” by Charles Hamilton



Jamie Carter


Jamie currently works as manager of publisher account management at Copyright Clearance Center, where she has worked since 2011, finding opportunities to license content and increase royalty revenue.

Jamie’s publishing career began at Arcadia Publishing, a UK publisher with an office in Dover, New Hampshire. Hers was a start-up division; Jamie acquired titles, did production work and editing, and even sold books on the road from time to time.

In the earliest days of the internet, she worked at a web-design company, then worked for six years as a manufacturing buyer at Heinemann in Portsmouth, New Hampshire.

Jamie moved back online in 2007, when she became product manager at Publisher Alley, a subscription website for analysis of book sales. Publisher Alley was owned by Baker & Taylor at the time, and is now owned by EBSCO. In this position, she was the editor of Alley Talk, a free companion site for Publisher Alley featuring bestseller listings and industry white papers.

Tuesday, 22 January 2019

AI, Blockchain, Open Source - separating the value from the hype


AI, Blockchain and Open Source are terms which continually grab attention, but are they merely buzzwords or will they really disrupt our industry? Ahead of our planned series of webinars on this subject, Jennifer Schivas of 67 Bricks and Nisha Doshi of Cambridge University Press consider how to distinguish hype from reality, and why publishers should care...

AI, Blockchain and Open Source have been generating a lot of attention in the press over the past few years, and high profile announcements from the likes of eLife, Elsevier and Digital Science generate a lot of excitement, but can these technologies really help us improve publishing processes and enhance customer experience?  Can they save us money or help us offer new products and services to authors and researchers?  If so, how do we engage at the right level and the right speed?  How do we ensure the opportunity, if there is one, doesn’t become a threat?

Working at the coal face of publishing innovation means that these are questions we wrestle with on a day-to-day basis, and when we spoke to others at the 2018 ALPSP conference we realised we weren’t alone. Across the industry many of us are exploring options, running pilots, launching products, platforms and systems, and putting in place strategies that utilise these new technologies. Some are dipping their toes in the water, while others are diving right in. However, at the other end of the spectrum there are those who dismiss these technologies as mere trends or buzzwords: AI has been around since the 1950s after all, and isn’t Blockchain regularly described as “just a slow database”?!

So, who is right and who is wrong?  This debate will be at the heart of the forthcoming series of ALPSP webinars, in which we’ll invite industry experts to examine each technology in turn to help us separate the hype from the reality.

In each webinar we will include a short, jargon-free introduction to the technologies and discuss examples of where they are already being used in our industry. We’ll then assess their potential for positive change as well as considering alternative courses of action - which could even include “do nothing” - and look at the recommended first steps publishers can take to begin capitalising on opportunities.

We believe that it is important for publishers to engage with these technologies and make clear decisions with their eyes open. It is not usually wise to invest in cutting-edge technology for technology’s sake alone; however, there are ways to trial these technologies without undue expense or risk: R&D programmes, pilot projects or collaborative partnerships can all work well. We will explore how these might be set up to test the waters and release some early benefits before making a major investment or committing to a long-term path.

Join us to start a clear conversation and to begin to separate the hype from the reality. You’ll come away with a better understanding of what these technologies offer in the short, medium and long term, how they might align with wider product, platform or technology strategy, and if and how they might help meet customer needs. There will never be one single answer or one size fits all… so we look forward to some lively conversation!

To find out more about the planned webinars or to book your place please visit https://www.alpsp.org/Webinars/What-is-Hype/62872




Jennifer Schivas is Head of Strategy and Industry Engagement at 67 Bricks, a technology company that helps publishers become more data driven: www.67bricks.com

Nisha Doshi is Senior Digital Development Publisher at Cambridge University Press, where she leads the digital publishing team across academic books and journals: www.cambridge.org


Tuesday, 13 November 2018

Thinking of reviewing as mentoring

In this blog Siân Harris shares her personal experiences of being a peer reviewer for Learned Publishing.


Earlier this year I was contacted by Learned Publishing about reviewing a paper. This was an interesting experience for me because although I had been a researcher and then a commentator on scholarly publishing, including peer review, for many years, this was the first time I had done a review myself.

The paper I was invited to review was about publishing from a region outside the dominant geographies of North America and western Europe. Ensuring that scholarly publishing – and, in particular, the research that it disseminates – is genuinely global is something that I am passionate about (in my day job I work for INASP) so I was very happy to take on this review.

There have been plenty of complaints about peer review being provided freely to publishers and rarely recognized as part of an academic’s job description (it’s also not part of my non-academic job). And some researchers can feel bruised when their papers have been handled insensitively by peer reviewers.

On the other hand, there are powerful arguments for doing peer review in the interests of scholarship. What I’d not heard or realised until I did a review myself was how doing peer review is – or should be – a lot like mentoring. Since my time as a (chemistry) researcher I have regularly given others feedback about their papers, books and other written work, most recently as an AuthorAID mentor supporting early-career chemistry researchers in Africa and Asia. I also found, as I did the review, that I was very happy to put my name on it, even after recommending major revisions.

As I read the Learned Publishing paper I found I was reading it with that same mentoring lens and I realised there was an opportunity to help the authors not only to get their paper published but also to explain their research more clearly so that it has greater potential to make a difference. I wanted to encourage them to make their paper better — and to suggest what improvements they could make. Crucially, I didn’t feel like I was doing a review for the publisher; I felt I was doing the review for the authors and for the readers.

As I’ve seen with so many papers before, the paper had some really interesting data but the discussion was incomplete and a bit confusing in places; it felt to me a bit like an ill-fitting jacket for the research results. I made positive comments about the data and I made suggestions of things to improve. I hoped at the time that the authors found my feedback useful and constructive and so I was pleased that they responded quickly and positively.

The second version was much better than the first; a much clearer link was made between the data and the discussion, and answers had been given to many of those intriguing questions that had occurred to me in reading the first draft. We could have left it there, but there were still some residual questions that the paper didn’t address, so in the second round I recommended further (minor) revisions.

Quickly, the third version of the paper came back to me. I know it can be frustrating for authors to keep revising manuscripts but the journey of this paper convinced me that it is worth it. The first version had great data that intrigued me and was very relevant to wider publishing conversations, but the discussion lacked both the connection and context to do the data justice. The second version was a reasonable paper but still had gaps between the data and the discussion that undermined the research. But the third version thrilled me because I realised I was reading something that other researchers would be interested in citing, and that could even be included in policy recommendations made in the authors’ country.

Having reflected on this process during this year's Peer Review Week with its theme of diversity, I am pleased that I read this paper and was able to provide feedback in a way that helped the authors to turn good data into an excellent article. First drafts of papers aren’t always easy to read, especially if the authors are not writing in their native language. Authors can assume that readers will make connections between the results and the conclusions themselves, resulting in some things being inadequately explained. But peer review – and mentoring – can help good research, from anywhere in the world, be communicated more clearly so that it is read, used and can make a difference.

Dr Siân Harris is a Communications Specialist at INASP. 


Friday, 2 November 2018

Why is innovation a challenge for established publishers?


Charles Thiede, CEO of tech startup Zapnito (and former CPO of Nature Publishing Group and CTO of Informa Business Intelligence), explores the theory of the innovator’s dilemma, and what publishers can learn from it.

In a couple of weeks’ time, I’m going to be chairing an event with ALPSP entitled ‘Innovate or Perish? How to think like a startup’. It’s got me thinking about the challenges of innovation within large publishers – challenges I’ve seen from the inside, as well as out.

I asked Zapnito advisor Mark Allin, former CEO of Wiley and a speaker at the event, for his view on how innovative publishing companies are at the moment. On a scale of 1-10, he gave them 3 or 4. That’s not ideal.

So why is this the case? There’s no question that the publishing industry is full of talent and resources. Yet startups are often seen as having the edge when it comes to innovation. The ‘innovator’s dilemma’ – a term coined by Clayton Christensen in 1997 – offers insight into why that is.

The innovator’s dilemma
The problem isn’t necessarily lack of innovation itself, or enthusiasm to try new ideas, but more the environment in which those new ideas are developed and nurtured.

The value from new innovations isn’t realised immediately. It tends to follow an S-curve: improving a product takes time and many iterations, and quite often the early iterations provide minimal value. That can prove the sticking point for many businesses.

Ultimately, the primary aim of most established companies is to retain, and add value to, their existing customer base and current business models. This means new and innovative ideas can be undervalued, because they are applied and tested with existing customers or through existing models, rather than looking at new markets or models.

It also means that if innovative ideas fail to deliver results quickly, they are seen as failing - the ROI is thought to be too low. In this case, often management acts ‘sensibly’, in what they view to be the company’s best fiduciary interests, and rejects continued investment in the innovation. 

This is why startups, usually with little or nothing to lose when they enter the market, are so much more successful. They find new markets to apply their innovations, largely by trial and error, at low margins. Their nimbleness and low cost structures allow them to operate sustainably where established companies could not. They don’t have the same responsibilities to, for example, shareholders or existing customers.

At the same time, especially with ‘bootstrapped’ companies, startups must survive on their own two feet. This means that if the initial idea doesn’t work, they can adapt and even pivot their models. We did this at Zapnito early on. In contrast, for an established publisher, the initial idea is often fixed and changing direction means failure. 

By finding the right application and market, startups advance rapidly and hit the steep part of the S-curve, eventually entering the more mature markets of the established companies and disrupting them.

What’s the solution?
There is no one way to do innovation. But to me, the most vital change is a change in attitudes. Traditional publishers will need to think outside their traditional business models. Innovation does not need to happen in the context of existing ways of doing business. Too many media companies are organised around delivery models rather than around solutions to a market. That leaves little room for innovation.

There’s also a need to start playing the long game and looking for ways to manage development processes so that it’s okay to change direction, or even to fail.

I also want to challenge the idea of innovation itself. Innovation does not mean invention, though most people treat the two as synonymous. Jeff Bezos did not invent ecommerce. Steve Jobs did not invent smartphones. The innovation happened in the execution of those ideas and how they were delivered to the market.

There are lots of potential ways for publishers to nurture more innovation within their companies. This could be through mergers and acquisitions (M&A), partnering with disruptive businesses, creating an internal ‘skunkworks’-style structure, or even separating out new innovations into offshoot companies.

These are all ideas I’m looking forward to exploring at the event. Hope to see you there.

The Innovate or Perish? seminar will take place on Thursday 15 November 2018. To find out more or to book your place visit: https://www.alpsp.org/Events/Innovate-or-Perish/60274

Thursday, 18 October 2018

Getting From Word to JATS XML

In this blog, Bill Kasdorf, Principal, Kasdorf & Associates, LLC, talks us through a perennial problem and the different approaches to addressing it:


It is a truth universally acknowledged that journal articles need to be in JATS XML but they’re almost always authored in Microsoft Word.

This is not news to anybody reading this. This has been an issue since before JATS existed. Good workflows want XML. So for decades (yes, plural) publishers have been trying to get well structured XML from authors’ manuscripts without having to strip them down to plain text and tag them by hand. (This still happens. I’m not going to include that in my list of strategies because nobody thinks that’s a good idea anymore.)

There are four basic strategies for accomplishing this:
• Dedicated, validating XML editors.
• Editors that emulate or alter MS Word.
• Use Word as-is, converting styles to XML.
• Editors that use Word as-is, with plug-ins.
Here are the pros and cons of these four approaches.


Dedicated, Validating XML Editors


This is the “make the authors do it your way” method. The authors are authoring XML from the get-go. And not just any XML. Not even just any JATS (or whatever XML model). Exactly the specification of JATS that the publisher needs, conforming in every way to the publisher’s or journal’s style guide and technical requirements. This strategy works in controlled authoring situations, like teams developing technical documentation. (They’re probably authoring DITA, not JATS.) Such authors are typically employees of the publisher, and the document structures are exactly the same every day those employees show up to work.

I have never seen this strategy successfully employed in a traditional publishing context, although I have seen it attempted many times. (If anybody knows of a journal publisher doing this successfully, please comment. I’d like to know about it.) This doesn’t work for journals for two main reasons:
1. Authors hate it. They want Word.
2. They have already written the paper before submitting it to the journal. The horse is out of the barn!


Editors that Emulate or Alter MS Word


This always seems like a promising strategy, and it can work when it’s executed well in the right context. The idea is to let authors use Word, but make it impossible for them to do things you don’t want them to do (like making a line of body text bold when it should be styled as a heading), either by disabling features in Word such as local formatting or by creating a separate application that looks and acts a lot like Word.

I have seen this work in some contexts, but for authoring, I’ve seen it fail more often. The reason is No. 1 above. Despite being a lot like Word, it’s not Word, and authors balk at that. These are often Web-based programs, and authors want to write on a plane or the subway. And there’s always No. 2: most journal articles are written before it’s known which journal is going to publish them.

This strategy can work well, though, after authoring. Copyeditors and production staff can use a structured tool like this more successfully than authors can. We’re seeing these kinds of things proliferate in integrated editorial and production systems like Editoria, developed by the Coko Foundation for the University of California Press, and XEditPro, developed by a vendor, diacriTech.


Use Word As-Is, Converting Styles to XML


This is by far the most common way that Word manuscripts get turned into XML today. A well designed set of paragraph and character styles can be created to express virtually all of the structural components that need to be marked up in JATS for a journal article. This is done with a template, a .dotx file in Word, which, when opened, creates a .docx document with all of the required styles built in. And since modern Word files are XML under the hood, you can work with those files to get the JATS XML you need.
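To make that concrete, here is a minimal sketch in Python of the styles-to-JATS idea, using the python-docx library. The style names and the style-to-JATS mapping below are hypothetical stand-ins for a publisher's actual template, and real converters handle far more (character styles, tables, figures, and the metadata header, which is where most of the complexity lives):

# A minimal sketch: derive simplified JATS body markup from a styled .docx.
# Assumes python-docx (pip install python-docx); style names are hypothetical.
import xml.etree.ElementTree as ET
from docx import Document

STYLE_TO_JATS = {
    "Heading 1": "title",  # section headings
    "Normal": "p",         # body paragraphs
}

def docx_body_to_jats(path):
    """Build a (very simplified) JATS <body> from a styled Word manuscript."""
    body = ET.Element("body")
    current_sec = None
    for para in Document(path).paragraphs:
        if not para.text.strip():
            continue  # skip empty paragraphs; a pre-edit pass would remove these
        if STYLE_TO_JATS.get(para.style.name) == "title":
            current_sec = ET.SubElement(body, "sec")  # each heading opens a <sec>
            ET.SubElement(current_sec, "title").text = para.text
        else:
            parent = current_sec if current_sec is not None else body
            ET.SubElement(parent, "p").text = para.text
    return body

print(ET.tostring(docx_body_to_jats("manuscript.docx"), encoding="unicode"))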

The question is who does the styling, and how well it gets done.

Publishers are sometimes eager to give these templates to their authors so they can either write or, post-authoring, style their manuscripts according to the publisher’s requirements. Good luck with that. The problem is that it’s too easy to do it wrong. Use the wrong style. Use local formatting (see above). Put in other things that need to be cleaned up, like extra spaces and carriage returns. Somebody downstream has to fix these things.

Those people downstream tend to be trained professionals, and it’s usually best just to let them do the styling in the first place. This is how most JATS XML starts out these days: as professionally styled Word files. Many prepress vendors have trained staff take raw Word manuscripts and style them, often augmented by programmatic processing to reduce the manual work. These systems, which the vendors have usually developed in-house, also typically do a “pre-edit,” cleaning up the manuscript of many of those nasty inconsistencies programmatically to save the copyeditor work.
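As a flavour of what such a “pre-edit” might do, here is a toy Python pass over manuscript text; the rules shown are illustrative only, and production systems apply a great many more of them:

# A toy "pre-edit": normalise a few common manuscript inconsistencies.
import re

def pre_edit(text):
    text = re.sub(r"[ \t]{2,}", " ", text)       # collapse runs of spaces/tabs
    text = re.sub(r"\n{3,}", "\n\n", text)       # collapse extra blank lines
    text = re.sub(r" +([,.;:!?])", r"\1", text)  # no space before punctuation
    return text.strip()

print(pre_edit("Results were  significant .\n\n\n\nSee Table 1 ."))
# -> "Results were significant.\n\nSee Table 1."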

This is also at the heart of what I would consider the best in class of such programs, Inera’s eXtyles. Typically, a person or people on the publisher’s staff are trained to properly style accepted manuscripts; eXtyles provides features that make this easier to do than just using Word’s Styles menu. Then it goes to town, doing lots of processing of the resulting file based on under-the-hood XML. It’s primarily an editorial tool, not just a convert-to-XML tool.


Use Word As-Is, With Plug-Ins


This is not necessarily the same as the previous category, but there’s an overlap: eXtyles is a plug-in for Word, and the resulting styled Word files can just be opened up in Word without the plug-in by a copyeditor or author. But that approach still depends on somebody having styled the manuscript, and subsequent folks not having messed up the styling. It also presents the copyeditor (and then usually the author, who reviews the copyedits) with a manuscript that doesn’t look like the one the author submitted in the first place.

This tends to make authors suspicious—what else might have been changed?—and suspicious authors are more likely to futz. That’s why in those workflows it’s important to use Tracked Changes, though some authors realize that that can be turned on and off by the copyeditor so as not to track every little punctuation correction that’s non-negotiable anyway.

An approach that I have just recently come to appreciate is what Ictect uses. This approach is not dependent on styles. As much as I’ve been an advocate of styles for years, this is actually a good thing. Styles are the result of human judgment and attention. When done by trained professionals, that’s pretty much okay. But on raw author manuscripts—not.

Ictect uses Artificial Intelligence to derive the XML not from the appearance of the article, which is unreliable, but on the content. Stop and think about that a minute. Whereas authors are sloppy or incompetent in getting the formatting right, they are pretty darn obsessive about getting the content right. That’s their paper.

Speaking of which, in addition to not changing the formatting the author submitted, Ictect doesn’t change the content either. The JATS XML is now embedded in that Word file, but you only see that if you’re using the Ictect software. After processing by Ictect, the document is always a Word document and it is always a JATS document. To an author or a copyeditor it just looks like the original Word file. This inspires trust.

I was initially skeptical about this. But it actually works. Given a publisher’s style requirements and a sufficiently representative set of raw author manuscripts, Ictect can be set up to do a shockingly accurate job of generating JATS from raw author manuscripts. In seconds. Nobody plowing through the manuscripts to style them.

There have been tests done by large STM publishers that have demonstrated that Ictect typically produces fully correct, richly tagged JATS for over half of the raw Word manuscript files submitted by authors, and over 90% of manuscripts can be perfected in less than ten minutes by non-technical staff like production editors. The Ictect software highlights the issues and makes it easy for publishing staff to see what the problem is in the Word file and fix it. That’s because the errors aren’t styling errors, they’re content errors. They have to be fixed no matter what.

In case you think this is simplistic or dumbed-down JATS XML, nope. I’m talking about fully expressed, granular JATS, with its metadata header, all the body markup, and even granularly tagged references that enable Crossref and PubMed processing. Not just good-enough JATS.

Microsoft Office 365 is not exactly a new kid on the block now, but journal publishers have not made much use of it. As things evolve naturally, more and more authors are going to use Office 365 for peer review, quick editing, corrections and even for full article writing. Since Ictect software creates a richly tagged Word document that can be edited using Office 365, it opens up some interesting workflow automation and collaboration possibilities, especially for large-scale publishing.
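For readers who haven’t looked inside a JATS file, the “granularly tagged references” mentioned above mean markup along these lines – a hand-written illustration with made-up citation details, not the output of any particular tool:

<ref id="B1">
  <element-citation publication-type="journal">
    <person-group person-group-type="author">
      <name><surname>Smith</surname><given-names>J.</given-names></name>
    </person-group>
    <article-title>An example article</article-title>
    <source>Learned Publishing</source>
    <year>2018</year>
    <volume>31</volume>
    <fpage>1</fpage>
    <lpage>8</lpage>
    <pub-id pub-id-type="doi">10.1000/xyz123</pub-id>
  </element-citation>
</ref>

It is this level of tagging that lets services like Crossref and PubMed match and link each citation automatically.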

And if you need consistently styled Word files, no problem. Because you’ve got that rich JATS markup, a styled file can be generated automatically in seconds. For example, in a consistent format for copyediting (I would strongly recommend that), or a format that’s modeled after the final published article format. Authors also really like to see that at an early stage. It’s an unavoidable psychological truism that when an author sees an article in published form she notices things she hadn’t noticed in her manuscript. So you can do both: return the manuscript in its original form, and provide a PDF from the styled Word file to emulate the final layout.

All of the methods I’ve discussed in this blog have a place in the ecosystem, in the right context. I haven’t mentioned a product that I wouldn’t recommend in the right situation. For example, you might initially view Ictect as a competitor of eXtyles and those home-grown programs the prepress vendors use. It’s not. It belongs upstream of them. It’s a way to get really well tagged JATS from raw author manuscripts to facilitate the use of editorial tools, without requiring manual styling. It’s the beginning of an Intelligent Content Workflow. It’s a very interesting development.

Bill Kasdorf is Principal of Kasdorf & Associates, LLC, a consultancy specializing in accessibility, XML/HTML/EPUB modeling, editorial and production workflows, and standards alignment. He is a founding partner of Publishing Technology Partners 

Website: https://pubtechpartners.com/

Twitter: @BillKasdorf


To find out further information on Ictect visit: http://www.ictect.com/ 

or register for one of their free monthly webinars at: http://www.ictect.com/journal-webinars