Key changes, challenges and opportunities
Change is all around us, yet the constants traditionally underpinning scholarly publishing largely remain in place: quality control, trusted brands, citability, even mainstream business models like the ‘big deal’. Perhaps reassured by that, we can afford to be less fearful when confronting the many changes coming our way. There is pressure from governments to review copyright legislation. Governments and funding agencies want to see much more open access than we have been willing to entertain until recently. Technical changes, not necessarily designed with publishers’ needs in mind, are racing ahead, and we are forced to go with the flow.
We sometimes forget that our customers also face change. Librarians confront technical and resource constraints and increasingly have to demonstrate how they add value. Authors and readers suffer severe time constraints and have to change their workflows to stay effective. All of them need our help. Change is now a constant, yet publishers’ reactions can be coloured by fear of it, causing them to cling to outdated models and overlook the undoubted opportunities that change brings with it. ‘Disruptive technologies’ can have a positive impact.
Mobile
Mobile technologies and social networking techniques are now ubiquitous. The growth of consumer-focused mobile technologies such as tablets and smartphones did not initially alert us to the benefits they might bring to scholarly publishing. Researchers are also consumers, however, and increasingly demand access to their information wherever they might be.
“The scholarly article in 140 characters? No!” Plenary 2 title
While publishers may have been slow off the mark, mobile access is increasing and ‘Apps’ are springing up apace. Matt Rampone of HighWire gave an overview: in the UK in August this year, over 13% of total online use was via mobile – smartphones, tablets and also e-readers. iOS is the dominant operating system, with Android coming up fast. PDF is the format of choice for readers. Average time spent on a site is about 50 seconds (a little longer for a selection of Apps). Mobile usage has a doubling time of just 12 months – a quick projection after the list below shows what that implies. Rampone’s advice:
- Invest in mobile.
- Invest in analytics (know your readers).
- Create good experiences.
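That doubling time deserves a moment’s arithmetic. Assuming, simplistically, that the trend continues unchecked, mobile’s share of online use grows as $u(t) = u_0 \cdot 2^{t/12}$ with $t$ in months, so the 13% measured in August projects to roughly 26% a year later and over 50% within two years – the arithmetic behind ‘invest in mobile’.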
It is fairly clear that publishing on mobile platforms does not yet require a rebirth of the scholarly article. It is, however, essential if publishers are to keep pace with readers’ changing behaviours, and it gives the opportunity to organize and present content in new ways. We mustn’t forget, Tom Reding of BBC Worldwide reminded us, that we are in the business of insight, not journals. The carrot for embracing change is that mobile users are more willing than desktop users to pay for additional services.
Mark Ware’s presentation, based on a series of case studies carried out for Outsell, confirmed that publishing to mobile platforms is increasingly settling in with STM publishers, starting with the medical and healthcare areas but spreading more widely. The publishers involved included BMJ, Elsevier, Nature, OUP, Wiley-Blackwell and others. Beyond the technical and presentational issues are business ones, such as authenticating a user who wants to access a website from a mobile device, whether within their institution or on the move. The solutions for RSC Mobile and Oxford Journals Mobile differ slightly, for example, and are no doubt evolving in response to reader reaction.
Discoverability
There is a data deluge in scholarly publishing, suggested Sophia Ananiadou, chairing one session – but is ‘information overload’ the right metaphor? The deluge is a feature of a system in which increased research funding leads to increased publishable output and, even with the most rigorous peer review, to a growing mountain of stored information. How do we retrieve the material we need to progress research further?
“The problem is not information overload, nor filter failure; it is discovery deficit.” Cameron Neylon following Clay Shirky
The issue is not a new one but the scale is, according to Harry Kaplanian, who took us back to card catalogues and OPACs before moving on to more contemporary techniques for finding and retrieving information: web search, standard references, author-generated keywords, metadata, A&I services and aggregators, all intended to make searches more exhaustive and reliable. A lot is being learned and the approaches are increasingly converging. Publishers need to be prepared to serve a variety of reader types looking for information through a number of different ‘discovery channels’ (Simon Inger), and need to be on top of their user statistics so that they can better serve their readers and also share their usage data with librarians to support their case (value added again). Oxford University Press’s own list of responses is illustrative:
- Free content outside the paywall – abstracts, keywords, etc.
- Improved MARC programmes
- Enhanced linking between products and services
- New mobile sites for Oxford Journals
- Optimized search engines
- New approaches to library discovery services
- Analytic tools for tracking user behaviour
At the top of the tree is the Oxford Index: a standardized description of every item of content in one place, an evolving Oxford interface, and a way to create links and relationships between content elements.
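To make the idea concrete, here is a minimal sketch of what such a unified index might look like – the field names and identifiers below are invented for illustration, not drawn from OUP’s actual schema:

```python
# Hypothetical sketch of a unified content index: one standardized
# record per content item, plus typed links between items.
# All field names and identifiers are invented for illustration.

index = {
    "example:article/001": {
        "type": "journal-article",
        "title": "An Example Article",
        "abstract": "Freely visible outside the paywall ...",
        "keywords": ["discoverability", "text mining"],
        "links": [
            ("cites", "example:article/002"),
            ("defined-in", "example:reference/entry/123"),
        ],
    },
}

def related(item_id, relation):
    """Follow typed links from one content element to related ones."""
    entry = index.get(item_id, {})
    return [target for rel, target in entry.get("links", []) if rel == relation]

print(related("example:article/001", "cites"))  # ['example:article/002']
```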
We are beginning to move towards text and data mining – from a single publisher’s output towards the whole growing corpus of accumulated information; that is, from looking for a single needle in one haystack to finding a collection of compatible needles in a whole field of haystacks. Anita de Waard, Director of Disruptive Technologies at Elsevier, set the scene by talking about working with biologists on how scientists read, how computers read, and how the two might come together to discover relevant and reliable information rather than just isolated research papers. She sees an evolving future of research communication in which researchers compile data, add metadata, overlay the whole thing with workflow tools, then create papers from this material in Google Docs, accessible to editors and reviewers, all in the ‘cloud’. Publishers? We provide the tools!
John McNaught from the National Centre for Text Mining illustrated some of the techniques for adding more value to discovery: natural-language searching, searching for acronyms, checking for language nuances, looking for associations – each peeling away a layer of complexity to turn unstructured text into structured content linked to other knowledge.
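As a toy illustration of two of those techniques – nothing like NaCTeM’s actual tooling, which uses far richer natural-language processing – here is a sketch of acronym spotting and crude co-occurrence ‘associations’:

```python
import re
from collections import Counter
from itertools import combinations

text = ("The National Centre for Text Mining (NaCTeM) extracts structure "
        "from unstructured text. Text mining links genes and diseases; "
        "genes and diseases co-occur in many abstracts.")

# 1. Acronym spotting: a short capitalised token in parentheses,
#    typically following the phrase it abbreviates.
acronyms = re.findall(r"\(([A-Z][A-Za-z]{1,9})\)", text)

# 2. Crude association mining: count how often terms of interest
#    co-occur within the same sentence.
terms = {"genes", "diseases", "text"}
pairs = Counter()
for sentence in re.split(r"[.;]", text):
    found = sorted(t for t in terms if t in sentence.lower())
    pairs.update(combinations(found, 2))

print(acronyms)             # ['NaCTeM']
print(pairs.most_common())  # [(('diseases', 'genes'), 2), ...]
```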
“The value of a network is proportional to the square of the number of connected members.” Metcalfe’s law
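In symbols, counting each possible pairwise connection among $n$ members, $V \propto \binom{n}{2} = \frac{n(n-1)}{2}$ – so doubling a network’s membership roughly quadruples its potential value.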
Networking and the semantic web continue to be buzzwords. Information is dispersed in many different places; to make sense of it we need structure and context, a Resource Description Framework (RDF) that identifies objects and connects them, and the appropriate vocabulary. Knowledge networks – communities that work on specific topics, linked so that work can move to a different level – are the step up from networks of individual papers and reports, according to Stefan Decker.
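A minimal sketch of the RDF idea, using Python’s rdflib with invented identifiers (real knowledge networks rest on shared ontologies and much richer vocabularies):

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import DCTERMS

# Invented namespace and paper identifiers, purely for illustration.
EX = Namespace("http://example.org/papers/")

g = Graph()
g.add((EX.paper1, DCTERMS.title, Literal("Gene X and disease Y")))
g.add((EX.paper2, DCTERMS.title, Literal("A survey of disease Y")))
# A typed relationship connecting two identified objects: the core of RDF.
g.add((EX.paper1, DCTERMS.references, EX.paper2))

# Ask the graph: what does paper1 reference?
for target in g.objects(EX.paper1, DCTERMS.references):
    print(target)  # http://example.org/papers/paper2
```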
So what’s the problem?
All of this is fascinating work, currently going on in academic or similar institutions. So will all our problems be solved soon? Not in a hurry, according to Cameron Neylon of PLoS, unless publishers change their ways. The majority of publishers behave as broadcasters of information and are still not thinking in terms of networks of people and tools. The tools publishers have provided are not adequate, and licences often prohibit text mining of material to which the reader already has access. Here is an opportunity for publishers to produce premium services: the hardware – mobile devices – is already available and powerful.
“Publishers are too focused on controlling access.” Cameron Neylon
The bottom line for all these speakers is open access – that is, metadata and full text freely accessible under licensing arrangements that permit text mining – if the dream of improved discoverability for researchers is to be achieved. If there is a straw in the wind pointing to how publishers’ policies may develop, it is the recent agreement by ALPSP, STM and the Pharma Documentation Ring (PDR) on a new clause for the PDR Model Licence:
“Text and Data Mining (TDM): download, extract and index information from the Publisher’s Content to which the Subscriber has access under this Subscription Agreement. Where required, mount, load and integrate the results on a server used for the Subscriber’s text mining system and evaluate and interpret the TDM Output for access and use by Authorized Users. The Subscriber shall ensure compliance with Publisher's Usage policies, including security and technical access requirements. Text and data mining may be undertaken on either locally loaded Publisher Content or as mutually agreed.”
Similar sentiments, though in the context of mobile delivery, were expressed by Charlie Rapple later in the conference. Quite used by now to shaking audiences out of complacency, Charlie claimed that our users are not happy with us: we have not evolved our products and services in line with how they have evolved. New ways of creating, evaluating, curating and distributing information are all around us. We need to win our users back: if we don’t, someone else will, and we will go out of business.
“Our audience and their needs should direct our strategy.” Charlie Rapple
Critically, this means starting with the audience rather than with content or the devices on which content is delivered. Find out what users need; look into and understand their workflows; deconstruct our content and integrate it with those workflows, making it interactive, relevant and ‘friction-free’.