Monday, 15 September 2014

A call to all ambitious society publishers from Susan Hezlet

ALPSP Committees and Council are at the heart of what we do. They are a group of dedicated industry volunteers who advise, steer and guide the secretariat, providing strategic direction and practical support so we can connect, inform, develop and represent the scholarly publishing community.

Susan Hezlet, Publisher at the London Mathematical Society and outgoing member of the ALPSP Council, reflects on the pains and pleasures of volunteering in this guest post.

"About ten years ago I was asked to give a talk at an ALPSP seminar. It wasn't particularly good, but the next thing was a request: 'would I join the Professional Development Committee?' This involved co-organising a seminar - I think it was with Felicity Davie and Karen Hillmansen. That was fun; I learnt a lot in the process, and we were guided and kept to the task with the help of Lesley Ogg.

The next request was 'would I be Treasurer of ALPSP?' Five years and a lot of direct involvement in authorising payments, helping with the interviewing and appointment of a Chief Executive, being asked to attend meetings on open access policy development, trotting over to the Department for Business, Innovation and Skills to tell them how important scholarly publishing is - not that they still do regular meetings since Vince Cable moved in.

Then it was 'would I do a second term?' This was followed by me saying no, but it would be nice to do one more year on Council as an ordinary member. Three years later... and I'm finally finished.

Perhaps this doesn't sound like fun? However, it is perhaps the best free education you can get in publishing! I have had good times with my fellow publishers on the PDC and ALPSP Council, all of whom are way more senior and knowledgeable than me. All of the people who work for ALPSP are highly professional and generous in the extra time they devote to supporting the rest of us.

It was through ALPSP connections that I managed to find and persuade some excellent publishers and consultants to come and join our Publications Committee at the London Mathematical Society. I have learnt a great deal from them and over the years it has transformed the committee, from one where the Editors spent time on the use of the Oxford comma, to a business committee with a strategic plan (thanks Mark!)

As a small society publisher who spends most of her time being a pain in the arse for my larger publishing distributors, I would not have had the confidence to ask and occasionally insist on a decent service from them without the conversations and support of people involved with ALPSP.

On the second day of the 2014 conference, while I should have been listening to even more good advice from the great speakers we had at the event, I was tucked away with the first round of business proposals from no less than nine publishers. Without the networking and experience of dealing with senior publishers in ALPSP, I would not have known where to begin. It has been a fascinating read and one of these days I will write my memoirs...

Finally, I have made some good friends. In the long run, there is nothing more important.

So. This is a call to all ambitious society publishers who want to learn something beyond their own field of publishing: volunteer!"

Huge thanks go to Susan for her time on PDC and Council. She can be contacted via the London Mathematical Society website.

Friday, 12 September 2014

Open access: the daily challenge (new customers, processes and relationships)

Phil Hurst, Jackie Jones, Wim van der Stelt
Springer's Wim van der Stelt chaired the final panel at the 2014 ALPSP International Conference. The panellists reflected on the daily challenges of starting an open access product and how the new business model fundamentally alters the publication chain.

Phil Hurst from the Royal Society talked about how they approached launching OA journals. They had a gap in their portfolio and decided the best way to fill it was to launch an OA journal. Open Biology was their first online-only journal.

They learnt that many things are the same. Getting the right people on the board is key: you need top people. Content is still king, and they need to get the journal in the right places. Open access is a big benefit to everyone - something they didn't fully realise until they launched the journal. The benefits to stakeholders include speeding up science and greater visibility for universities and funders. For researchers there is increased visibility for their results. It was also good for them to get involved in the OA community.

Much of the marketing is the same, but they supplemented this with an OA membership: authors from member institutions receive a discount on pure or hybrid open access. They incorporated a wider range of metrics, including DORA and altmetrics, as measures of research impact. It has also provided them with a springboard: they launched the journal to learn and prepare for the future. OA is consistent with the mission of learned societies. Sustainable? The jury is still out; we will only learn by putting OA journals to the test.

Alex Christoforou from BioMed Central asked: who is the actual customer for an OA publisher? They have hundreds of journals across BMC and Springer, with hundreds of members and thousands of authors and transactions. What they all need is access to the publisher, support and excellent customer service.

Alex Christoforou
They continue to provide some old-fashioned and reassuring tools to support authors, such as a fax number, photos of the team on the web and lots of different ways to get in contact. They deal with around 4,000 customer service queries each quarter, and the volume is increasing. They have to provide a service 24 hours a day so they can turn around queries within one day. They even provide online banking for those that spend large amounts of money.

Customer service is not just complaints, payments and author services. It's a way of thinking across the organisation, so that all stakeholder groups can build a constructive relationship with the publisher and the business can grow over time.

Jackie Jones from Wiley talked about subscription versus open access and 'flipping' journals. They have flipped eight journals so far and it is early days. The key flip criteria they assess include whether the journal has modest subscription revenue - typically these are young journals that haven't achieved predicted revenues. They also look at areas where there is evidence of good OA funding and existing hybrid activity, where there is longer-term growth potential or attractiveness to authors, and they consider the ratio of current revenue to articles.

For publishers, flipping can lead to potential for faster growth, but there is higher volatility of revenues. From a society perspective, it provides an opportunity to explore OA, though there is commercial and editorial risk. From the funder point of view, some prefer full OA journals where Gold OA is the only option. From an institutional point of view there is no subscription fee, but you have to track APCs and costs. For tools and modelling they have flow charts and decision trees to help monitor and track submissions and revenues.

EMBO Molecular Medicine launched in 2009. It had modest subscription sales and the society had concerns about visibility. There was an 85% rejection rate. The per-page publication charge was 125 euros, and pre-flip the average author cost was 1,600 euros. They set the APC at 3,000 euros, in line with other journals in the society's portfolio. Submissions doubled in the launch period. On other journals there was an initial dip in submissions, but they do recover.

They have learnt that you need to plan ahead and time communication really carefully. Make sure papers in hand are under the new model so you don't have to waive fees. Don't flip mid-year, to avoid the complications of subscription reimbursements. And undertake submission and publication surveys.

Welcoming the robots

Mark Bide, Chairman at the Publishers Licensing Society chaired the penultimate panel at the ALPSP International Conference on text and data mining (TDM).

Gemma Hersh, Policy Director at Elsevier, talked through the Elsevier TDM policy. It has been controversial, with calls to change it. Central to their policy is the use of the ScienceDirect API, designed to help preserve the performance of the website for everyone else.
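
The performance argument is essentially one of rate limiting: bulk mining requests go through an API where their pace can be controlled, rather than hammering the article pages. As a rough sketch of the client side - the `Throttle` class, the interval and the `fetch_full_text` call are all illustrative assumptions, not taken from any Elsevier documentation:

```python
import time

class Throttle:
    """Enforce a minimum interval between successive API calls, so a
    bulk text-mining job doesn't degrade the service for other users."""

    def __init__(self, min_interval, clock=time.monotonic, sleep=time.sleep):
        self.min_interval = min_interval  # seconds between requests
        self._clock = clock               # injectable for testing
        self._sleep = sleep
        self._last = None

    def wait(self):
        """Block until at least min_interval has passed since the last call."""
        if self._last is not None:
            remaining = self.min_interval - (self._clock() - self._last)
            if remaining > 0:
                self._sleep(remaining)
        self._last = self._clock()

# Hypothetical usage: at most ~2 requests per second
# throttle = Throttle(min_interval=0.5)
# for doi in dois:
#     throttle.wait()
#     xml = fetch_full_text(doi)  # whatever API call your licence permits
```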

One controversy is that a licence is seen as Elsevier's way of exerting control. However, they have a global licence (which complies with the UK copyright exception and balances with copyright frameworks). Another complaint is around the click-through agreement: critics believe it controls what researchers are doing and takes control away from libraries, placing liability on researchers. However, it is an automatic process, there is no additional liability, it is aligned with the institutional e-amendment, it provides guidelines on reuse, and Elsevier can offer one-to-one support.

Another complaint is that they didn't allow text mining of images. The reason was that they did not hold copyright in all the images, so they would do it on request. However, they now allow it automatically, and include terms of use flagging when miners need to contact the copyright owner where it doesn't lie with Elsevier.

There were criticisms that they were trying to claim copyright over TDM output. This was inadvertent and they have adjusted the policy to be a little more flexible and take this into account. A final misconception was that the policy was rigid.

In Europe, they have signed a commitment to facilitate TDM for researchers, but their policy is global. They are also a signatory of CrossRef's new TDM service and think it is good.

Mark Bide introduces the panel
Lars Juhl Jensen, based at the Novo Nordisk Foundation Center for Protein Research at the University of Copenhagen, provided an academic perspective on TDM. He considers himself a pragmatic text and data miner. The volume of biomedical research that he has to read is huge, and making sense of structured and unstructured data is key. All he wants to do is data mine. It enables him to do things such as associate diseases and identify conditions. Once you've got the data from text mining, you can bring it together with experimental data and with data from other sources.

As a researcher doing text mining, he needs the text. He doesn't want much else. The format doesn't matter too much. If he can get it in a convenient format, great. The licence has to be reasonable.
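
The kind of association mining Jensen describes can be illustrated with the simplest possible co-occurrence counter: tally how often two kinds of entity are mentioned in the same abstract. This is a toy sketch with invented term lists - real biomedical pipelines use proper named-entity recognition rather than substring matching:

```python
from collections import Counter
from itertools import product

def cooccurrences(abstracts, diseases, genes):
    """Count how often each (disease, gene) pair is mentioned in the
    same abstract. Plain lowercase substring matching keeps the sketch
    self-contained."""
    counts = Counter()
    for text in abstracts:
        lowered = text.lower()
        found_diseases = [d for d in diseases if d.lower() in lowered]
        found_genes = [g for g in genes if g.lower() in lowered]
        for pair in product(found_diseases, found_genes):
            counts[pair] += 1
    return counts

# Invented example data:
abstracts = [
    "BRCA1 mutations are a major risk factor for breast cancer.",
    "We profile TP53 in lung cancer and breast cancer cohorts.",
]
pairs = cooccurrences(abstracts, ["breast cancer", "lung cancer"], ["BRCA1", "TP53"])
```

Once the pair counts are in hand, they can be joined against experimental datasets from other sources, which is exactly the combination step Jensen mentions.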

Andrew Clark, Associate Director Global Information and Competitive Intelligence Services at UCB, articulated what TDM means and the part it plays in the scientific industry. He recounted the work of the Pharma Documentation Ring (P-D-R). Their aims are to:

  • Promote exchange of experience/networking among members
  • Encourage commercial development of new information services and systems
  • Jointly assess new and existing products and services
  • Provide a forum for the information industry

Gemma Hersh, Lars Juhl Jensen and Andrew Clark
Literature and patent analysis, sentiment analysis and drug safety are just a few of the applications of TDM. One of the challenges is the unstructured format that the data comes in. They need several aggregators to make the data mineable. It's not always easy to get the datasets - from small publishers to large ones - and it's quite expensive and labour intensive.

There are high costs in setting up data mining, and there is often a lack of technical skills within the organisation.

The benefits of TDM include a managed and, in some cases, auditable process for protecting IP. It provides added value and potential new revenue streams. Clark closed with a call for industry collaborations and asked everyone to watch this space.

Thursday, 11 September 2014

ALPSP Awards spotlight on... Frontiers, a community-run open-access publisher and research network

Kamila Markram is co-founder and CEO of Frontiers
The ALPSP Awards for Innovation in Publishing will be announced at the conference this week. In the final post in our series about the finalists, Kamila Markram, co-Founder and CEO of Frontiers, answers questions about the Frontiers Open-Science platform.

ALPSP: Tell us a bit about your company

KM: We founded Frontiers in 2007 to enable researchers to drive open-access publishing. To achieve this, we built an Open-Science platform with innovative web tools that support researchers in every step of the publishing process. These include collaborative peer review, detailed article and author impact metrics, democratic research evaluation and social networking.

From our beginnings as a group of just a few scientists, Frontiers has evolved to be the fourth leading open-access publisher worldwide. We have published almost 24,000 articles and are on track to publish our 30,000th article before the end of 2014. Our portfolio of open-access titles is also growing rapidly: in just 7 years, we have launched 48 open-access journals across all STM fields.

ALPSP: What is the project that you submitted for the Awards? 

KM: The Frontiers Open-Science platform, which embodies our community-driven philosophy and hosts our innovative online tools to improve all aspects of reviewing, publishing, evaluating and disseminating articles.

Frontiers is a community run, open access academic publisher and research network

ALPSP: Tell us more about how it works and the team behind it. 

KM: Our growing community consists of almost 50,000 leading researchers on the editorial board and more than 100,000 authors. In Frontiers, researchers run the journals and take all editorial decisions. Behind the scenes, we have a team of 140 employees in our headquarters in Lausanne and in offices in Madrid and London. These are mainly journal managers and editorial assistants, who support our editors and authors in the publishing process, as well as software engineers who continuously develop our publishing and networking platforms. We are a highly motivated and dynamic team of many nationalities, many of whom hold PhDs in diverse disciplines. Crucially, we all believe that science forms the basis of modern society and that we need to improve the publishing process, so that we all, researchers and society, can benefit from a discovery as quickly as possible.

ALPSP: Why do you think it demonstrates publishing innovation? 

KM: Our approach is unique: we work with leading researchers across all academic communities and empower them with our latest custom-built web technologies to radically improve publishing.

We introduced the novel concept of “Field Journals” – such as Frontiers in Neuroscience – which are structured around academic communities and into specialty sections, such as Neural Circuits, with their own editorial board and which can be cross-listed across journals. This modularity gives synergy between related disciplines, strengthens niche communities, and makes it easy for authors and readers to find the content that interests them.

Also central to our publishing model is the Collaborative Peer Review we introduced. It safeguards authors' rights and gives editors the mandate to accept all articles that are scientifically valid and without objective errors. The review occurs in our online Interactive Review Forum, where authors engage in discussions directly with reviewers to improve the article. It is constructive and transparent, because we publish the names of reviewers on accepted articles. This ensures high quality of reviews and articles. It works – as confirmed by our high impact factors, views and downloads. On top of that, our online platform makes the process fast – with an average review time of 84 days.

We were also the first publisher to develop, in 2008, detailed online article metrics to measure views, downloads, and shares with a breakdown of readership demographics, and we were an early adopter of a commenting system for post-publication evaluation. And we are also the only publisher that uses these article-level metrics, not only to highlight the most impactful articles as selected by thousands of expert readers, but also to translate these discoveries into “Focused Reviews” that make them more accessible to a broader readership. Post-publication evaluation at Frontiers is democratic and objective, using the collective wisdom of numerous experts.

Lastly, we are the first and only publisher to completely merge our own custom-built networking technology with an open-access platform, to raise the visibility and impact of authors and disseminate their articles more efficiently — readers are provided with articles that are the most relevant to them.


Frontiers - Open-Access Publication and Research Networking from Frontiers on Vimeo.

ALPSP: What are your plans for the future? 

KM: To keep growing, innovating and providing the best tools and service. We will continue to bring our publishing model to all STM fields, and also across the humanities and social sciences in the near future. At the same time, we are improving our networking platform, to enable even better dissemination of articles and to increase visibility and impact. Another growing initiative is Frontiers for Young Minds, a new science journal for kids. Young people – aged 8 to 15 – act as reviewers of neuroscience papers by leading scientists. It is a fun, important and engaging way to get children curious about science and let scientists reach out to a young audience. Launched less than a year ago, Frontiers for Young Minds has already been listed as one of the “Great Websites for Kids” by the American Library Association. And by popular demand, we are now about to expand the project across other fields, including astronomy, space sciences and physics.

The ALPSP Awards for Innovation in Publishing are sponsored by Publishing Technology. The winners will be announced tonight at the ALPSP International Conference Wednesday 10 - Friday 12 September, Park Inn Heathrow, London.

Follow the conversation via #alpsp14 and #alpspawards on Twitter.

Who's afraid of big data?

Who's afraid of big data? panel
Fiona Murphy from Wiley chaired the final panel on day two of the 2014 ALPSP International Conference. She posed the question: how do we skill up on data, take advantage of opportunities and avoid the pitfalls?

Eric T. Meyer, Senior Research Fellow and Associate Professor at the University of Oxford, was first up to try to answer. He observed that a few years ago you would struggle to gain an audience for a big data seminar; today, it's usually standing room only.

Big data has been around for years. People were quite surprised when Edward Snowden leaked the NSA documents, but the collection had been going on for a long time. Big data in scholarly research has also been around a long time in certain disciplines, such as physics or astronomy. There was always money to be made in big data, but there's even more now, and everyone is starting to realise it - so much so that you need a big data strategy.

Meyer defines big data as data unprecedented in scale and scope in relation to a given phenomenon. It is about looking at the whole datastore rather than one dataset. Big data for understanding society is often transactional. We're talking really big: if you can process it on your laptop, it isn't big data.

Meyer drew on some entertaining examples of how big data can be used. If you key the same sentence into different country versions of Google, you'll see the responses vary. There are limits to big data approaches; they can come up with misleading results. What happens when bots are involved? Do they skew the results? The challenge will be how to make it meaningful and more useful.

David Kavanagh from Scrazzl reflected on the challenge researchers face when making decisions about how to structure and plan their experiments. If you want to leverage collective scientific knowledge and identify which products to use for your work, there hasn't been a structured way of doing this. Kavanagh urged publishers to throw computational power at data and content as a way to solve problems, improve how you work and help make sense of unstructured content.

That's what they have tackled with Scrazzl, which is a practical application of the structured and unstructured data that Eric Meyer mentioned. You need to have a product database. Then you have to cut out as much human intervention as you can: automation is key. Where they couldn't find a unique identifier or a catalogue match, they had to make it as fast as possible for a human to make an association. Speed is key.

Finally, they built a centralised cloud system through which vendors can update their own records. It's a crowdsourced system for those who have a vested interest in keeping it up to date. The opportunity for them going forward will be releasing this structured information through APIs to drive new insights. It also allows semantic enablement of content and offers the opportunity to think about categorisation in new ways.

For publishers running an ad-supported model, the collection of products extracted from the content can be used to identify which advert is best suited to each reader.

Paul F. Uhlir
Paul F. Uhlir from the Board on Research Data and Information at The National Academies observed that even after 20 years, we have yet to deconstruct the print paradigm and reconstruct it on the Net very well. In the 1980s a gigabyte was considered a lot of data; in the 1990s, a terabyte was a lot of data. In this decade, the exabyte era is not far ahead of us, with a whole lot more beyond it.

There are huge databases in business, mining marketing information and other data, and now the Internet of Things and the semantic web. Everything can be captured, represented and manipulated in a database. It's an issue of quantity, but there is also an issue of quality, and there needs to be a social response. Uhlir outlined a series of threats around big data.



Disintermediation

The rise of big data promises a lot more disruption. Think about 3D printing: the consequence could be millions of product designers specifying items. Manufacturing will be affected and jobs will be lost. What will happen to the workers in a repair and body shop when cars are driverless? What will happen to the insurance industries? Workers will be disintermediated. What is certain is that there will be massive labour shifts and disruptions.

Playing God

Custom organs for body parts; the ability to insert genes into another organism. All these applications are data intensive and will become even more so. They raise profound social and ethical issues and have the potential to do great harm.

End of Privacy

Meyer touched on the NSA files. What about spy satellites? The ubiquity of CCTV? These images are kept in huge databases for future use. Product information is held and used by private companies to identify preferences. There is no such thing as privacy any more.

Inequality

Big data is an increasingly powerful means of tightening a hold on global power.

Complexity

The more we learn, the less we know. Any scientist will tell you that greater understanding leads to more questions than answers.

Luddite reactions

Some people react to the encroachment of the strange and frightening techniques of the technology age with passive resistance, trying to lead a simple life.

There are also a number of weaknesses that centre around:

  • Improving the policies of the research community
  • New or better incentive mechanisms versus mandates
  • Explicit links of big data to innovation and tech transfer
  • Changing legal landscape-lag in law/bad law/IP law
  • Data for policy-communicating with decision makers.




Industry updates: Publons

Andrew Preston
Andrew Preston from Publons outlined their focus on peer review. As a crucial part of the publishing process, peer review is a leading indicator; it's what the experts thought. It is also valuable content.

Publons is about recognition for good review and a measurable research output for reviewers and editors. It also provides journals with proof of a quality review process.

They believe openness breeds quality. They provide tools for editors. They measure impact, help them engage with reviewers, and assist with finding, vetting and connecting with reviewers. Finally they build communities to help generate engagement, combining pre- and post-pub reviews with searchable, indexed content.

Publons in numbers

The journal adds a review, and Publons emails a unique token to the reviewer. The reviewer signs in and selects privacy settings. Publons combines and respects the views of the journal editor as well as the reviewer on how much content to show, so there are two versions of each review - public and private. They have found that regular reviewers review up to 50 articles a year; it can be a leading indicator of expertise.

The service highlights the quality of the review process by showing reviews and reviewers, and it helps build a community. It helps with altmetrics, as they always link to your content: every review on Publons is eligible to appear in Altmetric, so you will never see a zero again. This helps to build out the long tail of articles that don't get picked up in the press! The article is not their focus, so they generate clicks back to the publisher's website. There is a suite of editor tools to help editors find reviewers. For publishers, it helps to get more submissions as well as better and faster reviews, boosts article-level metrics and generates post-publication discussion.

Further information available on the Publons website. If you are a publisher and would like to see some metrics about reviews on Publons, complete this online form.

Metrics and more

Melinda Kenneway on metrics
Publication metrics are part of a much bigger picture. Where resources are restricted and there is a lot of competition, metrics become more essential to help prioritise and direct funding. The 'Metrics and more' 2014 conference session was chaired by Melinda Kenneway, Director at TBI Communications and Co-founder of Kudos. The panel comprised Mike Taylor, Research Specialist at Elsevier Labs, Euan Adie, Founder of Altmetric and Graham Woodward, Associate Marketing Director at Wiley. Kenneway opened by observing that as an industry we need to consider new types of metrics.

Publication performance metrics include:
  • Anti-impact factor: DORA
  • Rise of article level metrics
  • Introduction of altmetrics
  • New units of publishing: data/images/blogs
  • Pre-publication scoring (Peerage of Science etc)
  • Post-publication scoring (assessment, ranking etc)
  • Tools for institutional assessment

Researcher performance metrics include:

  • Publication output
  • Publication impact
  • Reputation/influence scoring systems
  • Funding
  • Other income (e.g. patents)
  • Affiliations (institutional reputation)
  • Esteem factors
  • Membership of societies/editorial boards etc
  • Conference activity
  • Awards and prizes

Institutional performance metrics include:

  • University ranking systems
  • Publication impact metrics
  • STAR/Snowball metrics
  • Research leaders and career progression
  • Patents, technologies, products, devices
  • Uptake of research

Graham Woodward, Associate Marketing Director at Wiley, provided an overview of a trial of altmetrics on a selection of six titles. On one article, after a few days of having altmetrics on the site, they saw the following results: c. 10,000 click throughs; average time on page over three minutes; over 3,500 tweets; an estimated 5,000 news stories; 200 blog posts; and 32 F1000 recommendations.

Graham Woodward
They asked for user feedback on the trial and the 50 responses provided a small but select snapshot that enabled them to assess the effectiveness of the trial.

Were the article metrics supplied on the paper useful? 91% said yes. What were the top three most useful metrics? Traditional news outlets, number of readers and blog posts. 77% of respondents felt the experience enhanced the journal.

Half of respondents said they were more likely to submit a paper to the journal. 87% used the metrics to gauge the overall popularity of the article, 77% to discover and network with researchers interested in the same area of work, and 66% to understand the significance of the paper within its scientific discipline.

What happened next? The completion of the six-journal trial was followed by an extension to all OA journals. They have now rolled out metrics across the entire journal portfolio.

Euan Adie from Altmetric reflected on the pressures and motivations on researchers. While there is a lot of pressure within labs for young researchers, funders and institutions are increasingly looking for or considering other types of impact, research output and contribution. There is an evaluation gap between funder requirements and measuring impact. That's where altmetrics come in. They take a broader view of impact to help give credit where it is due. HEFCE are doing a review of metrics within institutions at the moment.
Euan Adie

Seven things they've learnt in the past year or so.

  1. Altmetrics means so many things to so many people. But the name doesn't necessarily work: it is complementary rather than alternative, and it is about the data, not just the measure.
  2. It's worked really well for finding where a paper is being talked about where they wouldn't have known before, but also the demographics behind it.
  3. Altmetrics is only a weak indicator of citations, but the whole point is to look beyond. Different types of sources correlate to different extents.
  4. Don't take all altmetrics indicators as one lump, there are many different flavours of research impact.
  5. When you have an indicator and you tie it to incentives, it immediately corrupts the indicator. While he doesn't believe there is massive gaming of altmetrics there is an element of this with some people. It's human nature.
  6. The top 5% of altmetric scores are not what you expect. The most popular paper is a psychological analysis of the characters in Winnie the Pooh.
  7. Peer review is a scary place. Scientists and researchers can be pretty nasty! Comments can be used in a different (more negative) way than expected. But that is not necessarily a bad thing.
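
Point 3 can be made concrete: altmetric scores and citation counts are both heavily skewed, so a rank correlation such as Spearman's is the natural way to check how weak the relationship is. A self-contained sketch in pure Python (the example numbers in the comment are invented):

```python
def ranks(values):
    """1-based average ranks; tied values share the mean of their ranks."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# e.g. spearman(altmetric_scores, citation_counts) over a set of articles
# returns a value in [-1, 1]; a 'weak indicator' means a value well below 1
```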

Mike Taylor believes we are approaching a revolution rather than an evolution. What we have at the moment is a collision of different worlds, because interest in metrics, and their value, is increasing. What makes for great metrics, and how do we talk about them? Do we want a one-size-fits-all approach? We have data and metrics, and in between those two things there is theory, formulae, statistics and analysis. Within that gap there are a lot of political issues.

Taylor reflected on the economies of attention (or not) and how you assess whether people are engaged. With an audience, when hands go up, you know they are paying attention, but no hands doesn't mean they aren't. Metrics so far are specialist, complex, based on 50 years of research, mostly bibliometrics/citation based, and much is proprietary. The implications of the changing nature of metrics are: as metrics are taken more seriously by institutions, their value will increase; as the value increases, we need to be more aware of them; and as a scholarly community we need to increase awareness about them. Awareness implies critical engagement, mathematics, language, relevance, openness, agreement, gold standards, and community leadership.

Mike Taylor
We are experiencing a collision of worlds. Terms like 'H-Index' are hard to understand, but are well defined. Terms like 'social impact' sound as if they're well defined, but aren't. There are particular problems about the 'community' being rather diverse. There are multiple stakeholders (funders, academics, publishers, start-ups, governments, quangos), international perspectives and varying cultures (from fifty years of research to a start-up). 

Taylor suggested an example metric: 'internationalism'. Measures could include how widely an academic's work is used internationally and how internationally that academic works, drawing on readership data; citation analysis (cited and citing); co-authorship; funding data (e.g. FundRef); conference invitations; identifiers such as ORCID; guest professorships; and text analysis of content.
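
A composite metric of this sort is typically built by normalising each measure across a cohort and taking a weighted combination. A sketch under that assumption - the measure names and weights below are invented for illustration, not taken from Taylor's talk:

```python
def minmax(values):
    """Scale a list of raw measures to [0, 1] across the cohort."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0] * len(values)
    return [(v - lo) / (hi - lo) for v in values]

def internationalism(academics, weights):
    """Combine normalised measures into one score per academic.
    'academics' is a list of dicts of raw measures; 'weights' maps
    measure name to its relative importance."""
    names = list(weights)
    # normalise each measure across the cohort so units don't matter
    norm = {n: minmax([a[n] for a in academics]) for n in names}
    total = sum(weights.values())
    return [
        sum(weights[n] * norm[n][i] for n in names) / total
        for i in range(len(academics))
    ]

# Illustrative data: two academics, three hypothetical measures
academics = [
    {"intl_citation_share": 0.8, "coauthor_countries": 12, "intl_readers": 3000},
    {"intl_citation_share": 0.2, "coauthor_countries": 2, "intl_readers": 500},
]
weights = {"intl_citation_share": 0.5, "coauthor_countries": 0.3, "intl_readers": 0.2}
scores = internationalism(academics, weights)
```

The normalisation step matters: without it, a measure counted in thousands (readers) would swamp one expressed as a share.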

Taylor doesn't think metrics is a place where publishers will have the same kind of impact that they might have had 30 years ago. He said to expect more mixed metrics, with qualitative and quantitative work. Taylor concluded that metrics are being taken more seriously (being used in funding decisions) and that many stakeholders and communities are converging.

Big data + cloud computing + APIs + openness = explosive growth in metrics. 

It is a burgeoning research field in its early days. Publishers need to be part of the conversation. We need to enable community leadership and facilitate decision making.