|Horse burger, anyone?|
With a slide showing a horse-shaped burger, Pentz observed that no one knew what was happening in the food supply chain and that ingredients were mislabelled. As a consumer it is hard to know what has been verified, which is why third-party certification schemes such as Fairtrade or the Soil Association mark have arisen. This is an important lesson for the scholarly publishing community.
Pentz is not talking about bibliographic metadata here, but about how broader descriptive metadata is changing. What are users starting to ask? They want to know the status of the content: what has been done to this content, and what can I do with it?
Good-quality metadata drives discovery, but there are problems with metadata and identification. This is a challenge for primary and secondary publishers: the existing bibliographic supply chain has not been sorted out, new elements are being added to it, and this could potentially lead to big problems.
Two weeks ago NISO announced a standards effort for open access metadata and indicators. The detail is still to follow, but it will cover questions such as: what is the licence? Has an APC been paid? If so, how much, and who paid it? These indicators will be particularly important for identifying open access articles in hybrid journals.
There are a number of new measures that have to be captured in the workflow. These include:
|The FundRef Workflow|
- CrossRef has launched the FundRef pilot to provide a standard way of reporting funding sources.
- Altmetrics look at what happens after publication: usage, post-publication peer review and social buzz, getting beyond impact factors.
- PLOS has article-level metrics, available via APIs.
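FundRef data eventually surfaced through CrossRef's machine-readable metadata. As a rough illustration of what "a standard way of reporting funding sources" enables downstream, here is a sketch that extracts funder names and award numbers from a CrossRef-style works record. The field names (`funder`, `name`, `award`) reflect my understanding of the api.crossref.org JSON schema and the sample record is hypothetical; treat both as illustrative rather than authoritative.

```python
# Sketch: pulling funder information out of a CrossRef-style works record.
# Field names follow my reading of the api.crossref.org JSON schema
# ("funder" is a list of objects with "name" and optional "award" lists);
# they are an assumption, not a spec quotation.

def funders(record):
    """Return (funder name, award list) pairs from a works record."""
    return [(f.get("name"), f.get("award", []))
            for f in record.get("funder", [])]

# A hypothetical, trimmed-down record for illustration.
sample = {
    "DOI": "10.5555/example",
    "funder": [
        {"name": "Wellcome Trust", "award": ["WT-0001"]},
    ],
}

print(funders(sample))  # [('Wellcome Trust', ['WT-0001'])]
```

The point of the standard is exactly this: once funding is deposited in a common structure, funders and institutions can query it programmatically instead of mining acknowledgement sections.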
What about content changes? Historically, the final version of record has been viewed as something set in stone. We need to get away from this idea because it does not recognise the ongoing stewardship publishers have for their content.
Many things happen to the status of content post-publication, including protocol updates, corrections and retractions.
As we have heard throughout the conference, the number of retractions is on the rise. Pentz referred back to a 2011 article in Nature 478 on the trouble with retractions in science publishing. The case is clear: when content changes, readers need to know, but there is no real system for telling them.
In a digital world, notification of changes can be done far more effectively, and that is what CrossRef is all about. Another challenge is PDF: once a copy has been downloaded, there is no way of knowing whether its status has changed. Online, a correction is often listed below the fold, even in a Google search result. The whole issue of copies in institutional repositories is also a factor.
What is CrossMark? It is a logo that identifies a publisher-maintained copy of a piece of content. Clicking the logo tells you whether there have been updates, whether the copy is being maintained by the publisher, where the publisher-maintained version lives, what version it is, and other important publication-record information.
Take the example of a PDF sitting on a researcher's hard drive. If the document carries the CrossMark logo, clicking it checks whether that PDF version is current, and links through to any clarification if there is one. The dialog includes a status tab and a publication record tab. The record tab is a flexible area where publishers can add lots of non-bibliographic information that is useful to readers, for example peer review details, copyright and licensing, FundRef data, repository locations, open access standards, etc.
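The logic behind the status tab can be sketched in a few lines: given whatever update notices are recorded against a document's DOI, decide what the reader should see. The notice shape below (a dict with `type`, `doi` and `date`) is hypothetical; the real CrossMark service defines its own schema, so this is a sketch of the idea, not of CrossRef's implementation.

```python
# Sketch of the CrossMark status check. Each notice is assumed to be a
# dict with "type", "doi" and "date" keys; this shape is hypothetical
# and stands in for whatever the CrossMark service actually returns.

def crossmark_status(updates):
    """Return a human-readable status line for a publisher-maintained copy."""
    if not updates:
        return "Current: no updates recorded for this document."
    latest = max(updates, key=lambda u: u["date"])  # most recent notice wins
    return "Update available: {type} ({doi}, {date})".format(**latest)

notices = [
    {"type": "correction", "doi": "10.5555/example.corr", "date": "2012-09-01"},
]

print(crossmark_status([]))       # Current: no updates recorded for this document.
print(crossmark_status(notices))  # Update available: correction (10.5555/example.corr, 2012-09-01)
```

This is why the model works for a PDF on a hard drive: the stale copy only needs to carry a stable identifier; the current status lives with the publisher and is fetched at click time.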
Lots of things can be enabled by this, such as integration with Mendeley. Pentz showed a demo of how a plugin for Google might be written that flags CrossMark results when you search. CrossMark launched in April 2012 and has been developing slowly: 50,000 CrossMark pilot deposits since launch, with 400+ updates, and 20+ publishers working on implementation.