Melinda Kenneway on metrics
Publication performance metrics include:
- The anti-impact-factor movement: DORA (the San Francisco Declaration on Research Assessment)
- Rise of article-level metrics
- Introduction of altmetrics
- New units of publishing: data/images/blogs
- Pre-publication scoring (Peerage of Science etc)
- Post-publication scoring (assessment, ranking etc)
- Tools for institutional assessment
Researcher performance metrics include:
- Publication output
- Publication impact
- Reputation/influence scoring systems
- Funding
- Other income (e.g. patents)
- Affiliations (institutional reputation)
- Esteem factors
- Membership of societies/editorial boards etc
- Conference activity
- Awards and prizes
Institutional performance metrics include:
- University ranking systems
- Publication impact metrics
- STAR/Snowball metrics
- Research leaders and career progression
- Patents, technologies, products, devices
- Uptake of research
Graham Woodward, Associate Marketing Director at Wiley, provided an overview of a trial of altmetrics on a selection of six titles. On one article, after a few days of altmetrics being live on the site, they saw the following results: c. 10,000 click-throughs; an average time on page of over three minutes; over 3,500 tweets; an estimated 5,000 news stories; 200 blog posts; and 32 F1000 recommendations.
Were the article metrics supplied on the paper useful? 91% said yes. What were the top three most useful metrics? Traditional news outlets, number of readers and blog posts. 77% of respondents felt the experience enhanced the journal.
Half of respondents said they were more likely to submit a paper to the journal. 87% used the metrics to gauge the overall popularity of the article, 77% to discover and network with researchers interested in the same area of work, and 66% to understand the significance of the paper within its scientific discipline.
What happened next? The completion of the six-journal trial was followed by an extension to all open access journals, and metrics have now been rolled out across the entire journal portfolio.
Euan Adie from Altmetric reflected on the pressures and motivations facing researchers. While there is a lot of pressure within labs for young researchers, funders and institutions are increasingly looking for, or considering, other types of impact, research output and contribution. There is an evaluation gap between funder requirements and measuring impact, and that is where altmetrics come in: they take a broader view of impact to help give credit where it is due. HEFCE are currently reviewing the use of metrics within institutions.
He shared seven things they have learnt over the past year or so:
- Altmetrics means so many things to so many people, but the name doesn't necessarily work: it is complementary rather than alternative, and it is about the data, not just the measure.
- It has worked really well for finding where a paper is being talked about in places they wouldn't otherwise have known about, and also for revealing the demographics behind that attention.
- Altmetrics is only a weak indicator of citations, but the whole point is to look beyond them. Different types of sources correlate with citations to different extents (see the sketch after this list).
- Don't take all altmetrics indicators as one lump; there are many different flavours of research impact.
- When you have an indicator and you tie it to incentives, it immediately corrupts the indicator. While he doesn't believe there is massive gaming of altmetrics, there is an element of it with some people. It's human nature.
- The top 5% of altmetric scores are not what you expect. The most popular paper is a psychological analysis of the characters in Winnie the Pooh.
- Peer review is a scary place. Scientists and researchers can be pretty nasty! Comments can be used in a different (more negative) way than expected. But that is not necessarily a bad thing.
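To illustrate the point about different sources correlating with citations to different extents, here is a minimal sketch of the kind of check one could run, assuming a hypothetical per-article table of citation counts and counts from individual altmetric sources (the file name and column names are placeholders, not data from the talk):

```python
# Minimal sketch: per-source rank correlation of altmetric counts with citations.
# Assumes a hypothetical CSV with one row per article and the columns named below.
import pandas as pd
from scipy.stats import spearmanr

df = pd.read_csv("articles.csv")  # hypothetical input file

sources = ["tweets", "news_stories", "blog_posts", "f1000_recommendations"]

# Spearman (rank) correlation is a natural choice because both citation counts
# and attention counts tend to be heavily skewed.
for source in sources:
    rho, p = spearmanr(df[source], df["citations"])
    print(f"{source}: rho = {rho:.2f} (p = {p:.3f})")
```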
Mike Taylor believes we are approaching a revolution rather than an evolution. What we have at the moment is a collision of different worlds, because the value of, and interest in, metrics is increasing. What makes for great metrics, and how do we talk about them? Do we want a one-size-fits-all approach? We have data and metrics, and in between those two things there are theory, formulae, statistics and analysis. Within that gap there are also a lot of political issues.
Taylor reflected on the economies of attention (or not) and how you assess whether people are engaged. With an audience, when hands go up you know they are paying attention, but no hands doesn't mean they aren't. Metrics so far are specialist, complex, based on 50 years of research, mostly bibliometric/citation based, and much is proprietary. The implications of the changing nature of metrics are: as metrics are taken more seriously by institutions, their value will increase; as their value increases, we need to be more aware of them; and as a scholarly community we need to raise awareness of them. Awareness implies critical engagement, mathematics, language, relevance, openness, agreement, gold standards, and community leadership.
We are experiencing a collision of worlds. Terms like 'H-Index' are hard to understand, but are well defined. Terms like 'social impact' sound as if they're well defined, but aren't. There are particular problems because the 'community' is rather diverse: there are multiple stakeholders (funders, academics, publishers, start-ups, governments, quangos), international perspectives and varying cultures (from fifty years of research to a start-up).
Taylor suggested an example metric: 'internationalism'. Measures could include how internationally an academic's work is used and how internationally that academic works, drawn from readership data; citation analysis (cited and citing); co-authorship; funding data (e.g. FundRef); conference invitations (e.g. ORCID); guest professorships; and text analysis of content.
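To make the idea concrete, here is a purely illustrative sketch of how such a composite indicator might be assembled; the sub-indicators, their normalisation and the equal weighting are assumptions for illustration, not something Taylor specified:

```python
# Illustrative sketch of a composite "internationalism" indicator.
# The sub-indicator names, normalisation and equal weighting are assumptions.
from statistics import fmean

def internationalism_score(researcher: dict) -> float:
    """Combine several 0-1 sub-indicators into a single 0-1 score."""
    invitations = min(researcher["intl_invitations_per_year"] / 10.0, 1.0)  # crude cap
    sub_indicators = [
        researcher["intl_readership_share"],   # share of readership from abroad
        researcher["intl_citation_share"],     # share of citing papers from abroad
        researcher["intl_coauthor_share"],     # share of co-authors based abroad
        researcher["intl_funding_share"],      # share of funding from non-domestic funders
        invitations,
    ]
    return fmean(sub_indicators)

example = {
    "intl_readership_share": 0.62,
    "intl_citation_share": 0.55,
    "intl_coauthor_share": 0.40,
    "intl_funding_share": 0.25,
    "intl_invitations_per_year": 3,
}
print(f"internationalism: {internationalism_score(example):.2f}")
```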
Taylor doesn't think metrics is a place where publishers will have the same kind of impact that they might have had 30 years ago. He said to expect more mixed metrics, combining qualitative and quantitative work. He concluded that metrics are being taken more seriously (they are being used in funding decisions) and that many stakeholders and communities are converging.
Big data + cloud computing + APIs + openness = explosive growth in metrics.
It is a burgeoning research field in its early days. Publishers need to be part of the conversation. We need to enable community leadership and facilitate decision making.
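As a small illustration of how open APIs feed that growth, the sketch below fetches attention data for a single article from Altmetric's public details endpoint; the DOI is just an example, the endpoint is rate-limited, and the response field names are assumptions read defensively rather than guaranteed:

```python
# Illustrative sketch: fetching attention data for one article from Altmetric's
# public details API. The DOI is an example; field names are read with .get()
# since the exact response shape is an assumption here.
import requests

doi = "10.1038/nature12373"  # example DOI, not one discussed at the event
resp = requests.get(f"https://api.altmetric.com/v1/doi/{doi}", timeout=10)

if resp.status_code == 200:
    data = resp.json()
    print("Altmetric score:", data.get("score"))
    print("News mentions: ", data.get("cited_by_msm_count", 0))
    print("Blog mentions: ", data.get("cited_by_feeds_count", 0))
    print("Tweets:        ", data.get("cited_by_tweeters_count", 0))
elif resp.status_code == 404:
    print("No attention data recorded for this DOI.")
else:
    print("Request failed:", resp.status_code)
```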
Publication metrics are part of a much bigger picture, in which resources are restricted and there is a lot of competition.