In this post, Judith Hegenbarth, Head of Research Skills in Library Services, introduces the responsible use of research metrics and UoB’s Commitment to Responsible Research Assessment.
Any government minister will tell you that conducting research costs money, and that public spending on it has to be justified. The allocation of research funding is based on a perception of ‘quality’, and part of the equation is whether an individual, research group or institution has produced ‘quality’ research in the past.
Measuring quality is a contentious issue, particularly when it concerns the ‘performance’ of an individual researcher or scholar. In the past, the number of times a publication has been cited by other researchers was used as a proxy for influence, and hence quality, and the h-index became a shorthand for author excellence. This kind of metric has been shown to privilege fast-publishing disciplines that produce multi-authored papers. For researchers who take career breaks to raise families, or for lone scholars who publish larger works less frequently, a single measure isn’t helpful or fair. There’s more discussion of this on our Influential Researcher intranet page (including the Canvas course).
In this post, Vicky Wallace from Library Services’ Research Skills Team introduces ORCID, a persistent digital identifier that distinguishes you from every other researcher.
In today’s research climate, the scope for information about you and your work to be displayed and connected is huge. Historically, publishers and libraries took responsibility for distributing and curating works, but those roles are blurring in today’s world, where the indexing and curation of online content is largely done algorithmically. The picture is further complicated by:
the range of research output types (“online-only” articles, blog posts, slide decks and datasets) and other research activity;
difficulties in author disambiguation, exacerbated where people have common names, perhaps change names after marriage, move institutions, or are affiliated with more than one institution.
Vicky would like to make it clear that she is not a fan of Chesney (despite knowing all the words).
An introduction to bibliometrics for researchers by Vicky Wallace, Subject Advisor, Library Services
Have you ever heard the term bibliometrics? Bibliometrics can be described as a means of measuring the impact of a given publication by counting the number of times subsequent authors have cited it.
Bibliometrics can be applied at various levels, including:
Author level (e.g. the h-index)
Article level (e.g. altmetrics)
Journal level (e.g. impact factor)
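As an illustration of an author-level metric, the h-index has a simple definition: an author has an h-index of h if h of their papers have each been cited at least h times. A minimal sketch in Python (the citation counts here are invented purely for illustration):

```python
def h_index(citations):
    """Return the largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)  # most-cited papers first
    h = 0
    for rank, count in enumerate(ranked, start=1):
        if count >= rank:
            h = rank  # the top `rank` papers each have >= rank citations
        else:
            break
    return h

# Hypothetical citation counts for one author's six papers:
print(h_index([25, 8, 5, 3, 3, 1]))  # prints 3: three papers have >= 3 citations each
```

Note how the shape of the distribution matters: a single highly cited paper cannot raise the h-index on its own, which is one reason the measure behaves differently across disciplines with different publishing patterns.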
There are philosophical questions about the merits of using a citation as a measure of impact. Ask yourself why you cite papers in your own work: is it for positive or negative reasons? Are you building on a researcher’s work, criticising it, or acknowledging their contribution to a field? Citation patterns also vary across disciplines: some fields have numerous co-authors and cite prolifically, while others cite fewer papers and have more sole authors. Nevertheless, bibliometrics are often used as a quantitative measure of the impact of researchers, research groups, departments and institutions, although this is often tempered by using peer review alongside them to bring in a qualitative element.