In August 2017, I shared the HuMetricsHSS project with an international cross-section of librarians at the IFLA World Library and Information Congress in Wrocław, Poland.
In this post, I’ve adapted a portion of my talk. You can download the related conference paper that explores these ideas in depth from Humanities Commons.
Libraries are no strangers to thinking about their values and designing services to match them.
Some basic values are not controversial: for example, responding to patron requests professionally and competently.
Other values demand activism, like defending intellectual freedom or supporting social justice issues. For example, in the United States, some libraries have created Black Lives Matter displays to inform their patrons about current struggles for civil rights.
But others argue that neutrality is a crucial ethical value for libraries. Though not everyone agrees, many libraries’ policies on acquisition and intellectual freedom are shaped by the value of neutrality.
Many academic libraries have taken an interest in research evaluation metrics, a hot topic across academia these days. In fact, libraries have been integral to research evaluation efforts worldwide.
But there are important challenges for the field of research evaluation. For one, humanities researchers are being evaluated using metrics that aren’t appropriate for them — for example, journal impact factors don’t really apply in the humanities, but we’ve heard anecdotally that administrators are considering them (via evaluation tools that report JIFs).
Also, while many institutions claim to care about teaching, professional service, mentoring, and other aspects of scholarly life, evaluation practices in the US are usually focused primarily upon research — how much you’ve published, how often it’s being cited, and so on.
But these problems can’t be blamed simply on the use of research evaluation metrics.
The problem is how we’re using these metrics. We’re trying to fit square metric pegs into round evaluation holes, as it were. In other words, it’s the misalignment between the values that institutions hold dear and the metrics they use to evaluate their researchers that causes these problems.
That’s where HuMetrics come in.
The HuMetricsHSS project imagines a better use of research evaluation metrics in the humanities and social sciences (HSS), and a broadening of evaluation practices to consider and reward service, teaching, mentorship, and myriad other aspects of a scholar’s work.
We’re a research team that’s taking a bottom-up approach to making metrics better for evaluation. Our initial research has identified five core values that underpin scholarly pursuits in HSS.
These values are:
- Collegiality, which can be described as the professional practices of kindness, generosity, and empathy towards other scholars and oneself;
- Quality, a value that demonstrates one’s originality, willingness to push boundaries, methodological soundness, and the advancement of knowledge both within one’s own discipline and among other disciplines and the general public;
- Equity, or the willingness to undertake study with social justice, equitable access to research, and the public good in mind;
- Openness, which includes a researcher’s transparency, candor, and accountability, in addition to the practice of making one’s research open access at all stages; and
- Community, the value of being engaged in one’s community of practice and with the public at large, as well as exercising leadership.
By aligning evaluation practices and metrics with these values, we believe that academia — including libraries — can better incentivize positive professional behavior.
Taking the IFLA 2017 World Congress theme of “Libraries. Solidarity. Society.” as a starting point, let’s explore how libraries might embody and evaluate services based on these values.
For each of these examples, I’ll first give some scenarios in which libraries might embody a particular value, then suggest some metrics or indicators we might use to judge success.
Equity and Collection Management
In a library landscape that embraced Equity, more collection dollars might go towards supporting Open Access publishing practices in the humanities and social sciences, so that everyone might benefit globally from the production of knowledge locally. Collection budgets might also be allocated to prioritize materials needed by marginalized communities locally (e.g. migrants or the LGBT community).
In a way, these efforts towards Equity are relatively easy to measure: one could showcase the growth in Open Access fund monies, or in the percentage of community-relevant materials in one’s budget.
One might also consider growth in citations to, and discussions of (as captured by altmetrics), one’s institution’s Open Access research from communities worldwide as an indicator of increased access, and therefore of Equity.
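These budget-based indicators are straightforward to compute once allocations are tracked year by year. Below is a minimal sketch in Python; all figures and field names are hypothetical, invented for illustration.

```python
# A sketch of the budget-based Equity indicators described above. All
# figures and field names are hypothetical, for illustration only.

budgets = {
    2016: {"total": 500_000, "oa_fund": 10_000, "community_materials": 25_000},
    2017: {"total": 520_000, "oa_fund": 18_000, "community_materials": 34_000},
}

def yoy_growth(category: str, prev_year: int, year: int) -> float:
    """Year-over-year growth in spending on a category, as a percentage."""
    prev, curr = budgets[prev_year][category], budgets[year][category]
    return (curr - prev) / prev * 100

def budget_share(category: str, year: int) -> float:
    """A category's share of the total collections budget, as a percentage."""
    return budgets[year][category] / budgets[year]["total"] * 100

print(f"OA fund growth, 2016 to 2017: {yoy_growth('oa_fund', 2016, 2017):.1f}%")
print(f"Community share of 2017 budget: {budget_share('community_materials', 2017):.1f}%")
```

Charting these percentages over several budget cycles would show whether a library’s commitment to Equity is growing or merely holding steady.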
Openness and Purchasing
The value of Openness can similarly be supported through libraries’ purchasing decisions. A library might subscribe only to OA-friendly journals (those that allow self-archiving and/or offer a Gold OA publishing option); direct collections dollars towards OA funds; or buy tools that help researchers practice open scholarship (e.g. Figshare, Zotero, GitHub).
Related indicators of Openness could include the budgetary growth for support of OA-friendly journals, or tools like Zotero that support greater transparency in researcher workflows. Given the correlations between open access and citations (OA work having a higher likelihood of being cited), an increase in citations to and altmetrics for one’s institution’s research could be another useful indicator.
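For the citation- and attention-based indicators, one option is to snapshot counts at regular intervals. As a minimal sketch, here is how one might query Altmetric’s free public API for attention data on a list of DOIs; the DOIs shown are placeholders, not real records.

```python
# A sketch of tracking attention to an institution's OA research using
# Altmetric's free public API (rate-limited; no key needed for basic DOI
# lookups). The DOIs below are hypothetical placeholders.
import requests

dois = ["10.1234/example-one", "10.1234/example-two"]

for doi in dois:
    resp = requests.get(f"https://api.altmetric.com/v1/doi/{doi}")
    if resp.status_code == 200:
        data = resp.json()
        # "score" is Altmetric's weighted attention score for the item;
        # "cited_by_posts_count" counts posts mentioning it.
        print(doi, data.get("score"), data.get("cited_by_posts_count"))
    else:
        # Altmetric returns 404 when it has no attention data for a DOI.
        print(doi, "no altmetrics data recorded")
```

Storing these counts over time would let a library chart growth in attention to its Open Access research, as suggested above.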
The Response
The audience at IFLA WLIC 2017 was very receptive to the idea of values-based indicators. Many attendees were their institutions’ bibliometrics and altmetrics experts, and as such appreciated a nuanced view of the utility of these types of data for evaluation purposes. Likewise, we appreciated the opportunity for a knowledgeable debate about HuMetrics.
Like what you’ve just read? You can download the related conference paper that explores these ideas in depth from Humanities Commons.