Last week, the inaugural Transforming Research conference took place in Baltimore, MD. This meeting was unique in its ability to bring together researchers, administrators, librarians, funders, and vendors to talk about the current evaluation climate and brainstorm solutions for improving collective practices. Immediately, the theme […]
The real work of the HuMetricsHSS initiative begins in Michigan this week, when an insightful group of thinkers—faculty members of all ranks, teaching in any number of HSS disciplines at all kinds of institutions, along with administrators, graduate students, university publishers, and librarians—has agreed to come together to rip apart, interrogate, and rebuild that values framework, to come to a consensus on the values we share as a larger group.
In August, I shared the HuMetricsHSS project with an international cross-section of librarians at the IFLA World Libraries Information Congress in Wrocław, Poland. In this post, I’ve adapted a portion of my talk. You can download the related conference paper that explores these ideas in-depth […]
This October, the HuMetricsHSS team is excited to bring together a diverse group of scholars, teachers, administrators, and students from a wide range of institutions around a topic that we believe will transform academia. Over the course of a two-day workshop, we’ll interrogate, brainstorm, break apart, […]
When we first introduced the work of the HuMetrics group to the other TriangleSCI teams last October, we met with some resistance. Much like peer groups in the run-up to an election, we had become used to having our own ideas reflected and amplified internally. Four days of intensive brainstorming had inured us to the controversial and slightly scary nature of much of what we’re trying to do, and when we chose to present an extreme example of the potential application of the HuMetrics framework (“What if we took a values-based approach to assessing the quality of academic mentoring?”) we — understandably, in retrospect — got significant pushback. What we thought we were proposing was a way to recognize and reward the often hidden labor of peer and student mentoring by paying attention to the ways in which mentoring can embody not only the core values we propose (equity, openness, collegiality, quality, and community) but also many of the values we had grouped under those core values — transparency, empathy, accountability, candor, engagement, and respect. What our audience heard us propose, however, was something altogether different: a neoliberal performance metric that would measure and assess the quality of time and interpersonal interactions, tied to HR and promotion and tenure decisions.
What this made us realize was that while we had brainstormed in constant cognizance of our aim — “nurturing fulfilling scholarly lives” — other people were, and would be, coming to the project with very real concerns about the potential abuse of any kind of metrics in an increasingly quantified academy, and with very legitimate fears of the additional work it might take to interrogate the whole gamut of scholarly practices with an eye to ameliorating the lived experience of academe. To begin our discussion of values-based metrics with mentoring was to take things many steps too far, to move much too rapidly from an idea to the further reaches of its potential implementation. When, the following day, we demonstrated how one might apply our proposed values framework to a syllabus, the project’s decidedly un-neoliberal aims were understood and appreciated much more quickly.
It was a hard lesson, but an important one. We understand now that to bring about substantial culture change in scholarly practice, we need to start by demonstrating how one might use a values framework to interrogate the more tangible products of that practice: articles and monographs, sure, but also digital projects, syllabi, annotations, and peer review. How might academic culture change if we paid attention to equity, openness, collegiality, quality, and community in the process of creating these products as well as in the final products themselves?
During our time at TriangleSCI 2016, the HuMetrics team made a commitment to continue the work we began, and we have been fortunate enough to do so with the support of our respective institutions (without speaking for them, of course). With a Social Science Research Council (SSRC) staff member on the team, however, we realized quickly that limiting our work to the humanities without taking the social sciences into account was, in a way, reinforcing an artificial and often institutionally imposed divide between two fields of research that have much more in common than not. Both humanists and social scientists face many of the same problems when it comes to showcasing the value of our work or combating predatory publishing practices and corrosive academic behavior. By bringing the two fields together, we believe we’ll be able to make an even stronger case for a values-based framework on which to base assessments of excellence.
A schedule of regular biweekly brainstorming meetings, two in-person meetings at the SSRC offices in New York, and several intense bouts of co-writing and peer editing (minimally interrupted by transatlantic moves, promotions, presidential elections, and small and not-so-small children) have resulted in a close-knit team that’s excited to extend the HuMetricsHSS project beyond internal brainstorming and establish a proof of concept that might actually change things for the better.
In Aristotle’s Nicomachean Ethics, there is a famous passage in which he reminds us that “to be happy takes a complete lifetime; for one swallow does not make a spring, nor does one fine day; and similarly one day or a brief period of happiness does not make a person blessed and happy” (Nic. Eth., 1098a16–20).
This passage came to frame our conversations around #HuMetrics at this week’s Triangle Scholarly Communication Institute, because it reminds us that a fulfilling life — what Aristotle calls eudaimonia, happiness, that is, a life well lived — requires cultivated habits rooted in core values that, when intentionally practiced, shape the character of a good life.
In the end, what we value should be embodied in what we do, not once or twice, but regularly over the course of a lifetime.
In framing our conversation about #HuMetrics with this ancient conception of ethics, excellence, and character, we seek also to advance and reinforce the idea that a scholarly life can only be well lived in communities of practice with others.
For the #HuMetrics team, this year’s Triangle SCI experience was a swallow that signifies but does not yet fully manifest the coming spring. It opened for us a space for the flowering of a community of practice oriented toward the question of how we might more broadly cultivate communities of practice that embody the values of fulfilling scholarly lives.
Five Core Excellences of Enriching Scholarship
Working out loud together, we identified five core excellences of enriching scholarship: Equity, Openness, Collegiality, Quality, and Community.
For too long, we humanists have been allergic to metrics. This allergy has prevented us from engaging in a serious and sustained conversation about what practices of scholarship we might want to cultivate and incentivize both through the activities we measure and those we celebrate.
As a result, a large and growing battery of metrics has been developed, based either on the practices of more scientifically oriented scholarship or simply on what our technologies made it possible to measure.
Current metrics of humanities scholarship have been shown to be too blunt to capture the multiple dimensions of scholarly output and impact (see Haustein and Larivière). In addition, the inappropriate nature of current indicators can incentivize perverse scholarly practices (see The Metric Tide, Wilsdon et al.).
A critical component of our emerging #HuMetrics conversation at Triangle SCI involves finding ways to expose, highlight, and recognize the important scholarship that goes into the all-too-hidden work of peer review, syllabus development, conference organizing, mentoring, etc. Our current metrics fail to capture what is most substantive about the rich life of scholarship we practice together in living academic communities.
In this context, our challenge and our responsibility are to articulate, incentivize, and reward practices that enrich our shared scholarly lives and expand our understanding of scholarship itself.
Without being naïve about how difficult it is to change culture, we hope to begin to reshape the conversation about metrics around the values of enriching scholarly practices and the communities in which they thrive.
Although our time together at the Triangle SCI was only one swallow that does not yet make a spring, the seeds planted there may begin to take root over the weeks and months to come, and the communities of scholarship that blossomed there just might be “made glorious by this sun” that shines when a broader public is invited to join the conversation.
It’s our last day at Triangle SCI, and I’ve been contemplating overnight the feedback the #HuMetrics team got yesterday afternoon from our colleagues. In our presentation of what we’ve been doing this week, we attempted to sketch out, using the activity of mentoring, how actions […]
We began the day with a walk. The morning was cool and fresh, and the grounds around the DuBose House at the Rizzo Center offered the six of us on the #HuMetrics team at TriangleSCI the peace and space we needed to think out […]
The #HuMetrics team is making progress in its attempts to reverse-engineer what we want to measure, by starting with the values that often govern the (in some ways broader, more comprehensive) range of work that we engage in (see Chris Long’s previous post for further discussion). We were able to distill a wide-ranging brainstorm of values into five categories, and we aim to think about how those value categories relate to the processes and products of our work. The important core of our conversation today centered on the fact that while metrics are often seen (or taken) as the end goal, in point of fact the indicators always align to a value, and so part of the work here, in this free and open thinking space, is to be aspirational about the values we’d like to see elevated, incentivized, and rewarded. If openness as a value is prioritized, for example, one could imagine more weight given to scoring articles and/or journals that are OA rather than not; a minimal sketch of that idea follows below. If that seems like an extreme example, it’s perhaps a worthy future exercise to consider how certain ways of demonstrating impact might already tip the scale toward particular values.
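To make the weighting idea concrete, here is a minimal sketch, in Python, of what an openness-weighted indicator could look like. It is our illustration only: the `oa_bonus` weight, the `Output` fields, and the use of citations as the baseline indicator are all assumptions, not part of any actual HuMetrics framework or tool.

```python
# Hypothetical sketch: a values-weighted score in which open-access
# outputs receive extra weight. The weight and fields are illustrative
# assumptions, not an actual HuMetrics indicator.
from dataclasses import dataclass

@dataclass
class Output:
    title: str
    citations: int
    is_open_access: bool

def openness_weighted_score(output: Output, oa_bonus: float = 1.5) -> float:
    """Scale a baseline indicator (here, citations) by an openness weight."""
    weight = oa_bonus if output.is_open_access else 1.0
    return output.citations * weight

# The same citation count scores higher when the work is openly available.
paywalled = Output("Paywalled article", citations=10, is_open_access=False)
open_access = Output("OA article", citations=10, is_open_access=True)
print(openness_weighted_score(paywalled))    # 10.0
print(openness_weighted_score(open_access))  # 15.0
```

The point of the sketch is not the particular numbers but that the weight makes the value judgment explicit: whoever sets `oa_bonus` is declaring, in public, how much openness counts.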
For our quick 20-minute afternoon exercise, each of us is taking a crack at writing about one of those five values: Equity, Openness, Collegiality, Quality, and Community. In our framework, Quality can take on one or more of the following characteristics:
Replication and reproducibility have a certain emphasis in some of the social sciences, but the terms might in other contexts also be thought of in the sense of extensibility. It’s important to note that these are preliminary notions, and we welcome your feedback.
Keep in mind that we’re considering metrics that can apply to multiple kinds of academic processes and outputs: not just whether your article or book is of high quality (currently measured, say, by whether the article appears in a journal with a high impact factor, or whether your book receives a certain number of citations), but also whether you play a role in helping measure the degree of quality of an object (say, serving as a peer reviewer for a grant application, a reviewer for a book, a referee for an article, etc.). Both kinds of activities are part of the transaction related to “quality,” but currently we overwhelmingly incentivize and reward the former rather than the latter. Focusing on the transactional side of quality unpacks the relationships and scholarly networks that undergird much of our work, disturbing the notion of the individual act of scholarship by revealing the deep relationships behind scholarly works.
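As a concrete illustration of crediting both sides of that transaction, here is a small sketch in Python; the event names and equal weights are invented for this example and are not proposed HuMetrics indicators.

```python
# Hypothetical tally that credits both producing quality-assessed work
# and assessing the quality of others' work. Event names and weights
# are illustrative assumptions only.
QUALITY_EVENTS = {
    "article_published": 1.0,    # producing an assessed object
    "article_refereed": 1.0,     # assessing an article for a journal
    "book_reviewed": 1.0,        # reviewing a book manuscript
    "grant_peer_reviewed": 1.0,  # reviewing a grant application
}

def quality_tally(events: list[str]) -> float:
    """Sum credit across produced and assessed outputs alike."""
    return sum(QUALITY_EVENTS.get(event, 0.0) for event in events)

# A scholar who publishes once but reviews three times is credited for
# all four contributions, not only the publication.
print(quality_tally([
    "article_published",
    "article_refereed",
    "article_refereed",
    "grant_peer_reviewed",
]))  # 4.0
```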
Another difficulty with current methods for measuring impact related to quality is the well-noted problem of context (such as whether a citation is a positive one or a negative one), and the degree to which such measurements lend themselves to a certain kind of gaming of the system (through the overuse of citations to drive up citation scores). How do we successfully implement speed bumps (rather than roadblocks) that require some small additional effort that will likely not prevent gaming the system but may, to carry the metaphor, slow it to a reasonable speed? The use of “active citation” (a precise and annotated citation with a link to the source), as argued by Andy Moravcsik in the context of political science research, is one potential method, especially for more qualitative work.
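To show what a structured “active citation” might contain, here is a minimal sketch in Python; the field names and record layout are our assumptions for illustration, not Moravcsik’s specification or any published standard.

```python
# Hypothetical record for an "active citation": a citation that carries
# an annotation and a direct link to the cited passage. Field names are
# illustrative assumptions, not a published standard.
from dataclasses import dataclass

@dataclass
class ActiveCitation:
    cited_work: str   # full bibliographic reference to the source
    locator: str      # page, section, or paragraph cited
    source_url: str   # link to the cited passage or document
    annotation: str   # why this source supports the claim being made

example = ActiveCitation(
    cited_work="Author, 'Title of Cited Work' (Year)",  # placeholder reference
    locator="p. 12",
    source_url="https://example.org/source",            # placeholder URL
    annotation="The quoted passage directly supports the claim, "
               "making the citation checkable at a glance.",
)
print(example.annotation)
```

The small extra effort of filling in the locator and annotation is precisely the kind of speed bump described above: it does not make citation gaming impossible, but it raises its cost.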
Today is Day Two of the Triangle Scholarly Communication Institute, where I’m heading up a team that’s focusing on HuMetrics: Building Humane Metrics for the Humanities. Our team has focused a lot on the importance of working out loud, of process over product, and […]
Second day at #TriangleSCI with the #HuMetrics team. Today we focused on clustering our values into major value-categories (Equity, Openness, Collegiality, Quality, and Community) with the idea that excellence in scholarship is an expression and combination of these value-categories as they are embodied in scholarly […]
What do we mean by “equity”? For us, equity is very much the concept described by Falk et al. (1993:2):
“Equity derives from a concept of social justice. It represents a belief that there are some things which people should have, that there are basic needs that should be fulfilled, that burdens and rewards should not be spread too divergently across the community, and that policy should be directed with impartiality, fairness and justice towards these ends.”
In the academy, valuing the principle of equity can (and we would argue should) inflect a number of everyday activities: creating courses, advising and mentoring students (and colleagues), organizing conferences, facilitating workshops, appointing or serving on search committees and editorial boards — and so much more. The results of embracing work done in the spirit of achieving an equitable academy? We want to imagine a world in which all who partake in teaching, learning, reading, researching, and writing — in short, all of us engaged in the scholarly enterprise — commit to actively listen and to openly question our own assumptions, to share, to amplify, and ultimately to empower.
Apologies for the false dichotomy I’ve set up by framing this post’s title as “impact versus influence.” It’s a result of the quickblogging process, one that Christopher Long, Rebecca Kennison, Nicky Agate, Simone Sacchi, Jason Rhody, and I agreed upon as […]
Today is the first day of the Triangle Scholarly Communication Institute, where I am part of a team that’s focusing on HuMetrics: Building Humane Metrics for the Humanities. We have each agreed to quickly blog some thoughts as part of our process; warning: what follows […]
As one might expect, many of those values were ones that embraced “openness” — whether “equitable access,” “open process,” “engagement,” “transparency,” “open source,” “candor,” “accessibility,” or “sharing.” What struck me, however, as someone who spends all day every day discussing “open access,” was what was missing from our list of explicitly “open” products of scholarship. What made that list? Not articles, not books, not even data; instead, we identified as explicitly “open” only “preprint OA” and “OERs” (open educational resources).
Why only those two products? Was it because we assumed that “openness” was already inherent in other products and outputs? Was it because we were thinking intentionally about values that are not currently rewarded, and so tagged as “open” only those products that also fall into that category? I don’t know. But it does point to the different kind of thinking that the Triangle SCI inspires. I am looking forward to tomorrow!
First day at #TriangleSCI working with the #HuMetrics team. It is quite amazing what can happen when you put together in a room people from faculty, administration, granting agencies, and societies with scholarly communication, information science, and metrics experts — without the constraints of their […]
Today is the first day of the Triangle Scholarly Communication Institute, where I’m heading up a team that’s focusing on HuMetrics: Building Humane Metrics for the Humanities. Our team has focused a lot on the importance of working out loud, of process over product, and […]