Hacking evaluation: towards values-based professional advancement practices for the digital humanities Presenter: Stacy Konkiel Digital humanities scholars face a hard, continued socio-technical problem: though the results of their research are often web-native, interactive, and iterative (think: websites, exhibits, datasets, and more), their careers are often […]
This 2.5-day intensive workshop at Michigan State University will bring approximately 25 humanities and social science (HSS) scholars and administrators at all levels and from all types of institutions into conversation with each other and with the project team. Working groups will focus on each of the proposed five values (Equity, Openness, Collegiality, Quality, Community) as they might relate to practices in academic HSS disciplines; the intention is not simply to reaffirm the values of the framework, but to interrogate, challenge, and revise them.
With six weeks to go until our first workshop, it was a somewhat tense HuMetricsHSS team that met up at the Social Science Research Council in Brooklyn in early September. We didn’t exactly relieve the tension by starting with a conversation about living our values as a team: how can the seven of us, scattered from Utrecht to New York, Michigan to Minnesota, live our own values as we work our way through what is often a difficult process? Like any team, we all have our preferred working styles and communication styles. They may not align. And then there’s the fact that this project is additional labor beyond our day jobs — a labor of love, but labor nonetheless — for all of us, and we all have lives, and families, and commitments, and bodies, and hormones, and frustrations that might come into play at any point. We thought through what candor and collegiality might mean for a remote team, how we might better communicate with one another in open but collegial ways. Spilling curry sauce all over the SSRC carpet and into a colleague’s bag was not, in retrospect, my proudest “embodying the team values” moment.
As a fiery-tempered, stubborn Irish lass, I realize that I haven’t always lived the shared values that this team came up with last October — but I want to. I hadn’t necessarily been thinking about the application of this framework outside the academy until fairly recently, and now its existence is beginning to permeate all areas of my life. I’m finding the preliminary framework invaluable: it’s a checkpoint of sorts (I’ve even put up a copy of the infographic on my wall at work), a reminder to double-check before I speak, write, agree, organize: Am I being constructive? Am I thinking about inclusivity and community as I put together any kind of team? Am I thinking about them when I agree to be on any kind of panel or board? Am I working in a transparent and reproducible way — is my process open? — or am I keeping my knowledge and workflows to myself? Am I collegial and candid in my dealings with my colleagues and peers? Am I willing to be accountable for my mistakes, and to learn from them too? (My partner might have a thing or two to say about that!) As a team, we’re working on it.
But the preliminary framework against which we’re trying to double-check ourselves only represents the shared values of a small group of people who met in North Carolina last October. The real work begins in Michigan this week, when an insightful group of thinkers — faculty members of all ranks, teaching in any number of HSS disciplines at all kinds of institutions, along with administrators, graduate students, university publishers, and librarians — has agreed to come together to rip apart, interrogate, and rebuild that values framework, to come to a consensus on the values we share as a larger group. We’ll be opening that conversation up on Twitter, too, so please feel free to follow along and share your thoughts at #hssvalues.
Our Brooklyn meeting quickly turned to conversations about what we want to achieve during this workshop and how we might hope to achieve it. We’ve all experienced loosey-goosey thought forums where participants come together without much structure and separate without much sense of what exactly they were doing — and why. And we’ve all attended overly structured, deliverable-driven meetings that ask the participants to do the work of the project, rather than interrogate and criticize that work. We know what we don’t want, then.
We’re asking our participants to complete an assignment before they join us: breaking down a given scholarly practice (such as creating a syllabus) into the set of micro-practices that go into its creation (compiling a reading list, choosing assignments, writing a code of conduct) and the objects that might be produced by it (student work, a bibliography, etc.). When we’re together in East Lansing, we’ll try to think about what a values-based approach to that practice would look like. You can take a look at the agenda here — and again, if you aren’t joining us in Michigan, please follow along and participate using the #hssvalues hashtag.
This October, the HuMetricsHSS team is excited to bring together a diverse group of scholars, teachers, administrators, and students from a wide range of institutions for a topic that we believe will transform academia. Over the course of a two-day workshop, we’ll interrogate, brainstorm, break apart, […]
“A critical component of our emerging #Humetrics conversation at Triangle SCI involves finding ways to expose, highlight, and recognize the important scholarship that goes into the all-too-hidden work of peer review, syllabus development, conference organizing, mentoring, etc. Our current metrics fail to capture what is most substantive about the rich life of scholarship we practice together in living academic communities.”
— Christopher Long, “Nurturing Fulfilling Scholarly Lives”
“We want to argue that there’s much more to scholarship than publishing regularly in high-impact journals,” suggests my colleague Rebecca Kennison; indeed, we want to argue that there’s much more to scholarship than publishing. So how might the products and processes of scholarship writ large (from journal creation to peer review to organizing a conference to mentoring) be recognized and rewarded as scholarship? And how might they encourage practices that embody the five core values we saw as essential to the humanistic endeavor, rather than bolstering what Stacy Konkiel calls “corrosive values” such as bureaucracy, exclusivity, and competition? Shouldn’t we be rewarding practices that enrich the scholarly community rather than ones that detract from it?
On the last day of this week’s Triangle Scholarly Communication Institute, we tried to boil down all the brainstorming during team time, discussions over lunch, arguments over bourbon, subtweets, comments, and questions to a concrete example of what we mean by #Humetrics. We chose the syllabus as guinea pig: what questions could we ask ourselves about our syllabi to ensure that they, as objects, embody the values (Equity, Openness, Collegiality, Quality, and Community) and practices we want to encourage and reward in the humanities community? (I’ll just note here that the “we” and the “our” I mention currently represents our TriangleSCI team. But they’re a “we” and an “our” in need of expansion, so that they come to represent a constellation of scholarly communities in the humanities. This is partly why we’re working through our process publicly; we encourage anyone reading this to comment on that set of proposed values, to add to them, argue against them, and so on.)
Humanities Values: A List in Progress
Some (non-exhaustive) examples, then, of the questions one might ask oneself about a syllabus:
Equity: Am I including works by women? Am I including works by underrepresented groups? Am I attentive to diverse perspectives? Am I taking the cost of required course materials into consideration? Do I have a code of conduct? If online, is my syllabus ADA compliant?
Openness: Are there open access versions of the assigned material available? Do I assign a diverse range of materials, from books and blog posts to news articles and video? Is the syllabus itself openly available for others to consult, reuse, and remix?
Collegiality: Do my students have a safe space to speak? Am I encouraging constructive feedback? Does my code of conduct encourage kindness and generosity? Do I credit others’ work?
Quality: Does my syllabus reflect student or peer feedback from previous semesters? Does the syllabus push the boundaries of the discipline? Do I provide my students with a “general analytic framework with which to approach the corresponding readings and assignments”? (Thanks to Kevin Gannon for that last one, and to Donna Lanclos for pointing it out.)
Community: Does my syllabus enable interdisciplinary conversation? Do I encourage engagement with the world outside the classroom (and the campus)?
If the objects and processes of our scholarship embody values that enrich our professional lives and our scholarly community, shouldn’t they be rewarded? Aren’t these a truer indication of excellence than any article download or citation count?
In Aristotle’s Nicomachean Ethics, there is a famous passage in which he reminds us that “to be happy takes a complete lifetime; for one swallow does not make a spring, nor does one fine day; and similarly one day or a brief period of happiness […]
Today is the second full day of the Triangle Scholarly Communication Institute, where I am part of a team that’s focusing on HuMetrics: Building Humane Metrics for the Humanities. We have each agreed to quickly blog some thoughts as part of our process; warning: what follows is quickly-crafted prose, full of rough edges.
The #HuMetrics team is making progress in its attempt to reverse engineer what we want to measure, starting with the values that often govern the (in some ways broader, more comprehensive) range of work we engage in (see Chris Long’s previous post for further discussion). We were able to distill a wide-ranging brainstorm of values into five categories, and we aim to think about how those value categories relate to the processes and products of our work. The core of our conversation today centered on the fact that while metrics are often seen (or taken) as the end-goal, the indicators in fact always align to a value, and so part of the work here, in this free and open thinking space, is to be aspirational about the values we’d like to see elevated, incentivized, and rewarded. If openness is prioritized as a value, for example, one could imagine giving more weight to articles and/or journals that are OA rather than not. If that seems like an extreme example, it’s perhaps a worthy future exercise to consider the ways in which the methods we already use to show impact might quietly tip the scale toward particular values.
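To make the OA-weighting idea above concrete, here is a deliberately toy sketch. Everything in it — the function name, the 1.5 multiplier, the notion of a numeric “base score” — is invented for illustration only, and is not part of any actual HuMetrics framework or real scoring system:

```python
# Toy illustration: a value-weighted score in which open-access outputs
# receive a multiplier. The function name, weights, and numbers are all
# hypothetical, chosen only to show the shape of the idea.

def toy_score(base_score: float, is_open_access: bool, oa_bonus: float = 1.5) -> float:
    """Return a score that rewards openness with a simple multiplier."""
    return base_score * oa_bonus if is_open_access else base_score

# Two articles with the same baseline "impact", differing only in openness:
paywalled = toy_score(10.0, is_open_access=False)    # -> 10.0
open_access = toy_score(10.0, is_open_access=True)   # -> 15.0
```

The point of the sketch is not the particular arithmetic but the design choice it makes visible: any indicator encodes a value judgment, and making the weight explicit forces that judgment into the open.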
For our quick 20-minute afternoon exercise, each of us is taking a crack at writing about one of those five values: Equity, Openness, Collegiality, Quality, and Community. In our framework, Quality can take on one or more of the following characteristics:
- Pushing boundaries
- Advancing knowledge
- Replication and reproducibility
Replication and reproducibility receive particular emphasis in some of the social sciences; in other contexts, the same ideas might be thought of in terms of extensibility. It’s important to note that these are preliminary notions, and we welcome your feedback.
Keep in mind that we’re considering metrics that can apply to multiple kinds of academic processes and outputs: not just whether your article or book is of high quality (currently measured, say, by whether the article appears in a journal with a high impact score, or whether your book receives a certain number of citations), but also whether you play a role in helping assess the quality of an object (say, serving as a peer reviewer for a grant application, a reviewer for a book, a referee for an article, etc.). Both kinds of activity are part of the transaction related to “quality,” but currently we overwhelmingly incentivize and reward the former rather than the latter. Focusing on quality as transactional in this way unpacks the relationships and scholarly networks that undergird much of our work, disturbing the notion of individual acts of scholarship by revealing the deep relationships behind scholarly works.
Another problem with current methods of measuring impact as a proxy for quality is the well-noted difficulty of addressing context (such as whether a given citation is positive or negative), and the degree to which such measurements lend themselves to gaming the system (for instance, through overuse of citations to drive up citation scores). How do we successfully implement speed bumps (rather than roadblocks): small additional efforts that will likely not prevent gaming the system but may, to carry the metaphor, slow it to a reasonable speed? The use of “active citation” (a precise, annotated citation linked to its source), as argued for by Andy Moravcsik (PDF) in the context of political science research, is one potential method, especially for more qualitative work.
Follow team #HuMetrics as we wrestle with humanities metrics. We are Christopher Long, Rebecca Kennison, Stacy Konkiel, Simone Sacchi, Jason Rhody, and Nicky Agate, and we’ll be writing here all week.
Today is Day Two of the Triangle Scholarly Communication Institute, where I’m heading up a team that’s focusing on HuMetrics: Building Humane Metrics for the Humanities. Our team has focused a lot on the importance of working out loud, of process over product, and […]