Recapping the Third HuMetricsHSS Workshop: Valuing and Evaluating Annotation

In November 2018, the HuMetricsHSS team gathered for our third workshop on values and research impact metrics. This workshop examined the role of annotations as a scholarly indicator.

The purpose of the first day of deliberations was to understand what it would look like to demonstrate one’s values through the process of annotating scholarly works.

We began by picking apart what we mean when we talk about "annotations." The idea of critiquing another scholar’s work by annotating it is closely related to the peer review process, a more formalized vehicle for analyzing others' contributions to a field. As expected from a group of humanists and social scientists, we problematized the idea of "annotations" thoroughly, pointing out issues such as:

  • Public vs. private annotations (e.g., roughly 60% of annotations in Hypothes.is are made in private groups); how can you use annotations as an indicator if they are most often private, and the data thus unavailable for metric development?
  • How does Twitter (as a vehicle for commentary) factor in? Similarly, where do critical editions fit in discussions about annotations?
  • How do you capture offline engagement with research (e.g., marginalia), when new indicator development is biased toward what can easily be text-mined?
  • What about annotations as a scholarly product unto themselves? Some annotations secondarily benefit readers, who learn from the enriched text.

The common theme in our deliberations was that annotations are discussions around a central text that shape the reader’s understanding of that text.

Grounded in this common understanding, we then moved on to discuss how values can play out through one’s own annotation practices. For example, integrity (though absent from the proposed HuMetricsHSS framework) was seen by workshop participants as central to the scholarly review and engagement process. Equity is another example: how can we moderate or put boundaries around an annotation process in a way that invites all relevant voices to the table, no matter one's stature or credentials?

Providing values-infused structures for inviting others to annotate your work is another means of putting one's values into practice. For example, Kathleen Fitzpatrick invited feedback and annotations on a draft of her latest book in a transparent manner, on an open source platform (Humanities Commons).

Any invitation to annotate asks for an investment (of time and attention, intellectual standing, generosity, etc.) from others. We agreed that recognition (through acknowledgements in papers and books, collaboration opportunities, or even simply continuing a conversation started in the margins through emails, letters, etc.) was an important aspect of repaying this investment.

If we view annotations as enriching to source texts, it follows that we should value annotations themselves as scholarship. The second day of the workshop sought to examine how annotators can use their contributions to the field to tell a story about their scholarship.

In breakout groups, we discussed a number of initiatives that give credit to annotators in a variety of ways. Climate Feedback was suggested as a model for recognizing the contributions of annotators; in a way, it is the equivalent of serving on a panel or journal editorial board, bringing disparate content from across the web into one view with added commentary. Other suggested forms of credit included:

  • Group membership for a project/research initiative; a shared way of articulating the nature of one’s annotation work
  • Letters from authors or colleagues about one's annotation practices
  • Micro-publications based on annotations (e.g., public holistic comments in the Public Philosophy Journal)
  • Paratexts (e.g., the comments for Generous Thinking as they are connected to the final draft of the book)

"Number of annotations made" was widely agreed to be a poor indicator for intellectual contributions to a field. Instead, the burden falls on the individual to articulate what their annotation activities are in narrative form, arguing what and why the annotations are and why they are important. That asks a lot of the researcher, who needs to put in a lot of time to think about their personal values and how their annotations support that value. Participants discussed ways to automate part of this process, so that researchers spent more time telling the story of their scholarship rather than performing rote collection of their contributions.

We wrapped up the workshop by recapping the HuMetricsHSS initiative and where we are headed next, plans that we look forward to sharing with you in the coming months!