Rethinking humane indicators of excellence in the humanities and social sciences

On “The Value of Values” Workshop, Part 1

During the 2016 TriangleSCI, the HuMetricsHSS team spent a lot of time making lists. Lists of scholarly objects, large and small. Articles, books, syllabi, annotations, editions, committee minutes, e-mails, and blog posts. We even grappled with how to represent the “object” that comes out of […]

HuMetricsHSS: Can (Should) We Develop Humane Metrics for the Humanities?

This post was originally written by Adriel Trott, Associate Professor in Philosophy at Wabash College and participant in the HuMetricsHSS “The Value of Values” (#hssvalues) workshop. It reflects her experiences of the event. We’ve cross-posted this from Adriel’s blog, with her permission. I just got back […]

Transforming Research 2017: Can an exploration of values move research evaluation practices forward?

Last week, the inaugural Transforming Research conference took place in Baltimore, MD. This meeting was unique in its ability to bring together researchers, administrators, librarians, funders, and vendors to talk about the current evaluation climate and brainstorm solutions for improving collective practices.

Immediately, the theme of scarcity–so prevalent in the prior week’s HuMetricsHSS #hssvalues workshop (more info coming soon!)–became central to our discussions. Most presenters hinted at the same question: how can we find the best metrics to help us do our jobs with fewer resources, faster? But no one seemed too concerned with the assumptions that got us to this place (the commodification of the research endeavor and of higher ed).

Nevertheless, two sessions from Transforming Research stuck out to me.

The first was a talk by Chaomei Chen (Drexel University), who explored the role of uncertainty in driving knowledge creation and disciplines forward. Chen’s research uses advanced textual analysis techniques to gauge the certainty of statements made in the STEM literature (e.g. “HIV may cause AIDS”), looking specifically for hedging statements to find where uncertainty occurs. He argued that uncertainty often signals that one is on the precipice of exciting new discoveries–that the work is boundary-pushing.
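
Chen’s actual pipeline uses far more advanced techniques, but the core intuition–flagging claims that are linguistically hedged–can be shown in a toy sketch. (This is my own simplification, not Chen’s method; the cue list and example sentences are invented for illustration.)

```python
import re

# A toy list of hedging cues; real hedge lexicons (and Chen's models)
# are far richer than this.
HEDGE_CUES = ["may", "might", "could", "suggest", "suggests",
              "possible", "possibly", "likely"]

def hedge_score(sentence: str) -> int:
    """Count how many distinct hedging cues appear in a sentence."""
    text = sentence.lower()
    return sum(bool(re.search(r"\b" + re.escape(cue) + r"\b", text))
               for cue in HEDGE_CUES)

sentences = [
    "HIV may cause AIDS.",                     # hedged: an early, uncertain claim
    "HIV causes AIDS.",                        # unhedged: settled knowledge
    "These results suggest a possible link.",  # doubly hedged
]

for s in sentences:
    print(hedge_score(s), s)  # prints 1, 0, 2 respectively
```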

The second session featured the conference’s lone humanists, a group of philosophers (Robert Frodeman, University of North Texas; J. Britt Holbrook, New Jersey Institute of Technology; and Steve Fuller, University of Warwick) who led a lively debate over the value of research metrics. All agreed that metrics can, in theory, be a good thing, but the three came down in different places as to whether metrics could be at all useful in evaluation.

Holbrook suggested that metrics are only useful inasmuch as they help individual researchers grow in their careers; for example, by looking at social media engagement with a research article, one can find potential collaborators. Frodeman and Fuller argued that metrics could be good for evaluation–if only they were built a bit smarter. Fuller–a futurist–believes that the Big Data capabilities now available at most institutions can help us build intricate, precise metrics that measure exactly what we want them to measure (much as Google’s search algorithm is excruciatingly complex, but because it does what we need it to do, we don’t worry too much about the mechanics).

Frodeman drove home this point by saying, “We need to design metrics that fail gracefully–that steer human activities in the right direction, even when they don’t work as well as we want them to.” His point brought to mind how researcher “productivity” is increasingly understood by counting the number of publications written per year, and how that can drive down the quality of publications overall. In theory, it’s possible to design a metric that doesn’t encourage that kind of behavior–but what that metric is, we don’t yet know.

I also had the honor of speaking at this meeting, and brought considerations of values into the conversation. Below, I’ve recapped my talk, the slides of which are available for download on Humanities Commons.


I want to start this talk by asking you to write down your answers to the following questions. I’m serious: write them down (even you, the people reading this post!):

  • What values drive your organization?
  • What values drive your personal work, your professional growth and development?
  • If you work in a role that serves researchers (e.g. administrator, librarian, etc.), what values do you think drive their work?

You might be wondering, “Why values?”

We’re talking values because we know that misused metrics are damaging the academy and providing perverse incentives to knowledge creation.

Nowadays, we ask researchers to prove their value a lot: in the annual review process, when going up for tenure, when applying for grants.

This is because we’ve been given the impression that resources are scarce, and that funded research and university dollars must show the best “return on investment”–must make the most impact. So we ask researchers to prove their personal value all the time. They usually do it with citations: I’ve got an h-index of 15, or my latest paper’s been cited 100 times in the past six months–isn’t that great?
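
(An aside for readers unfamiliar with the h-index: you have an h-index of h if h of your papers have each been cited at least h times. A minimal sketch of the computation, with invented citation counts:)

```python
def h_index(citation_counts: list[int]) -> int:
    """Largest h such that h papers each have at least h citations."""
    h = 0
    for rank, cites in enumerate(sorted(citation_counts, reverse=True), 1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Seven papers with these citation counts give an h-index of 3:
# the top 3 papers each have >= 3 citations, but the 4th has only 3 (< 4).
print(h_index([10, 8, 5, 3, 2, 1, 0]))  # -> 3
```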

But this insistence on proving value–when not grounded in values–causes researchers to begin to focus on the wrong questions.

Rather than “What’s worth researching?” they ask “What’s a hot topic that’ll get me cited a lot?”

Rather than asking “What resources–if any–do I need to undertake my studies in a thorough and complete manner?”, they ask, “What will get me funded?” (because at many institutions, having funding–even if you don’t really need it–is desirable in its own right).

Rather than asking, “What important new knowledge do I want to share with my colleagues and the public about my research findings, and what’s the most effective format to do that in?” they ask “How fast can I write articles, and how many can I write?”–because they are asked to be more productive by writing more, more quickly.

[Image: an oil rig, captioned “It’s all rather crude.”]

It’s all rather crude, in two senses of the word.

The first way it’s “crude” is that by oversimplifying the metrics we use to understand the influence of research, we rely on rudimentary–and in some cases incorrect and damaging–measures, and we incentivize the wrong behaviors: quantity of articles over quality of research; publishing in time-consuming, toll-access, hard-to-get-into venues like Nature rather than in open access journals with arguably bigger audiences and quicker impact; and so on.

It’s crude in another, more important way, too: it treats the research endeavor as mere value to be extracted, like crude oil from the earth–more papers, more citations, more grant dollars, and so on. Even objectives- and outcomes-oriented assessment can do this, because those models don’t always allow for failure or account for what happens when we fall short of our goals but still learn a great deal.

It’s dehumanizing and damaging to the pursuit of knowledge, the advancement of humankind.

These deficiencies in the status quo are why we need a reimagined approach to research evaluation–one that’s situated in community values and keeps the humanity of the scholar at its core. We need to transform not only the research portfolio but also how we understand research impact. That’s where HuMetrics come in.

We’ve proposed a set of core values that we believe drive scholarship, and metrics can, in theory, help us embody those values.

Our five proposed core values are:

  • Collegiality, which can be described as the professional practices of kindness, generosity, and empathy towards other scholars and oneself;
  • Quality, a value that demonstrates one’s originality, willingness to push boundaries, methodological soundness, and the advancement of knowledge both within one’s own discipline and amongst other disciplines and the general public as well;
  • Equity, or the willingness to undertake study with social justice, equitable access to research, and the public good in mind;
  • Openness, which includes a researcher’s transparency, candor, and accountability, in addition to the practice of making one’s research open access at all stages; and
  • Community, the value of being engaged in one’s community of practice and with the public at large, and also leadership.

By aligning evaluation practices and metrics with these values, we believe that academia and adjacent players–like funders–can better incentivize positive scholarly practices.

Our values come from a very particular worldview, one that’s situated in the humanistic approach to research–which doesn’t assume objectivity, and relishes nuance and muckiness.

[Image: #hssvalues workshop members crowded around tables, looking at posters]

In early October 2017, we hosted a workshop to test these values with a group of twenty-five humanists and social scientists at all stages of their careers, from community colleges, R1 institutions, and Ivy League schools.

We naively thought that we would come out of the workshop with an agreed-upon set of values shared across the academy (at least in the social sciences and humanities), but we couldn’t have been more wrong! At the end of the workshop–two full days of debate and thinking and brainstorming and support–the only thing that everyone could agree upon was the importance of being able to debate your values, and the value of process in helping people with very different worldviews come to agreement on the values they did share.

There probably are not “core values” shared by all humanists and social scientists.

So, that was the value of our work last weekend. We failed in our original goal. We discovered that there is likely no shared core set of values that can be applied in any situation, across institutions and career levels.

There are likely shared values in organizations and departments.

But that’s not to say that core values don’t exist. At the organizational level, they probably already exist for you. They’re in your mission statements, your departmental strategic directions documents, the annual reports you write for your funding agency. They’re probably highly contested, hard-won, and occasionally subject to disbelief, cynicism, and ridicule. But they’re there. And you can use better metrics to measure your progress towards embodying those goals. You can use them to guide your funding decisions, your tenure cases, your personal work.

I’ll end with an example. Let’s say you’re a historian of widget production in the greater Baltimore area during the Great Migration. You want to understand your own progress towards doing quality scholarship in this field.

If you were driven by the value of doing quality work–rather than focused only upon citations–you could ask some very different questions of yourself:

  • Are my methods reproducible?
  • Do I show creativity in my approach?
  • Does my body of work advance knowledge in my field?
  • Am I intentional in my approach?

Measuring yourself by these questions isn’t as hard as you might think, either! Here are some back-of-the-envelope thoughts on more precise indicators you can use to understand your progress towards embodying the goal of quality in your work:

  • Quality: book reviews, peer reviews
  • Reproducibility: cited in Methods sections, “forked” on GitHub
  • Creativity: interdisciplinary citing, new formats, depth of elaboration
  • Advancing knowledge: sustained citations over time, awards, sustained social media discussions
  • Intentionality: time spent/depth of thinking, regular reflection upon goals
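
If you wanted to track indicators like these over time, even a tiny bit of structure would do. Here’s a back-of-the-envelope sketch for the quality-related indicators above (the field names and numbers are hypothetical, just to show the bookkeeping):

```python
from dataclasses import dataclass

@dataclass
class QualityIndicators:
    """Illustrative self-tracking for the value of Quality."""
    book_reviews: int = 0                 # quality: reviews of your books
    methods_citations: int = 0            # reproducibility: cited in Methods sections
    interdisciplinary_citations: int = 0  # creativity: cited outside your field
    years_cited: int = 0                  # advancing knowledge: sustained citations
    goal_reflections: int = 0             # intentionality: reflections logged

    def summary(self) -> str:
        return (f"{self.book_reviews} book reviews, "
                f"{self.methods_citations} Methods citations, "
                f"{self.interdisciplinary_citations} interdisciplinary citations, "
                f"cited across {self.years_cited} years, "
                f"{self.goal_reflections} goal reflections")

me = QualityIndicators(book_reviews=2, methods_citations=5,
                       interdisciplinary_citations=3,
                       years_cited=4, goal_reflections=12)
print(me.summary())
```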

Now let’s turn it back to you. I want you to revisit the values you wrote down at the start of this post. What are they? How will you measure your own progress towards embodying them? Think creatively and boldly. And I invite you to share them with the larger scholarly community pondering HuMetricsHSS by emailing us at humetricshss@gmail.com or tweeting them to us @humetricshss.

Featured image courtesy of Kristi Holmes/Twitter

On Living Our Values While Under Stress (and Preparing for Workshop One)

The real work of the HuMetricsHSS initiative begins in Michigan this week, when an insightful group of thinkers—faculty members of all ranks, teaching in any number of HSS disciplines at all kinds of institutions, along with administrators, graduate students, university publishers, and librarians—has agreed to come together to rip apart, interrogate, and rebuild that values framework, to come to a consensus on the values we share as a larger group.

HuMetricsHSS at LIBER2017: Meeting the European Research Library Community

I had the opportunity to present the HuMetricsHSS project to the European research library community at LIBER2017, the 46th LIBER Annual Conference, held in Patras, Greece, in July.

IFLA WLIC 2017: Exploring Values-Based (Alt)Metrics to Enhance Library Services

In August, I shared the HuMetricsHSS project with an international cross-section of librarians at the IFLA World Libraries Information Congress in Wrocław, Poland.

In this post, I’ve adapted a portion of my talk. You can download the related conference paper that explores these ideas in-depth from Humanities Commons.

Libraries are no strangers to thinking about their values, and designing services to match those values.

Some basic values are not controversial: for example, responding to patron requests professionally and competently.

Other values demand activism, like defending intellectual freedom or supporting a range of social justice issues. For example, in the United States, some libraries have created Black Lives Matter displays to inform their patrons of current struggles for civil rights.

Still others argue that neutrality is a crucial ethical value for libraries. Though not everyone agrees, many libraries’ acquisition and intellectual freedom policies are shaped by the value of neutrality.

Many academic libraries have taken an interest in research evaluation metrics, which is a hot topic nowadays in academia in general. In fact, libraries have been integral to research evaluation efforts worldwide.

"Metrics and goals are mismatched"

But there are important challenges for the field of research evaluation. For one, humanities researchers are being evaluated using metrics that aren’t appropriate for them–for example, journal impact factors don’t really apply in the humanities, but we’ve heard anecdotally that administrators are considering them (by way of using evaluation tools that report on JIFs).

Also, while many institutions claim to care about teaching, professional service, mentoring, and other aspects of scholarly life, evaluation practices in the US are usually focused primarily upon research–how much you’ve published, how often it’s being cited, and so on.

But these problems can’t be blamed simply upon the use of research evaluation metrics.

The problem is how we’re using these metrics. We’re trying to fit square metric pegs into round evaluation holes, as it were. In other words, it’s the misalignment between the values that institutions hold dear and the metrics they use to evaluate their researchers that causes these problems.

That’s where HuMetrics come in.

The HuMetricsHSS project imagines a better use of research evaluation metrics in the humanities and social sciences, and a broadening of evaluation practices to consider and reward service, teaching, mentorship, and myriad other aspects of a scholar’s work.

We’re a research team that’s taking a bottom-up approach to making metrics better for evaluation. Our initial research has identified five core values that underpin scholarly pursuits in HSS.

These values are:

  1. Collegiality, which can be described as the professional practices of kindness, generosity, and empathy towards other scholars and oneself;
  2. Quality, a value that demonstrates one’s originality, willingness to push boundaries, methodological soundness, and the advancement of knowledge both within one’s own discipline and amongst other disciplines and the general public as well;
  3. Equity, or the willingness to undertake study with social justice, equitable access to research, and the public good in mind;
  4. Openness, which includes a researcher’s transparency, candor, and accountability, in addition to the practice of making one’s research open access at all stages; and
  5. Community, the value of being engaged in one’s community of practice and with the public at large, and also leadership.

By aligning evaluation practices and metrics with these values, we believe that academia–including libraries–can better incentivize positive professional behavior.

Taking the IFLA 2017 World Congress theme of Libraries. Solidarity. Society. as a starting point, let’s explore how libraries might embody and evaluate services based on these values.

For each of these examples, I’ll first give some scenarios in which libraries might embody a particular value, then suggest some useful metrics or indicators that we can use to judge success by.

Equity and collection management

In a library landscape that embraced Equity, more collection dollars might go towards supporting Open Access publishing practices in the humanities and social sciences, so that everyone might benefit globally from the production of knowledge locally. Collection budgets might also be allocated to prioritize materials needed by marginalized communities locally (e.g. migrants, the LGBT community, etc).

In a way, these efforts towards Equity are relatively easy to measure: one could showcase the growth in Open Access fund monies, or the percentage of one’s budget devoted to community-relevant materials.

One might also consider growth in citations to, and discussions of (via altmetrics), one’s institution’s Open Access research from communities worldwide as an indicator of increased access, and therefore Equity.
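
The arithmetic here is deliberately simple: a year-over-year growth calculation is all it takes. A quick sketch, with invented dollar figures:

```python
def yoy_growth_pct(previous: float, current: float) -> float:
    """Year-over-year growth, as a percentage of the previous year."""
    return (current - previous) / previous * 100

# Hypothetical Open Access fund allocations, in dollars
oa_fund = {2016: 40_000, 2017: 52_000}

print(f"OA fund grew {yoy_growth_pct(oa_fund[2016], oa_fund[2017]):.1f}%")
# -> OA fund grew 30.0%
```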

Openness and purchasing

Similar support exists for the value of Openness. Viewed through the lens of library purchasing decisions, here’s how that might play out: one might subscribe only to OA-friendly journals (which allow self-archiving and/or offer the option of Gold OA publishing); one might direct collections dollars towards OA funds; or one might buy tools that help researchers practice open scholarship (e.g. Figshare, Zotero, GitHub, etc.).

Related indicators of Openness could include budgetary growth in support of OA-friendly journals, or of tools like Zotero that support greater transparency in researcher workflows. Given the correlations between open access and citations (OA work has a higher likelihood of being cited), an increase in citations to, and altmetrics for, one’s institution’s research could be another useful indicator.

The response

The audience at IFLA WLIC 2017 was very receptive to the idea of values-based indicators. Many attendees were their institution’s bibliometrics and altmetrics experts, and as such appreciated a nuanced view towards the utility of these types of data for evaluation purposes. Likewise, we appreciated the opportunity for a knowledgeable debate about HuMetrics.

Like what you’ve just read? You can download the related conference paper that explores these ideas in-depth from Humanities Commons.

If you’re wondering if the HuMetricsHSS workshop is for you, the answer is yes!

This October, the HuMetricsHSS team is excited to bring together a diverse group of scholars, teachers, administrators, and students from a wide range of institutions for a topic that we believe will transform academia. Over the course of a two-day workshop, we’ll interrogate, brainstorm, break apart, […]

Triangle and Beyond

When we first introduced the work of the HuMetrics group to the other TriangleSCI teams last October, we met with some resistance. Much like peer groups in the run-up to an election, we had become used to having our own ideas reflected and amplified internally. […]

The Syllabus as HuMetrics Case Study

Current metrics of humanities scholarship have been shown to be too blunt to capture the multiple dimensions of scholarly output and impact (see Haustein and Larivière). In addition, the inappropriate nature of current indicators can incentivize perverse scholarly practices (see The Metric Tide, Wilsdon et al.)…. In this context, our challenge and our responsibility is to articulate, incentivize, and reward practices that enrich our shared scholarly lives and expand our understanding of scholarship itself.

Christopher Long, “Nurturing Fulfilling Scholarly Lives”

We chose the syllabus as guinea pig: what questions could we ask ourselves about our syllabi to ensure that they, as objects, embody the values (Equity, Openness, Collegiality, Quality, and Community) and practices we want to encourage and reward in the humanities community?

—  Nicky Agate, “The Syllabus as Scholarship”

For nearly a week, the HuMetrics team homed in on a set of values and modes of scholarship that often go unrecognized and unrewarded in conversations about scholarly productivity and academic excellence. Chris Long summarized prior posts and wrote about the intellectual framework that guided our deliberations, and Nicky outlined the value framework (very much a work in progress, and in need of feedback and testing) that can help shape the kinds of indicators we might want to gather. Values were important for us as a starting point because, as we reminded ourselves over the week, any kind of metric can be traced back to values, but those values are often tacitly present and uncritically examined, leading to the presumption that metrics reveal value rather than reflect it. How, then, might we reframe the indicators that help us tell the story of humanities and social science scholarship, starting with a set of values and practices (as Nicky remarks in her post) that we want to “encourage and reward in the humanities community”?

The syllabus became an object of keen interest to our group as a test case: an object that reflects a set of values, of scholarly decisions, and, importantly, of time investment. We wondered how the syllabus might enhance citation indicators in the humanities and how such indicators might help us rethink and influence notions of impact that currently favor an article-based intellectual economy. Does the syllabus reveal the circulation of ideas in humanities and social science subjects more effectively than traditional citation networks built only on articles? Is the syllabus a site of more rapid, even more up-to-date, scholarship than the slow burn of a book three years in the making?

Additionally, using the syllabus as a potential site for indicators reflects some of the values expressed in our previous postings, and stands to offer us a more comprehensive view of scholarship, citation, and influence. The syllabus serves as connective tissue between our research and teaching practices, where the latter is often under-rewarded and under-recognized labor for the purposes of promotion. The syllabus expands our conception of audience for considerations of impact, bringing students into the conversation and offering a more comprehensive view of scholarly networks. As an object that reveals choices and policies from the instructor, but also speaks to the impact of the authors featured, the syllabus offers multidirectional indicators. For example, what do we learn about the variety of scholarly modes of expression, from the blog post to the article, the archive to the monograph? What might we learn about impact based on the authors cited? What about the diversity of representation for authors, regions, or disciplines? We might even learn whether the citations used in the classroom align with those in the articles and books of frequently circulated research literature, or more about the role the syllabus plays in amplifying and influencing other (often more rewarded and recognized) forms of scholarship like the article or book.

Because of its unique format, in which citations are contextualized by a time stamp of sorts (the class period, for example), we hope the syllabus can also help us capture engagement indicators, based on the ratio of the number of texts assigned in a time slot to the time allotted. A syllabus might have one class meeting, for example, with two required readings, and the following class meeting with 13. Where certain kinds of citation metrics can lead to a game-the-system approach of citation flooding, an engagement metric based on a texts:time ratio inverts this approach: fewer texts during a time period would suggest a higher level of engagement, and more texts might suggest a lower level. While speculative at this point, such an indicator might inform the development of other metrics that serve as speed bumps, rather than accelerants, to practices that serve metrics alone rather than advancing scholarship.
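
To make the texts:time arithmetic concrete, here’s a minimal sketch of the speculative indicator described above (the syllabus data is invented; in practice one would parse real syllabi):

```python
# Each class meeting: (label, minutes allotted, number of required readings)
syllabus = [
    ("Meeting 1", 90, 2),   # two readings in 90 minutes
    ("Meeting 2", 90, 13),  # thirteen readings in the same time
]

for label, minutes, n_texts in syllabus:
    # Inverting the texts:time ratio: more minutes per text
    # suggests deeper engagement with each reading.
    print(f"{label}: {minutes / n_texts:.0f} minutes per text")

# Meeting 1: 45 minutes per text  -> higher expected engagement
# Meeting 2: 7 minutes per text   -> lower expected engagement
```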

This third post, following Nurturing Fulfilling Scholarly Lives and The Syllabus as Scholarship, serves as a summary of the HuMetrics team presentation on the final day of #TriangleSCI. In the coming weeks, our team will be co-authoring an article discussing our preliminary research, our deliberations, our attempts to reverse-engineer metrics so that we can begin with a question of values, and our next steps toward operationalizing this work in a series of case studies. Importantly, we welcome your feedback and input, so please comment, respond, challenge, and contribute.

The Syllabus as Scholarship

“A critical component of our emerging #Humetrics conversation at Triangle SCI involves finding ways to expose, highlight, and recognize the important scholarship that goes into the all-too-hidden work of peer review, syllabus development, conference organizing, mentoring, etc. Our current metrics fail to capture what is […]

Nurturing Fulfilling Scholarly Lives

In Aristotle’s Nicomachean Ethics, there is a famous passage in which he reminds us that “to be happy takes a complete lifetime; for one swallow does not make a spring, nor does one fine day; and similarly one day or a brief period of happiness […]

Changing Behavior by Changing Incentives

It’s our last day at Triangle SCI, and I’ve been contemplating overnight the feedback the #HuMetrics team got yesterday afternoon from our colleagues.

In our presentation of what we’ve been doing this week, we attempted to sketch out, using the activity of mentoring, how actions that we wanted to reward (as assessed by what our team member Stacy Konkiel calls a “basket of metrics”) would inspire the behaviors we want to instill to reinforce the values we’ve identified  —  equity, openness, collegiality, quality, and community  —  to create a more humane scholar and academy.

It was gratifying that most of our colleagues in the room did think our five values were spot-on. But the devil is always in the details. Maybe it was our choice of the example of “mentoring,” an activity not already well recognized as scholarly work, that derailed the conversation, which seemed to shift from the values we wanted to instill to focus primarily on “we don’t need more evaluation” and “we already reward that work,” neither of which addressed the issues we in the HuMetrics group have been looking to tackle this week. Yes, we’re already subjected to evaluation of all kinds. Yes, many colleges and universities have worked hard to balance research, teaching, and service. But what we’re arguing we need to do is quite different.

We are arguing for culture change that rewards unrecognized “silent” labor  —  which is most often undertaken by women and minorities while their white male colleagues focus on (and are abundantly rewarded for) their “research”  —  as well as a culture that levels the playing field for all scholars and all students. Culture change is hard when the status quo is not only reinscribed but reinforced by what is rewarded. In the academy now we have rewards based on the products of scholarship that often not only permit but encourage less-than-stellar behaviors. (Publish often in recognized venues and we’ll let you be a jerk in department meetings  —  when you show up at all.) We want to argue that there’s much more to scholarship than publishing regularly in high-impact journals. I don’t think anyone disagrees with that. But it nevertheless seems to me that the unspoken argument behind much of yesterday’s feedback was that people were fine with our rethinking how we measure scholarship as long as we limited that thinking to that one area  —  publishing  —  alone. Sure, let’s reward peer review, seemed to be the tacit argument, since that’s an important part of the publishing process that goes unrecognized. But mentoring or conference planning? Maybe not so much.

What concerns me is that such a focus on publishing once again merely reinforces the notion of a scholar that privileges her one activity over all her others. As Christopher Long has urged us to consider throughout this week, we need to create an environment where the scholarly life, in all its activities, is not only permitted but nourished and thus becomes a life well lived. How can we get there? We on the HuMetrics team welcome your thoughts.

The Excellences of Scholarship: Collegiality

We began the day with a walk. The morning was cool and fresh, and the grounds around the DuBoise House at the Rizzo Center offered the six of us on the #HuMetrics team at the TriangleSCI the peace and space we needed to think out […]

On Quality

Today is the second full day of the Triangle Scholarly Communication Institute, where I am part of a team that’s focusing on HuMetrics: Building Humane Metrics for the Humanities. We have each agreed to quickly blog some thoughts as part of our process; warning: what […]

Community as a Humanistic Value

Today is Day Two of the Triangle Scholarly Communication Institute, where I’m heading up a team that’s focusing on HuMetrics: Building Humane Metrics for the Humanities. Our team has focused a lot on the importance of working out loud, of process over product, and we’ve agreed that we would each take 20 minutes once or twice a day to blog, pomodoro-style, about what we’ve been up to. In other words, don’t expect polished prose; this is pre-alpha humetrics-in-the-making.

We started off team time today by clustering the 20–30 values that we believe would enhance and enrich humanities scholarship under five headings: Equity, Openness, Collegiality, Quality, and Community. Taken together, these values could embody what the Greeks called arete, or excellence in practice; if we encompass them in our scholarly practice, they should have impact (see Stacy Konkiel’s post on influence vs. impact for what we understand by that).

I offered to write the post on community because it was a perceived lack thereof (in the humanities) that prompted me to leave academia. The infighting and backstabbing, the condescension and derision — these were not something I wanted to be a part of. And yet there’s so much potential for things to be better. That’s why we wanted to reverse engineer humanities metrics, to see if we could start from the things we value and then work out how to incentivize and measure practices that embody those values.

Under Community, we clustered engagement, network, holistic, attunement, and leadership. [Note: while I’m writing this, we’re continuing today’s discussion on Slack, and have decided, given conversations about posterity and future audiences that were part of this afternoon’s session, that preservation should also be included here.] It’s a diverse list, to be sure (I’m sure you’re scratching your head or stroking your beard or furrowing your brow just a little), but hopefully I can explain our thinking. That is, if I can remember our thinking — I’m beginning to believe there’s such a thing as too much brainstorming.

Paying attention to community in scholarly life means fostering, cultivating, and participating in relationships and networks to which one gives and from which one takes. Like collegiality, it’s about generosity and mentorship; it’s about knowing when to lead and when to listen. It encompasses attunement because it asks us to be intentional about the connections we make and the way we enact them. One might create a community in the classroom, or acknowledge the full roster of people that contributed to the making of a book (is there really such a thing as a monograph?), or orient a class around service learning in such a way that it benefitted the larger community, the one beyond the academy. One might participate in a community by sharing data or code or primary source materials, by adding certain materials to a syllabus, or by opening up one’s own creative process (rather than just the product) to critique and conversation. And being part of a community means thinking beyond the now, proactively considering the preservation of all elements of the scholarly record (from blog posts to conference papers to tweets and vines), thinking forward to the publics and communities that might find value or interest in our work ten, fifty, or one hundred years from today.

Follow team #HuMetrics as we wrestle with humanities metrics. We are Christopher Long, Rebecca Kennison, Stacy Konkiel, Simone Sacchi, Jason Rhody, and Nicky Agate, and we’ll be writing here all week.

On Openness

Second day at #TriangleSCI with the #HuMetrics team. Today we focused on clustering our values into major value-categories (Equity, Openness, Collegiality, Quality, and Community) with the idea that excellence in scholarship is an expression and combination of these value-categories as they are embodied in scholarly […]

Equity as a Core Value

We’ve just completed Day 2 at #TriangleSCI, and more hard (but good) work is now behind the #HuMetrics team. Today we took our huge brainstorming list from yesterday and distilled the values we believe should underpin the development of “humane metrics.” We came up with […]

Influence vs. Impact: Which Are Humanists Really Trying to Achieve?

Apologies for the false dichotomy I’ve set up by my framing of this post in its title as “impact versus influence.” It’s a result of the quickblogging process, one that Christopher Long, Rebecca Kennison, Nicky Agate, Simone Sacchi, Jason Rhody, and I agreed upon as a means of digesting and sharing our daily work at the TriangleSCI meeting.

Yesterday, the #HuMetrics team spent the better part of the day articulating the values, outputs, processes, and metrics involved in humanistic research. Our idea was that if we could “reverse engineer” metrics from values and practices, we could come up with metrics that are more humane: ones that not only reflect and incentivize the practices humanists value most, but also help humanists avoid the “impact trap” that many in STEM find themselves caught in.

Our team came up with a list of values that fall into five general areas: equity, openness, collegiality, quality, and community.

(It’s important to note that our list is by no means exhaustive, and for the most part it draws upon our own personal experience and is not as informed by the existing research in this area. We’re aware of that limitation  —  the list is mostly meant to be a starting point for thinking about metrics.)

In the framework we sketched, these core values are flanked on either side by two overarching desires: for research excellence and for research impact. It’s upon the latter point that I want to think aloud for a few minutes.

“Impact” is a term with very particular connotations, depending upon where you stand in the world.

From the STEM and social sciences perspective, it’s often related to measurable changes in the world that are attributable to research outcomes (nod to Cameron Neylon for that succinct definition). Much of the time, this results in an emphasis upon research commercialization, economic impacts, or public health impacts.

For the humanities, “impact” is also often tied to money: how many jobs the cultural sector produces, income related to cultural activities like the film industry or museum openings, and so on. But as the UK Arts and Humanities Research Council has pointed out, there’s a hierarchy of impact assumed in our current neoliberal environment, one that puts economic value above all other values  —  and that should not be the case, especially for the humanities. And as David Budtz Pedersen at the Humanomics Research Center at the University of Copenhagen has pointed out, “the humanities may find many pathways into society, some of which are deeply integrated in the functioning and affluence of modern liberal societies.”

We discussed the need to push back against the idea of “impact” as outcome-oriented (especially as those outcomes relate to the economy), and to reclaim the term “impact” to mean what humanists want it to mean  —  in all its messiness, and sometimes at odds with what’s demanded of researchers by the institutions, governments, private funders, and publics that want an easy-to-digest statement of “return on investment” from the humanists whose work they support.

What remains to be seen  —  what we’ll tackle tomorrow  —  is whether it’s actually possible to find metrics to relate to less-tangible values, beyond economics: those that tell us whether humanities research is truly changing a discipline, affecting the way the public thinks, or having any other number of personal and societal impacts.

Perhaps a better way to think about what humanists wish to achieve is to use the term “influence” instead of impact?

Follow team #HuMetrics as we wrestle with humanities metrics. We are Christopher Long, Rebecca Kennison, Stacy Konkiel, Simone Sacchi, Jason Rhody, and Nicky Agate, and we’ll be writing here all week.

Scales of Measurement and the Public Good

Today is the first day of the Triangle Scholarly Communication Institute, where I am part of a team that’s focusing on HuMetrics: Building Humane Metrics for the Humanities. We have each agreed to quickly blog some thoughts as part of our process; warning: what follows […]

The Value of Openness

Day 1 of the Triangle Scholarly Communication Institute is underway, and our HuMetrics: Building Humane Metrics for the Humanities team (Nicky Agate, Simone Sacchi, Christopher Long, Stacy Konkiel, Jason Rhody, and me) is already hard at work. We began by putting aside (for the moment) […]