Transforming Research 2017: Can an Exploration of Values Move Research Evaluation Practices Forward?

Last week, the inaugural Transforming Research conference took place in Baltimore, Maryland. This meeting was unique in its ability to bring together researchers, administrators, librarians, funders, and vendors to talk about the current evaluation climate and brainstorm solutions for improving collective practices.

Immediately, the theme of scarcity — so prevalent in the prior week's HuMetricsHSS #hssvalues workshop (more info coming soon!) — became central to our discussions. Most presenters hinted at the same question: how can we find the best metrics to help us do our jobs faster, with fewer resources? But no one seemed too concerned with the assumptions that got us to this place: the commodification of the research endeavor and of higher ed.

Nevertheless, two sessions from Transforming Research stuck out to me.

The first was a presentation by Chaomei Chen (Drexel University), who explored the role of uncertainty in driving knowledge creation and disciplines forward. Chen's research uses advanced textual analysis techniques to gauge the certainty of statements made in STEM literature (e.g., "HIV may cause AIDS"), looking specifically for hedging statements to find where uncertainty in the literature occurs. He argued that uncertainty often signals that one is on the precipice of exciting new discoveries, that the work is boundary-pushing.
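
To make the idea concrete: at its simplest, hedging detection amounts to scanning text for cue words and phrases that signal uncertainty. Below is a deliberately naive sketch in Python; the cue list and function are my own illustration, not Chen's far more sophisticated methods.

```python
import re

# A small, illustrative list of hedge cues; real systems rely on much richer
# lexicons and on syntactic context (negation, scope, etc.) to judge certainty.
HEDGE_CUES = [
    "may", "might", "could", "appears to", "suggests that",
    "possibly", "likely", "it is plausible that",
]

def find_hedged_sentences(text: str) -> list[str]:
    """Return the sentences in `text` that contain at least one hedge cue."""
    sentences = re.split(r"(?<=[.!?])\s+", text)
    cue_pattern = re.compile(
        r"\b(" + "|".join(re.escape(cue) for cue in HEDGE_CUES) + r")\b",
        flags=re.IGNORECASE,
    )
    return [s for s in sentences if cue_pattern.search(s)]

print(find_hedged_sentences("HIV may cause AIDS. The virus was isolated in 1983."))
# -> ['HIV may cause AIDS.']
```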

The second session included the conference's lone humanists, a group of philosophers (Robert Frodeman, University of North Texas; J. Britt Holbrook, New Jersey Institute of Technology; and Steve Fuller, University of Warwick) who led a lively debate over the value of research metrics. All agreed that in theory metrics can be a good thing, but the three disagreed about whether they could be at all useful in evaluation.

Holbrook suggested that metrics are only useful inasmuch as they can help individual researchers grow in their careers; for example, by looking at social media engagement with a research article, one can find potential collaborators. Frodeman and Fuller argued that metrics could be good for evaluation — if only they were built a bit smarter. Fuller — a futurist — believes that the Big Data now available at most institutions can help us build intricate, precise metrics that measure exactly what we want them to measure (kind of like how Google's search algorithm is excruciatingly complex, but it does what we need it to do, so we don't worry too much about the mechanics).

Frodeman drove home this point by saying, "We need to design metrics that fail gracefully — that steer human activities in the right direction, even when they don't work as well as we want them to." His point brought to mind how researcher "productivity" is increasingly understood by counting the number of publications written per year, and how that can drive down the quality of publications overall. In theory, it's possible to design a metric that doesn't encourage that kind of behavior, but what that metric is, we don't yet know.

I also had the honor of speaking at this meeting, and brought considerations of values into the conversation. Below, I've recapped my talk, the slides of which are available for download on Humanities Commons.


I want to start this talk by asking you to write down your answers to the following questions. I'm serious: I really do want you to write down your answers (even you, the people reading this post!):

  • What values drive your organization?
  • What values drive your personal work, your professional growth and development?
  • If you work in a role that serves researchers (e.g., as an administrator or librarian), what values do you think drive their work?

You might be wondering, "Why values?"

We're talking values because we know that misused metrics are damaging the academy and creating perverse incentives for knowledge creation.

Nowadays, we ask researchers to prove their value a lot: whether in the university's annual review process, when going up for tenure, or when applying for grants.

This is because we've been given the impression that there are scarce resources to go around, and that we have to get the best "return on investment" for funded research or university resources — to make the most impact. So we ask researchers to prove their personal value all the time. They usually do it with citations: I've got an h-index of 15, or my latest paper's been cited 100 times in the past 6 months, isn't that great?
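
As an aside, the h-index mentioned above is a strikingly simple calculation: the largest number h such that you have h papers each cited at least h times. A minimal sketch, with made-up citation counts, shows just how rudimentary the proxy is.

```python
def h_index(citation_counts: list[int]) -> int:
    """Largest h such that h papers each have at least h citations."""
    h = 0
    for rank, citations in enumerate(sorted(citation_counts, reverse=True), start=1):
        if citations >= rank:
            h = rank
        else:
            break
    return h

# Made-up citation counts for seven papers: five of them have at least
# five citations each, so the h-index is 5.
print(h_index([100, 40, 15, 15, 9, 3, 0]))  # -> 5
```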

But this insistence on proving value — when not grounded in values — causes researchers to begin to focus on the wrong questions.

Rather than "What’s worth researching?" they ask "What’s a hot topic that'll get me cited a lot?"

Rather than asking "What resources — if any — do I need to undertake my studies in a thorough and complete manner?" they ask, “What will get me funded?” (because at many institutions, having funding — even if you don't really need it — is more desirable).

Rather than asking, "What important new knowledge do I want to share with my colleagues and the public about my research findings, and what's the most effective format to do that in"” they ask "How fast can I write articles, and how many can I write?" — because they are asked to be more productive by writing more, more quickly.

An oil rig with "It's all rather crude." written overtop

It's all rather crude, in two senses of the word.

The first way that it's "crude" is that by oversimplifying the metrics we use to understand the influence of research, we're relying on rudimentary (and in some cases incorrect and damaging) measures, and incentivizing the wrong behaviors: quantity of articles over quality of research, publishing in time-consuming, toll-access, difficult-to-get-into venues like Nature rather than open access journals with arguably bigger audiences and quicker impact, and so on.

It's crude in another, more important way, too: it treats the research endeavor as something that's simply value to be extracted, like crude oil from the earth: more papers, more citations, more grant dollars, and so on. Even objectives- and outcomes-oriented assessment can do this, because those models don't always allow for failure or account for what happens when we fall short of our goals but still learn a great deal.

It's dehumanizing and damaging to the pursuit of knowledge, the advancement of humankind.

But these deficiencies in the status quo are why we need a reimagined approach to research evaluation. One that's situated in community values, and keeps the humanity of the scholar at its core. We need to transform not only the research portfolio, but also how we understand research impact. That's where HuMetrics come in.

We've proposed a set of core values that we believe drive scholarship, and we can, in theory, use metrics to help us embody these values.

Our five proposed core values are:

  • Collegiality, which can be described as the professional practices of kindness, generosity, and empathy towards other scholars and oneself;
  • Quality, a value that demonstrates one’s originality, willingness to push boundaries, methodological soundness, and the advancement of knowledge both within one’s own discipline and amongst other disciplines and the general public, as well;
  • Equity, or the willingness to undertake study with social justice, equitable access to research, and the public good in mind;
  • Openness, which includes a researcher’s transparency, candor, and accountability, in addition to the practice of making one’s research open access at all stages; and
  • Community, the value of being engaged in one’s community of practice and with the public at large, and also leadership.

By aligning evaluation practices and metrics with these values, we believe that academia and its adjacent players, like funders, can better incentivize positive scholarly practices.

Our values come from a very particular worldview, one that's situated in the humanistic approach to research — which doesn't assume objectivity, and relishes nuance and muckiness.

#hssvalues workshop members crowded around tables, looking at posters

In early October 2017, we hosted a workshop to test these values with a group of twenty-five humanists and social scientists at all stages of their careers, hailing from community colleges, liberal arts colleges, R1 institutions, and Ivy League schools.

We naively thought that we would come out of the workshop with an agreed-upon set of values shared across the academy (at least in the social sciences and humanities), but we couldn't have been more wrong! At the end of the workshop — two full days of debate and thinking and brainstorming and support — the only thing everyone could agree upon was the importance of being able to debate your values, and the value of process in helping people with very different worldviews come to an agreement on the values they did share.

There probably are no "core values" shared by all humanists and social scientists. There are, however, likely shared values within organizations and departments.

So, that was the value of our work last weekend. We failed in our original goal. We discovered that there is likely no shared core set of values that can be applied in any situation, across institutions and career levels.

I’ll end with an example. Let's say you are a historian of widget production in the greater Baltimore area during the Great Migration, and you want to understand your own progress towards doing quality scholarship in this field. The lack of universally shared values doesn't mean that core values don't exist for you. At the organizational level, they probably already do. They're in your mission statements, your departmental strategic directions documents, the annual reports you write for your funding agency. They're probably highly contested, hard won, and subject occasionally to disbelief, cynicism, and ridicule. But they're there. And you can use better metrics to measure your progress towards embodying those values. You can use them to guide your funding decisions, your tenure cases, your personal work.

Driven by the value of doing quality work, you can ask some very different questions of yourself than you might if you were focused only upon citations:

  • Are my methods reproducible?
  • Do I show creativity in my approach?
  • Does my body of work advance knowledge in my field?
  • Am I intentional in my approach?

Measuring yourself by these questions isn’t as hard as you might think, either! Here are some back-of-the-envelope thoughts on more precise metrics you can use to understand your progress towards embodying the goal of quality in your work:

  • Quality: book reviews, peer reviews
  • Reproducibility: cited in Methods, “forked” on GitHub
  • Creativity: interdisciplinary citing, new formats, depth of elaboration
  • Advancing knowledge: sustained citations over time, awards, sustained social media discussions
  • Intentionality: time spent/depth of thinking, regular reflection upon goals
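
If it helps to see this as something you could actually track, here is a toy Python sketch of logging evidence against a rubric like the one above and reviewing it annually. The dimensions, indicator names, and numbers are all illustrative; this isn't part of any HuMetricsHSS tooling.

```python
from collections import defaultdict

# Hypothetical indicators for the value of quality; every dimension, indicator,
# and number here is illustrative only.
INDICATORS = {
    "quality": ["book reviews", "peer reviews"],
    "reproducibility": ["cited in Methods", "forked on GitHub"],
    "creativity": ["interdisciplinary citing", "new formats", "depth of elaboration"],
    "advancing knowledge": ["sustained citations", "awards", "social media discussions"],
    "intentionality": ["time spent thinking", "regular reflection on goals"],
}

def yearly_reflection(evidence: dict[str, int]) -> str:
    """Tally logged evidence under each dimension of the rubric."""
    totals = defaultdict(int)
    for indicator, count in evidence.items():
        for dimension, indicators in INDICATORS.items():
            if indicator in indicators:
                totals[dimension] += count
    return "\n".join(f"{dim}: {totals[dim]} piece(s) of evidence" for dim in INDICATORS)

print(yearly_reflection({"book reviews": 2, "forked on GitHub": 5, "awards": 1}))
```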

Now let's turn it back to you. I want you to revisit the values you wrote down at the start of this post, a few minutes ago. What are they? How will you measure your own progress towards embodying them? Think creatively and boldly. And I invite you to share them with the larger scholarly community pondering HuMetricsHSS by emailing us or tweeting them to us at @humetricshss.