My HuMetricsHSS journey began at least ten years ago, when I became quite interested, then concerned, worried, slightly panicked, and finally deeply troubled about how we measure and understand the quality and impact of our research.
I was on leave from my faculty position at Maryland, working in Academic and Professional Affairs at my main disciplinary association (the National Communication Association), when I began to reflect seriously on this phenomenon in research assessment called the “impact factor.” While not new, the “impact factor” was gaining significant traction in those early years of the 2010s. On several award committees at Maryland, for instance, I’d seen the “impact factor” wielded as a cudgel against applications for funding or fellowships; on several promotion and tenure committees, I’d watched my colleagues quibble over tenths of a point of difference in particular journals’ impact factors and what that difference said about the quality of a colleague’s research.
I remember clearly one particular meeting where we were deliberating on graduate student applications for a dissertation fellowship designed to hasten their completion of the dissertation and, hence, the degree. One of the criteria for this fellowship was the quality and productivity of the student’s academic record, with particular attention to publications. One student had published several journal articles; she was in a scientific field but was doing critical/cultural work in that field. As I was defending her application for the fellowship, another member of the committee (a biologist, I believe) chimed in to argue that he’d “never heard of this journal and it doesn’t even have a very strong impact factor.” The student did not receive the fellowship: just another in a long list of instances in which the metrics of academic scholarship are used to demean and demote academic achievement.
At NCA, I ended up penning a report on the role of “impact factors” in the assessment and evaluation of faculty research. That report concluded with a number of concerns about metrics and impact factors, for Communication and for the humanities and social sciences in general:

1. Impact factors measure restricted and time-bound citation patterns and practices, not impact or quality;
2. Impact factors are improperly used in a variety of ways;
3. Impact factors and citation data are frequently misused to define disciplines and their research; and
4. Impact factors can be manipulated and may be driven by factors other than research quality.

This report gave rise to a couple of presentations, including one to the Conference of Executive Officers of the American Council of Learned Societies. From this report and from the discussion of its findings in other interdisciplinary settings, I came to believe that the persistent search for the ideal metrical assessment of research is a fool’s errand and that the impact factor is not an effective means of measuring research quality, particularly in the humanities and social sciences.
In the decade since, the relentless quest in higher education for newer and better research metrics has only intensified—new citation measures, h-indexes, i10-indexes, Google Scholar profiles—the list is dizzying. Gone are the days of actually reading our colleagues’ work; all that’s required to reach a judgment about someone’s research is to determine how often they’re cited and by whom. It’s this frantic search for new and different ways of metrically assessing research that spurred, and continues to motivate, my interest in these conversations.
My experiences on the College APT Committee at Maryland were heartening—I found there numerous colleagues equally concerned with the increasing metricization of research assessment, even as we all navigated the institutional prerogatives urging the use of such metrics in our evaluation of our colleagues. Perhaps most heartening was the care individual departments took to consider diligently the quality and broader impact of research and creative activity rather than simply highlighting the quantity and alleged “impact” of the scholarship. And I was pleased to note the developing interest of our College’s leadership in thinking through these problems. That interest materialized in the Dean’s leadership of, and involvement with, the HuMetricsHSS project and an invitation to all of us on APT to the first HuMetricsHSS workshop at the University of Maryland. That workshop was formative in a number of ways.
First, I discovered in that workshop a broad community of colleagues who expressed consistent and serious concerns about the trajectory of research assessment. From department chairs to faculty leaders to deans and associate deans—virtually everyone was concerned that our existing regime for evaluating research 1) didn’t account for alternative forms of scholarship (e.g., Digital Humanities, creative scholarship), 2) didn’t allow for a true and meaningful assessment of research quality, and 3) was ineffective at registering the value and importance of community-engaged scholarship.
Second, I learned that real change, despite everyone’s best efforts, is difficult and may well be impossible. There were some participants in the workshop who were wedded to the metrics, the impact factors, the h-indexes, and who reified them. There were some participants who willingly capitulated to the inherent scientism of such metrics, believing (sincerely, I think) that doing so only enhances the credibility and palatability of the humanities in an institution skewed toward the STEM side of things. And there were some participants who were simply jaded and cynical, believing that the University’s processes were so distorted and entrenched in this metrical direction that nothing could be done to change them.
Fortunately, the third thing I gleaned from this initial experience with HuMetricsHSS was the sense that all was not lost, that others at very high levels of administration cared about rethinking and changing the protocols of research assessment at Maryland and elsewhere. This motivated my deeper foray into the project via the HuMetricsHSS Community Fellowship: even though my proposal was somewhat vague in its focus and direction, it signaled my continued concern and involvement with these issues, from a faculty perspective, within the larger structures and imperatives of a research university confronting challenging times.
In many ways, my HuMetricsHSS journey took an unexpected turn when, in the midst of the Fellowship period, I found myself working in higher education administration, first as Associate Dean for Diversity, Equity, and Inclusion, and then as Associate Dean for Faculty Affairs and Research. In this latter role, I found myself in an often contradictory position: on the one hand, I was now significantly responsible for the College-level assessment processes and procedures relevant to faculty hiring, promotion, and research; on the other hand, I was challenging and thinking through those very processes, working with sympathetic colleagues and with my entirely supportive Dean to begin pursuing serious, meaningful, values-based change on my campus. Toward that end, we hosted at Maryland a HuMetricsHSS workshop, “The Value of Values,” that brought together administrative leaders and faculty for an in-depth discussion of the role of values in bringing about meaningful institutional change. This workshop was stimulating and generative precisely because it extended the initial concern with research assessment to all aspects of higher education. Not willing to give up my interest in research assessment and the APT process, I was happy to participate in a series of Provost-level dialogues examining various aspects of the APT process; I was thrilled to hear our Provost proclaim at a college-wide forum that she was ready to see our college (Arts and Humanities) “take the lead” in reforming and rethinking Maryland’s APT processes and criteria. And with my HuMetricsHSS Fellowship colleagues and friends, Kiril Tomoff (UC-Riverside) and Jason Rhody (MLA), I was glad to help organize a session for the next National Humanities Conference entitled “Living our Values: Leading Efforts for Values-Enacted Institutional Transformation in the Humanities” (and if you’re at the NHC in Indianapolis this October, be sure to stop in—Friday at 2:00).
The journey continues, and I am fully aware of the hurdles ahead but secure in the knowledge that many, many of my colleagues are genuinely committed to this work. The fellowship of the HuMetricsHSS community has been wonderful in instilling a drive and a purpose for values-based change in higher education. And, with all the many challenges and obstacles facing higher education and faculty in the years ahead, from declining enrollments to political threats to academic freedom to public disenchantment with higher education, there has never been a better, more opportune time to deliberate sincerely and powerfully about these weighty matters. I look forward to those conversations…