Research Evaluation in the Social Sciences and Humanities
July 7 @ 10:45 AM - 12:25 PM CEST
Towards Values-Based Evaluation in the Humanities and Social Sciences
Session: Alternative Metrics
Presenter: Nicky Agate
For too long, humanists and social scientists have been allergic to metrics. This allergy has prevented researchers from engaging in a serious and sustained conversation about which practices of scholarship should be cultivated and incentivized, both through the activities that are measured and those that are celebrated. As a result, a large and growing battery of metrics has been developed based either on the practices of scholars in the sciences or simply on what it was possible to measure with available technologies (Brooks, 2005; Guetzkow, Lamont & Mallard, 2004; Hemlin, 1993; Hemlin & Gustafsson, 1996; Oancea & Furlong, 2007).
Much attention has been paid to measuring usage and citations in the research literature. Applications of these metrics in humanities and social sciences (HSS) contexts, especially against the backdrop of efforts like the Research Excellence Framework in the United Kingdom, have generated considerable skepticism from scholars (Ochsner, Hug & Galleron, 2017; Wilsdon et al., 2015). Many HSS researchers believe that evaluation based only upon these metrics neither correctly assesses the impact of HSS scholarship nor serves the best interests of HSS researchers. Evaluators’ collective challenge and responsibility, then, is to articulate, incentivize, and reward practices that enrich scholarly lives and expand the understanding of scholarship itself.
This paper, based on the work of the HuMetricsHSS initiative, proposes a fundamental change in how approaches to research evaluation in the humanities and social sciences are built. The paper calls for a holistic, values-based evaluation paradigm, one that uses metrics only to measure a scholar’s progress towards embodying five values that initial research suggests are central to all humanistic and social science disciplines:
- Collegiality, which can be described as the professional practices of kindness, generosity, and empathy towards other scholars and oneself;
- Quality, a value that demonstrates one’s originality, willingness to push boundaries, methodological soundness, and the advancement of knowledge within one’s own discipline, across other disciplines, and for the general public as well;
- Equity, or the willingness to undertake study with social justice, equitable access to research, and the public good in mind;
- Openness, which includes a researcher’s transparency, candor, and accountability, in addition to the practice of making one’s research open access at all stages; and
- Community, the value of being engaged in one’s community of practice and with the public at large, and also leadership.
Most existing research that explores the links between evaluation, metrics, and values in the social sciences and humanities addresses only the values of quality and originality (Gogolin, Aström & Hansen, 2014; Guetzkow, Lamont & Mallard, 2004; Hemlin, 1993; Hug & Ochsner, 2014; KNAW, 2011; Ochsner, Hug & Daniel, 2012). Moreover, though a survey of the literature suggests that academics and policymakers are well aware of the gap between desired behaviors and related metrics, incentives, and evaluation practices, few institutions (especially in the United States and United Kingdom) are putting changes into place that can close that gap.
Where the HuMetricsHSS initiative differs from previous HSS evaluation critiques is in its exploration of a fuller suite of scholarly values, including and beyond the value of research quality. HuMetricsHSS also seeks to engage researchers and university administrators in creating strategies to drive the adoption of recommended, values-based evaluation practices at the university level.
This paper articulates scholarly values in depth and describes work currently underway to validate these values with HSS researchers themselves. Both traditional metrics (i.e., bibliometrics) and altmetrics are explored as means of measuring one’s progress towards these values. Also described is the larger arc of the HuMetricsHSS project, being conducted by a team of scholars and information professionals working to find ways to expose, highlight, and recognize the important scholarship that goes not only into research activities, but also into the all-too-hidden work of peer review, teaching, service, and mentoring.
Brooks, R. L. (2005). “Measuring university quality.” The Review of Higher Education, 29(1), 1–21. doi:10.1353/rhe.2005.0061
Gogolin I., Aström, F. & Hansen, A. (eds.) (2014). Assessing Quality in European Educational Research: Indicators and Approaches. Wiesbaden, Germany: Springer.
Guetzkow, J., Lamont, M. & Mallard, G. (2004). “What is originality in the social sciences and the humanities?” American Sociological Review, 69(2), 190–212. doi:10.1177/000312240406900203.
Hemlin, S. (1993). “Scientific quality in the eyes of the scientist: A questionnaire study.” Scientometrics, 27(1), 3–18. doi:10.1007/BF02017752
Hemlin, S. & Gustafsson, M. (1996). “Research production in the arts and humanities: A questionnaire study of factors influencing research performance.” Scientometrics, 37(3), 417–432.
Hug, S. E. & Ochsner, M. (2014). “A framework to explore and develop criteria for assessing research quality in the humanities.” International Journal of Education Law and Policy, 10(1), 55–68.
KNAW [Royal Netherlands Academy of Arts and Sciences]. (2011). Quality Indicators for Research in the Humanities. Amsterdam: Royal Netherlands Academy of Arts and Sciences. Retrieved from https://www.knaw.nl/shared/resources/actueel/publicaties/pdf/quality-indicators-for-research-in-the-humanities.
Oancea, A. & Furlong, J. (2007). “Expressions of excellence and the assessment of applied and practice-based research.” Research Papers in Education, 22(2), 119–137.
Ochsner, M., Hug, S. E. & Daniel, H.-D. (2012). “Indicators for research quality in the humanities: Opportunities and limitations.” Bibliometrie – Praxis und Forschung, 1, 4.
Ochsner, M., Hug, S. E. & Galleron, I. (2017). “The future of research assessment in the humanities: Bottom-up assessment procedures.” Palgrave Communications, 3, 17020. doi:10.1057/palcomms.2017.20
Wilsdon, J., Allen, L., Belfiore, E., Campbell, P., Curry, S., et al. (2015). The Metric Tide: Report of the Independent Review of the Role of Metrics in Research Assessment and Management. London: Higher Education Funding Council for England.