November 3, 2017 - November 4, 2017
Hacking evaluation: towards values-based professional advancement practices for the digital humanities
Presenter: Stacy Konkiel
Digital humanities scholars face a hard, persistent socio-technical problem: though the results of their research are often web-native, interactive, and iterative (think: websites, exhibits, datasets, and more), their careers are often evaluated on the basis of discrete, static, and ossified publication formats like print monographs and journal articles, due in large part to disciplinary cultures and technological limitations. Such evaluations, and the metrics that sometimes underpin them, are often opaque and inappropriate when applied to digital humanities research. Worse yet, they can be divorced from the values that many humanists hold dear: equity, openness, collegiality, quality, and community.
Though in recent years some sectors of academia have edged closer to more appropriate means of evaluating digital humanities research (cf. scholarly societies that have adopted statements of support for the inclusion of born-digital research formats in promotion and tenure dossiers, and the emergence of altmetrics as a means of research evaluation), there remains a dearth of values-based evaluation practices for DH.
This session will explore a possible new world of professional advancement for DH scholars: one shaped by evaluation practices rooted deeply in the scholarly values we wish to embody (as identified by the HuMetricsHSS project <http://humetricshss.org/>), and one that uses web-native metrics and qualitative data to better evaluate the born-digital research methods that digital humanists employ. We will first survey the current state of the art in research evaluation practices for DH in the United States, including instances where DH scholars have successfully demonstrated the relevance and quality of their work using a variety of web-native research impact data, including altmetrics. We will then invite attendees to explore the values that drive their own research practices, and to imagine what metrics for a fully transparent, responsive, and values-based DH evaluation paradigm would look like.