How to Evaluate Digital Humanities Scholarship?
Work Work Work! I think the chorus of Rihanna’s latest song “Work,” featuring Drake, best describes my sentiments towards the necessary criteria for evaluating digital humanities scholarship. I don’t mean to insinuate that evaluators will have extra work to perform; quite the opposite. Taking Digital Humanities has given me clarity on what the digital signifies in digital humanities projects: oftentimes it is an arduous, unglamorous process of gathering data and experiential learning. I’d say the most critical component for evaluating digital humanities work is understanding and quantifying the number of attempts and the time spent in trial and error while using digital methodology. For instance, projects that use topic modeling or QGIS require large datasets, and beyond the meticulous process of creating data from text, it takes patience to work through the potential setbacks when your data turns out to be incorrect once uploaded. My own attempts at working through QGIS tutorials, with many failed attempts to show for my progress, were upsetting, so much so that I now believe that in digital humanities failure is progress. This does not mean that projects should go incomplete or illegible, but I do think building in a process that measures the failed attempts would be an invaluable part of evaluating digital humanities work. Measuring the various attempts to produce a cogent project could reveal the amount of risk and technical skill the research required, as well as measure a new kind of learning: experiential or exploratory learning.
Another critical component is the ability to discern how essential the digital method was to the project. Was it effective? Could the material have been analyzed differently, through a non-digital method? In other words, I think there should be a capacity to evaluate the significance and effectiveness of the digital aspects of the project and how they interact with the non-digital parts. Perhaps the traditional peer review model needs to ensure that at least one or two people on the board understand the technicalities of digital humanities projects.
In “Guidelines for Evaluating Work in Digital Humanities and Digital Media,” the MLA offers helpful tips for evaluating digital scholarship. One I found particularly helpful was the need for constant documentation. For instance, the MLA suggests that scholars document their role and/or document and explain the work. This could help evaluators gain a better understanding of the digital labour, skills, and time that went into creating the project, in addition to answering research questions. Although I see documenting as ultimately beneficial, it can also add more work for the scholar. In terms of explaining methodology as well as documenting the progress of the methodology and the overall project, I think both could arguably function as a methodology page. Or, if both are equally necessary, then the documenting should serve not as reassurance but as the primary mode of presentation. In addition to documenting, the ability to contextualize the digital scholarship within a rapidly changing technological field is equally important. If a scholar is working with newer software or a program that does not yet have a hub of online forums, then this should be accounted for as well, as should the ability to discern how the scholar manipulates the software within that context.
Beyond documenting, The New Wave Review suggests that peer review can also take place on social media and does not have to be a formal process. For example, they write:
Blogging, Twitter, and other online platforms [that] have stood at the heart of the field for years [and] often tout the speed and openness of these platforms compared to the molasses-slow publishing cycles or gated paywalls of print journals. And yet, with some rare exceptions, we don’t use these platforms to engage in substantive or critical evaluation of the work of our peers.
I think this style of review respects the medium’s specificity in terms of open-source information. Because digital scholarship can sometimes pride itself on its collaborative and open-information efforts, it seems fitting that the review and feedback mirror this style. Conducting reviews via Twitter allows the feedback to be read by a wide audience of both followers and onlookers. The feedback can serve as both critique and an alternative online forum, where suggestions for the scholarship in question come with hyperlinks to inform the scholar and the readers of alternative methods. This mode of peer review also allows for an ongoing editing process that is brief and responsive. It would ultimately shift the ways in which traditional peer review is practiced, as well as put into question what is to be considered a finished work. With peer review moving towards a brief, ongoing editing style, more room is left for progress, rather than a completed object, to be the main focus.
Overall, I am not sure whether all traditional peer review should shift or lean heavily towards digital humanities, but digital projects should be equally considered when putting together a review board, to take into account the different epistemological approaches digital scholarship generates. Although there are epistemological differences between traditional scholarship and the ways in which digital humanists seek to interrogate the very means of scholarship, both share a commitment to pushing ideas and new information as a career. I also think that both can learn from each other’s peer review processes.