When I present on the academic crowdsourcing work my colleagues and I do at Zooniverse, one of the questions we are sometimes asked is: Does crowdsourcing take paid work away from people? The first few times I was asked this, I answered that having a crowd process relatively straightforward or simple data frees the researcher, be they a top-tier professor or a first-year graduate student, to start asking more complex and interesting questions of their data sooner than they otherwise could. This, in turn, can lead to more interesting discoveries, faster, and can arguably create more jobs, or at least make existing resources stretch further. This point generally goes down well, but I know that I haven’t convinced everyone I’ve spoken with of the merits of academic crowdsourcing.
Earlier this week I was one of three panelists presenting to a group of Arts and Humanities Research Council (AHRC) Digital Transformations award holders at an event in Bristol. I gave a brief overview of ‘Constructing Scientific Communities’, a new interdisciplinary project underway at the Universities of Oxford and Leicester that aims to illuminate crowdsourcing practices in the 19th and 21st centuries and see what we can learn from past collaborations between scientists and enthusiasts. Someone in the audience asked me the abovementioned question, and I gave the abovementioned answer. A spirited but polite exchange ensued over lunch, and I was interested to hear what this researcher had to say about Mechanical Turk and other platforms that operate a ‘sale to the lowest bidder’ crowdsourcing model.
But as I sat on the train back home, it occurred to me that there is a different or additional point I should make when this question comes up in future:
Crowdsourcing the classification of academic data not only speeds up the research process, it enables research that would never happen otherwise.
Sometimes, particularly in the humanities, volunteers can tackle work that would not be undertaken at all. Not because it is unimportant, but because it is infeasible for one person or a small group to do alone, too expensive, or somehow unfashionable.
My own period of study—early modern literature—not to mention my specialist area of research—convent literature—is a prime example of a field with a great deal of data that will never be processed unless we use crowdsourcing and/or teach machines to read manuscripts better. Most manuscripts and early printed material have never been, and will never be, edited by scholars. There is simply too much of it and too few of us, not to mention too little funding for editions.
So, rather than fearing that crowdsourcing will cut into our expert domain and take work away from us, or from those further down the job ladder, we should think of the work that could never be done without it. In some instances academic crowdsourcing might be causing creative destruction. More often than not, however, I believe we are engaging people in radical co-creation that releases otherwise inaccessible data, which will, in turn, alter our research landscape.