Stuart Dunn examines the development of crowd-sourcing activities in academic contexts and identifies the potential for looking beyond the short-term benefits crowd-sourcing offers to a project’s completion. Particularly in the humanities, a more nuanced approach may be better suited: one which fosters reciprocal relationships and engages the interests shared by the public and academics.

Crowd-sourcing is a somewhat loaded term, particularly when it comes to impact and public engagement. When Jeff Howe coined it in his 2006 Wired article, The Rise of Crowdsourcing, he was drawing a conscious parallel with the concept of out-sourcing: moving essential tasks in the manufacturing and service industries from costly European and US labour markets to those in the Far East, India and elsewhere. In those early days, crowd-sourcing was very much about furthering the aims of for-profit business: design competitions, distributed production, micro-tasks that anyone could perform as long as they had the time, the inclination and enthusiasm and, very likely, access to the Internet. As Daren C. Brabham put it, crowd-sourcing was ‘an online, distributed problem-solving and production model’.

In many ways, this is the antithesis of what academia understands as public engagement and impact. To engage, and impact upon, the public, we seek to draw direct causal relationships between our research, publications and other outputs, and ways in which these make the world a better place – whether the public know it or not. Crowd-sourcing was about the public having impact, and reaping whatever reward the project offered – money, or the prospect of winning it, prestige, seeing your design on the mass-produced t-shirt – and not the public being impacted.

Crowd-sourcing in academic contexts is quite different. In a report recently funded by the AHRC and written by myself and Mark Hedges, Crowd-Sourcing Scoping Study: Engaging the Crowd with Humanities Research, we focus more on the fortuitous alignment of intellectual interests and outlooks which the public shares with academics. In the early days of so-called ‘citizen science’, this led to the spectacular successes of Galaxy Zoo, Old Weather and Ancient Lives, which succeeded in marshalling thousands of volunteers to contribute in various ways: classifying, transcribing, and editing specialized academic content. In our report we propose a four-facet typology for these activities which, in the future, will allow us to better understand the workflows involved in the creation of new knowledge by crowd-sourcing, as well as in the enhancement and production of online digital resources.

However, many still think of crowd-sourcing in business-model terms, whether implicitly or explicitly; especially as a means of undertaking complex digitization activities that an automated process could not achieve, for example the transcription of historic handwriting into machine-readable text. We all recognize that interest out there in the wider world will drive people to these projects and motivate them to contribute, but the resulting distributed effort inevitably comes down to its value as a commodity. Early successes in academic crowd-sourcing can be partially explained in one of two ways: either by an alignment between the aims of the research group and large sections of the wider public (e.g. a shared interest in the origins of the universe); or by the application of intelligent means of engaging the relatively small number of people who do the large majority of the work itself (e.g. by allowing them more of an editorial rather than a transcribing role, or by giving them administrator status in project forums). In the former case, the business-model form of crowd-sourcing happens to work; in the latter it starts to break down, and must be replaced with a more nuanced approach.

A major reason for this is one of scale. Our report suggested that, in the humanities at least, the ‘long tail’ model is far less applicable: large numbers of people doing straightforward manual tasks rarely feature by themselves as success stories in humanities crowd-sourcing. If projects in this area wish to have ‘impact’, as well as be impacted upon, they need to reach out and recognize that the public wish to be collaborated with. When asked why they participate, the so-called ‘super contributors’ surveyed in our report made clear that they wish to be part of a conversation, not simply a process: “[I have a] desire to be useful, I suppose, and sometimes a wish to see from farther in (if not precisely the inner circle!) how a project has been organized and to learn more about the content”; “working on such projects is an opportunity to actively participate in digital culture in a meaningful way”; and even “Appreciation for the democratic approach” featured.

Finally, we should recognize that there is an inevitable conflict between crowd-sourcing as a business model and the institutional governance patterns of Higher Education. Funding for any one project is limited, exists under Full Economic Costing, will end one day, and must be reported on as a discrete package. The subjects, content, thought processes and assets that make up our intellectual culture are none of these things, and a passion shared by academics and the public for working on them is even less so. Rampant short-termism exists across the spectrum of research funding, and the emerging epistemology of crowd-sourcing as a means of sharing and collaborating with the public is especially vulnerable to it. Can it be a coincidence that at least three of the high-profile early success stories of humanities crowd-sourcing we identified – Your Paintings, Old Weather and Georeferencer – are not run by universities?

I conclude with three suggested ingredients for ensuring reciprocated impact of a humanities crowd-sourcing activity, all of which involve breaking with the idea of crowd-sourcing as a universal business model:

  1. Pick your battles. Not every research question or problem you encounter will be amenable to crowd-sourcing as a solution. I’ve blogged on this elsewhere, and the typology in our report seeks to set out a framework for understanding which workflows work and which don’t.
  2. Do not mistake large numbers for high impact. Most of the valuable work we saw while writing the report involved the self-selection of very small groups of people from the wider public, and their sustained intellectual engagement with the project. The thousands of volunteers that early crowd-sourcing projects point to do not represent a good point of comparison for the humanities. Valuable public impact can happen in small areas. It would be wonderful if we could convince funders and promotion committees of this truth.
  3. Put a mechanism in place for your contributors to talk to each other, as well as to you. Having a user forum to support discussions and mutual problem solving is essential. This will allow a non-institutional community to form around your project, it will help sustain it beyond the end of your funding, and it will give your project’s discussions far greater exposure and uptake on social media.

Stuart will be speaking at Digital Impacts: Crowdsourcing in the Arts and Humanities on the 9th April 2013, hosted by the Oxford Internet Institute. Register here for the one-day workshop showcasing digital crowdsourcing projects in the Arts and Humanities, discussing the impacts of such initiatives.

Note: This article gives the views of the author, and not the position of the Impact of Social Science blog, nor of the London School of Economics. 

About the author

Stuart Dunn is a lecturer in Digital Humanities at King’s College London. He graduated from the University of Durham with a PhD in Aegean Bronze Age Archaeology in 2002, conducting fieldwork and research visits in Melos, Crete and Santorini. Having developed research interests in GIS, Stuart subsequently became a Research Assistant on the AHRC’s ICT in Arts and Humanities Research Programme. In 2006, he became a Research Associate at the Arts and Humanities e-Science Support Centre at King’s College London, and then a Research Fellow in CeRch. Stuart is active in several areas, including digital geography, data visualisation and, lately, humanities crowd-sourcing.
