February 10th, 2014

The death of the theorist and the emergence of data and algorithms in digital social research.


Ben Williamson

Computer software and data-processing algorithms are becoming an everyday part of Higher Education. How might this be affecting research in the social sciences and the formation of the professional identities of academics? Ben Williamson argues that these are important challenges for social science researchers in HE, asking us to consider how digital devices and infrastructures might be shaping our professional practices, knowledge production, and theories of the world.

Computer code, software and algorithms have sunk deep into what Nigel Thrift has described as the “technological unconscious” of our contemporary “lifeworld,” and are fast becoming part of the everyday backdrop to Higher Education. Academic research across the natural, human and social sciences is increasingly mediated and augmented by computer coded technologies. This is perhaps most obvious in the natural sciences and in developments such as the vast human genome database. As Geoffrey Bowker has argued, such databases are increasingly viewed as a challenge to the idea of the scientific paper (with its theoretical framework, hypothesis and long-form argumentation) as the “end result” of science:

The ideal database should according to most practitioners be theory-neutral, but should serve as a common basis for a number of scientific disciplines to progress. … In this new and expanded process of scientific archiving, data must be reusable by scientists. It is not possible simply to enshrine one’s results in a paper; the scientist must lodge her data in a database that can be easily manipulated by other scientists.

The apparently theory-neutral techniques of sorting, ordering, classification and calculation associated with computer databases have become a key part of the infrastructures underpinning contemporary big science. The coding and databasing of the world does not, though, end with big science. It is becoming a major preoccupation in the social sciences and humanities too.

Big methods

Across multiple disciplines in the social sciences and humanities, “big data” are now being generated and mobilized. Catalyzed by huge government investment in big data research centres through the research councils, the production of social and human knowledge and theory is now also being affected by “big methods” enacted through software code, algorithms and the data they process.

For some enthusiastic commentators, such as Chris Anderson of Wired magazine, big data and its associated algorithmic techniques are bringing about “the end of theory” and disciplinary expertise:

This is a world where massive amounts of data and applied mathematics replace every other tool that might be brought to bear. Out with every theory of human behavior, from linguistics to sociology. Forget taxonomy, ontology, and psychology. Who knows why people do what they do? The point is they do it, and we can track and measure it with unprecedented fidelity. With enough data, the numbers speak for themselves.

Another recent Wired article ran with the provocative headline “Big data and the death of the theorist.” While such claims represent exaggerated obituaries for the social sciences, it is clear, as Emma Uprichard has explained, that big data software and its algorithmic techniques of analysis are increasingly challenging conventional views about the institutional practices and spaces of social scientific knowledge production.

Algorithmic expertise

Social science appears to be escaping the academy. Instead of social scientists, the new experts of the social media environment, argue Viktor Mayer-Schönberger and Kenneth Cukier, are the “algorithmists” and big data analysts of Google, Facebook, Amazon, and of software and data analysis firms. Algorithmists are experts in the areas of computer science, mathematics, and statistics, as well as aspects of policy, law, economics and social research, who can undertake big data analyses and evaluations. Facebook, for example, has a Data Science Team that is responsible for “what Facebook knows” and can “apply math, programming skills, and social science to mine our data for insights that they hope will advance Facebook’s business and social science at large.” Run by Facebook’s “in-house sociologist,” the Data Science Team aims to mobilize Facebook’s massive data resources to “revolutionize” understanding of why people behave as they do.

Similar methods are being deployed in politics. The think tank Demos has recently established a Centre for the Analysis of Social Media (CASM), a research centre dedicated to “social media science,” which seeks to “see society-in-motion” through big data, as its research director Carl Miller explains:

To cope with the new kinds of data that exist, we need to use new big data techniques that can cope with them: computer systems to marshal the deluges of data, algorithms to shape and mould the data as we want and ways of visualising the data to turn complexity into sense.

Underlying “social media science” is a belief that the behaviour of citizens can be analysed and understood as a kind of data to inform new policy ideas. The emergence of “policy labs” that work across the social scientific, technological and policy fields—such as the Public Services Innovations Lab at Nesta, New York’s Governance Lab, and Denmark’s MindLab—is further evidence of how social science expertise is diversifying. For think tanks and policy labs the political promise of using sophisticated algorithmic techniques to analyze and visualize big data is to make societies and populations visible with unprecedented fidelity in order to improve government intervention.
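
A toy sketch, purely illustrative, of the “marshal, shape, visualise” pipeline Miller describes: a handful of invented, timestamped posts is marshalled into a structure, shaped by counting mentions of a topic keyword per day, and visualised as a crude text bar chart. Real social media science operates at a vastly larger scale, with platform APIs and purpose-built infrastructure.

    # Illustrative sketch only: the data, field names and keyword are invented.
    from collections import Counter
    from datetime import date

    # Marshal: a tiny stand-in for a deluge of timestamped posts.
    posts = [
        {"day": date(2014, 2, 3), "text": "tuition fees debate heating up"},
        {"day": date(2014, 2, 3), "text": "fees protest outside parliament"},
        {"day": date(2014, 2, 4), "text": "new policy lab launches today"},
        {"day": date(2014, 2, 5), "text": "fees vote delayed again"},
    ]

    # Shape: count how often a topic keyword appears per day.
    keyword = "fees"
    mentions = Counter(p["day"] for p in posts if keyword in p["text"])

    # Visualise: render the daily counts as a crude text bar chart.
    for day in sorted(mentions):
        print(f"{day}  {'#' * mentions[day]} ({mentions[day]})")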

Sociological software

The recent emergence of approaches such as “digital sociology” and “digital social research” reflects disciplinary anxieties about the relevance of social science at a time when commercial social media companies, R&D labs, and think tanks are staking their claim to social scientific expertise.

Many researchers are optimistic about the synergies between digital and social research methods. Emerging methods deployed by digital social researchers such as Lev Manovich include the use of Twitter and blogs to document everyday activities, the mobilization of search engine analytics to reveal massive population trends and social behaviours over time, the analysis of Instagram images to detect cultural and social patterns, the study of social network formation on Facebook, and so on. These platforms enable the continuous generation of data about social life and make possible new forms of social data, analysis and visualization.

As David Beer reports, the kind of software that can crawl, mine, capture and scrape the web for data has the potential to be powerful in academic research. Social media aggregators, algorithmic database analytics and other forms of what might be termed “sociological software” have the capacity to see social patterns in huge quantities of data and to augment how we “see” and “know” ourselves and our societies. Sociological software offers us much greater empirical, analytical and argumentative potential.
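
As a minimal sketch of what such “sociological software” might look like at its very simplest, the following fetches a single web page, mines its text for word tokens, and surfaces the most frequent terms. The URL is a placeholder and the whole example is an assumption for illustration; real research tools add crawling, proper HTML parsing, storage and far richer analysis.

    # Minimal illustrative scraper: fetch one page and count frequent words.
    # The URL below is a placeholder, not a real data source.
    import re
    import urllib.request
    from collections import Counter

    def top_terms(url, n=10):
        """Scrape one page and return its n most frequent words."""
        with urllib.request.urlopen(url) as response:
            html = response.read().decode("utf-8", errors="replace")
        text = re.sub(r"<[^>]+>", " ", html)            # crude tag stripping
        words = re.findall(r"[a-z]{4,}", text.lower())  # mine word tokens
        return Counter(words).most_common(n)

    for term, count in top_terms("https://example.org/"):
        print(f"{term:20s} {count}")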

Remediating methods

For other researchers, the outlook is less hopeful. Lisa Gitelman argues that the capacity to mobilize data graphically as visualizations and representations, or “database aesthetics,” amplifies the rhetorical, argumentative and persuasive function of data. Moreover, software and big data seriously threaten established social research and are associated with a concentration of data analysis and knowledge production in a few highly resourced research centres, including the R&D labs of corporate technology companies.

Noortje Marres has suggested another way to think about the proliferation of new devices and formats for the documentation of social life. She argues that we need to acknowledge a “redistribution of social research” and to see social science as a “shared accomplishment” as the roles of social research are distributed between different actors. Such a redistribution of research would include human actors such as academic researchers, software developers, data analysts, commercial R&D labs, and bloggers and tweeters, but also a much broader set of actors such as databases, software, algorithms, platforms, and other digital devices, media and infrastructures that all contribute to the enactment of digital social research. The redistribution of research among these diverse actors, Marres argues, would entail a “remediation of methods” as social research is reshaped and refashioned through the use of devices and platforms.

The process of redistributing, remediating, or “reassembling social science methods,” as Evelyn Ruppert, John Law and Mike Savage have articulated it, means recognizing that digital devices are both part of the material of social lives and part of the methodological apparatus required for knowing those lives. As they elaborate,

digital devices are reworking and mediating not only social and other relations, but also the very assumptions of social science methods and how and what we know about those relations.

Likewise, in an article on “algorithms in the academy,” David Beer has shown that software algorithms are increasingly intervening in social research through the “algorithmic normalities” of SPSS, Google Scholar and LexisNexis, as well as emerging social media and data analytics devices, which frame information and codify the social world in certain ways, thereby shaping the objects of analysis.

Datafied identities

Digital devices are also reworking and mediating HE institutions and academic researchers’ professional identities and personal lives. Roger Burrows has argued that the work of researchers in universities is now subject to “metricization” from an assortment of measuring and calculating devices. These include bibliometrics, citation indices, workload models, transparent costing data, research and teaching quality assessments, and commercial university league tables, many increasingly enacted via code, software and algorithmic forms of power. As a result, Deborah Lupton suggests, an academic version of the “quantified self” is emerging: a professional identity based on quantified measures of output and impact.
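
One of the simplest of those citation metrics is the h-index: a researcher has index h if h of their papers have each been cited at least h times. A minimal sketch, with invented citation counts, shows how little code a quantified professional identity can rest on:

    def h_index(citations):
        """Return the h-index for a list of per-paper citation counts."""
        ranked = sorted(citations, reverse=True)
        h = 0
        for rank, cites in enumerate(ranked, start=1):
            if cites >= rank:
                h = rank
            else:
                break
        return h

    # Invented example: three papers with at least three citations each.
    print(h_index([25, 8, 5, 3, 3, 1]))  # -> 3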

Writing in an article titled “#MySubjectivation,” the philosophy researcher Gary Hall argues that today’s social media are constitutive of a particular emergent “epistemic environment.” The epistemic environment of “traditional” academic knowledge production was based on the Romantic view of single authorship and creative genius materialized in writing, long-form argumentation, and the publication of books. New social media infrastructures, however, are reshaping the epistemic environment of contemporary scholarly knowledge production.

In the emerging epistemic environment of HE, academics are increasingly encouraged to be self-entrepreneurial bloggers and tweeters, utilizing social media platforms and open access publishing environments to extend their networks, drive up citations, promote their professional profiles, and generate impact. Commercial social media platforms such as Twitter, Facebook, LinkedIn and Academia.edu are becoming part of the everyday networked infrastructure through which academics create, perform and circulate research, knowledge and theory. As Hall puts it, the emerging epistemic environment:

invents us and our own knowledge work, philosophy and minds, as much as we invent it, by virtue of the way it modifies and homogenizes our thought and our behaviour through its media technologies.

To put it more bluntly, academics are becoming data, as mediated through complex coded infrastructures and devices. Geoffrey Bowker has written that “if you are not data, you don’t exist”; the same is true for academics in Higher Education. The unfolding effects of data and algorithms on HE ought to be the subject of serious social scientific inquiry.

Whether we are confronting the “end of theory” and the “death of the theorist” as computer coded software devices and sophisticated algorithms increasingly mediate, augment, and even automate academic practice and knowledge production remains an open question for further research. Is academic work really being homogenized and manipulated by the media machines of Google and Facebook, and are disciplinary expertise and knowledge production being displaced to the “algorithmists” of private R&D labs and commercial technology firms?

For researchers in Higher Education the task is to be open and alert to the current redistribution of research across these new infrastructures, devices, experts and organizations, and to recognize how our knowledge, theories and understandings of the social world we are studying are being mediated, augmented and even co-produced by software code and algorithmic power.

This is a revised version of “The End of Theory in Digital Social Research?”, which originally appeared on the DML Central blog.

Note: This article gives the views of the author, and not the position of the Impact of Social Science blog, nor of the London School of Economics. Please review our Comments Policy if you have any concerns on posting a comment below.

About the Author

Ben Williamson is a lecturer in the School of Education at the University of Stirling. His research focuses on the participation of think tanks, policy labs and cross-sector intermediaries in education policy networks, and on software code and algorithms in educational governance. He is currently directing Code Acts in Education, funded by the ESRC, and tweets as @BenPatrickWill. A version of the article originally appeared on the Digital Media and Learning Research Hub.
