Intelligent machines and AI interfaces are increasingly embedded in a range of social contexts. In turn these machines are themselves deeply shaped by the social and cultural milieu of their human creators. Milena Tsvetkova makes the case that social scientists should recognise and engage with the social properties of these new technologies.
At 2:32 pm on May 6, 2010, the Dow Jones Industrial Average plunged nearly one thousand points, temporarily wiping out $1 trillion from the stock market and causing the first financial “flash crash”. Early on Sunday morning, September 17, 2023, twenty vehicles passing through West Campus in Austin, Texas, suddenly ground to a standstill, causing a massive traffic jam. In the six weeks before the UK’s 2024 general election, ten users posted tweets that garnered 150 million views on X.
The common element between these events is that they all involve intelligent machines – algorithms, bots, or robots. The traders that immediately responded to an erroneous large sell order and caused the flash crash were high-frequency trading algorithms; the vehicles that failed to communicate and manoeuvre around each other, ending up in gridlock, were self-driving; and the X accounts that inundated the UK’s online conversations with hateful and controversial tweets were bots.
These anecdotes remind us that intelligent machines have permeated our daily lives, insidiously moulding our social reality, sometimes in undesirable ways. Recently, we have started hearing more and more warnings about the sudden rise of superintelligence bringing the end of humanity as we know it. Much of this fearmongering envisions AI as a singular godlike entity – omniscient and omnipotent. But the fact is, AI is a multitude – plural, diverse, and let’s be honest, not that intelligent just yet.
In a recent contribution, a team of researchers including myself proposed a radical idea: social scientists should approach intelligent machines as social actors on par with humans (open version available here). We, social scientists, can then adapt and reapply social science theories and empirical research methods to study today’s society of humans and machines. Instead of imagining doomsday scenarios for the distant future, we must work on understanding and solving the real social problems we are facing now.
Established social psychological and sociological theories such as outgroup bias, authority bias, and personification, which describe the relationship between two individuals or two groups, can be adapted to model the relationship between a human and a bot, or a group of humans and several bots. For instance, it has been found that when people interact in groups rather than as individuals, they are more likely to experience an “us versus them” effect, and to compete with and bully robots more than they do other humans.
Similarly, social scientists can adapt and extend methods from network science and the study of complex systems to examine the collective dynamics and patterns that emerge in networks and communities composed of humans and bots or robots. Together with collaborators, I have done this to investigate the frequency and consequences of unplanned interactions between editing bots on Wikipedia. Others have studied the impact of bots on the spread of political misinformation and inflammatory content in social media networks.
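To make this concrete, consider a minimal, hypothetical sketch of the kind of agent-based simulation such work relies on. The bot names, rules, and parameters below are illustrative assumptions rather than details from the Wikipedia study: two editing bots each enforce a fixed, conflicting rule, and their unplanned interaction produces an endless revert war.

```python
# A minimal, hypothetical sketch: two editing bots with conflicting
# preferences (e.g. rival spelling conventions) repeatedly "revert"
# each other on a shared page. Illustrative only.
import random

class EditingBot:
    """A bot that enforces a single fixed preference on a shared page."""

    def __init__(self, name, preferred_version):
        self.name = name
        self.preferred_version = preferred_version

    def maybe_edit(self, page_version):
        # Each bot blindly applies its own rule; neither is "aware"
        # that the other bot exists.
        if page_version != self.preferred_version:
            return self.preferred_version  # revert to its preference
        return None  # nothing to do

def simulate(steps=1000, seed=42):
    random.seed(seed)
    # Hypothetical rivals enforcing conflicting spelling conventions.
    bots = [EditingBot("BotA", "colour"), EditingBot("BotB", "color")]
    page = "colour"
    reverts = 0
    for _ in range(steps):
        bot = random.choice(bots)  # bots visit the page at random times
        new_version = bot.maybe_edit(page)
        if new_version is not None:
            page = new_version
            reverts += 1
    return reverts

if __name__ == "__main__":
    # Two individually harmless rules produce a conflict that never resolves.
    print(f"Reverts in 1000 page visits: {simulate()}")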
There is considerably more work to be done. We, social scientists, ought to build a new social science of humans and machines that is incremental and cumulative, theoretically informed and empirically grounded. The time for this is now, while existing bots and robots are still relatively simple, because even simple behaviours can produce unintended consequences. I urge social scientists to catch up with recent and ongoing advances in AI, which have already resulted in algorithms behaving in unexpected and unexplainable ways.
What should be done? First and foremost, we must improve training in computing and computational methods for social scientists. Algorithms, bots, and robots have become an indelible part of the social world, and social scientists should be able to speak “their language” in order to study and understand them. Second, we require new types of interdisciplinary research and researchers. Susan Calvin, Isaac Asimov’s famous robopsychologist, can be an inspiration here: we would also benefit from robosociologists, roboanthropologists, robodemographers, roboeconomists, robogeographers, robohistorians, and robo-political scientists.
A social science that approaches intelligent machines as autonomous actors similar to humans will not only improve our understanding of the social world but also inform AI design and policy. Self-driving vehicles are trained on data from human driving and human traffic, so it is unsurprising that they end up in mutual paralysis when many of them appear together; training algorithms on data from human-machine and machine-machine interactions will help them integrate on the roads more smoothly.
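The mutual-paralysis problem can be illustrated with a toy model. The following sketch is a hypothetical illustration, not an account of how any real vehicle is programmed: two agents share the same over-cautious policy, plausibly learned in a world of human drivers, of yielding whenever another vehicle is waiting, and as a result neither ever moves.

```python
# A toy, hypothetical illustration of coordination deadlock: each agent
# follows the same "polite" rule (yield whenever another vehicle is
# waiting), so when all agents share it, nobody ever proceeds.

def step(agents):
    """Advance the simulation one tick; return True if anyone moved."""
    moves = []
    for i in range(len(agents)):
        others_waiting = any(a["waiting"] for j, a in enumerate(agents) if j != i)
        # Rule plausibly learned from cautious human examples:
        # proceed only if no one else is waiting.
        moves.append(not others_waiting)
    for agent, moved in zip(agents, moves):
        agent["waiting"] = not moved
    return any(moves)

agents = [{"waiting": True}, {"waiting": True}]
for t in range(5):
    print(f"t={t}: anyone moved? {step(agents)}")
# Every tick prints False: a standstill like the Austin gridlock.
```

Against human drivers, who eventually break such symmetries, the rule works; against identical copies of itself, it never does, which is why data on machine-machine interactions matters.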
Culture should also play a role in the design of self-driving cars and personal assistant bots, among many other applications of this technology. People’s perception and judgement of machines depend on their age, environment, and personality traits, as well as their nationality. Machines possess culture too: their decision making and behaviour reflect their designers’ culture and always take place in a specific cultural context. Social scientists should rise to the occasion and shape and lead the conversation about culture in AI design.
Increasing social connectivity and accelerating developments in AI make the study of social systems of humans and intelligent machines a challenging undertaking. The positivist approach, however, will be crucially important for a better human future. To prevent financial crashes, improve road safety, and reduce political misinformation … SOCIAL SCIENTISTS WANTED.
This post draws on the author’s co-authored article, A new sociology of humans and machines, published in Nature Human Behaviour.
Image Credit: Yutong Liu & Kingston School of Art, Better Images of AI, Talking to AI 2.0, (CC-BY 4.0)