Following Meta’s decision to ‘get rid of fact-checkers’ and the US administration’s decision to ban TikTok unless it is sold to a US buyer, LSE’s Professor Shakuntala Banaji looks at what the changing social media landscape means for the proliferation of online hate and disinformation.
It’s been a strange week for those of us who work on disinformation, social media and hate. Forget week, it’s been a strange year. Some would say that what we’re seeing is just business as usual, with billionaires and far right populists colluding with each other. If they have political power and untold wealth, who cares about the casualties, right? So, things are ominous – or at least taking us closer to a world in which many cannot survive.
Political inaction and medical misinformation circulated on social media and endorsed by top government officials and leaders, alongside corruption and profiteering by governments and private companies around the world, are to blame for the loss of millions of lives that might otherwise have been saved. Climate change-related disinformation by fossil fuel conglomerates and politicians makes it more difficult to plan for and prevent environmental disasters such as the ones in Spain, Pakistan, Australia, Portugal, Greece and the US that have claimed many lives over the past two years.
A powerful minority are profiting massively from a rise in hate. Many in this minority are the same people elected to govern, with investments in multiple companies and the power to regulate the information sphere. While X owner Elon Musk lobbies for a place in Donald Trump’s inner circle and considers purchasing the US arm of TikTok after dismembering accountability and community on X (formerly Twitter), Mark Zuckerberg and his company Meta have revealed measures that put millions of people’s mental health, safety, and lives at greater risk than they presently are. That media billionaires have the kind of power which allows them to put people’s lives at risk has always been the case – notably Rupert Murdoch and News International have been known to interfere in election reporting, favour conservative candidates, use illegal surveillance methods, and suppress news that could prevent violence.
Last week, in a sycophantic show of support for Trump and his far-right supporters in the US, and cynically misusing the language of ‘free speech’, Zuckerberg and Meta’s new head of global policy Joel Kaplan unilaterally announced an end to the company’s third party fact-checking programme for US content that has run since 2016. This was, they announced, to be replaced by an X-style community-notes system. Alongside this, Meta proudly declared that it would be loosening the moderation of hateful, dehumanising content and disinformation, to allow, amongst other things, women to be called household objects, members of LGBTQIA+ communities to be called mentally ill and racism against immigrants to thrive.
It’s not that we, the people, are all passively sitting back and letting this happen. Many scholars, rights organisations and journalists have been decrying these moves, while users have collectively been deactivating their accounts in protest. In late November there was an exodus from X to competitor and relative newcomer, Bluesky; and in a gesture of defiance against the US administration’s threat to ban TikTok if Chinese owner ByteDance won’t sell its US operation to a US company, millions of US TikTokers moved to Chinese e-commerce platform Xiaohongshu, known in English as Little Red Book or Red Note.
Observing the new English-speaking arrivals on Xiaohongshu over the past few nights, and their interactions with longtime Chinese users, has been a fascinating and humbling experience. While I am well aware of the many downsides of naiveté about Chinese state surveillance, and scholars such as Gu et al. and Jiannan Shi have written about the tactics that communities have to use to elude discriminatory algorithms or fight misogyny, the sheer volume of news and unexpectedly joyful connections being made by Chinese and US Xiaohongshu users has been uplifting. The initial rush of ‘This is me, this is where and how I live, hello from the US’ and ‘Welcome to China, this is how we do things here’ videos has been overtaken by videos expressing surprise and excitement at finding that China is not the grim, unspeakable morass of propaganda that successive US administrations, churches, media outlets and wealthy expat novelists have given ordinary citizens to believe.
The mass move has also sparked a new sub-genre of satire in which US users joke about their willingness to allow Chinese companies to harvest their data, poke fun at Meta for removing fact-checking, and mock the US government for lying about its motives in threatening a TikTok ban. Many of the new Xiaohongshu users are content to use English and add Chinese subtitles, but some are also learning Mandarin on Duolingo and offering free English tuition to Chinese teenagers. There’s a certain irony to a highly capitalistic Chinese app (that aims to sell and to commodify participants’ labour) suddenly being used as a platform for free exchanges of love, learning and cultural reciprocity. It might not last – the love affair of Twitter quitters with Bluesky has ended with the advent of massive numbers of AI, bot and troll accounts that are tough to control – but it’s an interesting blip that might point to some hope for ‘people power’ online.
So, if so much intercultural exchange can be caused – albeit unintentionally – why are these new moves by Meta, Musk and the US administration more dangerous than the situation that already existed?
Let’s start with the absurd notion that less fact checking means more freedom of speech. There have been long-standing, major problems of hate and disinformation in the online sphere, but little – if any – of this is down to fact checkers. The vast majority of genuine third-party fact checkers (rather than the propagandists posing as fact checkers for far right regimes) have, like independent journalists, borne the brunt of aggression from governments unhappy with their revelations. Much of this online hate and disinformation can be shown to be linked to violence against vulnerable groups offline. For instance, from 2017 in Myanmar, from 2013 in India, from 2017 in Brazil and from 2015-16 in the US and UK, there were surges of dehumanisation, disinformation and hate online similar to the sorts of propaganda produced by the Hitler and Mussolini regimes, and during colonial times. When big business supports colonisers and fascists, we all know where those processes lead.
Second, let’s examine the disingenuous claim that we need to loosen controls on hateful speech so that platform users can express themselves more freely. The recent spate of hate is aimed primarily at those who have historically been targeted for extreme offline violence and have struggled to get equal rights: queer and trans communities, immigrants, Muslims, Black and brown people, women and those involved in social justice work, such as environmental, anti-war, pro-Palestinian or trade union activism. This is not accidental. Rather, the surge of hateful disinformation is being deliberately encouraged and funded by right wing and conservative think-tanks and global political actors, with a massive infrastructure of fake news and disinformation penetrating deep into civil society and targeting specific demographics of voters. In August 2024, this social and technological engineering by powerful political interests on the far right demonstrated its efficacy by causing vicious racist riots across the UK.
Finally, let’s look at the history of why platforms were initially forced to implement third-party fact-checking and better moderation of hate. Public anger at a series of scandals – Cambridge Analytica’s role in swaying voters towards Brexit being one, the public murder of George Floyd being another and the rapid embrace of Covid-19-related medical misinformation being a third – prompted platforms such as Twitter (now X), Facebook, Instagram and TikTok, as well as messaging apps such as WhatsApp and Telegram, to put in place more careful policies – and warnings – to deter the spread of misinformation. In the wake of an even greater rise in racist hate speech in backlash to the Black Lives Matter movement, integrity, trust and safety teams and moderators worked hard, albeit constrained by company policies that required them to tread softly around the posts of followers of far right leaders such as Trump, Bolsonaro and Modi.
Publicly shamed and with former UK parliamentarian Nick Clegg then in charge of global policy, Meta actively courted and engaged an array of independent fact checking organisations, mainly in the global south, to support its moderation processes.
Online hate, misinformation and disinformation against minoritised groups continued to rise worldwide, and to be used by the far right in WhatsApp and Telegram groups to organise lynchings, pogroms and social violence. But this, according to our research, was in spite of, and not because of, the efforts of independent fact checkers.
Thus hate proliferated, alongside the longstanding surveillance and censorship of progressive voices by AI-based systems (bots, algorithms). Words can barely describe the horrific cruelty that the people of Palestine have collectively endured over the past 16 months. The tone for media reporting and censorship about their plight was set by November 2023, however, with Meta being one of the first platforms to ensure its algorithms would not give fair coverage to accounts describing the US- and Europe-supported destruction of Gaza. In December 2023, Human Rights Watch reported that:
Meta’s policies and practices have been silencing voices in support of Palestine and Palestinian human rights on Instagram and Facebook in a wave of heightened censorship of social media amid the hostilities between Israeli forces and Palestinian armed groups that began on October 7, 2023. This systemic online censorship has risen against the backdrop of unprecedented violence, including an estimated 1,200 people killed in Israel, largely in the Hamas-led attack on October 7, and over 18,000 Palestinians killed as of December 14, largely as a result of intense Israeli bombardment.
We are in a very grim period when it comes to trust and integrity in the media and social media sphere. A powerful nexus of private companies and authoritarian politicians garbed as ‘free speech’ advocates is profiting from hate, and some governments are really worried. To change any of the plethora of harmful outcomes, the remaining governments with an interest in democracy need to take swift and decisive measures to break up media and tech monopolies, and to draw attention to and curb the conflicts of interest that arise when billionaires and politicians are allowed to circulate dangerous lies, and make or break rules to suit their agendas, at the expense of ordinary people. Alongside these important fights, we can also work on increasing everyone’s critical media literacy, entrenching ethical attitudes to AI content, reducing the environmental harms of redundant data and data centres, and rebuilding public trust in each other, with a sense of community that can buttress against racist and misogynist violence.
This post represents the views of the author and not the position of the Media@LSE blog, nor of the London School of Economics and Political Science.
Featured image: Photo by Annie Spratt on Unsplash