Fake news, its causes and what to do about it are some of the key issues that we plan to address as part of the LSE Commission on Truth, Trust and Technology, which we will launch later this year. The Commission will examine the wider crisis in the quality and credibility of information in the digital age. Fake news isn’t new, but since 2016 it has become a much more pressing concern and the subject of much analysis, including on our blog and in a policy brief by Damian Tambini. Our blog editor Emma Goodman attended last month’s Westminster Media Forum, which provided an opportunity to reflect on the progress made in discussions about what exactly fake news is, and what can and should be done about it.
An agreed definition?
Although there are many types of content to which the term ‘fake news’ might be applied, it was apparent at the forum that many now agree that the fake news we are tackling consists of knowingly fabricated stories, published for either political or financial purposes. The former includes state propaganda, such as Russian attempts at spreading disinformation, and highly partisan sites that spread false stories supporting or opposing a particular candidate. The now classic example of financially motivated fabrication is the Macedonian teenagers who created networks of fake sites during the 2016 US presidential election campaign simply to generate ad revenue.
Misinformation on Facebook is most often created for financial gain, said the company’s Patrick Walker at the forum. However, the lines between financial and political can easily blur: stories such as ‘Pope endorses Trump’ were apparently created for ad revenue, but then taken up by right-wing sites with political motives.
Fake news isn’t a new phenomenon (it was apparently common even in 17th-century France), but what is new is the scale at which it can spread, and the consequent potential to influence democratic processes. Its effects seem particularly pertinent now because of the decline of trust in mass media and the crisis of trust in institutions that followed the 2008 financial crash, combined with the huge growth of social media and search companies such as Facebook and Google.
Who is responsible?
The ultimate responsibility lies with those who create false news, but given that there are countless ways for people to do so, it is not an easy issue to tackle. Fake news gains its traction through social media companies and search engines, so most potential solutions start there. Some ideas for limiting the effects of fake news were discussed at the Westminster Media Forum: none were brand new, but some interesting approaches emerged, including suggestions more concrete than those usually put forward.
Media and digital literacy
A solution that has been frequently called for since fake news became an international concern is increased public media and digital literacy, so that people won’t be fooled so easily by fake stories. It is an attractive solution as it doesn’t pose any threats to freedom of expression or require restrictive regulation, and reflects a worthy desire to create a more informed society and electorate. It is particularly pertinent in an age when many people don’t fully understand the economics of online content, including how search engines and social media companies operate, and this is why researchers such as my LSE colleague Gianfranco Polizzi are arguing for an emphasis on critical digital literacy.
However, the big, as-yet-unanswered question is: how? It is far from straightforward. At the forum, both author Matthew D’Ancona and Channel 4’s Dorothy Byrne suggested that children should learn digital and media literacy in primary school, separately from their computer literacy studies. Media consultant Julian Coles described a course run by Childnet International that asks teenagers to identify what lies behind the content they consume, and to ascertain the content providers’ motivations.
This kind of education is undoubtedly valuable, but it is limited by the fact that it only targets children and adolescents, and it is hard to make effective in a constantly changing tech environment. It is also a long-term solution that doesn’t help the many adults who might struggle to identify misinformation. Facebook and others are trying to alert users that not all ‘news’ that is shared is genuine and to show them how to spot fakes, but it is unclear how successful these efforts have been so far.
Regulating the tech giants
The business models of search engines and social media companies are based on selling attention, and unfortunately higher-quality content doesn’t necessarily attract more attention or generate more money. Tech companies are already under pressure in Europe: the European Commission recently issued a record-breaking €2.42bn fine to Google’s parent company Alphabet for abusing its dominance in search to give an advantage to its own shopping service.
Several companies have signed up to a voluntary code of conduct, supported by the European Commission, to tackle illegal hate speech online. Many agree that self-regulation is the ideal, but D’Ancona suggested that, to incentivise tech giants to self-regulate, they might need to be threatened with litigation that would damage their business model. It is equally important, he added, not to wrap them up in red tape.
The New York Times’ Steven Erlanger stressed at the forum that it is important for social media companies to accept that they are more than mere conduits: to pretend that their platforms are “simply the road on which anyone can travel” is very dangerous.
Facebook’s Patrick Walker explained that, in his view, it doesn’t make sense to apply old media terminology to what Facebook does. Facebook is an entirely new type of company, and even without it we would be facing these challenges, he said. The news feed algorithm has come under a good deal of scrutiny, but Walker reminded the audience that it is trying to solve a specific problem: the amount of information available is growing dramatically, but users’ time isn’t.
It is important to remember that advocating more ‘responsibility’ for platforms also means allowing them more power, noted Martin Moore of King’s College London. Governments should be wary of responding too quickly with regulation, Moore said, adding that the German and Czech governments might have reacted too fast by proposing legislation to introduce large fines or by creating new government units, and that such initiatives risk restricting free speech. It is crucial to understand fake news in a wider context, as a symptom of broader structural problems.
Tackling the crisis of trust in traditional media
One of the structural problems facing the traditional media, and one that allows misinformation to compete with real news for the public’s attention, is the crisis of trust. This lack of trust also means that the term ‘fake news’ can be used effectively as a pejorative label to undermine the legitimacy of established news media. The 2017 survey conducted for the Reuters Institute’s Digital News Project showed that trust in the media in the UK had fallen by seven percentage points compared with 2016, while Edelman’s annual Trust Barometer found declines in trust in government, business, media and NGOs around the world, with the greatest drop for media.
This is partly a result of financial pressures: many in the industry feel that they have no option but to resort to clickbait in pursuit of advertising revenue, as the Guardian’s Matt Rogerson noted at the forum.
Various solutions were proposed at the forum to address this crisis:
- Welcome more media diversity and alternative viewpoints, and explore why people want to believe things that aren’t factual, suggested RT’s Anna Belkina.
- Maintain well-financed public service broadcasting, argued Channel 4’s Dorothy Byrne, who noted that trust levels for broadcast news are far higher than for other types of journalism.
- Ensure independent and effective regulation, which is key to building and maintaining trust, argued Impress’s Jonathan Heawood, although, as Steven Erlanger pointed out, good regulation doesn’t necessarily mean good coverage.
- Find better ways to reach young people. Byrne said that Channel 4 had seen two billion views of its stories on Facebook last year, and stressed that these are not all light-hearted entertainment stories, but stories about what is going on in Iraq and Syria. She also noted that the explainer genre has been extremely successful, and suggested that this type of content could be more widely produced.
- Introduce kite marking to make trustworthy news and information sites easier to identify. IPSO’s Matt Tee said that the regulator is already developing a kite mark for publishers to use; it would be available to any publisher that has signed up to IPSO, including, for example, the Daily Mail.
- Address the risks of programmatic advertising, whose nature means that neither advertiser nor publisher knows which ads will be shown where, so ads can appear alongside fake news without the advertiser knowing. One potential solution, suggested at the forum by Laura Sophie Dornheim of ad-blocking company Eyeo, is more widespread ad-blocking with whitelisted sites and advertisers, ensuring there is still a way to fund quality content with quality advertising. This is a short-term solution, however, and it is important to be realistic about the amount of ad inventory it would make available.
What next?
It is reassuring to see the conversation about fake news evolving from a moral panic focused on the pernicious effects of social media into one that takes a more holistic view of the structural problems of our information society and focuses on potential solutions.
In terms of what could be done here in the UK, academics from Bangor University who analysed, clustered and assessed the 78 written submissions to the Commons Select Committee’s Fake News Inquiry found that the majority demanded government regulation or self-regulation across six constituent elements, with media organisations, education and digital intermediaries attracting the most attention. As well as seeking to understand the problem more fully, Martin Moore suggested that the UK government explore interventions to encourage the production of trustworthy news and information, and to encourage transparent self-regulation of tech platforms.
This post gives the views of the author and does not represent the position of the LSE Media Policy Project blog, nor of the London School of Economics and Political Science. Featured image used under CC2.0 from Marco Verch http://foto.wuestenigel.com/fake-news/