November 24th, 2017

Search and ‘Fake News’


©Gareth Davies/SnapMediaProductions

This article is by LSE MSc student Jack Marks, and covers a recent talk by Jon Steinberg, Public Policy Manager at Google (pictured right). 

Jon discussed the challenges Google’s public policy agenda faces in an age of fake news, as part of Polis’ Media and Communication in Action series.

With the rise of so-called ‘Fake News’, people are asking what role we expect the world’s premier search engine, Google, to play in counteracting its effects. Few people have been thinking about this more than Jon Steinberg, Google’s Public Policy Manager. In his talk for Polis he discussed the efforts Google is taking to tackle “Fake News”, while contextualising it as part of a wider problem that Google has faced since its inception.

Steinberg says that Google doesn’t see ‘Fake News’ as a new problem, nor a particularly unusual one. Instead, Google sees ‘Fake News’ as a mutation of a problem the search engine has always been tackling. Steinberg explained how, since Google’s inception, “spammy” websites have attempted to “game the system” and come up at the top of common Google searches. In the past, these pages have done this by stuffing pages with keywords, by paying other websites to post links to their webpage (thus raising their credibility in Google Search’s algorithm), or by hiding invisible text on web pages, amongst other cheating methods. From a Google engineer’s perspective, “Fake News” is merely the newest iteration of spam pages.

This conceptualisation of “Fake News” provides a different framework for how academics in political or communication sciences might think about countering it. Beyond fact-checking websites, there has been little to guide people in terms of a strategy. Looking at how Google has contended with spam web pages in the past could offer clues to new ways of tackling ‘Fake News’.

Steinberg is optimistic about Google’s position to take on this problem. He points to the fact that people place greater trust in news they find through Google Search than on social media sites. Looking forward, Steinberg envisages a closer relationship between Google and respected, “authoritative” news outlets who can be consistently relied on to report truthfully. He also pushed back against the oft-repeated criticism of “filter bubbles”, citing Google’s own research, as well as research by Ofcom and Reuters, which found that “consumers of information online seek out information from more sources than people who just consume information offline.” Despite this, he conceded that new issues arise when trying to prevent misinformation surfacing in searches around breaking news events.

Steinberg described how Search’s incorporation of breaking news was inspired by the events of September 11th. Hearing alarming reports on the radio or from friends, people were googling ‘Twin Towers’, ‘New York’, and ‘World Trade Center’, but instead of getting information about what was happening, they got Wikipedia entries and tour guides. While this anecdote highlighted the need for Google Search to start incorporating breaking news, it also highlights the dangers of doing so. Because reporting from reliable or verifiable sources is sparse during breaking events, there is increased vulnerability to sensationalist or false information reaching the top of Google searches in the first few hours. Pointing to “Fake News” articles about the recent Las Vegas shooting, which falsely claimed the shooter was an anti-Trump Democrat activist, Steinberg held up his hands and admitted that Google does not yet “have all of the answers”. Nevertheless, he is confident in Google’s ability to improve.

Similarly, Steinberg addressed criticism that Google has at times been insensitive in its approach to the boundaries of free speech, adopting American sensibilities in countries with different understandings of where the right to free speech begins and ends. On this, as on the “Fake News” issue, Steinberg described Google as a “maturing teenager”, conceding that “Google has to look forward and take into account different cultural perspectives” when tackling these issues.

A common thread across much of the Q&A portion of the event was Google’s role in enabling free speech, and when and where it is appropriate for Google to curtail it. Here, Steinberg drew a distinction between “providing a platform for free speech” and facilitating the monetisation of misinformation or hateful content. In his talk, he outlined how Google doesn’t serve adverts on sites publishing misinformation; so far it has taken action against 1.7 billion bad adverts. On its own platform, YouTube, it has demonetised over 300 million videos for containing hateful speech, misinformation, or copyright infringements. By doing this, Google hopes to tread a fine line between not stifling free speech and not providing encouragement or means for disseminating the kinds of speech that are commonly agreed to be detrimental to the public sphere. As Steinberg put it, “Free Speech and Ads are not the same thing. Whether or not somebody has the right to have their content online is different to whether or not we have an obligation to help them make money”.

Ultimately, Jon Steinberg gave insight into a company that, in its meteoric rise, has found itself in the eye of the storm over the most pressing issues confronting media and communication, and, many would argue, society at large. Google is becoming increasingly conscious of its role and obligations to society, though it does not feel that the remedies to these problems should be dictated by society. As a company built on rapid innovation, Google strongly believes that the same innovation is needed to tackle these issues, and it wants a free hand to do so, unconstrained by regulation or government oversight. When Steinberg pushed back against an audience member’s query about whether Google could or should be considered a media or publishing company, he said that publishing is “a lot different to what Google does as a company”. The issue, of course, is that everything Google does is a lot different to what any other company has ever done.

When asked who decides what gets restricted on Google’s platforms, Steinberg answered frankly: “We do”. He’s right. Google, along with Facebook, Twitter, and a handful of other tech giants, is designing and policing the public sphere. These companies are not malicious in doing so, but nor are they selfless. Self-preservation and profit-maximisation drive Google’s efforts to fight ‘Fake News’ and misinformation: without user trust, many of Google’s most profitable properties would collapse fast. In its efforts to maintain this trust, Google is setting the terms for public debate. It is for us to decide whether that is right, or whether a better system needs to be imagined.

***

This article is by LSE MSc student Jack Marks.

For more information about the Polis Media and Communications in Action talks, please visit our website.


Posted In: Events | Featured | Journalism | Media and Communications in Action