There is much discussion currently regarding the regulation of the internet and, in particular, the taming of social media. Technology is causing problems serious enough to trouble society, but this is not a new phenomenon. There is a common pattern: technology emerges to address a problem, and then the technology itself is discovered to be the cause of a different, often more severe, set of problems. Nuclear power stations addressed the need for bulk electricity without the need to burn fossil fuels and damage the environment. But then, when nuclear power stations themselves had problems (for example the Chernobyl disaster of April 1986), the local issue of air pollution from fossil fuels transformed into a global issue of radiation from released nuclear fuel (World Nuclear Association, 2018). New technology often enables us to go faster or do more at reduced cost. It should not be a surprise, then, that the problems arising from new technology happen more quickly and with a more significant impact than the problems that arose from the old technology it replaced.

A month ago, shootings at two mosques in Christchurch resulted in an irreversible change to New Zealand society. Our Prime Minister, Jacinda Ardern, handled the situation with remarkable gentleness and diplomacy and turned a desperate situation into an opportunity to support our immigrants and celebrate our diversity as a nation. The gunman filmed his attack and uploaded it to social media. What would have been a traumatic experience for the passers-by who witnessed the attack turned into a traumatic experience for the millions of Facebook users who could download the video, or be exposed to it without ever seeking it out.

Jacinda Ardern has openly criticised the social media networks and has called for a global approach to blocking harm on social platforms. In response, Sheryl Sandberg, Facebook chief operating officer, revealed that Facebook would "restrict those who could use Facebook Live and build better technology to quickly identify versions of violent videos and images and prevent them being shared" (Wong, 2019). Is this enough, though? History shows that industries that fail to self-regulate in a way deemed acceptable by society risk having regulation thrust upon them.

Before we consider ways of regulating the internet, and in particular social media companies, let's look at an example from the past. The first gasoline-powered motor car appeared in Britain in 1895. The earliest cars didn't have much of an advantage over travelling by horse, but gasoline engines opened new possibilities for travelling at speed. The internet followed a similar path. Pre-social media, I could distribute a message by emailing a group or putting it up on a web site for all to see, but first I'd need valid email addresses for those I was sharing with, and anybody wanting to see content on my web site would need the address of the site. It was harder to come across content by chance or to be exposed to anything inadvertently. Of course, intelligent search engines made it easy to connect people with content (for good and bad purposes). Social media takes this exposure, accessibility and availability one step further, and suddenly obnoxious content that would once have been seen by tens of people is open to millions to freely view.

Let's return to the car analogy. By 1931, there were 2.3 million cars (up from 1 million in 1921) on the roads of Great Britain (Driver & Vehicle Standards Agency, 2019). Frighteningly, the number of people killed in road traffic accidents every year at that time was over 7,000. (To put this in perspective, the New Zealand road toll for 2015 was 319, with approximately 3.85 million vehicles on the road (Ministry of Transport, 2018).) In 1931 the first highway code was introduced, in 1935 the first driving licence was issued, and in 1960 the Ministry of Transport test was launched to inspect motor vehicles for safety (Driver & Vehicle Standards Agency, 2019). Over the years, the number of cars on the roads has grown, but so has public awareness of the need to be considerate of other road users. Car manufacturers continue to add safety features, and an increasing number of well-placed road signs prepare road users for hazards before they encounter them.

So how do we apply this to safety around the internet and social media? Is there an equivalent of a highway code and road signs that we can use to encourage internet users to be more aware of actions that could be harmful? According to the history of the highway code on the UK Government website, "the very first edition of The highway code urged all road users to be careful and considerate towards others, putting safety first" (Driver & Vehicle Standards Agency, 2019). If we count emotional safety alongside physical safety, this is exactly what we want to achieve for the internet and social media sites. But just how much regulation is required? Given that most governments around the world are investing heavily in moving their citizens to online services and discontinuing the paper equivalents, making every internet user pass a "driving test" could have a detrimental effect. Also, in setting up an internet driving licence test, we would encounter the ethics issue currently circulating in the data governance and artificial intelligence governance communities: my ethical framework is unlikely to be the same as your ethical framework. By the time we've reached consensus on what society can agree on as international good practice, many years are likely to have passed.

But how about an internet highway code? It would have to be an international publication; the internet crosses national borders with impunity, despite desperate and repeated cries for data sovereignty. We will find it easier to agree on general good guidance than on the personal responsibilities of individuals. We could develop accompanying "road signs" to alert users to hazards ahead.

Of course, we also need to encourage the equivalent of our car manufacturers – the social media companies – to continue to introduce safety features. We do not, though, want or need to be in a position where our only safety features come from the social media companies themselves.



Alison Holt is the founder of Longitude 174 Limited, an information technology strategic planning and procurement business. She is a fellow of both the Institute of IT Professionals in New Zealand and the British Computer Society, a member of the New Zealand Institute of Directors and a Chartered IT Professional. Alison is an expert in the governance of information technology and data and has worked in leadership roles for multiple organisations. Her first book, The Governance of IT, was published by the British Computer Society in September 2013, and she is now working on the sequel, The Governance of Data.