As part of the Truth, Trust and Technology Commission process, we have been tracking policy responses to the information crisis from the UK Parliament, Government, institutions and think tanks, as well as some developments in Europe. These are listed here, and will be updated regularly.
UK Parliamentary inquiries
House of Commons Digital, Culture, Media and Sport Committee inquiry: Disinformation and ‘fake news’
The Committee’s interim report on Disinformation and ‘fake news’, was published in July 2018, and its final report was published in February 2019. Key recommendations to the Government include:
- A re-designation of tech companies as neither publisher nor platform, and the establishment of clear legal liability for harmful and illegal content published on their sites, via a compulsory Code of Ethics overseen by an independent regulator;
- The introduction of a levy on social media companies operating in the UK to fund the new independent system and regulation, and to allow the Information Commissioner’s Office to expand its work;
- The Competition and Markets Authority (CMA) should conduct a comprehensive audit of the operation of the advertising market on social media;
- Reform of electoral law to reflect changes in campaigning techniques, and the move from physical leaflets and billboards to online, microtargeted political campaigning. This would include defining digital campaigning, absolute transparency of online political campaigning, strengthened powers for the Electoral Commission, and the establishment of a statutory Code of Practice to manage the use of personal information in political campaigning;
- Putting pressure on social media companies to publicise any instances of disinformation and to share information about foreign interference on their sites, and starting independent investigations into recent elections to explore what actually happened with regard to foreign influence, disinformation, funding, voter manipulation and the sharing of data;
- The establishment of digital literacy as the fourth ‘pillar’ of education, as well as ensuring that the four main regulators in these areas produce a more united strategy in relation to digital literacy, and that social media companies are transparent about how they operate.
The Government published its response to the interim report in October 2018.
Following a meeting of the International Grand Committee on Disinformation and Fake News, convened by the Select Committee, nine Parliamentarians signed a declaration on the principles of the law governing the internet.
The Committee’s final report was published in February 2019.
The Government response was published in May 2019.
House of Lords Communications Committee inquiry: The internet: to regulate or not to regulate?
The Committee’s final report, ‘Regulating in a digital world,’ was published in March 2019. It recommends the establishment of a ‘Digital Authority’ to co-ordinate regulators, continually assess regulation and make recommendations on which additional powers are necessary to fill gaps. It suggests ten principles on which regulation in the digital world should be based: parity with the offline world, accountability, transparency, openness, privacy, ethical design, recognition of childhood, respect for human rights and equality, education and awareness-raising, and democratic accountability, proportionality and an evidence-based approach.
House of Lords Communications Committee inquiry: Growing up with the internet
The Committee’s report was published in March 2017 (disclosure: Chair of the LSE Truth, Trust and Technology Commission, Professor Sonia Livingstone OBE, was the Committee’s specialist advisor). Key recommendations included: Government to create a new Children’s Digital Champion to advocate on behalf of children; the UK to maintain legislation incorporating the standards set by the General Data Protection Regulation (GDPR), regardless of EU membership; the adoption by industry of a set of minimum standards; and digital literacy to be the fourth pillar of a child’s education.
The Government published its response in October 2017.
House of Commons Science and Technology Committee inquiry: Algorithms in decision-making
The Committee’s report was published in May 2018. It calls on the new Centre for Data Ethics & Innovation (see below) to examine algorithm biases and transparency tools, and to determine the scope for individuals to be able to challenge the results of all significant algorithmic decisions that affect them and, where appropriate, to seek redress for the impacts of such decisions. It calls on the Government to provide better oversight of private sector algorithms that use public sector datasets, and to look at how to monetise these datasets to improve outcomes across Government.
The Government published its response in September 2018.
UK Government initiatives
Digital Charter
The details of the Digital Charter had not been published by the start of 2019, beyond a broad outline policy paper. The Charter is described as ‘a rolling programme of work to agree norms and rules for the online world and put them into practice’ and as ‘based on liberal values that cherish freedom, but not the freedom to harm others’. Priorities under the work programme include disinformation, online harms and cyber security. The development of the Charter is being undertaken collaboratively with industry, business and civil society.
White Paper on Online Harms
The Government’s White Paper, published in April 2019, calls for a new system of regulation for tech companies with the goal of preventing online harm. In brief, the paper (which outlines government proposals for consultation in advance of passing new legislation) calls for an independent regulator that will draw up codes of conduct for tech companies, outlining their new statutory “duty of care” towards their users, with the threat of penalties for non-compliance including heavy fines, naming and shaming, the possibility of being blocked, and personal liability for managers. It describes its approach as risk-based and proportionate. The White Paper is the joint responsibility of DCMS and the Home Office.
UK Council for Internet Safety (UKCIS)
This new organisation will bring together more than 200 organisations representing government, regulators, industry, law enforcement, academia and charities, working together to keep children safe online. This builds on the work of the UK Council for Child Internet Safety (UKCCIS) that was previously in operation.
In an answer in the House of Lords on 4 December 2018 to Baroness Benjamin about UKCIS funding, Lord Ashton of Hyde confirmed the five focus areas of the body: online harms experienced by children; radicalisation and extremism; violence against women and girls; serious violence; hate crime and hate speech.
(See also a paper by Dr Victoria Baines: ‘Online child sexual exploitation: Towards an optimal international response.’ Available at https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3240998)
Centre for Data Ethics and Innovation
This is a new institution set up by Government, to ‘advise the government on how to enable and ensure ethical, safe and innovative uses of data, including for AI. It will work with, and advise, Government and existing regulators’. It will sit within DCMS for the first two years before being set up as a statutory body. It was established following a public consultation designed to inform its operations and priorities. For the 2019-20 period it is conducting two large-scale reviews, on targeting and bias, which the consultation identified as priorities.
Protecting the debate: Intimidation, influence, and information (Cabinet Office consultation)
This consultation aimed to crack down on threats and abuse towards those standing for election. It also looked at the issue of ‘whether the requirement to have imprints, which is added to election material to show who is responsible for producing it, should be extended to digital communications’.
The Cabinet Office published its response to the consultation in May 2019.
Digital competition expert panel
Chaired by Professor Jason Furman, the expert panel was asked to consider the potential opportunities and challenges the digital economy may pose for competition and pro-competition policy, and to make recommendations on any changes needed. This is a joint HM Treasury/Department for Business, Energy and Industrial Strategy initiative. The panel reported in March 2019 following a public consultation, concluding that tech giants have become increasingly dominant and that ministers must open up the market to increase consumer choice and give people greater control over their data.
Press sustainability: The Cairncross review
The review, chaired by Dame Frances Cairncross, was established to investigate the sustainability of the UK press market. To inform the review, the Department for Digital, Culture, Media and Sport commissioned academic research from Mediatique to look specifically at the changing state of the press market. The panel reported in February 2019, with recommendations including direct public funding for public interest news and the establishment of a new code of conduct between publishers and large tech companies, overseen by a regulator.
AI Lab (Ministry of Defence)
Described as a single flagship for artificial intelligence, machine learning and data science in defence, the Lab will be based at Dstl (the Defence Science and Technology Laboratory) at Porton Down. Countering fake news is included in the list of work that the Lab will engage in.
National Security Communications Unit
Announced in January 2018, this initiative has been tasked with ‘combating disinformation by state actors and others’, according to a Government spokesman.
It will continue its operations in 2019.
Information Commissioner’s Office (ICO)
The ICO’s investigation into data analytics in political campaigns led to the publication of a progress report in July 2018, to inform the DCMS Select Committee inquiry with which it overlapped. Based on its investigation, the ICO fined Facebook £500,000. A second report, Democracy Disrupted? Personal Information and Political Influence, included a recommendation that the Government introduce a statutory Code of Practice for the use of personal data in political campaigns, and a third report in November 2018 repeated that call, arguing that self-regulation was inadequate and saying that the Code should include platforms, data brokers and the media.
The ICO has also published a call for views from stakeholders regarding the development of the proposed statutory code of practice for the use of personal data in political campaigns. This is the first stage in the ICO’s consultation on this matter.
Ofcom
In September 2018, Ofcom published a discussion document about online harmful content. This was based on its experience of regulating the UK communications sector and was intended to inform policy-making as it relates to the online world.
An accompanying speech by Ofcom Chief Executive Sharon White to the Royal Television Society provides context.
Electoral Commission
The Electoral Commission’s Digital campaigning: Increasing transparency for voters report, published in June 2018, calls for stronger powers to obtain information about election campaign spending, greater fines for breaches of spending laws, more detailed and more punctual reporting on spending, and better labelling of digital campaign materials and ads. www.electoralcommission.org.uk/__data/assets/pdf_file/0010/244594/Digital-campaigning-improving-transparency-for-voters.pdf
Commission on Fake News and the teaching of critical literacy skills in schools
Jointly run by the National Literacy Trust and the All-Party Parliamentary Group on Literacy, the Commission’s report was published in June 2018. Recommendations focus on the need for critical literacy to be taught in schools, including the use of a range of texts on a variety of platforms that illustrate political bias. It calls for media organisations and Government to work together to identify and enforce appropriate regulatory options to ensure that digital media platforms are effectively tackling the proliferation of fake news.
UK think tank and NGO responses
Doteveryone’s Regulating for responsible technology report was published in October 2018. It recommends the establishment of a new independent UK Office for Responsible Technology (ORT), which would have three functions: (1) to empower regulators; (2) to inform the public and policy-makers; and (3) to support people to find redress. Doteveryone proposes that the ORT’s anticipated cost (c. £37 million) would be met via a levy on industry, and by government investment.
The report builds on Doteveryone’s previous Green Paper, published in May 2018.
Article 19 published Self-regulation and ‘hate speech’ on social media platforms, which recommended a model of self-regulation of social media, based on existing systems of press self-regulatory councils that are common throughout Europe.
Carnegie UK Trust (William Perrin and Professor Lorna Woods)
Via a series of blogs for the Carnegie UK Trust, Perrin and Woods propose legislation to create a statutory duty of care, under which social media service providers would be responsible for preventing harm to their users. The proposal would apply to online platforms the same principles that have traditionally been applied to corporate-owned public spaces, so that harm can be prevented.
Global Partners Digital
The report A rights-respecting model of online content regulation by platforms calls for online platforms to establish a set of standards that would be monitored by an international, global multistakeholder oversight body, comprising representatives from online platforms, civil society organisations, academia and others. Platforms that failed to meet the standards would be publicly called out and provided with recommendations for improvement.
The report Tackling misinformation in an open society recommends mandated transparency for political advertising, equipping existing bodies (e.g. Office for Budget Responsibility, Office for National Statistics, House of Commons Library) with a mandate to inform the public, and cautions against over-hasty reaction.
Royal Society of Arts (RSA)
Focusing on contentious uses of AI, the Artificial Intelligence: Real public engagement report argues that the citizen voice must be embedded in public AI systems through public deliberation.
European developments
France
France’s National Assembly adopted two controversial ‘fake news’ bills in October 2018, which must be approved by the Senate before they become law. The bills enable a candidate or political party to seek a court injunction preventing the publication of ‘false information’ that might influence an election result during the three months leading up to a national election, and give France’s broadcast authority the power to take any network that is ‘controlled by, or under the influence of a foreign power’ off the air if it ‘deliberately spreads false information that could alter the integrity of the election.’ They are widely viewed as targeting the Russian state-backed broadcaster RT. The French minister of culture, Françoise Nyssen, has also announced her intention to create a council on press ethics.
Germany
The Network Enforcement Act, known as NetzDG, compels online platforms to provide ways for users to notify them of illegal content, and allows for fines of up to €50 million if they fail to remove ‘manifestly unlawful’ hate speech or other harmful content within 24 hours. Platforms are also required to publicly report on how they deal with notifications. The law has been criticised by NGOs for being overbroad and increasing the risk of censorship.
European Union
The Copyright Directive calls for a ‘link tax’ that aims to ensure that content creators are paid when their work is used by sharing platforms such as YouTube or Facebook, and by news aggregators such as Google News. (The European Parliament has voted in favour, but the final vote is due in January 2019.)
An interim Digital Services Tax, proposed by the European Commission at a rate of 3%, would apply to revenues from certain digital activities that currently escape the tax framework entirely: for example, selling online advertising space; digital intermediary activities that allow users to interact with other users and that can facilitate the sale of goods and services between them; and the sale of data generated from user-provided information. It would be an interim measure until wider reform has been implemented. It is currently under negotiation, and has been criticised by the tech companies.
The European Commission convened a High Level Expert Group on Fake News and Online Disinformation that reported in March 2018. It focused mainly on non-regulatory responses, with recommendations including the creation of a network of Research Centres focused on studying disinformation across the EU, the continuation of the work of the Group by means of a multistakeholder coalition that will establish a code of practice for platforms, empowering users and journalists with tools they can use to flag and avoid disinformation, and increasing citizen media and information literacy.
Tackling online disinformation: A European approach – a communication in which the European Commission outlined its policy responses. Its aims included establishing a self-regulatory code of practice (see below), creating a network of independent fact-checkers, tackling cyber-enabled threats to elections in member states, media literacy work such as organising a European Week of Media Literacy, and exploring increased funding opportunities to support initiatives promoting media freedom and pluralism, quality news media and journalism.
In December 2018, Vice-President Andrus Ansip published a statement outlining the Commission’s action plan countering disinformation and setting out progress to date. New initiatives include the introduction of a rapid alert system with Member States so that ‘disinformation can be quickly countered with hard facts’, and an increase to the budget of the European External Action Service.
The self-regulatory Code of practice on disinformation was published by the European Commission in September 2018. Signatories, including Google, Facebook, Twitter and Mozilla, commit to act in five areas: disrupting advertising revenues of certain accounts and websites that spread disinformation; making political advertising and issue-based advertising more transparent; addressing the issue of fake accounts and online bots; empowering consumers to report disinformation and access different news sources, while improving the visibility and findability of authoritative content; and empowering the research community to monitor online disinformation through privacy-compliant access to the platforms’ data.
This list will be regularly updated to reflect ongoing developments.