
Blog Administrator

June 25th, 2018

Using artificial intelligence in news intelligently: towards responsible algorithmic journalism

Estimated reading time: 5 minutes

News organisations across Europe are facing the same challenge: how to use artificial intelligence in a way that saves costs and improves the user experience, without compromising on quality or on the provision of diverse and relevant news. To share knowledge about the optimal use of data and algorithmic news recommendations, B. Bodó, N. Helberger and M.Z. van Drunen (University of Amsterdam), together with J.K. Sørensen (Aalborg University), initiated the 2018 Amsterdam Symposium on News Personalisation, which brought together journalists, editors, technologists and academics to discuss the issue. Here, M.Z. van Drunen, N. Helberger, J.E. Möller and M.B. Bastian (all University of Amsterdam) report back from the symposium.

Which values to promote?

A key issue many news organisations are grappling with is the impact of news personalisation on editorial values. Participants in the workshop did not take an exclusively commercial view of news personalisation, but weighed it against journalistic considerations. Where commercial benefits were discussed, the main goal was not to serve users more, but to serve them better (thereby building longer-term relationships and attracting subscribers).

This editorial view (as opposed to the marketing view) of news personalisation poses a fundamental question: for which values should news personalisation algorithms be optimised? Answering this question requires organisations to revisit their editorial values and to determine how news personalisation fits into their mission and brand.

Diversity featured prominently in this discussion, in both a positive and a negative sense: participants saw news personalisation as a promising tool for promoting diversity and pluralism, but also felt that the criticism levelled at large platforms' use of personalisation was being extended, indiscriminately, to traditional media organisations. Participants argued against such a blanket condemnation of news personalisation.

The discussion also focused on values that have not yet played a large role in debates about news personalisation. One such value was user empowerment. Technological progress and personalisation have created new opportunities for user interaction and have given users a say in the news that is selected for them. The discussion explored several implementation strategies, such as conversational and dynamic profiles, or more user-driven modes of personalisation. These solutions move away from passive personalisation, which infers users' interests from their behaviour, and make it easier for users to be actively involved in the selection of the news they see.
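
To make the contrast with passive personalisation concrete, the following minimal Python sketch shows one way a user-driven profile could blend explicitly stated preferences with behaviourally inferred ones. It is purely illustrative: the `UserProfile` structure, the topic names and the `user_control` weighting are our assumptions, not a system presented at the symposium.

```python
from dataclasses import dataclass, field


@dataclass
class UserProfile:
    # Hypothetical structure: both fields map topic -> interest weight in [0, 1].
    explicit: dict[str, float] = field(default_factory=dict)  # stated by the user
    inferred: dict[str, float] = field(default_factory=dict)  # derived from behaviour


def blended_interest(profile: UserProfile, topic: str, user_control: float = 0.7) -> float:
    """Combine stated and inferred interest in a topic.

    `user_control` sets how strongly stated preferences outweigh behavioural
    inference: 1.0 would be fully user-driven, 0.0 fully passive personalisation.
    """
    stated = profile.explicit.get(topic, 0.0)
    observed = profile.inferred.get(topic, 0.0)
    return user_control * stated + (1 - user_control) * observed


profile = UserProfile(
    explicit={"politics": 1.0, "climate": 0.8},  # chosen via a settings dialogue
    inferred={"politics": 0.4, "sport": 0.9},    # inferred from clicks and reading time
)
print(round(blended_interest(profile, "sport"), 2))  # 0.27: behaviour alone carries limited weight
```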

Personalisation does not just affect editorial values, however. It also requires organisations to address the legal and ethical issues raised by using data and algorithms to target users, such as privacy, explainability, and transparency. Tackling these issues means juggling what users want, what is technically possible, and what is legally required. The last consideration proved particularly pressing: participants wanted to know to what extent the transparency requirements in data protection and consumer law apply to personalisation in the media. Legal experts provided some certainty here, arguing that while news personalisation as a means of distribution may not fall under the General Data Protection Regulation (GDPR)'s media exemption, the GDPR's right to an explanation primarily aims to protect users when algorithms take highly consequential decisions about them. Recommending news articles might therefore require simpler explanations than decisions with a more severe and immediate impact, such as an automated decision not to hire a person.
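
For a news recommendation, a "simpler explanation" might be as lightweight as the following sketch. This is our illustration of the principle, not a legal template, and the helper and article titles are hypothetical.

```python
def explain_recommendation(article: str, because_read: list[str]) -> str:
    """Return a lightweight, human-readable reason for one recommendation.

    Hypothetical helper: the point is only that an explanation for a news
    recommendation can be far simpler than one for, say, a hiring decision.
    """
    reasons = ", ".join(because_read)
    return f"Recommended '{article}' because you recently read: {reasons}."


print(explain_recommendation(
    "EU budget talks stall",
    ["Brexit negotiations resume", "Eurozone reform plans"],
))
```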

Translating values to code

Knowing which values to promote is only the first step. Interpreting editorial values and embedding them into personalisation algorithms also proved to be a thorny issue. Participants illustrated the difficulty with a concept at the centre of recommender systems: relevancy. Deciding that a personalisation system should recommend 'relevant' articles triggers hard questions: which criteria determine relevancy, how can users' interest be measured beyond what they click on, and how should users' interests be weighed against what editors consider relevant?
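
One way to make that last trade-off explicit is to encode it as a tunable weighting. The sketch below is a deliberately simplified illustration, not a description of any participant's system: the scores, article names and the single `alpha` parameter are all assumptions.

```python
def relevance(user_interest: float, editorial_priority: float, alpha: float = 0.5) -> float:
    """Weigh estimated user interest against editorial judgment.

    `alpha` is where the hard question lives: it fixes how far measured
    user interest may outweigh what editors consider relevant.
    """
    return alpha * user_interest + (1 - alpha) * editorial_priority


# Candidate articles: (article_id, estimated user interest, editorial priority).
candidates = [
    ("budget-analysis", 0.3, 0.9),  # few clicks expected, but editors deem it important
    ("celebrity-story", 0.9, 0.2),  # the reverse
]
ranked = sorted(candidates, key=lambda a: relevance(a[1], a[2]), reverse=True)
print([a[0] for a in ranked])  # ['budget-analysis', 'celebrity-story'] at alpha = 0.5
```

Even in this toy version, raising `alpha` to 0.7 reverses the ranking, which is exactly why participants treated relevancy as an editorial decision rather than a purely technical one.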

Participants emphasised the danger of using third-party algorithms in this context. The values embedded in such an algorithm may not match the editorial values of the media organisation that ends up using it. An algorithm originally developed for e-commerce recommendations, for example, may import commercial success metrics that conflict with an organisation's editorial values. Similarly, an algorithm developed for commercial media organisations may fail to meet the specific editorial needs of a public service broadcaster.

Participants proposed several solutions. When using third-party algorithms, the first step is awareness of the problem: identifying the values embedded in the algorithm so they can be removed or changed where necessary. Participants also suggested that media organisations may need to develop their own algorithms if they want to ensure that algorithmic values match their editorial values, though this could be a significant barrier to implementing news personalisation, especially for smaller organisations. Importantly, however, and in contrast with the traditional proprietary black-box narrative, participants with technical expertise reported a relatively high degree of willingness among their colleagues to share experiences and insights on developing personalisation algorithms. This is a promising development.

Developing an in-house algorithm is not an excuse to sit back, however. Across the board, participants emphasised the need to continually measure the impact of algorithms on an organisation's editorial values. This involves both checking whether the algorithms actually promote the values of the organisations that use them and monitoring for unwanted side effects, such as a decrease in the diversity of the news the organisation provides to its readers.
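
Monitoring for such side effects can start very simply. As one hedged example (our assumption, not a tool discussed at the symposium), the Shannon entropy of the topics in a batch of recommendations gives a single trackable number that falls as the output narrows:

```python
import math
from collections import Counter


def topic_diversity(recommended_topics: list[str]) -> float:
    """Shannon entropy (in bits) of the topic mix in a batch of recommendations.

    A value that falls over time is one concrete, measurable signal of the
    'decrease in diversity' side effect described above.
    """
    counts = Counter(recommended_topics)
    total = sum(counts.values())
    return -sum((n / total) * math.log2(n / total) for n in counts.values())


before = ["politics", "sport", "culture", "economy", "politics", "science"]
after = ["sport", "sport", "sport", "politics", "sport", "sport"]
print(f"{topic_diversity(before):.2f} -> {topic_diversity(after):.2f}")  # 2.25 -> 0.65
```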

Participants saw an important role for academia on this front. Apart from conceptual research into the meaning of editorial values, academics could also collaborate with the news media and use their expertise to develop the necessary measurement tools. The FairNews project, in which the University of Amsterdam, TU Delft and Dutch daily De Volkskrant work together to ensure algorithmic recommendations are implemented in a fair way, is a good example.

Bringing people together

There are also human challenges. Participants emphasised the importance of an organisational structure that allows personalisation to succeed. Internally, this requires journalists and editors to engage with news personalisation and with the technologists who build the algorithms. A failure on the editorial side to communicate what is required can put the entire process of embedding editorial values into the algorithm at risk. To win over hesitant journalists and editors, participants suggested that news personalisation could first be introduced alongside the services editors and journalists already provide. This way, the technology could prove its worth without creating the risk that existing services are replaced with automated and inferior alternatives.

This article gives the views of the author and does not represent the position of the LSE Media Policy Project blog, nor of the London School of Economics and Political Science.
