Many journalists use the word “algorithm”, but do they really understand it? An algorithm’s real-time feedback and tracking offer lessons for enhancing editorial decisions, and journalism more generally, but the way editorial decisions are currently made may need to change and become more transparent.
Building a Content Algorithm
Let’s examine the Facebook feed to understand an algorithm. It’s a powerful player in determining what content people do and do not read. It’s actually a group of algorithms, but for simplicity let’s treat it as one algorithm and let’s also assume the goals of the algorithm have already been decided.
The factors Facebook uses to decide what to show you in your feed are based on actions you take. They range from how long you look at a particular story to which posts you ‘like’. The team decides which actions to focus on, the ones they feel will get the results they want. Then they give each action a weight: some actions by the user mean more than others.
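The idea of weighted actions can be sketched in a few lines of code. This is a minimal illustration only: the action names and weight values below are invented for the sketch, not Facebook’s actual factors.

```python
# Hypothetical sketch: score a post for a user's feed by weighting their
# actions. Action names and weights are invented examples, not Facebook's
# real factors.
WEIGHTS = {
    "liked_author_before": 3.0,   # strong signal of interest
    "commented": 4.0,             # commenting means more than liking
    "seconds_viewed": 0.1,        # dwell time, lightly weighted per second
    "hid_similar_post": -5.0,     # negative signal
}

def score_post(actions):
    """actions: dict of action name -> count (or seconds, for dwell time)."""
    return sum(WEIGHTS.get(name, 0.0) * value for name, value in actions.items())

def rank_feed(candidates):
    """Order a user's candidate posts from highest to lowest score."""
    return sorted(candidates, key=score_post, reverse=True)
```

Changing a weight in the table is exactly the kind of tweak the team makes between drafts: the scoring code stays the same while the behaviour of the feed shifts.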
The programmers on the team then get to work coding up an initial version, a draft. It’s an iterative process: each ‘draft’ algorithm is tested on increasingly larger groups of users until it’s widely released.
Based on feedback from the initial group of users, both real-time analytics and human comments, it becomes apparent that some actions have too little or too much weight. The team may try adding new actions into the calculation or cutting others. Changes often have unintended consequences; it’s a series of trade-offs to ensure the algorithm works in the real world.
It’s not all automated magic: manual overrides exist, even if the goal is to eliminate subjective human intervention. But that’s not the sexy artificial-intelligence, data-driven story a company like Facebook or Google wants to tell. For example, Facebook at one point discovered that some users were treating the ‘hide’ button like the delete or archive button in an email inbox: 5% of users were doing 85% of the hiding. Facebook doesn’t normally make custom tweaks for users, but it made an exception for these ‘superhiders’.
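The superhider exception could be sketched as a manual override layered on top of the automated signal. Again a hypothetical illustration: the cutoff and the weight values are invented, not Facebook’s actual rule.

```python
# Hypothetical sketch of a manual override: if a small set of users
# produces most of the 'hide' actions, stop treating their hides as a
# strong negative signal. The cutoff and weights are invented.

def is_superhider(user_hides, total_hides, total_users):
    """Flag users whose hide count is far above the per-user average."""
    if total_hides == 0:
        return False
    average = total_hides / total_users
    return user_hides > 10 * average  # arbitrary cutoff for the sketch

def hide_weight(user_hides, total_hides, total_users):
    """Discount the hide signal for superhiders instead of trusting it."""
    if is_superhider(user_hides, total_hides, total_users):
        return -0.5   # treat their hide like 'archive': a weak signal
    return -5.0       # normal users: hide is a strong negative signal
```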
What Editorial Decision Making Can Learn From Algorithms
In cyberspace, algorithms possess the benefit of real-time feedback loops. At a newspaper, the public makes comments (nowadays electronically) after the editorial decisions have been widely released (printed or put online). This feedback will be judged and acted on after some time, or potentially never if resources or desire do not permit. With algorithms, every action (click, eyeball, comment) can be incorporated back into the system.
Editorial content requires the nuance of human judgement, something machines continue to struggle with, so it would seem newspapers have an advantage. But they need to evolve to take advantage of the opportunity of real-time feedback.
To a reader, the editorial board is the figurehead of a newspaper’s editorial content, writing in the name of the paper; it is more transparent than the editorial meetings where other content decisions are made. A former editorial writer at a leading Canadian paper described his experience in the 1990s:
“At the time I was on the board there were somewhere between two and three editorials written a day. On one week near Christmas some editorial writers were on vacation, others were sick and another had other functions he fulfilled. As a result I wrote 12 editorials that week, editorials that I would suggest to you had little premeditation other than filling up the available space.”
Editorials written without premeditation, yet sold as wise and thought out. The web is calling this bluff, and the result is an erosion of brand authority, a bedrock of traditional media’s advantage.
It’s difficult to write an editorial on a topic like constitutional law without someone who understands constitutional law involved. Yet it happens. Having a subject-matter expert centrally involved each time would pay off over time. Fewer editorials of higher quality could be a path back to relevance while remaining feasible.
Traditional media doesn’t place the same value on real-time feedback. It’s a lot easier to hold on to the ‘father knows best’ approach, even at times when it clearly is not enough. But tracking (analytics) is the easy part; interpreting it, and getting management buy-in, is the hard part. It can challenge existing ways of working and expose that the team lacks these skills.
Opening Up The Editorial Board
Editorial boards have historically been dominated by white, urban, middle-aged, university-educated men. Making it transparent who is on the board, and why, will help build authority.
The New York Times has a page listing who is on its editorial board, but it was last updated in 2013. The Globe and Mail’s is from 2012. The ‘why’ is not explored; both pages are limited to standard bios.
The Guardian has a promising initiative with its Public Leaders Editorial Board: nine leaders selected to provide expert insight over the next year, working with the Guardian’s editorial team to ensure it tackles the most important issues facing public services.
Tech companies are not beacons of transparency and generally avoid taking public positions on issues. But whenever a major change is made to an algorithm, these companies feel compelled to blog about it and answer forum questions. The openness was initially fear-driven, and it may still be: better to say something, balancing proprietary secrets, than wait for a potential backlash.
Newspapers can learn from the tech sector that opening up is what web users expect. In exchange, users are willing to give you their time and data (indirect value) and sometimes money (direct value). This engagement with, and trust in, the brand brings long-term monetary gain.
Software engineers use version control: the ability to go back to previous versions, a comprehensive archive that tracks all changes over time. Most newspapers have the editorial ‘data’ in their archives, but it remains in an early web format, accessible online but not searchable within the context of the current article. Using version control for editorials would mean readers could see all past editorial positions on a subject, ideally with date and relevance filters. With this added layer, editorials are presented with greater context and transparency. It’s a way to leverage the weight of an organization’s brand and its long history, something no BuzzFeed or Huffington Post can match.
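A minimal sketch of what such an editorial archive could look like, with a date filter. The data model, subjects and positions here are invented for illustration; a real system would sit on the paper’s actual archive.

```python
# Hypothetical sketch: record every editorial position on a subject with
# a date, and let readers filter past positions by date range. The data
# is invented for illustration.
from datetime import date

positions = [
    ("constitutional-law", date(1998, 6, 1), "Position A"),
    ("constitutional-law", date(2005, 3, 12), "Position B"),
    ("press-freedom", date(2010, 9, 30), "Position C"),
]

def history(subject, since=None, until=None):
    """All past editorial positions on a subject, oldest first."""
    matches = [
        (d, text) for (s, d, text) in positions
        if s == subject
        and (since is None or d >= since)
        and (until is None or d <= until)
    ]
    return sorted(matches)
```

The point of the sketch is the reader-facing query: every past position on a subject is one lookup away, rather than buried in a flat archive.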
Algorithms produce real-time feedback whenever they are used (a data trail). At a minimum, media organisations have access to data on what actions are taking place on their site via Google Analytics.
A start is to make those reports widely available to staff, though this makes many managers uncomfortable: with suggestions comes critical feedback. Journalists also have access to a real-time form of data in the comments. But not all journalists read them, whether for lack of time, frustration with trolls or lack of belief that they make a difference. Commenting is being removed from sites in part because many journalists question its underlying value. Yet comments are one of the best sources of real-time feedback.
A tech company has data in its DNA: customer feedback in all its forms is seen as potentially useful. Google and Facebook channel feedback through checkboxes and pull-down menus while also investing in forums and Q&As. Most companies are not tech giants; the norm is encouraging you to use the FAQs and pull-down menus while also allowing you to write an email or use a live chat. The best ones use the feedback both to decide what customers want and to prioritize.
Eliminating commenting to control costs and liability comes at a long-term cost: you learn less about your users and weaken their relationship with the brand. Closing comments after a certain period, or not opening them on some sections, could be a better way to find a balance.
The algorithmic approach is driving information in the early 21st century. It’s not an engineering question or a journalism question. Those able to bridge the two worlds will control content.
Making and Measuring News – Alison Powell