Martin Schmalzried, a Senior Policy and Advocacy Officer at the Confederation of Family Organisations in the European Union (COFACE), explores the power and control of private companies over internet access and usage. His piece follows a special workshop¹ convened by the Media Policy Project and Parenting for a Digital Future on ‘Families and “screen-time”: challenges of media self-regulation’ and the publication of a policy brief about families and “screen time”, authored by Alicia Blum-Ross and Sonia Livingstone. [Header image credit: Zenjazzygeek, CC BY 2.0]
The prevailing business model on the internet now relies on users’ time: the more time a user spends on a service, with content, on a game or in an app, the more revenue that user generates for the content provider or game developer, whether through exposure to advertising, the exploitation or sale of the data they generate, or in-app purchases of virtual goods.
The internet is also seen by many as a providential tool: a great equaliser that allows core human rights, including freedom of speech, to be realised, and that helps alleviate inequalities by providing access to education and to work and business opportunities.
Many stakeholders, including governments, companies and civil society, argue that we should simply apply a ‘laissez-faire’ philosophy to the internet, and that it will then increasingly become a space of freedom. Unsurprisingly, the most vocal proponents of this view are telecommunications companies, with their ‘let the market decide’ approach.
Quality content and editorial control
Unfortunately, the internet is undergoing a transformation driven by a combination of factors, including algorithms and online business models. Algorithms shape what people see online, enhancing the visibility of certain content in a social network’s newsfeed or on a search engine’s results page, which inevitably skews a user’s online experience.
While there is no way around algorithmic sorting, the methods used to sort content raise many concerns. Which criteria are used to ‘boost’ the visibility of some content (and to diminish that of other content)? Is there any consideration of ‘quality’, or of whether the content is ‘positive’?
Such considerations are especially important for services ‘designed’ for children, such as YouTube Kids: the videos prominently displayed as recommendations do not land there ‘by accident’. While there is no technical or practical barrier to proposing alternative sorting methods, there is certainly strong resistance from platforms, since sorting algorithms also allow them to optimise advertising content and increase user interactivity and stickiness. When searching for ‘cat’, for example, why would an unboxing video of a cat toy appear in first position instead of an educational video about cats?
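To make the concern concrete, here is a minimal, purely hypothetical sketch of how an engagement-driven sorting algorithm might rank results. The scoring weights, the ‘Video’ fields and the example videos are all invented for illustration; they do not describe any real platform’s system.

```python
# Hypothetical illustration: ranking videos for a 'cat' search purely by
# engagement and advertising value, with no weight given to educational quality.
from dataclasses import dataclass

@dataclass
class Video:
    title: str
    expected_watch_minutes: float  # how long users typically keep watching
    ad_revenue_per_view: float     # monetisation potential of the video
    educational_quality: float     # 0-1 score from some external rubric

def engagement_score(video: Video) -> float:
    # The weights are arbitrary; the point is that educational_quality
    # is simply not part of the formula.
    return 0.6 * video.expected_watch_minutes + 0.4 * video.ad_revenue_per_view

catalogue = [
    Video("Unboxing a cat toy", expected_watch_minutes=8.0,
          ad_revenue_per_view=0.05, educational_quality=0.1),
    Video("How cats see the world (documentary)", expected_watch_minutes=4.0,
          ad_revenue_per_view=0.01, educational_quality=0.9),
]

# The unboxing video ranks first, whatever its educational value.
for video in sorted(catalogue, key=engagement_score, reverse=True):
    print(f"{engagement_score(video):.2f}  {video.title}")
```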
Online business models that rely on users’ time to generate revenue also contribute to corrupting online content. Any content producer looking to make money will want to maximise the time users spend on their content or service. In extreme cases, this gives rise to clickbait techniques that rely on catchy pictures, videos or headlines to entice users to click and be redirected to pages filled with advertising. And whenever a content or service provider has to make an editorial choice, optimising viewer statistics, click-through rates, bounce rates or stickiness will be a top priority, often at the expense of ‘quality’.
Some would argue that quality goes hand in hand with a website’s stickiness or viewer statistics, but this is too often not the case. If a ‘funny cat’ video is likely to generate 5 million views and an educational video about nature only 500,000, which one will end up being produced? How will such a logic affect creativity? How would a modern-day Shakespeare fare under a business model that seeks first and foremost to appeal to a mass audience, as opposed to pursuing art for art’s sake?
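The arithmetic behind that choice is blunt. As a back-of-the-envelope sketch, assuming (purely for the sake of argument) that both videos earn the same invented advertising rate per thousand views:

```python
# Hypothetical comparison, assuming an identical (invented) advertising
# rate of 2 currency units per 1,000 views for both videos.
CPM = 2.0  # assumed revenue per 1,000 views; not a real figure

def ad_revenue(views: int, cpm: float = CPM) -> float:
    return views / 1000 * cpm

funny_cat_video = ad_revenue(5_000_000)  # 5 million views
nature_video = ad_revenue(500_000)       # 500,000 views

print(f"Funny cat video:   {funny_cat_video:,.0f}")  # 10,000
print(f"Educational video: {nature_video:,.0f}")     # 1,000
# A tenfold revenue gap, before 'quality' even enters the conversation.
```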
When seeking to minimise risks and enhance opportunities for children online, these realities cannot be ignored. If we want children to truly benefit from online opportunities, we need to take a closer look at who and what gets the spotlight on the internet, and at who is making the ‘editorial decisions’. Many would chuckle at this very idea, since the internet is now supposed to be a level playing field, with user-generated content taking over and ‘editorial decisions’ limited to the censorship of content that violates terms of service.
The fox and the crow all over again
And the solution to these problems is … education! If all parents had access to appropriate resources, advice and guidance about the online risks and opportunities, children’s online experiences would be optimal and all problems would be solved. No one would dare to voice such a claim, but some would come close. Private companies have many reasons to promote such an idea, as it is one of the most powerful arguments for delaying any policy or regulatory measures.
As a useful analogy, financial service providers would argue that financial literacy should be the focus of efforts to prevent over-indebtedness, and agro-business would argue that informing people about healthy eating habits should be the priority in tackling obesity and chronic disease, ignoring the fact that both industries run wildly counter-educational advertising campaigns that entice consumers to act impulsively (take out credit for the holiday you rightfully deserve) or to fulfil their need for social recognition through food (drink a certain fizzy drink and you’ll be popular with your friends), thereby increasing their earnings.
Private companies essentially resort to the fox’s tactic for getting the cheese out of the crow’s mouth: by flattering consumers and users as resourceful, smart and informed, they can more easily manipulate them into forfeiting control over their data or consenting to unfair terms of service, and extract large sums of money from them through unethical business models such as ‘free to play’ and in-app purchases.
The same logic prevails in advertising to children. Advertisers happily provide children with ‘educational material’ via programmes such as MediaSmart, flattering their intelligence and their resilience to advertising, only to overwhelm them with ever more insidious advertising techniques.
That being said, education is always a necessity in and of itself, regardless of any specific policy objective. But a balance must be struck between the need to educate, inform and empower, and the necessity to protect users and shape the environment so that it is as conducive as possible to positive experiences for all. Education should never be a substitute for necessary policy and regulatory measures.
A provocative metaphor: should we put more effort into training people to avoid the mines in a minefield, or into demining the field?
A simple yet complex question
So what kind of internet do we want for our children? The internet is said to belong to ‘no one’, and in the eyes of many, it is the ‘ultimate’ incarnation of a public good. It has been and still is at risk of government control, but it is arguably even more at risk of falling under the control of a select few dominant private companies.
Notes
¹ A summary of the related event on families and ‘screen time’ has been published by the Media Policy Project and Parenting for a Digital Future, and is available to read here.