
Blog Administrator

June 13th, 2013

John Carr’s Response to Cammaerts: To Protect Children, We Need More Internet Controls


Drawing distinctions between legal and illegal content hosted online is a difficult but important task for policymakers, businesses and the public. After a nudge from the prime minister, Maria Miller will next week bang together the heads of internet companies to implore them to deal more urgently with a range of internet safety issues. Internet safety expert John Carr OBE responds to Bart Cammaerts’s recent post on the topic.

The recently delivered verdicts in the trials of child murderers Stuart Hazell and Mark Bridger sparked an enormous and at times very heated debate. In evidence it emerged that both men had an interest in violent child sex abuse, both had collections of child abuse images on their computers and both had used internet search engines to find the pictures which in turn fed their horrific habits. The suggestion was that the images had played a part in the processes which drove these men to rape and murder two little girls. Inter alia, people therefore wanted to know what more could be done to eliminate such images altogether or, where deletion at source is not possible within a very short timescale, what could be done to reduce the volumes accessible in the UK, for example by blocking access to them. The Daily Mail was very vocal in this regard.

Moments of high emotion are rarely conducive to making the best possible decisions about almost anything, but equally it would be a mistake to think that just because the Daily Mail says something is wrong it must mean it is right, and vice versa. You have to deal with each argument on its merits. In his recent blog Bart Cammaerts got perilously close to abandoning that scholarly principle by saying, essentially, that if China and Iran use blocking, which they do, then we mustn’t. If only life were that simple.

First, let me be clear that if a particular type of imagery or content is legal then in my view there is no justification for it to be censored, in the sense of it being deleted and removed from the internet, or anywhere else, thus rendering it inaccessible to everyone.

It may be perfectly acceptable to apply restrictions to access according to the age of the would-be viewer, but that is not the issue here. You cannot “censor” that which ought not to be published in the first place because it is illegal. You might query whether or not it should be illegal, but that is a separate point to be pursued with the lawmakers, not the search engines or the hosting companies.

Thus if illegal content is found online then it is perfectly legitimate to seek its immediate deletion and, if that cannot be achieved sufficiently promptly, for example because the server housing it is in a different jurisdiction, one is entitled to take steps to block access to it as best one can. The UN appointed Frank La Rue as Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression. He made it clear in his report (para 32) that blocking measures can be justified if they address child pornography. More recently, Article 25(2) of the EU Directive on combating the sexual abuse and sexual exploitation of children and child pornography also expressly recognised blocking as a legitimate tactic that may be used in relation to child abuse images.

Bart Cammaerts is right to point to potential difficulties in those areas where countries have different laws about what is lawful content and what is not, but that is not the case here. I know of no jurisdiction where the publication or distribution of child abuse images via any medium is legal. But even if that were not the case, a difference in national laws presents no insurmountable obstacle. If an author releases a book early in the USA and I go to Amazon and try to buy it, I will be politely told I can’t because I am in the UK. I am “geo-located” and local rules are applied. This is trivially easy to do and practically every online business does it. So there already is a degree of commercially driven “Balkanization” of the internet going on. Consequently it is disingenuous to suggest that trying to find ways to apply local law in respect of child abuse images is uniquely bad or dangerous, or that it threatens any of the principles on which the internet was founded. The internet has to be based on common technical standards, but social policy drivers are an altogether different kettle of fish, unless you think the US Supreme Court should be the sole arbiter of what any of us do online anywhere and everywhere on the planet’s surface and possibly also in space.
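For readers curious about the mechanics, here is a minimal, purely illustrative sketch of the geo-location gating described above. The IP-to-country table and the availability rule are hypothetical stand-ins; real sites query commercial geo-IP databases, but the logic is no more complicated than this:

```python
# A toy sketch of geo-location gating. The prefix table and availability
# rule below are hypothetical placeholders; real services look the
# visitor's IP address up in a commercial geo-IP database.

GEOIP_PREFIXES = {
    "81.": "GB",  # hypothetical: treat 81.x addresses as UK
    "12.": "US",  # hypothetical: treat 12.x addresses as US
}

AVAILABLE_IN = {"US"}  # the book has only been released in the USA

def country_for_ip(ip: str) -> str:
    """Return a country code for an IP address (toy prefix match)."""
    for prefix, country in GEOIP_PREFIXES.items():
        if ip.startswith(prefix):
            return country
    return "UNKNOWN"

def can_buy(ip: str) -> bool:
    """Apply the local rule: serve the item only where it is released."""
    return country_for_ip(ip) in AVAILABLE_IN

print(can_buy("12.34.56.78"))  # True: US visitor
print(can_buy("81.2.69.160"))  # False: UK visitor, politely refused
```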

Child abuse images are important because typically a young person has been harmed in the making. We need to find the images as fast as we can in order to identify, locate, safeguard and get appropriate help to the child wherever he or she is and as quickly as possible. Crucially we also need to find the perpetrators in order to prevent them from re-offending, hurting the same or other children in the future.

Staying with this theme of preventing new acts of child abuse, even though the workings of the causal chain are imperfectly understood, there is unquestionably a correlation between engaging with child abuse images and being involved in the hands-on abuse of children. CEOP argue there is a raised probability that a person who engages with child abuse images will go on to commit contact offences against their own or other people’s children. CEOP think the ratio could be as high as 55%. That puts a substantial premium indeed on rendering any discovered child abuse images inaccessible, or limiting their accessibility as much as we can, again as quickly as we can. We need to reduce the chances of anyone new finding and perhaps getting involved with child abuse images for the first time. That initial offence is the one that will set too many on the road to perdition, putting more children in danger.

However, there is another extremely important reason for making the images inaccessible on as large a scale as possible as rapidly as possible. We owe it to the abused child. This takes us back to the earlier point about how best to help victims. It is bad enough that a young person has been raped or molested to create an image, but to know it is freely circulating on the internet can add greatly to the child’s distress, their sense of personal humiliation and loss of control. That could make achieving a satisfactory recovery a lot harder.

On a voluntary basis, for several years the internet industry in the UK has been taking a range of measures to combat online child abuse images. In particular, since 2004 they have been blocking access to URLs known to contain child abuse images. However, we now know that whatever we have been doing is not working well enough. We need to step up a gear.

In 1995, arguably the UK internet’s Year Zero, the British police knew of the existence of only 7,000 child abuse images in total. Following a series of freedom of information requests we learned that five local forces in England and Wales between them seized a whopping 26 million images in a two-year period ending in April 2012. One rough and ready estimate suggested this could mean that, in the same period, perhaps over 300 million images were seized by all the police forces in England and Wales. The police also told us that, at the last count, they knew of 50,000–60,000 individuals in the UK who had been involved in downloading child abuse images from the internet. Since records began, in no year have arrests in this space ever exceeded 2,500. Think about that. Do the math.
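Taking the author up on that invitation, a quick back-of-the-envelope calculation using only the figures quoted above:

```python
# Back-of-the-envelope arithmetic from the figures quoted above.
known_offenders_low, known_offenders_high = 50_000, 60_000
max_arrests_per_year = 2_500  # never exceeded in any year since records began

# Even if no new offenders ever appeared, clearing the existing backlog
# at the historic maximum arrest rate would take decades.
print(known_offenders_low / max_arrests_per_year)   # 20.0 years
print(known_offenders_high / max_arrests_per_year)  # 24.0 years
```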

These astonishing numbers derive principally from the explosion in the use of Peer2Peer networks by paedophiles and other persons exchanging child abuse images. URL blocking is no use here and, for whatever reason, the use of hashes seems not to have taken off.

The (sort of) good news is that the UK police, or at any rate CEOP, acknowledged openly that within their existing levels of resources, and probably within any reasonably foreseeable resource scenario in these times of austerity, they simply cannot cope with the volumes of images and offenders. On EuroNews on 12th June a senior officer at Interpol said ‘No police service in any country in Europe is on top of this problem.’ He could just as easily have said in any country in the world.

This is important because it means we can at last have an open and honest discussion about the realities of the situation not the myths. The emphasis, rightly, must shift towards looking at what more can be done at a technical level by the industry, particularly the larger, richer enterprises, to stem or reverse the tide. Here are some suggestions I would like to make:

  1. We need to find a way to do more to discourage the use of Peer2Peer networks for exchanging child abuse images. There are several possibilities.
  2. The wider deployment of hashing technologies such as Microsoft’s PhotoDNA could help here, and also on digital storage and picture sharing sites (a minimal sketch of the hash-matching idea follows this list).
  3. Too many companies serve Error 404 messages when they block access to a known child abuse web site. This lacks transparency and ought to be replaced with a more direct message. Agreeing the wording of the message, and what will trigger its display, are obviously key.
  4. How could deep packet inspection technologies and web crawlers be drawn into the fight?
  5. Using hashes, could email service providers scan email attachments in the way AOL does (at least in the USA)?
  6. We need to tighten search engine rules to eliminate returns which are very likely illegal under UK law because they appear to advertise child abuse images.
  7. The IWF should name and shame domestic and overseas hosting companies that are persistent offenders. Their international sister body, INHOPE, should do the same. And CEOP? How do they decide whom to name and shame? What public interest is served by not naming companies that are regularly having to be contacted by the police either in relation to illegal images or grooming?
  8. People contest the proposition that hardcore porn sites can lead fairly swiftly to child porn, e.g. through links to other places with names like “barely legal”, “teen sex” and “jailbait”.
  9. However, if the above proposition is true, and I am open to more research being carried out, it strengthens the argument for a default-on porn block. Whether this is done at the level of ISPs, WiFi providers or device manufacturers (better) or search engines (good, but second best and redundant if the former is in place) is a choice to be made.
  10. In this context default-on without age verification makes no sense. Blowing away anonymity is the key to keeping these guys in check. There would be privacy issues for the rest of us but, if there is a will, these can be tackled.
  11. Of course I also favour default-on for the conceptually entirely distinct reason that it would keep that stuff away from kids but I have no interest or desire to stop adults visiting porn sites if that interests them. Also, I wonder if we can find another name for the sort of sites I have in mind? The word “porn” conjures up Playboy centrefolds c.1980. Many of today’s sites are a million miles away from anything like that.
  12. Irrespective of 8, are we satisfied that many of the major adult porn sites the search engines provide access to are wholly and always legal anyway? The fact that the police may not go after them may be more to do with resources than anything else.
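To make the hash-matching idea in suggestions 2 and 5 concrete, here is a minimal illustrative sketch. It uses an exact cryptographic hash (SHA-256); PhotoDNA itself is a perceptual hash, designed so that resized or re-encoded copies of an image still match, which is why it is preferred in practice. The hash list here is a hypothetical placeholder for the lists a body such as the IWF would supply to industry:

```python
import hashlib

# Hypothetical placeholder: in practice the hash list would be supplied
# by a body such as the IWF, not hard-coded.
KNOWN_IMAGE_HASHES = {
    "9f2b61d5f8c0a1e47b3d2c6a8e0f4b19c7d5e3a2b1f0c9d8e7a6b5c4d3e2f1a0",
}

def file_sha256(path: str) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def is_known_image(path: str) -> bool:
    """Check an uploaded file or attachment against the known-hash list."""
    return file_sha256(path) in KNOWN_IMAGE_HASHES
```

A hosting site, storage service or email provider can run such a check at upload time and block or report matches without any person ever viewing the file, which is the privacy argument usually made for the technique.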

Note: This article gives the views of the author, and does not represent the position of the LSE Media Policy Project blog, nor of the London School of Economics.
