As web moves to TV, child protection is key, but ISP-level filtering won't work

So, Digital and Culture Minister Ed Vaizey has backed MP Claire Perry's calls to create a 'firewall of Britain' to support the seemingly reasonable aim of protecting children from pornography (and potentially keeping adults from materials classified under the Obscene Publications Act). With the web now moving further towards the TV, the suggestion is not much of a surprise.

While it's tempting to dismiss it as an attempt by the government to filter the web so it can block a future Wikileaks - especially after Vaizey's network neutrality misfire - the discussion of how to deal with the difference between TV, where you can't say certain words before 9pm, and the web, which knows no limits, needs to take place. And as The Register - telling people to calm down - points out, Vaizey has suggested he doesn't want to legislate but wants to act as a broker between industry and ISPs.

Indeed, Vaizey was cautious when the issue was first raised by Conservative MP Claire Perry in the Commons on November 23rd, afraid of what he called a 'Twitter Storm', but in yesterday's Sunday Times he said he wanted to see the ISP industry introduce measures soon. To recap what Perry was calling for:

"I am asking for a change in regulation that would require all UK-based internet service providers to restrict universal access to pornographic material by implementing a simple opt-in system based on age verification."

Yet - as anyone who understands the web's structure will know - there is no 'simple opt-in system'. So aside from the censorship problems of blocking entire websites - spelt out well by the Guardian today, which points out that sites like Flickr, YouTube, Blogger and Tumblr all have adult channels - there is the practical fact that the kind of filtering Perry and Vaizey are calling for has never been proven to work. Indeed, the research below suggests it could slow down connections by up to 86% while wrongly blocking millions of child-safe websites and letting millions more child-unsafe websites flood through.

There are three ways to crudely filter content by age:

  • on websites themselves, putting responsibility on publishers; 
  • in browsers, putting responsibility on parents and those who control the web connection; 
  • and at ISP level, which requires the ISPs to track, review and filter all of their traffic through some automated process (a crude sketch of which follows below).
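To make that third option concrete, here's a deliberately minimal Python sketch of the kind of automated check an ISP-level filter has to run against every request. All the names and lists here are hypothetical, and real products are more elaborate, but the fundamental shape is the same: classify everything in-line, at wire speed, with no human judgement.

```python
# A deliberately crude sketch of ISP-level automated filtering.
# All domain and keyword lists here are hypothetical placeholders.
from urllib.parse import urlparse

BLOCKED_DOMAINS = {"example-adult-site.com"}   # hypothetical blocklist
SUSPECT_KEYWORDS = {"xxx", "porn", "adult"}    # hypothetical keyword list

def should_block(url: str, page_text: str) -> bool:
    """Return True if this request should be filtered out."""
    host = urlparse(url).hostname or ""

    # Domain lists under-block: new or foreign sites simply
    # aren't on the list yet.
    if host in BLOCKED_DOMAINS:
        return True

    # Keyword matching over-blocks: a sexual-health page, a news
    # report or an unlucky town name can trip the same test.
    text = page_text.lower()
    return any(word in text for word in SUSPECT_KEYWORDS)
```

Every page every subscriber requests has to pass through a check of roughly this shape, which is where both the misclassification rates and the speed penalties in the research below come from.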

Vaizey pointed out to Perry in the Commons debate that a UK adult website was recently prosecuted for not providing sufficient adult content warnings on its front page (warnings which in turn alert browser blockers like CyberNanny). Perry responded that this is no help with foreign websites, and suggested that most parents either don't know how to install a filter in the browser or never get round to it - "through technological ignorance, time pressure or inertia or for myriad other reasons, this filtering solution is not working" - so the responsibility should sit with the ISP.
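This is roughly how publisher-side warnings and browser-side blockers meet in the middle: sites self-label their front pages - the RTA ('Restricted To Adults') meta label is one such scheme - and a client-side filter checks for the label before rendering anything. A minimal sketch, treating the exact label string and the fetch-and-scan approach as illustrative:

```python
# Sketch of a browser-side check for a publisher's self-applied
# adult-content label. The RTA scheme is real; treat the exact
# string and this simple fetch-and-scan as illustrative only.
import urllib.request

RTA_LABEL = "RTA-5042-1996-1400-1577-RTA"

def page_is_labelled_adult(url: str) -> bool:
    """Fetch a page and look for a self-declared adult label."""
    with urllib.request.urlopen(url, timeout=10) as response:
        html = response.read().decode("utf-8", errors="replace")
    return RTA_LABEL in html
```

The obvious weakness - and Perry's point - is that this only works when the publisher cooperates and when someone has actually switched the client-side filter on.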

To avoid parents having to take responsibility for what their children have access to (unlike alcohol, cigarettes, DVDs or TV in the home), Perry says the ISP should play a kind of gatekeeper nanny, filtering all content unless someone tells their ISP they are an adult, while presumably auto-filtering anything else that looks like it might be illegal under the Obscene Publications Act. And here is where many online have started to panic - it would surely be just a matter of time before other kinds of content got added: first suicide forums, racist hate sites and terrorism-related content, then perhaps alleged copyright misuse. At that point the Internet would be a different place, subject to the whims of the government of the day. If a filter were in place it would be a challenge for MPs to avoid using it as a political tool, and in the long term it's hard to imagine them not blocking, say, the sites of protesters who 'may be planning violence' during student demonstrations, or sites which publish damaging leaked confidential documents.

But let's not get ahead of ourselves - right now, all that's happening is a meeting of ISPs and concerned parties, around a table, some time next month. And whatever the outcome of that, the simple issue is that ISP-level auto-filtering doesn't work. As well as slowing down web connections considerably, ISP-level filters fail to block what they're supposed to and succeed in blocking what they shouldn't.

It's a no-brainer - how could anything other than a well-informed human distinguish between, for instance, scenes from Lars von Trier's Antichrist or Mel Gibson's The Passion of the Christ and material currently banned under the Obscene Publications Act?

In one of the main studies in the area, conducted ahead of an attempt to implement a similar Australia-wide firewall, Australia's equivalent of OFCOM, the ACMA, researched the accuracy and impact of ISP-level filtering. The resulting report, "Closed Environment Testing of ISP-Level Internet Content Filtering", showed five big problems with ISP filtering:

  1. All filters tested had problems with under-blocking, allowing access to between 2% and 13% of material that they should have blocked;
  2. All filters tested had serious problems with over-blocking, wrongly blocking access to between 1.3% and 7.8% of the websites tested;
  3. One filter caused a 22% drop in speed even when it was not performing filtering;
  4. Only one of the six filters had an acceptable level of performance (a drop of 2% in a laboratory trial), the others causing drops in speed of between 21% and 86%;
  5. The most accurate filters were often the slowest.
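To see what those over-blocking rates mean at the scale of the whole web, here's the arithmetic behind the 3-18 million figure used below, as a quick Python sketch (the 231m total is the worldwide site count cited in the next paragraph):

```python
# Back-of-envelope: ACMA's over-blocking rates applied to the
# ~231 million websites worldwide (the figure cited below).
TOTAL_SITES = 231_000_000
OVERBLOCK_LOW, OVERBLOCK_HIGH = 0.013, 0.078  # 1.3% to 7.8%

low = int(TOTAL_SITES * OVERBLOCK_LOW)    # 3,003,000 sites
high = int(TOTAL_SITES * OVERBLOCK_HIGH)  # 18,018,000 sites
print(f"{low:,} to {high:,} sites wrongly blocked")
```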

If you were one of the 3-18 million websites inaccurately blocked by ISP filtering (based on 231m websites worldwide), who would you sue for loss of business? The government? The ISP? Meanwhile websites that auto-publish content, like Netribution, as well as web forums, would be at risk of being blocked automatically because of the actions of a single user - only the web giants who could afford constant 24-7 moderation would be able to survive.

The fact that children and teenagers have access online to images and video beyond my wildest imagination when I was that age has long troubled me, and a serious debate between ISPs, web and browser companies, content producers and end users is a good thing - especially as the web moves to the TV. It also troubled me, when working last year in a primary school which had a strict web firewall, that it offered unlimited access to YouTube - which is filled with adult content - but not to the website we'd built for the school, or to the Vimeo videos embedded in its pages (until we spent 30 minutes on the phone to a filtering help desk).

So it's an important issue, but what must be avoided - after the chaos of the Digital Economy Act - is an MP with a rudimentary technical understanding pushing through an invented 'solution' to a genuine problem that bears so little relationship to reality that it creates a heap of new problems - and alienates the people whose support would be needed for any solution to work.

Because the only solution I can imagine working is the crowd model, the - gulp - Big Society answer: a huge, federated, opt-in, crowd-built database run by parents, teachers and concerned people, ticking off websites and videos as safe for different ages, based on common guidelines. Browser and operating system makers could then hardwire a very simple way for parents to turn ON a filter for their children that shows nothing which isn't on the ever-growing list for that age group. Crude, but more dependable than any of the other controls - and at the same time it doesn't absolve parents of responsibility for what their child can do at home.
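As a sketch of how that might look inside a browser or operating system - assuming a locally synced copy of such a crowd-built list, with all names and the data layout hypothetical:

```python
# Sketch of the whitelist model: show nothing that the crowd-built,
# age-rated list hasn't approved. The list contents and layout here
# are hypothetical placeholders for a federated, synced database.

# domain -> minimum age the crowd has approved it for
APPROVED_FOR_AGE = {
    "bbc.co.uk": 3,
    "en.wikipedia.org": 11,
}

def allowed(domain: str, child_age: int) -> bool:
    """Default-deny: only crowd-approved, age-appropriate sites pass."""
    rating = APPROVED_FOR_AGE.get(domain)
    return rating is not None and child_age >= rating
```

The key design choice is default-deny: anything unrated is simply invisible until someone rates it. That's crude, but it fails safe - unlike the ISP-level filters above, which fail in both directions at once.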