Filtering won't deliver for Aussie kids

The Labor Party went to the last election with a comprehensive plan for “cyber-safety” – that is, making the Internet safer for children. The centrepiece of this policy, and its most expensive component, is the controversial national ISP Internet filtering scheme, sometimes referred to as the “Conroy Curtain”. Plans for this scheme are advancing, and a live ISP trial was supposed to begin before Christmas. But does the scheme really hold water as a cyber-safety measure, or does it have a different motivation?

If this plan is implemented, it will saddle Australian Internet users with a two-tier system. The first tier, which is to be mandatory for all Australians, will involve a Government-controlled blacklist of prohibited sites that ISPs must block. The second tier, which is to be opt-out, involves a more aggressive filter intended to block all material “inappropriate” for children. However, only the second, optional filter was floated before the election. The mandatory filter, along with the new censorship powers behind it, was not an election promise, and has no mandate from the electorate. Nor should it be part of a “cyber-safety” policy – blocking access to illegal material, and other adult-oriented material deemed unwanted by the Government, is unlikely to protect children online in any measurable way.

Filtering comes with a high price tag. Tens of millions of taxpayer dollars are budgeted, and many more will be demanded of the Internet service providers required to install and maintain the technology. Since the consumer will ultimately end up footing the bill, Australians will want to know that the money is being spent to deliver significant benefits to Australian children. All the evidence, sadly, indicates that this will not be the case.

Although it is accepted that children do face some risks online, these risks are more complex than Government rhetoric sometimes indicates. Several studies, including the Government’s own research, indicate that so-called “content risks” – the risks associated with viewing unwanted content – come a distant third to “communication risks” and “e-security risks”. A report released last year by the ACMA, “Developments in Internet Filtering Technologies and Other Measures for Promoting Online Safety”, concluded that “the online risks have shifted from content risks associated with the use of static content to include communication risks associated with other users.” In fact, the ALP’s own policy document, “Labor’s Plan for Cyber-safety”, opens by highlighting reports of several growing risks online:

  • online identity theft;
  • cyber-bullying;
  • abuse of child avatars in virtual worlds;
  • computer addiction;
  • an increase in the number of registered profiles of sex offenders on MySpace;
  • online breaches of privacy such as the posting of sexual photos and sex videos by students.

None of these items falls into the category of “content risks”, yet a country-wide content filter is now the proposed solution. For this reason, many worry that a filter might actually do more harm than good by lulling parents into a false sense of security.

A comprehensive report just released by the Berkman Center for Internet and Society at Harvard University provides little justification for the Government’s increasingly untenable position. The report, produced by a task force acting on behalf of the attorneys-general of the United States, concludes that the risk levels are, in general, overblown. The authors state that governments should resist endorsing particular technological solutions: “Technology can play a helpful role, but there is no one technological solution or specific combination of technological solutions to the problem of online safety for minors.” Instead, “parental oversight, education, social services, law enforcement, and sound policies by social network sites and service providers” are the measures most likely to keep children safe. Tellingly, the task force received no submissions for ISP-level filtering products.

Filtering is not the low-hanging fruit of cyber-safety. The technology to effectively implement ISP filtering is wholly impractical, inasmuch as it can even be said to exist. So how do we explain a policy that outlines the risks children face, then allocates most funding to a scheme that does not address a single one of them? “Confusion” is the only answer that can be offered.

Are our children under threat? Should we take action – any action – to forestall this emergency? If anything, the research shows that the dangers of online activity are exaggerated. Where risks do exist, they stem from interactions with other people, not from exposure to unwanted web pages. An automated filter, no matter how expensive, cannot help mitigate these risks. Only education and supervision will do.

In short: Politics is no substitute for parenting.

The Berkman Center report is available here.