The slippery slope of Facebook regulation

Politicians are using the recent 'Aboriginal Memes' controversy on Facebook to push for regulation. But is this really the best approach?

A Facebook page called “Aboriginal Memes” caused controversy last week with both the public and politicians demanding that Facebook take it down.

The page depicted meme images – photos of Aboriginal men with captions referencing drugs, alcohol and rape, along with other racial slurs. Facebook reviewed the page and eventually removed it for violating its “Statement of Rights and Responsibilities”. The pressure to remove the page came from a GetUp campaign, which attracted nearly 12,000 signatures, a Facebook Group page “Make Facebook Shut Down Aboriginal Memes”, and from comments made by politicians Tony Abbott and Stephen Conroy.

Abbott stopped short of asking that the page be taken down. He did, however, make reference to a Coalition taskforce that will review social media and include “stronger take down powers for the regulator” (here referring to the Australian Communications and Media Authority). The review may be part of the Coalition’s review of online safety for children, which aims to find ways to “protect … children from the pitfalls of the internet and from the risks of social media”.

The review insists that it is not about cyber-censorship but, somewhat bizarrely, about “cyber-privacy”. And there is the rub. Abbott in particular has been a defender of free speech, insisting that any additional media regulation would be used to enforce political correctness and suppress “inconvenient truths”. Here he was left in the awkward position of deciding whether the Aboriginal Memes page was an “inconvenient truth” or simply racist.

Abbott’s dilemma is really at the heart of all discussions about the acceptability of any speech – online or off. On one hand, people will argue that no platform should be given to those advocating racist, sexist or other discriminatory views. This is countered by the argument that this is a matter of free speech, and that censorship of any kind puts you on a slippery slope to accepting censorship of a whole range of other issues, some of which you might find far less acceptable.

The argument is also about who gets to decide between the two. Certainly governments and their civil servants are not a favoured option, even though it is statutory organisations like the Australian Human Rights Commission that currently adjudicate on racial discrimination issues. The Australian Government’s past attempts to introduce mandatory Internet filtering have met with large-scale and vociferous opposition.

In the case of social media, there is another interesting dimension: at its core, social media is about the voice and views of the group. As such, social media are perceived to operate in a relatively democratic and self-regulating way.

Whether this is really true, and whether it represents an effective way of determining what gets taken down, depends on your view of the wisdom of crowds. Some 4,000 people “liked” the original Aboriginal Memes page, and many of them would argue that there was nothing wrong with it because it was supposed to be “funny”.

Regulation of the Internet, whether it comes from governments, companies or the crowd, is almost impossible to enforce. Even though the original Facebook page was taken down, it was back up at the time of writing. Copy-cat pages of the original site are also up on Facebook and the images can be found through Google search, on the original Meme Generator site and on other Facebook pages.

An aspect of these sorts of pages that hasn’t been discussed is that their primary aim is not necessarily to promote a racist message. This is as much about attention-seeking through “trolling”: provoking for the “lulz” to get a reaction, rather than out of any specific interest in issues concerning Aboriginal people. It is the same phenomenon seen on sites like 4chan, where any taboo is fair game as long as it provokes a reaction.

When it comes to social media, we exert a choice over whom we follow and whose pages we decide to look at. In the case of email, we have intelligent and adaptive software that filters offensive spam, shaped in part by how we as individuals deal with our own messages. Sites like Facebook, Twitter and Google will eventually incorporate this sort of functionality, allowing individuals to decide how much or how little they want to see.
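To illustrate the kind of adaptive filtering described above, here is a minimal sketch in Python. It is an assumption for illustration only – not the actual system used by any email provider or social network – showing how a filter can learn from an individual user’s decisions to hide or keep messages, then score new content accordingly.

```python
from collections import Counter


class AdaptiveFilter:
    """A toy adaptive content filter (illustrative only).

    It counts words from messages the user has hidden versus kept,
    and scores new messages by how much 'hidden' evidence they carry.
    """

    def __init__(self):
        self.unwanted = Counter()  # words from messages the user hid
        self.wanted = Counter()    # words from messages the user kept

    def record(self, message: str, hidden: bool) -> None:
        """Learn from an individual user's decision about one message."""
        words = message.lower().split()
        (self.unwanted if hidden else self.wanted).update(words)

    def score(self, message: str) -> float:
        """Return a score in (0, 1); higher means more likely unwanted."""
        words = message.lower().split()
        # Laplace-smoothed ratio of 'hidden' evidence to total evidence.
        bad = sum(self.unwanted[w] for w in words) + 1
        good = sum(self.wanted[w] for w in words) + 1
        return bad / (bad + good)


f = AdaptiveFilter()
f.record("cheap pills buy now", hidden=True)
f.record("meeting notes for tuesday", hidden=False)
print(f.score("buy cheap pills"))   # high: words seen in hidden messages
print(f.score("tuesday meeting"))   # low: words seen in kept messages
```

The point of the sketch is that the threshold is the user’s own behaviour, not a central rule book: each person’s filter converges on what that person chooses not to see.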

Whether this leaves us wiser or happier or even safer, only time will tell.

David Glance is a Director at the Centre for Software Practice at The University of Western Australia