There has been much talk of trolls and cyber-bullies in the media in the past week as a result of the hospitalisation of New Zealand TV personality Charlotte Dawson. This followed an intense onslaught from a number of people on Twitter insulting and abusing her.
The reasons for this ranged from general dislike of Dawson to a supposed defence of freedom of speech, after Dawson identified one of the abusers as Monash University employee Tanya Heti, who was later suspended from her job.
Since the incident, there has been much discussion about the nature of internet trolls, often equating what they do with cyber-bullying. There has also been discussion of strategies for dealing with trolls, including the possibility of identifying those responsible and charging them under Australian criminal law.
As usual with such reports, a complicated set of interactions between a number of people has been summarised in a relatively simplistic way. Convenient as that is as a journalistic device, it hides the true complexity of the interactions. It also obscures the fact that most of the people acting uncivilly and hurling the worst abuse were, judging by their non-Charlotte-Dawson-related tweets, relatively respectable, “normal” people in real life. These were not people who “craved attention”, as some have claimed.
What defines a troll?
Of course, there are trolls on the internet, and there have been since the very first email lists, news groups and forums. Shakespeare summed up the characteristics of a troll when he wrote:
“Marry, sir, they have committed false report; moreover they have spoken untruths; secondarily, they are slanders; sixth and lastly, they have belied a lady; thirdly, they have verified unjust things; and, to conclude, they are lying knaves”
There is a range of definitions of the term troll in its current sense, but they all point to someone who intentionally acts to disrupt or provoke in order to cause an emotional response from their chosen victims. Trolls use a variety of techniques to do this, from masquerading as a member of a particular group to fabricating stories to mislead and derail a comment thread or topic on social media.
This is very different from someone hurling an insult at another person on Twitter – even when that insult is as dreadful as suggesting suicide. That is the behaviour of seemingly normal people driven to say vile and stupid things, usually in moments of anger; they would not normally behave this way in real life. Online, however, people’s inhibitions are diminished through a process called the “online disinhibition effect”.
Psychologist John Suler has described six factors that contribute to people acting without fear of consequences, essentially voicing whatever they are thinking. Although factors such as anonymity play a role, it is important to note that this occurs even when individuals are clearly identifiable. What matters is that they think they are invisible, and that they are far enough removed to believe there are unlikely to be any immediate physical consequences. Another important factor is that, online, authority is diminished. Celebrities on Twitter or Facebook become ordinary and equal, and the elements of respect that might otherwise influence how people interact with them disappear.
Dealing with abuse
It is the fact that people who show “toxic disinhibition” are essentially ordinary, rather than inherently bad, that makes it all the harder to devise ways of preventing this behaviour or dealing with it when it happens. Re-tweeting an abusive tweet is likely to make the situation worse if the tweet was sent in anger, because it will be seen as a further provocation. Against a genuine troll, re-tweeting may (though probably will not) stop the abuse, because the troll has already achieved the goal of provoking a reaction – and re-tweeting to a large number of followers makes the victory all the sweeter. Re-tweeting can also escalate matters when followers decide to join the fray.
Dealing with an abusive tweeter by arguing back is also unlikely to help. This is especially the case when the argument is about politics or some ideology. Online disinhibition tends to reduce the quality of argument to that of the school playground, only much more vicious and conducted in 140-character exchanges.
In the past, moderators have proved the most effective solution for dealing with trolls and generally abusive people. Ignoring cruel comments is very hard, so in the absence of external moderators, the only solution is to block people right at the start.
Twitter’s tools for doing this are effective. Ultimately, though, social networks will have to find an automated way of filtering messages, in the same way our email inboxes are protected from spam. So-called “sentiment analysis” will become effective enough to detect unacceptable messages and divert them into a spam-style folder, should anyone be brave enough to want to look.
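To make the spam-filter analogy concrete, here is a minimal sketch of how such message filtering might work. This is purely illustrative: real sentiment analysis uses trained statistical classifiers, whereas the keyword list, the `is_abusive` function and the threshold below are all simplifying assumptions invented for this example.

```python
# Hypothetical keyword-based abuse filter, in the spirit of an email
# spam filter. Real systems would use a trained sentiment classifier;
# the word list and threshold here are illustrative assumptions only.

ABUSIVE_TERMS = {"idiot", "loser", "pathetic"}  # illustrative list

def is_abusive(message: str, threshold: int = 1) -> bool:
    """Flag a message containing at least `threshold` abusive terms."""
    text = message.lower()
    hits = sum(1 for term in ABUSIVE_TERMS if term in text)
    return hits >= threshold

def filter_messages(messages):
    """Split messages into a visible inbox and a hidden abuse folder."""
    inbox, abuse_folder = [], []
    for msg in messages:
        (abuse_folder if is_abusive(msg) else inbox).append(msg)
    return inbox, abuse_folder
```

The key design point the article gestures at is the last step: flagged messages are not deleted but quarantined, so the recipient never sees them unless they choose to look.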
David Glance is a Director at the Centre for Software Practice at The University of Western Australia