
Is any progress being made against online anti-social behaviour? | Opinion

Microsoft announced a new strikes system for policing online behaviour – but for the most part, it feels like we're treading water on this increasingly damaging problem

If there's one topic that's remained resolutely evergreen in the games business for the past couple of decades, it's the topic of online anti-social behaviour and how best to police it.

Since the earliest days of online gaming, a minority of players have ruined the experience for everyone else, and there has been an ongoing discussion about how best to tackle that problem.

In the early days, that discussion was often somewhat philosophical in nature – questions of how to balance some high-minded ideas about freedom of expression against creating a welcoming environment for all players were debated by the operators of Quakeworld servers and gaming IRC channels at a length that would have given the founders of actual nations pause.

Today, the philosophical discussions have mostly been overridden by commercial realities – online anti-social behaviour costs money. It drives existing players away, makes it harder to acquire new players, makes word-of-mouth effects negative, and damages per-user revenues. Tackling it is a commercial imperative.

And yet, it's hard to pinpoint much actual progress in that regard in recent years. From the perspective of the average gamer, the issues around abusive behaviour and hate speech in online games seem to be getting worse, not better, with the only real improvement coming from the number of games which have effectively thrown up their hands in despair and disabled most communication features by default.

The aggressively unpleasant nature of online gaming communications has become a widespread public meme, a powerfully negative reputation that has spread well beyond the world of online gaming itself.

Any time a public figure (be they a celebrity or simply social media's latest unwilling whipping boy) starts blocking people or makes their accounts private, there's always a sneering refrain of "they wouldn't last two minutes in my Call of Duty lobbies!" – which, to any remotely normal human being following along, doesn't make the person in question look thin-skinned, but rather makes Call of Duty's online games sound like an absolutely disastrous hellhole.

The urgency of tackling this problem stands in sharp contrast to the lack of real progress on the issue. There's an inexorable demographic factor in play here – gamers are getting older, and the average 40 or 50-year-old is far less inclined to tolerate spending their leisure time having slurs screamed at them over a headset by strangers than they might have been when they were a few decades younger.

Moreover, those gamers have kids now, and those kids are getting into their teens – and the Gen X and Millennial parents who know perfectly well what kind of awful behaviour, unchecked racism, misogyny, and homophobia happens in game lobbies are far less likely to give their kids unfettered access to those games than their own well-intentioned but largely unaware parents were. That essentially means that games which can't or won't control anti-social behaviour online risk losing large swathes of two major demographic groups, as well as reducing interest in online gaming overall – a pretty major incentive to try to get this issue under control.

This week's new development in the field is the announcement of a new enforcement system for Xbox, which will give players "strikes" for various kinds of anti-social behaviour ranging from offensive GamerTags to hate speech in chat.

The strikes on someone's account will be visible to them, and each number of strikes has a specific length of suspension associated with it, the objective being to ensure that players always have absolute clarity on the status of their account – what they did wrong, why and for how long they've been suspended, and what will happen next if the behaviour is not modified.
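
To make those mechanics concrete, here's a minimal sketch of what an escalating strikes-and-suspensions scheme of this general shape could look like. The Account class, and the thresholds and durations in the schedule, are illustrative assumptions for this column, not Microsoft's actual enforcement table.

```python
from dataclasses import dataclass, field
from datetime import timedelta

# Illustrative schedule only: these thresholds and durations are assumptions
# made for this sketch, not Microsoft's published enforcement rules.
SUSPENSION_SCHEDULE = {
    2: timedelta(days=1),
    4: timedelta(days=7),
    6: timedelta(days=14),
    8: timedelta(days=365),
}

@dataclass
class Account:
    gamertag: str
    strikes: list[str] = field(default_factory=list)  # reason recorded per strike

    def add_strike(self, reason: str) -> timedelta | None:
        """Record a strike and return the suspension it triggers, if any."""
        self.strikes.append(reason)
        return SUSPENSION_SCHEDULE.get(len(self.strikes))

    def status(self) -> str:
        """Plain-language summary, so a player always knows where they stand."""
        return f"{self.gamertag}: {len(self.strikes)} strike(s) on record: {self.strikes}"
```

The status() call is where the transparency argument lives: a player can always see the tally, the reasons behind it, and, from the schedule, what the next strike would cost them.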

This is a well-intentioned system, and in some regards a fine idea. It assumes that one reason for anti-social behaviour is that perpetrators don't actually understand what the rules are or what they're being punished for – so by making enforcement transparent and consistent, players will know what isn't acceptable and will be able to avoid those behaviours.

The issue isn't transparency or consistency, it's the existence of a minority of players for whom being abusive, causing upset, and wrecking the enjoyment of other players is actually a game they're playing

There's a decency to that kind of thinking that's quite laudable, and perhaps for some people it really is true. There are probably instances where people don't fully realise that behaviour acceptable within a small group of friends is wildly out of line in a public situation, for example.

In general, though, trolls and abusive players know perfectly well that what they're doing is against the rules; the issue isn't transparency or consistency, it's the existence of a minority of players for whom being abusive, causing upset, and wrecking the enjoyment of other players is actually a game they're playing, and the rules are simply there to be skirted or transgressed.

One reason why policing anti-social online behaviour is so hard is that there are actually a lot of different behaviours encompassed within this broad category. There are players who do anti-social things within the context of the game itself – griefing behaviours which often target their own teammates. There are so-called "heated gamer moments," which generally involve someone with an anger management problem screaming slurs into their microphone or hammering them into text chat when they are frustrated at how a game is going.

In general, I'd argue that game companies are far, far too forgiving of this kind of behaviour – right down to the use of a "boys will be boys" term like "heated gamer moments" to describe them – since tolerance of these abusive outbursts really sets the tone and tenor of communications in games and emboldens other players to push the boundaries of their behaviour even further.

This leads on to the more complex types of trolling, undertaken by people who care far less about playing the game than they do about extracting "lols" from its hapless players. A common one lately has been to exploit the both-parties ban policies of some online platforms, whereby both participants in an abusive conversation receive an equal punishment. A troll will aim to rile up another player into responding to them in anger, then report the conversation (which they initiated!) as abusive so that both players are suspended.

The only truly effective countermeasures are either prohibitively expensive in terms of labour or are ones which place constraints on the design of the games themselves

For the troll, that's the whole objective; they don't care about the actual game or their own suspension, having "won" their own personal game by getting a genuine player suspended.
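
A toy version of that exploit, purely for illustration: under a naive both-parties policy, anyone whose messages get flagged in a reported conversation is sanctioned, which is exactly what the troll is counting on. The Message record and the initiation-aware alternative below are assumptions made for this sketch, not any platform's real reporting logic.

```python
from dataclasses import dataclass

@dataclass
class Message:
    sender: str
    flagged: bool      # marked abusive by filters or by the report itself
    timestamp: float

def naive_both_parties(conversation: list[Message]) -> set[str]:
    """The policy the troll exploits: everyone who sent a flagged message is
    sanctioned, including the baited victim who eventually snapped."""
    return {m.sender for m in conversation if m.flagged}

def initiation_aware(conversation: list[Message]) -> set[str]:
    """A hedged alternative: sanction whoever was abusive first, rather than treating
    the person who was wound up into responding as equally culpable. Purely
    illustrative; a real system would need far more context than a timestamp."""
    flagged = sorted((m for m in conversation if m.flagged), key=lambda m: m.timestamp)
    return {flagged[0].sender} if flagged else set()
```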

The only truly effective countermeasures against these behaviours thus far are either prohibitively expensive in terms of labour – involving having many, many moderators and GMs overseeing online games – or are ones which place constraints on the design of the games themselves.

You can disable many aspects of in-game comms and generally end up with a much more pleasant experience for everyone; but this restricts the ability to pose complex coordination challenges to players, since they can't talk to one another easily.

Despite that limitation, this is already happening to some extent – online game designers increasingly have to bear in mind that much of their player base will have pre-emptively opted out of any form of voice chat. Comms wheels and other such systems have never been a truly effective replacement for free-form communication.

There is of course the option to make comms private, so people are only talking to their friends, but in many cases this excludes players even more, because it's quite common for adult gamers in particular to have major difficulty in assembling enough friends online at the same time to tackle a big raid or challenge in a game.

AI presents some potential opportunities for rapid enforcement and filtering, but these are shallow solutions to quite a deep and complex problem, and there's a huge risk of companies grasping at AI as a holy grail and causing more problems than they solve.

You could for example employ AI systems to detect slurs in voice chat, much as filters already do in text chat. Using AI to instantly mute someone who starts using slurs would clear the air a bit for everyone, but determined trolls would quickly find workarounds. You don't have to be using a common slur to be extremely abusive towards someone, and banning specific slurs just encourages trolls to get creative.
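
Sketched out, that kind of filter is almost trivially simple, which is part of the problem. The transcribe() stand-in and the placeholder denylist below are assumptions, not any real platform's API or word list.

```python
import re

# Placeholder tokens standing in for a real denylist; not an actual word list.
DENYLIST = {"example_slur_a", "example_slur_b"}

def transcribe(audio_chunk: bytes) -> str:
    """Stand-in for a speech-to-text service; assumed here, not a real API."""
    raise NotImplementedError

def should_mute(audio_chunk: bytes) -> bool:
    """Mute a speaker whose latest utterance contains a denylisted term.
    Easy to build, and just as easy for a determined troll to talk around,
    since abuse does not require any particular word."""
    words = re.findall(r"[a-z']+", transcribe(audio_chunk).lower())
    return any(word in DENYLIST for word in words)
```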

It's not hard to imagine a situation where a troll sends outrageously horrible (but not technically slur- or swear-containing) messages to a player who eventually snaps and responds with "fuck off," only for an AI system to ban the potty-mouthed victim while letting the troll move on to their next target with impunity.

Training AI systems to recognise these more complex patterns of abuse is possible but tricky. For this to work with any degree of precision, it would have to focus on patterns of behaviour rather than simply flagging certain phrases as "abusive" – identifying an overall abusive character to a person's conduct, rather than labelling any one comment or conversation as such.

Training AI systems to recognise these more complex patterns of abuse is possible but tricky
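
One way to picture the "patterns rather than phrases" approach is something like the rolling behaviour score below. The per-message classifier is a stand-in, and the window and threshold are arbitrary assumptions, so this is a sketch of the shape of the idea rather than a workable moderation system.

```python
from collections import defaultdict, deque

def message_toxicity(text: str) -> float:
    """Stand-in for a per-message classifier (0.0 benign to 1.0 clearly abusive);
    assumed for this sketch rather than implemented."""
    raise NotImplementedError

class BehaviourTracker:
    """Flags players based on sustained patterns of behaviour across a rolling
    window of messages, rather than on any single comment."""

    def __init__(self, window: int = 50, threshold: float = 0.6, min_messages: int = 10):
        self.threshold = threshold
        self.min_messages = min_messages
        self.history: dict[str, deque] = defaultdict(lambda: deque(maxlen=window))

    def record(self, player: str, text: str) -> bool:
        """Return True when the player's overall pattern warrants human review:
        a consistently high average score, not one angry outburst after being baited."""
        scores = self.history[player]
        scores.append(message_toxicity(text))
        return len(scores) >= self.min_messages and sum(scores) / len(scores) > self.threshold
```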

I suspect that the coming years are going to see a lot of failed experiments in this field, with AI being used to label hate speech or abuse in ways that often end up creating more work for long-suffering moderators and safety teams, rather than lightening the load. An even bigger risk is that companies will wrongly think that AI is a cost-saver for their moderation and policing efforts, and that an AI solution can replace human oversight. Hardened trolls who already delight in finding loopholes and technicalities for their anti-social behaviours will run rings around unsupervised AI systems.

It's cold comfort, perhaps, that game companies struggling with this issue aren't alone; social networks have also seen a disappointing lack of progress on solving this problem (and even more worrying issues like child sexual abuse material or underage grooming).

Xbox's new approach of making the whole system more transparent is a welcome innovation; at the very least it makes clear that Microsoft is taking this issue seriously, and perhaps it will even be a step in the right direction. This issue is a problem for the entire industry, though, and ultimately the most effective solutions will probably require coordination among companies rather than a piecemeal approach.

There's an enormous amount of commercial interest in play here. The reputation of online gaming as a whole, and the ability of individual games to attract and retain players and to monetise them, are both deeply affected by the proliferation of anti-social behaviour.

The perpetrators of the worst abuses are a tiny minority, but the tolerance of more "casual" abusive behaviour by a larger minority clearly feeds the beast – and even with the advent of increasingly smart AI systems, the solutions to that are neither obvious nor easy, and will certainly not be cheap.


Rob Fahey is a former editor of GamesIndustry.biz who spent several years living in Japan and probably still has a mint condition Dreamcast Samba de Amigo set.