
Using AI to take the "emotional work" out of community management

Spirit AI's Dr Mitu Khandaker: "We are trying to make the lives of community managers and moderators easier"

Managing player behaviour online is one of the biggest challenges facing developers and publishers at the moment.

Earlier this year, we saw the logical conclusion of what happens when toxic behaviour goes unchecked after a disagreement in Call of Duty boiled over into the real world, leading to a swatting incident which left a 28-year-old father of two shot dead in his home.

While this is perhaps the most extreme example on record, it is indicative of the myriad problems facing online communities, and illustrates the desperate need for effective community management.

At this year's GDC, more than 30 companies -- including Xbox, Twitch, Blizzard and Riot -- announced the Fair Play Alliance, and pledged to work towards making safer, fairer, and healthier communities.

Artificial intelligence firm Spirit AI was among the companies looking to reshape online communities, and has long been developing the tools to make it possible.

Dr Mitu Khandaker

GamesIndustry.biz caught up with Spirit AI chief creative officer Dr Mitu Khandaker at Casual Connect in London last week to discuss how artificial intelligence can change the way we manage online communities.

"Going into broader AI philosophy questions, there's a lot of conversation about AI taking away people's jobs and things like that," says Khandaker. "But I think what the more interesting thing -- wherever you fall on that conversation -- that AI should do and can do, is take away the emotional work that people have to do in shitty jobs."

Enter Ally, the artificial intelligence designed to do just that. In essence, Ally can automate the complaints process in online games and communities, investigating abuse incidents and learning to understand the nuanced interactions of its members.

"Part of the goal of Ally is to reduce the pain points of two types of users," says Khandaker. "Firstly the player, because obviously we want to help create safer communities where people don't feel like they are going to be harrassed.

"But also we are trying to make the lives of community managers and moderators easier because often they have a really horrible job where they have to look at these awful logs and reports and delve into them and try to figure out what's going. Instead of that, the system automates a lot of things that are quite obvious and shows them on the dashboard."

“The more interesting thing… that AI should do and can do, is take away the emotional work that people have to do in shitty jobs”

This is the emotional labour Khandaker speaks of. It's more than just time-consuming; it's emotionally draining for community managers to sift through hours and hours of player interaction, especially when those interactions are abusive in nature.

But Ally isn't some objective moral arbiter ruling over communities and meting out justice. With Ally, Spirit AI has attempted to tackle one of the biggest problems with machine learning: understanding context.

With the addition of contextual understanding, companies using Ally can set their own parameters for acceptable behaviour within the community. Along with knowing the difference between banter among friends and genuine harassment from strangers, Ally can also learn the colloquialisms, shorthand and memes of any given community.
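To make that idea concrete, here is a minimal sketch of how community-set parameters and the sender's relationship to the recipient might feed into a flagging decision. This is not Spirit AI's code or API; every name, score and threshold below is hypothetical, and the "severity" value stands in for whatever contextual model produces it.

```python
from dataclasses import dataclass, field

@dataclass
class CommunityPolicy:
    """Per-community moderation parameters (hypothetical, for illustration only)."""
    name: str
    # Words or memes this community treats as acceptable banter.
    local_slang: set = field(default_factory=set)
    # How much a prior friendly relationship softens a message's severity score.
    friend_discount: float = 0.5
    # Severity above which an incident is surfaced to moderators.
    flag_threshold: float = 0.7

def assess_message(text: str, severity: float, sender_is_friend: bool,
                   policy: CommunityPolicy) -> bool:
    """Return True if the message should be flagged under this community's rules.

    `severity` is a contextual score between 0.0 and 1.0; keyword hits alone
    would miss the distinctions Khandaker describes.
    """
    # Community-specific slang is exempt unless the model still rates it as severe.
    if any(term in text.lower() for term in policy.local_slang) and severity < 0.9:
        return False
    # Banter between friends is tolerated more than the same words from a stranger.
    if sender_is_friend:
        severity *= policy.friend_discount
    return severity >= policy.flag_threshold

# The same phrase from a stranger gets flagged; from a friend it does not.
policy = CommunityPolicy(name="example-shooter", local_slang={"gg ez"})
print(assess_message("get wrecked", severity=0.8, sender_is_friend=True, policy=policy))   # False
print(assess_message("get wrecked", severity=0.8, sender_is_friend=False, policy=policy))  # True
```

The point of the sketch is the shape of the decision, not the numbers: the same message can be acceptable or reportable depending on who sent it and which community it was sent in.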

"This is the other thing with harassment... There might be certain keywords that we recognise as harassing, but if a friend is using them with me I might be fine with it," says Khandaker. "There might be something totally innocuous, but coming from someone I don't know and they are saying it in a certain way that a keyword based system wouldn't pick up.

"With some of the AI techniques we use, we can figure out from the context of the way it's being used in the sentence, that it is harassment basically, that they are being malicious in someway.

"You can understand if something is consensual or not. If I don't respond to something, that probably means I wasn't comfortable with it, or I could very explicitly say 'no', 'go away' or 'fuck off'... If I have expressed that to something that didn't seem a harassing word, but there is clearly something that has gone on there, that's an instance we could flag up and say 'this is clearly a case where the person is feeling targeted in someway' and try to understand that."

“Myself and other people I know have been targeted and it's a big topic of conversation. Why aren't online platforms doing more?”

Based on the parameters set out by platform holders, Ally can act accordingly. For example, any messages of a sexual nature being sent in a kids' game could result in that player being automatically muted in an instant.

Other instances would see Ally building up a case against an individual player; a common occurrence in online communities is "flirt greetings", where a player randomly messages other members in a flirtatious way. Although this might go broadly unreported, Ally can pick up on these patterns of behaviour and flag the user in question. Of course, as Khandaker says, there are communities where something like sexting is perfectly acceptable, providing it's consensual, and that's something Ally is also capable of distinguishing.
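Pattern-based case building of this kind can be illustrated with a few lines of aggregation code. Again, this is a hypothetical sketch rather than Ally's approach; the event format and the threshold of five distinct targets are invented for the example.

```python
from collections import defaultdict

def flag_pattern_offenders(events, min_distinct_targets=5):
    """Flag senders who open flirtatiously with many different strangers.

    `events` is an iterable of (sender, recipient, is_flirt_opener) tuples.
    No single recipient may bother reporting this, but the pattern across
    recipients is the kind of behaviour the article says Ally can surface.
    """
    targets = defaultdict(set)
    for sender, recipient, is_flirt_opener in events:
        if is_flirt_opener:
            targets[sender].add(recipient)
    return {s for s, t in targets.items() if len(t) >= min_distinct_targets}

events = [("u1", f"stranger{i}", True) for i in range(6)] + [("u2", "friend", True)]
print(flag_pattern_offenders(events))  # {'u1'}: six distinct targets crosses the threshold
```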

"Communities have incredibly different needs, so it's just about setting up the system so the community managers can say 'okay, in this case, this is the kind of response that would happen'," Khandaker explains.

"We're doing the detection piece for now, and the sort of intervention is still up to the community manager. But in the future we're looking at how to automate different types of intervention. Riot has talked about this a lot actually, where educating users on what they did wrong actually lowers the rate of re-offending... It's better than banning them outright because they are just going to keep re-offending."

Khandaker and her colleagues first conceived the idea of an artificial intelligence watching out for platform users around the time of Gamergate.

"We were thinking about building systems that really contextually understands language, and what we can do with it," she says. "There's the idea of conversational AI, but at the same time -- for me particularly -- trying to understand online harassment in a really nuanced way, because honestly, it was a year into Gamergate.

“We're trying to help out companies that wouldn't otherwise have [the capacity]"

"Myself and other people I know have been targeted and it's a big topic of conversation. Why aren't online platforms doing more? I would like to see them embrace it. Facebook is doing some of this, trying to flag up toxic posts, but they have their own big team of data scientists and they're taking a particular approach. We really want to help out other platforms that maybe aren't able to do that themselves."

Therein lies one of the biggest obstacles facing widespread application of artificial intelligence to community management. While companies like Riot have a vested interest in tackling toxicity within League of Legends, they also have the resources to hire teams of psychologists and data scientists, a luxury not afforded to the smaller publishers and developers.

With Ally, Spirit AI hopes to make the tools that companies like Riot have poured millions of dollars into accessible for everyone else.

"We're trying to fill the gap where maybe games companies don't have the time, attention or resources to put into trying to figure out a solution for themselves, so we're trying to help out companies that wouldn't otherwise have [the capacity]," says Khandaker. "Technically the system is designed such that anywhere there is chat between people, it can slot in. We are absolutely interested in stuff like, 'what would Twitter look like?'"

However, it's still early days for Ally. There are big plans for its future, but right now the focus is not only on making the system smarter and better equipped to understand abuse, but also on measuring player and overall community happiness.

"We're working very closely with current partners in order to understand what categories of abuse or positive sentiment they are looking for," says Khandaker. "We do a lot of work obviously in making sure we get to a very high level of confidence. We don't want to return lots of false positives basically because that's the most annoying thing as it creates more work for moderators. A lot of the work we have in our road map is really just improving that and understanding more types of harassment."
