Esports tournament platform FaceIt has revealed its new community management AI, Minerva, which has already banned 20,000 players for toxicity.
Trained through machine learning to address toxicity at scale, Minerva has overseen a 20% reduction in toxic messages in Counter-Strike: Global Offensive matches.
Developed with the help of Google Cloud and Jigsaw, the programme has analysed over 200 million chat messages over the last few months, and marked seven million as toxic.
Minerva went fully automated in August, after months of training to minimise false positives. FaceIt reports that Minerva can reach a decision within seconds of a match ending, and issue warnings or bans accordingly.
During its first six weeks of activity, Minerva issued 90,000 warnings and 20,000 bans for verbal abuse and spam. Minerva also drove down the number of unique players sending toxic messages by around 8%.
"In-game chat detection is only the first and most simplistic of the applications of Minerva, and more of a case study that serves as a first step toward our vision for this AI," said FaceIt in a blog post.
"We're really excited about this foundation as it represents a strong base that will allow us to improve Minerva until we finally detect and address all kinds of abusive behaviors in real-time."