A US-based university has conducted research into how livestream audiences interact with the host depending on their gender, and it shows - perhaps unsurprisingly - that there is an issue with sexism among Twitch's userbase.
Gendered Conversation in a Social Game-Streaming Platform, a report carried out by the Indiana University Network Science Institute, analyses messages posted across 200 male-hosted Twitch channels and 200 female-hosted ones. More than 70 million messages were studied and the results show female streamers are less likely to receive comments about their actual content, Polygon reports.
Instead, women who livestream via Twitch will receive more comments about their physical appearance, with some of the most common words used when posting under those 200 channels being "boobs", "babe" and "cute" - as well as more obscene terms. Conversely, male streamers received more comments about the games they were playing, with common words including "melee", "leaderboards" and "glitch".
"Female channels are characterised by words about physical appearance, the body, relationships and greetings," the report reads. "Male channels are characterised by game-related words.

"Our analysis on both streamers and viewers shows that the conversation in Twitch is strongly gendered. The streamer's gender is significantly associated with the types of messages that they receive. Male streamers receive more game-related messages while female streamers receive more objectifying messages."
The report is currently under peer review. While its results may not shock anyone in the industry, it does at least give an insight into how widespread sexism has become on platforms such as Twitch.
The authors also acknowledge that the study "does not investigate how streamers themselves engage viewers and the chat" due to the challenging task of analysing the audio and video feeds for all 400 channels.
The report notes that "vigilant user groups" that help moderate comments and steer the conversation back towards the streamed game do exist, but more solutions need to be explored. Suggestions include developing methods for automatically detecting abusive, objectifying comments, as well as more "scalable communication and moderation techniques".