Moderating an online community is hard, often thankless work — and even harder when it’s done in a silo.
On Twitch, interconnected channels already informally share information about users they’d rather keep out. The company is now formalizing that ad hoc practice with a new tool that lets channels swap ban lists, inviting communities to work together to block serial harassers and other disruptive users before they can cause trouble.
In an interview, Twitch Product VP Alison Huffman explained that the company ultimately wants to empower community moderators by giving them as much information as possible. Huffman says Twitch has conducted “extensive” interviews with mods to find out what they need to feel more effective and make their communities safer.
Moderators have to make a lot of small decisions on the fly, and the biggest is generally figuring out which users are acting in good faith – not intentionally causing trouble – and which are not.
“If it’s someone you see and you say ‘Oh, this is a slightly off-putting message, I wonder if they’re just new here or in bad faith’ – if they’ve been banned from one of your friends’ channels, it’s easier for you to say ‘yes, no, this is probably not the right person for this community,’ and you can make that decision easier,” Huffman said.
“That reduces the mental overhead for moderators and ensures that someone who is not a good fit for the community is more efficiently removed from your community.”
Within the creator dashboard, creators and channel mods can request to swap banned-user lists with other channels. The tool is bi-directional, so any channel that asks for another streamer’s list shares its own in return. A channel can accept all sharing requests, or only allow requests from Twitch Affiliates, Partners, and mutually followed channels. Each channel can exchange ban lists with up to 30 others, making it possible to build a fairly robust list of users they’d rather keep out, and channels can stop sharing their lists at any time.
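To make the mechanics concrete, here is a minimal, purely illustrative sketch of how such a bidirectional exchange could be modeled, assuming the 30-channel cap and the three request filters described above. The class and function names are hypothetical and do not reflect Twitch’s actual API or implementation.

```python
# Hypothetical sketch of a bidirectional ban-list exchange; not Twitch's API.
from dataclasses import dataclass, field

MAX_SHARING_PARTNERS = 30  # each channel can swap lists with up to 30 others


@dataclass
class Channel:
    name: str
    is_affiliate_or_partner: bool = False
    accept_policy: str = "everyone"  # or "affiliates_partners" / "mutual_follows"
    follows: set = field(default_factory=set)
    banned_users: set = field(default_factory=set)
    sharing_partners: set = field(default_factory=set)

    def accepts_request_from(self, requester: "Channel") -> bool:
        if self.accept_policy == "everyone":
            return True
        if self.accept_policy == "affiliates_partners":
            return requester.is_affiliate_or_partner
        # "mutual_follows": both channels follow each other
        return requester.name in self.follows and self.name in requester.follows


def share_ban_lists(a: Channel, b: Channel) -> bool:
    """Bidirectional swap: if b accepts a's request, both sides see each
    other's banned users; either side could stop sharing later."""
    if len(a.sharing_partners) >= MAX_SHARING_PARTNERS:
        return False
    if len(b.sharing_partners) >= MAX_SHARING_PARTNERS:
        return False
    if not b.accepts_request_from(a):
        return False
    a.sharing_partners.add(b.name)
    b.sharing_partners.add(a.name)
    return True
```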
Channels can choose to automatically monitor or restrict any account flagged through these shared lists; flagged users are restricted by default. Users who are “monitored” can still chat, but they are flagged so their behavior can be watched closely, and their first message is marked with a red box that also indicates where else they have been banned. From there, a channel can choose to ban them outright or give them the all-clear and switch them to “trusted” status.
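Here is a second purely illustrative sketch, this time of how a chatter who appears on a shared ban list might be handled. The status names follow the article’s description (restricted by default, optionally monitored, then banned or trusted); the assumption that “restricted” messages are held back is mine, and nothing here reflects Twitch’s real implementation.

```python
# Hypothetical handling of a chatter found on a shared ban list; not Twitch code.
from enum import Enum


class Status(Enum):
    RESTRICTED = "restricted"  # default for users on a shared ban list
    MONITORED = "monitored"    # may chat, but their first message is flagged
    TRUSTED = "trusted"        # cleared by a moderator
    BANNED = "banned"          # removed from the channel entirely


def handle_message(user: str, status: Status,
                   banned_elsewhere: set[str], has_chatted_before: bool) -> str:
    """Decide what chat should do with an incoming message (illustrative only)."""
    if status is Status.BANNED:
        return f"drop message from {user}"
    if status is Status.RESTRICTED:
        # Assumption: restricted users' messages are held for moderator review.
        return f"hold message from {user} for review"
    if status is Status.MONITORED and not has_chatted_before:
        # The article describes a red box listing where the user is banned.
        channels = ", ".join(sorted(banned_elsewhere))
        return f"show message from {user}, flagged: banned in {channels}"
    return f"show message from {user}"
```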
Twitch’s latest moderation tools are an interesting way for channels to enforce their rules against users who may be disruptive but don’t necessarily violate the company’s broader guidelines against overtly bad behavior. It’s not hard to imagine a scenario, especially for marginalized communities, in which someone with bad intentions could deliberately harass a channel without explicitly violating Twitch’s rules against hate and harassment.
Twitch acknowledges that harassment has “many manifestations”, but for the purposes of a Twitch suspension, the behavior is defined as “stalking, personal attacks, promoting bodily harm, hostile raids and malicious false report brigading”. There is a gray area of behavior outside that definition that is more difficult to capture, but the shared ban tool is a step in that direction. But if a user breaks Twitch’s platform rules — not just a channel’s local rules — Twitch encourages the channel to report them.
“We think this will also help with things that violate our Community Guidelines,” Huffman said. “Hopefully they will also be reported to Twitch so we can take action. But we do think it will help with the targeted harassment that we see impacting especially marginalized communities.”
Last November, Twitch added a new way for moderators to detect users trying to bypass channel bans. That tool, which the company calls “Ban Evasion Detection,” uses machine learning to automatically flag anyone in a channel who is likely evading a ban, allowing moderators to keep an eye on that user and screen their chat messages.
The new features fit Twitch’s vision for “layered” security on its platform, where creators stream live, sometimes to hundreds of thousands of users, and moderation decisions at every level must be made in real time.
“We think this is a powerful combination of tools to proactively discourage chat-based harassment [and] one of the things I love about this is that it’s a different combination of humans and technology,” Huffman said. “With ban evasion detection, we use machine learning to find users we believe are suspicious. With this one, we rely on the human relationships and the trust that creators and their communities have already established to help deliver that signal.”
Twitch’s content moderation challenge is a distinctive one: harmful streams can reach an audience and do damage as they unfold in real time. Most other platforms focus on catching content after it’s posted: something goes up, is scanned by automated systems or reported, and then either stays up, comes down, or gets tagged with some sort of warning for the user or the platform.
The company is evolving its approach to safety and listening to its community, particularly the needs of marginalized communities like the Black and LGBTQ streamers who have long struggled to carve out a safe space or visible presence on the platform.
In March, Color of Change called on the company to step up its efforts to protect Black creators with a campaign called #TwitchDoBetter. The trans and wider LGBTQ community have also pressured the company to do more to end hate raids, in which malicious users flood a streamer’s channel with targeted harassment. Twitch sued two users late last year for coordinating automated hate raids, in part to deter future bad actors.
Ultimately, smart policies that are enforced evenly and improvements to the toolkit available to moderators will likely have a greater day-to-day impact than lawsuits, but more layers of defense won’t hurt.
“For a problem like targeted harassment, that’s not solved anywhere on the internet,” Huffman said. “And, just like in the non-internet world, it’s a perpetual problem – and it’s not a problem that has a single solution.
“What we’re trying to do here is just build a really robust set of tools that are highly customizable, and then put them in the hands of the people who know their needs best – the creators and their moderators – so they can adapt that set of tools to their specific needs.”