Twitch announced today that it will add new channel-level security features to help curb harassment on the platform. Creators and moderators can now enable verified chat, which requires chatters to verify a phone number and/or email address before they can send messages. These settings can be toggled to require verification for all accounts, first-time chatters, accounts younger than a chosen age or accounts that haven’t followed the streamer for a chosen amount of time. The settings are off by default until a channel opts in, and there are additional options to let VIPs, subscribers and moderators bypass verification. Unlike two-factor authentication, a user only needs to verify their phone or email once to be considered verified across all channels.
Twitch users can link up to five accounts to the same phone number, but if one is banned from a channel, all accounts verified with that phone number or email address will be banned too. The intention is to stop people from creating multiple hate accounts under one phone number or email, so a streamer only has to block someone once, rather than five times. On a sitewide level, if a phone-verified account is suspended, its linked accounts will also be suspended. While it’s still possible to simply use another phone number, such as one from Google Voice, this adds an extra layer of difficulty for bad actors.
Tensions are high in the Twitch community as underrepresented creators, particularly those who are Black or LGBTQ+, face targeted harassment through Twitch’s raid system. Sometimes, when a streamer goes offline, they’ll surprise another streamer by sending their fans over to check out that channel in a “raid.” The feature is designed to help established streamers support up-and-comers. But over the past several months, bad actors have used raids to send bots that spew targeted harassment at creators during their streams.
In May, Twitch launched 350 new channel tags related to gender, sexual orientation, race and ability, which users had requested to make it easier to discover more representative creators. But some people weaponized the tags to target marginalized streamers, and Twitch didn’t have comprehensive enough tools to curb this harassment — some creators even developed their own home-brewed safety tools, like a “panic button” that launches a series of chat commands. These streamers prompted Twitch to take action with the hashtag #TwitchDoBetter. Then, earlier this month, streamers like LuciaEverblack, ShineyPen and RekItRaven (who started the #TwitchDoBetter hashtag) launched #ADayOffTwitch, a day-long boycott of the site.
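To illustrate how a home-brewed “panic button” works: it typically fires a burst of Twitch’s built-in moderation chat commands (such as `/clear`, `/emoteonly`, `/followers` and `/slow`) in one go, locking chat down faster than a moderator could type them. The sketch below is a minimal, hypothetical version that builds the raw IRC lines Twitch chat accepts; the command list, helper name and channel are illustrative, not any specific streamer’s tool.

```python
# Hypothetical "panic button" sketch: assemble the Twitch IRC messages that
# fire a sequence of built-in moderation commands at once. The command names
# are real Twitch chat commands; everything else here is illustrative.

PANIC_COMMANDS = [
    "/clear",          # wipe the current chat history
    "/emoteonly",      # restrict chat to emotes only
    "/followers 30m",  # require 30 minutes of follow time before chatting
    "/slow 10",        # limit each user to one message per 10 seconds
]

def build_panic_messages(channel: str) -> list[str]:
    """Build the raw IRC PRIVMSG lines that send each panic command
    to the given channel's chat."""
    return [f"PRIVMSG #{channel} :{cmd}" for cmd in PANIC_COMMANDS]

if __name__ == "__main__":
    # A real tool would send these lines over an authenticated connection
    # to irc.chat.twitch.tv; here we just print them.
    for line in build_panic_messages("examplestreamer"):
        print(line)
```

In practice such a tool would be bound to a hotkey so the streamer can trigger the whole lockdown with one press mid-stream.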
The #ADayOffTwitch action came with demands. Participating streamers wanted the ability to control incoming raids, and they asked Twitch to implement age restrictions and email signup limits and to share a time frame for when comprehensive anti-harassment tools would be implemented. Soon after, the platform pursued legal action against two users linked to thousands of bot accounts used for hate raids.
Today’s announcement helps address one of these demands, though in an email to TechCrunch, Twitch said it had been developing, testing and refining phone-verified chat long before the hate raids became so frequent. Still, community feedback from UserVoice and its Ambassadors Discord helped inspire these additions, and Twitch said in a blog post that it will roll out other channel-level ban-evasion tools soon. It also noted that streamers already have the option to accept raids only from friends, teammates and followed channels. The service hasn’t made its timelines for rolling out safety features public, possibly because doing so could give bad actors more information about what Twitch is planning and how to evade it.
Creators can access these new settings by navigating to Dashboard → Settings → Moderation. Moderators can do so via “Manage Moderation Settings” in Chat.