Twitter Will Handle Abusive Accounts By Locking Them Down & It's A Great Step In The Right Direction
Well, it may have taken a while, but Twitter has announced new reforms to its methods for dealing with harassment and threats. In a post on Twitter's official blog Tuesday, Director of Product Management Shreyas Doshi laid it out: Twitter is introducing new rules on abusive accounts, a couple of steps the social media giant is taking to weed out some of its, shall we say, problematic users. (Yeah, you know exactly what kind of user I'm talking about.)
Suffice it to say, activists and critics have long called for better tools to combat threatening behavior on social media. To put it mildly, Twitter can be a rather intimidating place, for various reasons. Maybe you end up on the wrong side of a prominent user's foul mood, and a single testy tweet escalates into an epic dogpile. Or maybe you're running into unwanted attention from members of a virulent social media cabal (Gamergate, anyone?). Or, perhaps most distressingly, maybe you're the target of sustained, years-long harassment by just one frighteningly obsessed individual. Whatever the case, there are very good reasons to want something to change.
As Doshi put it in the announcement:

"We believe that users must feel safe on Twitter in order to fully express themselves. As our General Counsel Vijaya Gadde explained last week in an opinion piece for the Washington Post, we need to ensure that voices are not silenced because people are afraid to speak up. To that end, we are today announcing our latest product and policy updates that will help us in continuing to develop a platform on which users can safely engage with the world at large."
While Twitter's reforms may not meet everyone's ideal expectations, they're a big step in the right direction. There are two major policy changes taking place, according to Doshi. First, Twitter's violent threats policy is getting an update, broadening its prohibition on "direct, specific threats of violence against others" to cover "threats of violence against others or promot[ing] violence against others."
The promotion part is the big deal: it opens the door to punishing users merely for encouraging acts of harm, not just for making threats themselves. As Doshi described:
Our previous policy was unduly narrow and limited our ability to act on certain kinds of threatening behavior. The updated language better describes the range of prohibited content and our intention to act when users step over the line into abuse.
Furthermore, Twitter will now be able to lock abusive accounts for specific lengths of time, a useful anti-harassment tool compared to the more or less perfunctory, often all-too-brief suspensions of the past. This, Doshi said, will give Twitter "leverage in a variety of contexts, particularly where multiple users begin harassing a particular person or group of people."
The other big change is a little harder to pin down, because it's hard to know how exactly it'll play out in practice. Basically, Twitter is currently testing out a feature that identifies and reduces the visibility of abusive tweets, based on "a wide range of signals and context that frequently correlates with abuse." Thankfully, one of those considerations will be the account's age, meaning we might be spared from quite so many hectoring, screeching eggs in the future.
It's impossible to know just yet what impact these new policies and tools will have, what with unexpected snags and unintended consequences and all that. But it's heartening that Twitter is making some moves toward combating online abuse.