It's no secret that social media platforms have been used to spread hateful messages. But with millions of users on any given site, putting an end to such abusive language is no small task. If you've ever wondered what happens when you report someone on Twitter, the answer largely depends on the situation, according to a spokesperson. And now, the platform is updating its rules on hate speech amid a rise in violence against religious minorities.
Twitter's online help center walks you through how to report abusive behavior, but a Twitter spokesperson tells Bustle that what happens behind the scenes isn't the same in every situation. "It depends on the context of the situation of the report itself and the reporter so we want to be mindful of that," the Twitter spokesperson says. "Understanding that, we want to act as quickly as possible too. And that does mean that sometimes we may make mistakes and when we do, we also know that we need to strengthen our appeals process so that people can let us know and we can remediate as quickly as possible." According to the Twitter spokesperson, transparency is of utmost importance to the social media platform.
Twitter communicates frequently and consistently with people on both sides of a report: the person who filed it and the person being reported. This ongoing communication helps the platform understand everything that occurred and make an informed decision about whether a violation took place.
If Twitter determines that its policy was violated, the reported account may be placed in a read-only mode for a period before the person is allowed to tweet again. Depending on the severity of the offense, they may also face permanent suspension.
Twitter's dedication to ensuring a safe and accountable environment is behind its most recent update addressing hate speech. Now, the site will ban any language that dehumanizes religious groups. A blog post from Twitter Safety, which describes the update, reads:
We create our rules to keep people safe on Twitter, and they continuously evolve to reflect the realities of the world we operate within. Our primary focus is on addressing the risks of offline harm, and research* shows that dehumanizing language increases that risk. As a result, after months of conversations and feedback from the public, external experts and our own teams, we’re expanding our rules against hateful conduct to include language that dehumanizes others on the basis of religion.
According to the Twitter spokesperson, the company had been considering this change for a long time, especially after reading through public feedback submissions on its policies. The spokesperson says they learned that many people doubted whether Twitter would be able to follow through on rooting out hate speech, and that's part of the reason it's updating its policy. "We wanted to see how this can help instill a sense of understanding with regards to how we address the work and how we address the content that's on our platform before we bring this over to more groups," the spokesperson explains. In other words, Twitter's latest update to its hate speech rules probably won't be its last.