Online Abuse Threatens Women Of Color The Most, A New Study Shows
While women around the world deal with abusive language online, a new study shows women of color are far more likely to receive threats and otherwise "toxic" messages. Social networks such as Twitter have offered only limited responses to anecdotal evidence of the widespread problem. But with a new approach and enormous dataset, Amnesty International's report on the abuse of women online aims to push Twitter toward significant change.
"We’ve been doing public stunts, we’ve been doing research, we’ve been publishing experiences of women on the platform," says Milena Marin, Amnesty's senior advisor for tactical research, in an interview with Bustle. "Now we want to put out the numbers to back all the rest of our research — to give them further evidence, from our point of view, [of] how this is a problem."
For the new report, which Amnesty put together with Element AI, a global artificial intelligence software company, the organization enlisted about 6,500 volunteers worldwide to sort through tweets sent to 778 female politicians and journalists from the United States and the United Kingdom. In a single year, those women received 1.1 million "toxic" tweets, which averages out to an abusive or problematic tweet being sent to one of the women every 30 seconds.
Overall, 7.1 percent of tweets those women received were abusive or problematic, according to the study. Things were far worse for women of color, however.
Black women were most at risk, with the study determining they were 84 percent more likely than white women to experience abuse on Twitter. Latinx women were 81 percent more likely, while Asian women were 70 percent more likely.
But even these high numbers might not adequately convey what the abuse actually entails, Marin points out. She notes the case of a black British Labour politician, Diane Abbott. About 3 percent of tweets directed at Abbott during the year of the study were highly abusive — including rape threats, death threats, and racial slurs. And while 3 percent may not seem like a huge number, it came out to roughly 30,000 tweets in total.
Online abuse like the kind directed at Abbott became an issue for Amnesty because the organization tries to encourage women to step into public-facing positions. "If they have to face this vitriol, then many of them might not want to be in these roles," Marin tells Bustle.
One of the main goals of the research, as Marin explains, is to push Twitter to devote more effort, and more funding, to stopping the abuse. She believes there are numerous ways this could happen; for instance, Twitter could employ more people to monitor the abuse, or be more transparent with the data it already possesses. Another option Marin sees would be for Twitter to actively partner with other organizations working toward the same goal rather than continuing to go it alone.
"They have done certain things, like they updated their hateful conduct policy by saying that abuse is intersectional and it affects most protected minorities, such as women of color," Marin tells Bustle. "But [to get] from there to actually improving the issue for women, I think they still have a long way to go."
The project also pointed to machine learning as a potential avenue for stopping abuse: if AI could reliably pick out abusive language online, moderation could happen faster and more cheaply. But even the state-of-the-art technology this research tested against the work of the human volunteers correctly identified abusive language only about 50 percent of the time, so there's still room for development. Amnesty's hope, though, is that releasing more research on the subject will continue to highlight the problem for Twitter and other social networks.
"Every time we put out a report, it’s pressure for them to change, and that’s why we don’t want to let go of the pressure," Marin says. "We want to keep publishing information like this that will make them change."