A Troll-Detecting Algorithm Has Been Invented, Bringing Glad Tidings And Promising Better Internet For All
There are few Internet phenomena trickier and more common than trolls. But now researchers think they may have developed an algorithm that can identify trolls before they have a chance to do real damage to an otherwise productive comments section. Which would be a great gift to all of us. Well, all of us except the trolls. I'd say sorry, guys, but I'm not.
This new potential fix for Internet comments comes from a paper by three researchers from Cornell and Stanford (and some funding from Google). The researchers looked at data from comment hosting platform Disqus and analyzed the information of over 10,000 users who would eventually be banned from CNN.com, Breitbart.com, and IGN.com. Why? They hoped that they would be able to find patterns they could use to identify "future banned users," or FBUs.
“Antisocial behavior is simply an extreme deviation from the norms of the community," one of the study's authors, Cristian Danescu-Niculescu-Mizil, told The Daily Dot. “Therefore, what is considered antisocial depends on the particular community and on the topic around which the respective community is formed.” But regardless, certain patterns did emerge once they began going over the data.
It turns out that no matter what the site may be, or what the topic under discussion is, trolls apparently almost always display certain characteristics. According to the paper, nearly all FBUs express less positive emotion than regular commenters, write in a way that is more difficult to understand, use more profanity, and are less likely to use "conciliatory language," meaning words like "perhaps" or "could."
All of which makes sense given that the whole mission of trolls is to elicit a strong response, not to engage in productive conversation.
The language in trolls' posts also tended to degrade over time. Trolls posted more frequently than normal users, too, and tended to focus on a few comment threads rather than casting a wide net. And harsh community feedback only served to exacerbate their behavior.
Interestingly, trolls also typically receive more replies than average users, suggesting that even though the mantra of the Internet is "Don't feed the trolls," lots of people can't help themselves and get drawn into unproductive arguments anyway. Which isn't overly useful, but I do get it; letting awful things some trolls say go unchallenged isn't exactly a good option either. There are no good options when it comes to trolls.
Except, perhaps, to get rid of them. Which is where the real beauty of this paper comes in.
Based on their research into the common characteristics of trolls, the researchers developed an algorithm that could identify a troll with 80 percent accuracy after just five to 10 posts. And considering this research found trolls manage to post an average of 264 posts before getting banned, that is a glorious, glorious thing.
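To give a feel for how a feature-based approach like this can work, here is a toy sketch in Python. To be clear: the feature list, word lists, weights, and threshold below are all illustrative inventions of mine, not the researchers' actual model; the real classifier was trained on labeled data from the sites studied. This just shows the general idea of scoring an account from signals like profanity and a lack of conciliatory words.

```python
# Toy troll-scoring sketch, loosely inspired by the signals the paper
# reports (profanity, hard-to-read writing, little conciliatory
# language). All word lists and weights here are made up for
# illustration; they are NOT from the actual study.

CONCILIATORY = {"perhaps", "could", "maybe", "possibly"}
PROFANITY = {"damn", "hell"}  # stand-in list for illustration

def troll_score(posts):
    """Return a score in [0, 1]; higher means more troll-like."""
    total_words = hedges = swears = long_words = 0
    for post in posts:
        words = post.lower().split()
        total_words += len(words)
        hedges += sum(w.strip(".,!?") in CONCILIATORY for w in words)
        swears += sum(w.strip(".,!?") in PROFANITY for w in words)
        # Crude readability proxy: share of long words.
        long_words += sum(len(w) > 9 for w in words)
    if total_words == 0:
        return 0.0
    # Profanity and dense writing push the score up; hedging pulls it down.
    score = (swears + long_words - 2 * hedges) / total_words
    return max(0.0, min(1.0, score))

def is_future_banned_user(posts, threshold=0.05):
    """Flag an account once its score crosses a threshold, mimicking
    the idea of classifying after only a handful of posts."""
    return troll_score(posts) >= threshold
```

A real system would learn its weights from thousands of labeled accounts rather than hand-tuning them, but the shape is the same: turn each user's first few posts into numeric features, then apply a learned decision rule.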
Of course, the algorithm isn't the solution to everything. One in five users that it identifies is never banned, and it can't detect trolls who try to provoke fights while pretending to be reasonable. Rather, it's only useful at picking out the most egregious examples. But still, given how overrun the Internet currently is with trolls, I'm happy to take whatever we can get.
Maybe someday we will live in a world where people don't feel compelled to anger and offend other people just for fun, or at least get immediately banned if they do. And in the meantime, I'll just continue to be grateful I write for a website that doesn't have a comments section.