
YouTube's New Harassment Policy Bans Insults Based On Race & Gender Identity

by Caroline Burke

On Dec. 11, one of the largest video-sharing platforms in the world announced that its rules of conduct were changing. YouTube's new harassment policy contains a number of updates, most notably a crackdown on hate speech and "veiled" threats. From now on, the platform has promised to prevent creators, public officials, and everyone in between from posting content that "maliciously insults" others based on protected attributes like gender identity, race, and sexual orientation, among others.

Matt Halprin, Vice President and Global Head of Trust & Safety for YouTube, outlined the changes for the platform in a statement posted to the site's official blog on Wednesday. First, he explained, the platform will take a "stronger stance against threats and personal attacks." Halprin wrote, "Moving forward, our policies will go a step further and not only prohibit explicit threats, but also veiled or implied threats ... No individual should be subject to harassment that suggests violence."

Beyond policing threatening language, the platform has also expanded its hate speech criteria. "We will no longer allow content that maliciously insults someone based on protected attributes such as their race, gender expression, or sexual orientation," Halprin explained. "This applies to everyone, from private individuals, to YouTube creators, to public officials."

Additionally, YouTube is establishing stricter consequences for those who exhibit a "pattern of harassing behavior." Specifically, it will suspend creators who "repeatedly brush up against" the harassment policy from the YouTube Partner Program (YPP), the network through which people are able to monetize their channels. Halprin added, "We may also remove content from channels if they repeatedly harass someone. If this behavior continues, we’ll take more severe action including issuing strikes or terminating a channel altogether."

The third and final aspect of the harassment policy update is a renewed effort toward "addressing toxic comments." Specifically, the platform will roll out a new default feature that gives creators the option to review comments YouTube deems "potentially inappropriate" and decide whether they appear publicly on their page.

Halprin concluded,

As we make these changes, it's vitally important that YouTube remain a place where people can express a broad range of ideas, and we'll continue to protect discussion on matters of public interest and artistic expression ... We’re committed to continue revisiting our policies regularly to ensure that they are preserving the magic of YouTube, while also living up to the expectations of our community.

The news of YouTube's harassment policy update comes several months after Vox journalist Carlos Maza, a YouTube creator, accused conservative host Steven Crowder of hate speech. YouTube then investigated a number of Crowder's videos, in which he blatantly mocked Maza's sexual orientation and race, and determined that Crowder's language, though "hurtful," did not violate the platform's policies. As BuzzFeed notes, Crowder's videos now explicitly violate the platform's updated policies, as could some of the videos on the current president's YouTube channel.

In practice, though, the harassment policy might not be as cut and dried as it sounds, at least when U.S. politics are involved. According to the BBC, for example, YouTube's policy team chose not to flag Donald Trump's ongoing mockery of Sen. Elizabeth Warren's heritage when he calls her "Pocahontas" at his rallies. Per the outlet, the policy team made this decision because POTUS' motivation appeared to be political rather than personal. If you want to learn more about the new harassment policy, you can read Halprin's full statement on everything that will change in the coming months.