An update to our harassment policy


Over the last several years, we have worked to improve the way we manage content on YouTube: quickly removing content that violates our Community Guidelines, reducing the spread of borderline content, raising up authoritative voices when people are looking for breaking news and information, and rewarding trusted creators and artists who make YouTube a special place. Today we are announcing a series of policy and product changes that update how we tackle harassment on YouTube. We systematically review all of our policies to make sure the line between what we remove and what we allow is drawn in the right place, and earlier this year we recognized that, when it comes to harassment, there is more we can do to protect our creators and community.
Harassment hurts our community by making people less inclined to share their opinions and engage with each other. We heard this time and again from creators, including those who met with us during the development of this policy update. We also met with a number of experts who shared their perspectives and informed our process, from organizations that study online bullying or advocate on behalf of journalists, to free speech proponents and policy organizations from across the political spectrum.
We remain committed to our openness as a platform and to ensuring that spirited debate and a vigorous exchange of ideas continue to thrive here. However, we will not tolerate harassment and we believe the steps outlined below will contribute to our mission by making YouTube a better place for anyone to share their story or opinion.

A stronger stance against threats and personal attacks

We've always removed videos that explicitly threaten someone, reveal confidential personal information, or encourage people to harass someone else. Moving forward, our policies will go a step further and not only prohibit explicit threats, but also veiled or implied threats. This includes content simulating violence toward an individual or language suggesting physical violence may occur. No individual should be subject to harassment that suggests violence.
Beyond threatening someone, there is also demeaning language that goes too far. To establish consistent criteria for what type of content is not allowed on YouTube, we're building upon the framework we use for our hate speech policy. We will no longer allow content that maliciously insults someone based on protected attributes such as their race, gender expression, or sexual orientation. This applies to everyone, from private individuals, to YouTube creators, to public officials.

Consequences for a pattern of harassing behavior

Something we heard from our creators is that harassment sometimes takes the shape of a pattern of repeated behavior across multiple videos or comments, even if any individual video doesn't cross our policy line. To address this, we're tightening our policies for the YouTube Partner Program (YPP) to get even tougher on those who engage in harassing behavior and to ensure we reward only trusted creators. Channels that repeatedly brush up against our harassment policy will be suspended from YPP, eliminating their ability to make money on YouTube. We may also remove content from channels if they repeatedly harass someone. If this behavior continues, we'll take more severe action including issuing strikes or terminating a channel altogether.

Addressing toxic comments

We know that the comment section is an important place for fans to engage with creators and each other. At the same time, we heard feedback that comments are often where creators and viewers encounter harassment. This behavior not only impacts the person targeted by the harassment, but can also have a chilling effect on the entire conversation.
To combat this, we remove comments that clearly violate our policies – over 16 million in the third quarter of this year specifically due to harassment. The policy updates we've outlined above will also apply to comments, so we expect this number to increase in future quarters.
Beyond the comments we remove, we also empower creators to further shape the conversation on their channels, and we offer a variety of tools to help. When we're not sure a comment violates our policies but it seems potentially inappropriate, we give creators the option to review it before it's posted on their channel. Results among early adopters were promising – channels that enabled the feature saw a 75% reduction in user flags on comments. Earlier this year, we began to turn this setting on by default for most creators.
We've continued to fine-tune our systems to make sure we catch truly toxic comments, not just anything negative or critical, and feedback from creators has been positive. Last week we began turning this feature on by default for YouTube's largest channels with the site's most active comment sections, and we will roll it out to most channels by the end of the year. To be clear, creators can opt out, and if they choose to leave the feature enabled, they still have ultimate control over which held comments appear on their videos. Alternatively, creators can ignore held comments altogether if they prefer.
All of these updates represent another step towards making sure we protect the YouTube community. We expect there will continue to be healthy debates over some of the decisions and we have an appeals process in place if creators believe we've made the wrong call on a video.
As we make these changes, it's vitally important that YouTube remain a place where people can express a broad range of ideas, and we'll continue to protect discussion on matters of public interest and artistic expression. We also believe these discussions can be had in ways that invite participation and never make someone fear for their safety. We're committed to revisiting our policies regularly to ensure that they preserve the magic of YouTube while also living up to the expectations of our community.
Posted by Matt Halprin, Vice President, Global Head of Trust & Safety