It’s been a few weeks since we posted about our commitment to fighting hate speech, and in that time we’ve had many discussions internally about what it means to be active in this commitment. I’ve been working with a committee of Disqussers to consider our policies and actions: how we develop products and how we do business in accordance with our company values.

It’s exciting to take steps toward making our part of the internet a more inclusive space -- but we know this will take time. We think about this in a couple of ways: improving the Disqus product, and clarifying and enforcing the Disqus policies. We have lots of ideas, and we’re starting to test some of them out. So, what are we working on?

Enforcing Policies with Publishers

The Disqus Platform supports a diversity of websites and discussions; with such a large network of publishers and commenters, having a policy against hateful, toxic content is critical. While we do periodically remove toxic communities that consistently violate our Terms and Policies, we know that this alone is not a solution to toxicity. Oftentimes these communities simply shift to another platform. Ultimately, this does not result in higher quality discussions, and it does not stop the hate. In order to have a real, lasting impact, we need to make improvements to our product. That’s why, whenever possible, we work with publishers to encourage discourse (even unpopular or controversial discourse!) while helping to eliminate toxic language, harassment, and hate.

While we prefer to improve the moderation and commenting experience, occasionally there is a publisher whose content (in addition to the comments) is in direct conflict with our Terms of Service, or who has opted not to moderate comments that violate our terms. Over the past several months, many passionate folks have reached out to us about severe violations of our Terms of Service. With the help of our community, we’ve been able to review and enforce our policy on dozens of sites.

We appreciate all of the help and feedback we’ve received, and we are excited to continue to partner productively with users and organizations that are passionate about fighting toxic content and hate speech. To improve our efforts, we’ve built a Terms of Service Violations Submissions form. This form is a way for users to tell us directly when they’ve found a community they believe is in violation of our terms. In addition to reporting individual users (which helps moderators know who in their community may be exhibiting toxic behavior), you can now report directly to us when you think there’s a publisher or site we should take a look at. When we are made aware of potential violations, we review them internally and make a decision about whether or not to allow the site to remain on our platform.

If a publication is dedicated to toxic or hateful discourse, no software or product solution will combat that hate speech or toxicity. In those cases where a site is determined to be in purposeful violation of our Terms of Service, we will assert our stance and enforce our policy.

New and Upcoming Features

We know that managing and moderating conversations can be a challenge. Our goal is to encourage quality discussions and alleviate the burden of moderation for publishers. Part of that means deterring toxic commenting, and handling it better when it does happen. This isn’t a small-scale matter; we know that to have a meaningful impact across our network, we need to build solutions into the product. With that in mind, we’re committed to building tools to make the moderation experience easier and better for publishers (and commenters, too).

Here are some things that we’re working on:

  • More powerful moderation features. We’re working on two features right now, Shadow banning and Timeouts, that will give publishers more options for managing their communities. Shadow banning lets moderators ban users discreetly by making a troublesome user’s comments visible only to that user (a rough sketch of this visibility rule follows this list). Timeouts give moderators the ability to warn and temporarily ban a user who is exhibiting toxic behavior.
  • Toxic content detection through machine learning. We are working on a feature to help publishers identify hate speech and other toxic content, and then handle it more effectively.
  • Commenting policy recommendations. While we already provide suggestions for how to create community guidelines, we’ve realized that we can be more proactive and more helpful to our publishers. We’re working on making publishers’ custom commenting and community guidelines more visible to their readers and commenters.
  • Advertiser tools. Just like publishers do not want toxic content on their sites, we know that advertisers do not want their content to display next to toxic comments. Leveraging our moderation technology, we will provide more protection for advertisers, giving them more control over where they display their content.
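
To make the shadow banning idea a little more concrete, here is a minimal, hypothetical sketch of the visibility rule described above: a shadow-banned user still sees their own comments, while everyone else does not. This is purely illustrative -- the names (Comment, visible_comments, shadow_banned) and the Python implementation are assumptions, not the actual Disqus feature.

    from dataclasses import dataclass

    # Hypothetical sketch of shadow banning, not Disqus's implementation:
    # a shadow-banned author's comments are shown only to that author,
    # so the ban is not obvious to the banned user.

    @dataclass
    class Comment:
        author: str
        text: str

    def visible_comments(comments, viewer, shadow_banned):
        """Return the comments that a given viewer should see."""
        return [
            c for c in comments
            if c.author not in shadow_banned or c.author == viewer
        ]

    thread = [Comment("alice", "Great post!"), Comment("sam", "a toxic comment")]
    banned = {"sam"}

    print(visible_comments(thread, viewer="alice", shadow_banned=banned))  # alice's comment only
    print(visible_comments(thread, viewer="sam", shadow_banned=banned))    # both comments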

We recognize that we have a unique opportunity and responsibility to make a difference here, and doing it right is important to us. We’re just getting started. Thanks to our passionate community for your continual input and advice. I look forward to keeping you updated on our progress.
