The world after the “Charlottesville rally” – 3 moderation methods to manage hate speech on dating sites

International news has recently been inundated with stories about the “Unite the Right” rally in Charlottesville, Virginia, where white supremacists marched while chanting hateful slogans. In the face of this, Bumble and OkCupid have stepped up and made it clear that hate speech will not be accepted on their sites.

OkCupid banned Chris Cantwell, who made himself infamous during the Charlottesville incident by, among other things, calling for an ethno-state and saying that the death of Heather D. Heyer was justified.

Bumble followed OkCupid's lead by partnering with the Anti-Defamation League, seeking help in “identifying all hate symbols” in order to better protect its users.


It pays to do the right thing

It is heartening to see well-known, renowned brands step up and take responsibility for the digital tone. Based on our experience, Bumble and OkCupid will be well rewarded for their intolerance of hate speech.

In a survey Besedo ran on user behavior in online marketplaces, we saw a clear pattern: 72% of users who see inappropriate behavior on a site will not return. That is a staggering figure, and it makes clear that users have very little tolerance for hate speech.


How can I get rid of hate speech and supremacists?

Catching hate speech is never easy. Unfortunately, when it comes to hatred, people get very creative. From masking and obfuscation to slang and coded lingo, it is challenging to keep up with everything that should be removed to maintain a positive, welcoming community and keep cyber-bullying off your platform.

But it is possible!

With properly applied content moderation measures, you can keep the tone civil and your users safe from cyber-bullies.

To accomplish this, you need to make use of three vital content moderation methods:


Your first line of defense should be to set up profanity filters, including lists of the common words and phrases used by hate groups and extremists.
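To make this concrete, here is a minimal Python sketch of such a filter. The blocklist and substitution table are placeholders rather than a real hate-speech list; the point is that text is normalized before matching, so simple masking tricks such as character substitution and letter repetition do not slip past.

```python
import re
import unicodedata

# Placeholder blocklist -- in production this would be a maintained,
# regularly updated list of terms used by hate groups and extremists.
# Store the terms in already-normalized form (lowercase, no repeats).
BLOCKED_TERMS = {"hateterm", "slurexample"}

# Common character substitutions used to mask banned words.
SUBSTITUTIONS = str.maketrans({"0": "o", "1": "i", "3": "e", "4": "a",
                               "5": "s", "7": "t", "@": "a", "$": "s"})

def normalize(text: str) -> str:
    """Lowercase, strip accents, undo common character substitutions
    and collapse repeated letters, so 'H4aaTe-T3rm' matches 'hateterm'."""
    text = unicodedata.normalize("NFKD", text)
    text = "".join(c for c in text if not unicodedata.combining(c))
    text = text.lower().translate(SUBSTITUTIONS)
    text = re.sub(r"[^a-z0-9 ]", "", text)   # drop punctuation used as masking
    return re.sub(r"(.)\1+", r"\1", text)    # collapse repeated characters

def contains_blocked_term(text: str) -> bool:
    normalized = normalize(text)
    return any(term in normalized for term in BLOCKED_TERMS)
```

Filters like this are cheap and fast, but on their own they miss new slang and produce false positives, which is why the next two layers matter.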

Add to that high-quality, tailored machine learning models. Learning from your previous moderation decisions and working in tandem with your filters, they will be able to keep your site clean of everything except grey-area cases. With AI moderation powered by machine learning, you can even catch hate speech expressed through pictures, for example when users upload images of swastikas or memes overlaid with inappropriate words.
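To illustrate the text side, here is a minimal sketch using scikit-learn. The training examples below are placeholders; a production model would be trained on thousands of logged moderation decisions, and image recognition (for cases like uploaded swastikas) is a separate, more involved pipeline.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Placeholder training data: past messages paired with the decision
# your moderators made on each (0 = approved, 1 = removed).
messages = [
    "hi, I liked your profile, fancy grabbing a coffee sometime?",
    "people like you don't belong on this site or in this country",
]
decisions = [0, 1]

# Character n-grams are fairly robust to the misspellings and
# obfuscation that word-level features tend to miss.
model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 5)),
    LogisticRegression(),
)
model.fit(messages, decisions)

# predict_proba returns a confidence score rather than a hard verdict,
# which is what makes it possible to separate clear-cut cases from
# the grey areas discussed below.
p_hate = model.predict_proba(["a new incoming message"])[0][1]
```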

For the grey-area cases, you need a team of experienced content moderators. The team should include someone dedicated to research, as trends shift and wordings change, and it must be kept continuously up to date so moderators know what to look for and notice when seemingly innocent words suddenly hide a more nefarious meaning because of events in the real world. Context is everything.

Consider something harmless like the Twitter hashtags #thedress, #whiteandgold and #blackandblue. More than 10 million tweets used them back in 2015, but for anyone who had not seen the picture of the black-and-blue dress that appeared white and gold to some, it would have been impossible to understand what was being discussed.

Now replace those Twitter hashtags with a reference to an offensive meme, and you will understand why it is so important for your moderation team to stay on top of trends and be knowledgeable about Internet culture in general.
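In practice, the handover between the automated layers and this human team can be as simple as two confidence thresholds: auto-action what the model is sure about and queue the rest for manual review. A sketch, using the probability score from the classifier above; the threshold values are illustrative and would be tuned against your own moderation data:

```python
def route(p_hate: float, approve_below: float = 0.05,
          remove_above: float = 0.95) -> str:
    """Decide what happens to a message, given the model's estimated
    probability that it is hate speech. Thresholds are illustrative."""
    if p_hate >= remove_above:
        return "remove"          # model is confident: take it down
    if p_hate <= approve_below:
        return "approve"         # model is confident: let it through
    return "manual_review"       # grey area: send to the human team
```

Tightening the thresholds routes more traffic to moderators and less to automation; where to set them is a trade-off between review cost and the risk of wrong automated decisions.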


Embrace and include your community

Once you have the right moderation processes in place, it is time for the final brick in your defensive wall. Ask your community to report anything they find offensive, and make sure you have a team of highly experienced moderators ready to review and react to those reports. It is vital to respond quickly; otherwise users will feel you do not care.
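How the report queue is ordered has a direct effect on that response time. One simple approach is a priority queue in which severe report reasons jump ahead of mild ones, with older reports winning ties; the Report fields and severity weights below are illustrative, not a prescribed schema.

```python
import heapq
import time
from dataclasses import dataclass, field

@dataclass(order=True)
class Report:
    priority: int               # lower value = reviewed first
    submitted_at: float         # tie-break: older reports come first
    content_id: str = field(compare=False)
    reason: str = field(compare=False)

# Illustrative severity weights per report reason.
SEVERITY = {"hate_speech": 0, "harassment": 1, "spam": 2, "other": 3}

queue: list[Report] = []

def file_report(content_id: str, reason: str) -> None:
    """Called when a user flags a piece of content."""
    heapq.heappush(queue, Report(SEVERITY.get(reason, 3), time.time(),
                                 content_id, reason))

def next_report() -> Report | None:
    """Hand the most urgent outstanding report to a moderator."""
    return heapq.heappop(queue) if queue else None
```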

Allowing hate speech on your platform and failing to take a stance against it will swiftly erode the trust users have in your brand.

Ideally, of course, you want users never to encounter anything worth reporting in the first place; hence the importance of setting up your content moderation properly from the start.

It is time for us all to work on creating a better digital world, one where people can engage fearlessly, without encountering hate speech and discrimination. To support that goal, our all-in-one moderation tool Implio has two ready-to-go filters targeting discrimination and inappropriate language. And right now, we are offering a free version so you can start building a safer community today.

Want to learn more?
Join the crowd that receives exclusive content moderation insights.