
3 Tips on How to Deal With Hate Speech on Dating Sites


    Take a stance against hate speech and white supremacists on your dating site. Apply the right content moderation methods and handle cyberbullying professionally.

    In the wake of the “Unite The Right Rally” in Charlottesville, Virginia, back in 2017, companies such as Bumble and OkCupid stepped up and made it clear that hate speech would not be accepted.

    OkCupid banned Chris Cantwell, who made himself infamous during the Charlottesville incident by, amongst other things, calling for an ethnostate and saying that the death of Heather D. Heyer was justified.

    Bumble followed OkCupid’s lead by partnering with the Anti-Defamation League, seeking help in “identifying all hate symbols” to better protect its users.

    Photo by No Revisions on Unsplash: a couple sitting on a couch, smiling and sharing snacks with each other.

    It pays to do the right thing

    It is heartening to see well-known, well-regarded brands step up and take responsibility for the digital tone. Based on our experience, Bumble and OkCupid will be well rewarded for their zero tolerance of hate speech.

    In a Besedo survey of user behavior on online marketplaces, we saw a clear pattern: 72% of users who encounter inappropriate content on a site will not return. That is a staggering figure, and it indicates that users have very low tolerance for hate speech.

    How can I get rid of hate speech on my app?

    Catching hate speech is never easy. Unfortunately, when it comes to hatred, people get very creative. From masking and obfuscation to slang and lingo, keeping up with everything that should be removed is a challenge, but it is what it takes to ensure a positive, welcoming community and keep cyberbullying off your platform.

    But it is possible.

    With properly applied content moderation measures, you can keep the tone civil and your users safe from cyberbullies.

    To accomplish this, you need to make use of three vital content moderation methods:

    1. Automatic filters
    2. Machine Learning models
    3. Human expertise

    Your first line of defense should be profanity filters that include lists of the words commonly used by hate groups and extremists.
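As the next section notes, users mask offensive words with substitutions and separators, so a filter that only matches exact strings misses a lot. Here is a minimal sketch in Python of an obfuscation-aware wordlist filter; the blocklist and substitution map are placeholder assumptions for illustration, not actual moderation lists:

```python
import re

# Placeholder blocklist -- a real deployment would use curated lists
# maintained and continuously updated by a moderation team.
BLOCKLIST = {"hate", "slur"}

# Undo common character substitutions (leetspeak, symbol masking).
SUBSTITUTIONS = str.maketrans({
    "@": "a", "4": "a", "3": "e", "1": "i", "!": "i",
    "0": "o", "$": "s", "5": "s", "7": "t",
})

def normalize(text: str) -> str:
    """Lowercase, map substituted characters back to letters, and
    collapse separators used for masking (e.g. 'h.a.t.e')."""
    text = text.lower().translate(SUBSTITUTIONS)
    return re.sub(r"[.\-_*\s]+", " ", text)

def contains_blocked_term(text: str) -> bool:
    normalized = normalize(text)
    words = set(normalized.split())
    # Also check the text with all separators removed, to catch
    # spaced-out or dotted variants of a blocked term.
    collapsed = normalized.replace(" ", "")
    return any(term in words or term in collapsed for term in BLOCKLIST)
```

Note the trade-off in the `collapsed` check: it catches masked variants but can also flag innocent substrings, which is one reason filters work best as a first pass rather than a final verdict.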

    Add to that high-quality, tailored machine learning models. Learning from previous moderation decisions and working in tandem with your filters, they will keep your site clean of everything except grey-area cases. With AI moderation powered by machine learning, you can even catch hate speech expressed through pictures, for example when users upload images of swastikas or memes containing offensive text.

    For the grey-area cases, bring in a team of experienced content moderators. The team should include someone dedicated to research, because trends shift and wordings change; moderators must be continuously updated so they know what to look for and recognize when seemingly innocent words uttered online suddenly hide a more nefarious meaning. Context is everything.
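The three methods described above naturally combine into one pipeline: the filter rejects clear hits, the model auto-decides confident cases, and grey-area content is escalated to human moderators. A minimal sketch, where the thresholds and the stand-in scoring functions are illustrative assumptions rather than recommended settings:

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class ModerationPipeline:
    filter_fn: Callable[[str], bool]   # fast wordlist filter
    model_fn: Callable[[str], float]   # ML model: estimated P(hate speech)
    reject_above: float = 0.9          # auto-reject threshold (assumed)
    approve_below: float = 0.1         # auto-approve threshold (assumed)
    human_queue: List[str] = field(default_factory=list)

    def moderate(self, text: str) -> str:
        if self.filter_fn(text):
            return "reject"            # clear filter hit: no model call needed
        score = self.model_fn(text)
        if score >= self.reject_above:
            return "reject"
        if score <= self.approve_below:
            return "approve"
        self.human_queue.append(text)  # grey area: escalate to moderators
        return "escalate"

# Usage with toy stand-ins for the filter and the model:
pipeline = ModerationPipeline(
    filter_fn=lambda t: "slur" in t.lower(),
    model_fn=lambda t: 0.5 if "suspect" in t else 0.0,
)
```

The human queue then feeds the moderation team, whose decisions can in turn become training data for the model, closing the loop the text describes.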

    Remember the dress on Twitter?

    Consider something non-heinous like the Twitter hashtags #thedress, #whiteandgold, and #blackandblue. More than 10 million tweets used them back in 2015. For someone who had not seen the picture of the black and blue dress that appeared gold and white to some, it would be impossible to understand what was being discussed.

    Now replace those Twitter hashtags with references to an offensive meme. You will understand why it is so important for your moderation team to keep on top of trends and be very knowledgeable about Internet culture in general.

    Embrace and include your community

    Once you have put the right moderation processes in place, it is time for the final brick in the wall of your defense. Ask your community to report anything they see that they find offensive, and make sure that you have a team of highly experienced content moderators in place to review and react to these reports. It is vital to ensure a quick response to reports like this; otherwise, users will feel you do not care.
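One way to keep response times down is to prioritize the report queue so that the most-reported and longest-waiting items surface first. A hedged sketch of that idea in Python; the ranking rule here is an assumption for illustration, not a prescribed method:

```python
import time

class ReportQueue:
    """Surface reported content by (report count desc, first report asc):
    heavily reported items come first, ties go to the oldest report."""

    def __init__(self):
        self._counts = {}       # content_id -> number of user reports
        self._first_seen = {}   # content_id -> timestamp of first report

    def report(self, content_id, now=None):
        now = time.time() if now is None else now
        self._counts[content_id] = self._counts.get(content_id, 0) + 1
        self._first_seen.setdefault(content_id, now)

    def next_item(self):
        """Return the highest-priority content id, or None if empty."""
        if not self._counts:
            return None
        return min(
            self._counts,
            key=lambda c: (-self._counts[c], self._first_seen[c]),
        )

    def resolve(self, content_id):
        """Remove an item once a moderator has reviewed and acted on it."""
        self._counts.pop(content_id, None)
        self._first_seen.pop(content_id, None)
```

A production system would also track response-time targets per report, so the team can see when a reply to a reporting user is overdue.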

    Allowing hate speech on your platform and not taking a stance against it will swiftly erode users’ trust in your brand.

    Ideally, you want to do your very best to ensure that users do not encounter anything to report in the first place. Hence the importance of setting up your content moderation right from the start.

    It is time for us all to work on creating a better digital world where people can engage fearlessly without encountering hate speech and discrimination.

    Our all-in-one moderation tool Implio has two ready-to-go filters targeting discrimination and inappropriate language to support that goal. And right now, we are offering a free version so you can start building a safer community today.

    This is Besedo

    Global, full-service leader in content moderation

    We provide automated and manual moderation for online marketplaces, online dating, the sharing economy, gaming, communities, and social media.
