
Content Moderation Is Not a Tool for Censoring Free Speech

    From dating sites and online marketplaces to social media and video games, content moderation has a huge remit of responsibility.

    It’s the job of both AI and human content moderators to ensure the material being shared is neither illegal nor inappropriate, always acting in the best interests of end-users.

    If you’re getting the content right for your end-users, they will want to return and hopefully bring others with them. But content moderation is not a form of censorship.

    Person in a mask shushing the viewer. Photo by engin akyurt on Unsplash

    Even when every piece of content added to a platform is checked and scrutinized, content moderation is not censorship, nor is it policing.

    Come along, and we’ll show you the evidence.

    Moderating content vs. censoring citizens

    Content moderation is not a synonym for censorship. They’re two different concepts.

    In 2016, we looked at this in depth in our article Is Moderation Censorship?, which explains the relationship between content moderation and censorship. It also gives great advice on empowering end-users so they don’t feel censored.

    But is it really that important in the wider scheme of things?

    Content moderation continues to make headline news due to the actions taken by high-profile social media platforms, like Twitter and Facebook, against specific users – including, but not limited to, the former US President.

    There’s a common misconception that the actions taken by these privately-owned platforms constitute censorship. In the US, such actions are often framed as a violation of First Amendment rights concerning free speech.

    The key point is that the First Amendment protects citizens against government censorship, not against the decisions of private companies.

    That’s not to say privately-owned platforms have an unlimited license to remove whatever they like. Still, it does mean they’re not obliged to host material deemed unsuitable for their community and end-users.

    The content moderation enacted by these companies is based on their established community standards and typically involves the following (see the sketch after this list):

    • Blocking harmful or hate-related content
    • Fact-checking
    • Labeling content correctly
    • Removing potentially damaging disinformation
    • Demonetizing pages by removing paid ads and content
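
    As a rough illustration, and not a description of any real platform’s system, the mapping from flags raised during review to the actions above can be thought of as a simple lookup against the platform’s community standards. Every flag and rule name in this sketch is hypothetical:

```python
from enum import Enum, auto

class Action(Enum):
    BLOCK = auto()        # block harmful or hate-related content
    FACT_CHECK = auto()   # route a claim to a fact-checking queue
    LABEL = auto()        # attach a contextual label
    REMOVE = auto()       # take down damaging disinformation
    DEMONETIZE = auto()   # strip paid ads and promoted content

# Hypothetical community standards: each flag a reviewer (human or
# automated) can raise maps to the action the platform takes.
COMMUNITY_STANDARDS = {
    "hate_speech": Action.BLOCK,
    "disputed_claim": Action.FACT_CHECK,
    "sensitive_topic": Action.LABEL,
    "disinformation": Action.REMOVE,
    "repeat_violation": Action.DEMONETIZE,
}

def decide(flags):
    """Return the actions triggered by the flags on a piece of content."""
    return [COMMUNITY_STANDARDS[f] for f in flags if f in COMMUNITY_STANDARDS]

print(decide(["disputed_claim", "sensitive_topic"]))
# [<Action.FACT_CHECK: 2>, <Action.LABEL: 3>]
```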

    These actions have invariably impacted individual users because that’s the intent – to mitigate content that breaks the platform’s community standards. In fact, when you think about it, making a community a safe place to communicate actually increases the opportunity for free speech.

    Another way to think about content moderation is to imagine an online platform as a real-world community – like a school or church. The question is always: would this behavior be acceptable within my community?

    It’s the same with online platforms. Each one has its own community standards. And that’s okay.

    Content curators – still culpable?

    Putting it another way, social media platforms are, in fact, curators of content, as are online marketplaces and classified websites. When you consider the volume of content being created, uploaded, and shared, monitoring it is no easy feat. Take YouTube, for example. As of May 2019, Statista reported that more than 500 hours of video were uploaded to YouTube every minute. That’s just under three weeks of content per minute.
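
    The back-of-the-envelope arithmetic behind that comparison is easy to check:

```python
# Quick sanity check on the YouTube figure (Statista, May 2019).
hours_uploaded_per_minute = 500
days_of_content_per_minute = hours_uploaded_per_minute / 24
print(days_of_content_per_minute)  # ~20.8 days, i.e. just under three weeks (21 days)
```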

    These content-sharing platforms actually have a lot in common with art galleries and museums. The museum owners themselves do not create the items and artworks in these public spaces; the works are curated for the viewing public and given contextual information.

    That means the museums and galleries share the content but are not liable for it.

    An important point to consider is that if you’re sharing someone else’s content, you take on an element of responsibility.

    As a gallery owner, you’ll want to ensure the work doesn’t violate your values as an organization and community. And like online platforms, art curators should have the right to take down material deemed to be objectionable. They’re not saying you can’t see this painting; they’re saying, if you want to see this painting, you’ll need to go to a different gallery.

    The benefits of content moderation for your business

    To understand the benefits of content moderation, let’s look at the wider context and some of the reasons why online platforms use content moderation to help maintain and generate growth.

    First, we need to consider the main reason for employing content moderation: protecting users from harm. Each website or platform has its own community of users and its own priorities in terms of community guidelines.

    Content moderation helps build trust and safety by checking posts and flagging inappropriate content. Our 2021 survey of UK and US users showed that one-third still felt some degree of mistrust, even on a good classified listing site.

    Second, ensuring users see the right content at the right time is essential for keeping them on a site. Staying with classified ads, our survey revealed that almost 80% of users would not return to a site where an ad lacking relevant content was posted, nor would they recommend it to others. Missing or irrelevant information was the biggest reason users clicked away from a website. Content moderation can help with this, too.

    Say you run an online marketplace for second-hand cars: you don’t want it suddenly flooded with pictures of cats. In a recent example from the social media site Reddit, the subreddit r/worldpolitics was flooded with inappropriate pictures because the community was tired of it being dominated by posts about American politics and frustrated that moderators frequently ignored posts deliberately designed to farm upvotes.

    Moderating and removing inappropriate pictures isn’t censorship. It directs the conversation back to what the community was originally about.

    Third, content moderation can help to mitigate scams and other illegal content. Our survey also found that 72% of users who saw inappropriate behavior on a site did not return.

    A prime example of inappropriate behavior is hate speech. Catching it can be a tricky business due to coded language and imagery. We have a few blog posts about identifying hate speech on dating apps, and there are three takeaways (sketched as a simple pipeline after this list):

    1. Automatic filters
    2. Machine learning models
    3. Human expertise
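
    These layers typically work in sequence: cheap automatic filters catch the obvious cases, a machine learning model scores what the filters miss, and anything the model is unsure about goes to a human moderator. Here is a minimal sketch of that idea, with a stubbed-in classifier and made-up thresholds; it is not a description of Besedo’s actual pipeline:

```python
import re

# Layer 1: automatic filter. A deliberately tiny, illustrative blocklist.
EXPLICIT_TERMS = re.compile(r"\b(example_slur_1|example_slur_2)\b", re.IGNORECASE)

def hate_speech_score(text: str) -> float:
    """Stand-in for a trained machine learning classifier.

    A real system would return a probability from a model trained on
    labelled examples; here it is a stub so the sketch stays runnable.
    """
    return 0.0

def moderate(text: str) -> str:
    # Layer 1: automatic filters catch explicit, unambiguous violations.
    if EXPLICIT_TERMS.search(text):
        return "reject"
    # Layer 2: a machine learning model scores coded or subtle language.
    score = hate_speech_score(text)
    if score >= 0.9:   # confident enough to act automatically
        return "reject"
    if score >= 0.5:   # Layer 3: human expertise decides ambiguous cases
        return "send_to_human_review"
    return "accept"

print(moderate("hello world"))  # accept
```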

    Three ways to regulate content

    A good way to imagine content moderation is to view it as one of three forms of regulation. This model has gained a lot of currency recently because it helps explain the role content moderation plays.

    Firstly, let’s start with discretion. In face-to-face interactions, most people will tend to pick up on social cues and social contexts, which causes them to self-regulate. For example, not swearing in front of young children. This is personal discretion.

    When a user posts or shares content, they’re making a personal choice to do so. Hopefully, discretion will also come into play for many users: will what I’m about to post cause offense or harm to others? Do I want others to feel offended?

    Discretion tells you not to do or say certain things in certain contexts. Sometimes, we all get it wrong, but self-regulation is the first step in content moderation.

    Secondly, at the other end of the scale, we have censorship. By definition, censorship is the suppression or prohibition of speech or materials deemed obscene, politically unacceptable, or a threat to security.

    Censorship has government-imposed law behind it and conveys that censored material is unacceptable in any context because the government and law deem it to be so.

    Thirdly, we have content moderation, which sits between these two.

    This might include flagging harmful misinformation, eliminating obscenity, removing hate speech, and protecting public safety. Content moderation is discretion at an organizational level – not a personal one.

    Content moderation is about saying what you can and can’t do in a particular online social context.

    Summary and key takeaways

    Okay, so what can Besedo do to help moderate your content? We can help you:

    • Keep your community on track
    • Facilitate the discussion you’ve built your community for (your house, your rules)
    • Allow free speech, but not hate speech
    • Protect monetization
    • Keep the platform within legal frameworks
    • Keep a positive, safe, and engaging community

    All things considered, content moderation is a safeguard. It upholds the trust contract users and website owners enter into. It’s about protecting users and businesses and maintaining relevance.

    The internet’s a big place, and there’s room for everyone.

    Contact our team today to learn more about what we can do for your online business.
