It’s the job of both AI and human content moderators to ensure the material being shared is neither illegal nor inappropriate, always acting in the best interests of end-users.
And if you’re getting the content right for your end-users, they’re going to want to return and hopefully bring others with them. But is content moderation actually a form of censorship?
If every piece of content added to a platform is checked and scrutinized – isn’t ‘moderation’ essentially just ‘policing’? Surely, it’s the enemy of free speech?
Well actually, no. Let’s consider the evidence.
Moderating content vs censoring citizens
Content moderation is not a synonym for censorship. In fact, they’re two different concepts.
Back in 2016, we looked at this in-depth in our Is Moderation Censorship? article – which explains the relationship between content moderation and censorship. It also gives some great advice on empowering end-users so that they don’t feel censored.
But is it really that important in the wider scheme of things?
Well, content moderation continues to make headline news due to the actions taken by high-profile social media platforms, like Twitter and Facebook, against specific users – including, but not limited to, the former US President.
There’s a common misconception that the actions taken by these privately-owned platforms constitute censorship. In the US, some read these actions as a violation of First Amendment free-speech rights. However, the key thing to remember here is that the First Amendment protects citizens against government censorship.
That’s not to say privately-owned platforms have an unrestricted right to censor, but it does mean that they’re not obliged to host material deemed unsuitable for their community and end-users.
The content moderation being enacted by these companies is based on their established community standards and typically involves:
- Blocking harmful or hate-related content
- Labeling content correctly
- Removing potentially damaging disinformation
- Demonetizing pages by removing paid ads and content
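To make the idea concrete, a platform’s community standards can be thought of as rules applied to each piece of content. The sketch below is purely illustrative (it is not Besedo’s tooling or any real platform’s system, and the term lists are placeholders): a minimal rule-based first pass that maps content to the kinds of actions listed above. Real moderation pipelines combine AI models with human reviewers.

```python
# Hypothetical first-pass moderation filter. The term lists and action
# names are illustrative placeholders, not a real platform's rules.

BLOCKED_TERMS = {"hateful-slur"}      # stand-in for a hate-speech blocklist
DISINFO_PHRASES = {"miracle cure"}    # stand-in for known disinformation phrases


def moderate(text: str) -> str:
    """Return the action a community-standards rule set might trigger."""
    lowered = text.lower()
    if any(term in lowered for term in BLOCKED_TERMS):
        return "block"   # harmful or hate-related content
    if any(phrase in lowered for phrase in DISINFO_PHRASES):
        return "label"   # label or remove potentially damaging disinformation
    return "allow"       # content passes to the platform as normal


print(moderate("This miracle cure works!"))  # label
print(moderate("Selling a used bike"))       # allow
```

In practice the “rules” are far richer (machine-learning classifiers, image analysis, user reports), but the principle is the same: each action is applied against the platform’s own published standards, not a government mandate.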
These actions have invariably impacted individual users because that’s the intent – to act on content that breaks the platform’s community standards. In fact, when you think about it, making a community a safe place to communicate actually increases the opportunity for free speech.
“Another way to think about content moderation is to imagine an online platform as a real world community – like a school or church. The question to ask is always: would this way of behaving be acceptable within my community?”
It’s the same with online platforms. Each one has its own community standards. And that’s okay.
Content curators – Still culpable?
Putting it another way, social media platforms are in fact curators of content – as are online marketplaces and classified sites. When you consider the volume of content being created, uploaded, and shared, monitoring it is no easy feat. Take, for example, YouTube. As of May 2019, Statista reported that in excess of 500 hours of video were uploaded to YouTube every minute. That’s almost three weeks of content per minute!
These content sharing platforms actually have a lot in common with art galleries and museums. The items and artworks in these public spaces are not created by the museum owners themselves – they’re curated for the viewing public and given contextual information.
That means the museums and galleries share the content but they’re not liable for it.
However, an important point to consider is that if you’re sharing someone else’s content, there’s an element of responsibility. As a gallery owner, you’ll want to ensure it doesn’t violate your values as an organization and community. And like online platforms, art curators should have the right to take down material deemed to be objectionable. They’re not saying you can’t see this painting; they’re saying that if you want to see this painting, you’ll need to go to a different gallery.
What’s the benefit of content moderation to my business?
To understand the benefits of content moderation, let’s look at the wider context and some of the reasons why online platforms use content moderation to help maintain and generate growth.
Firstly, we need to consider the main reason for employing content moderation. Content moderation exists to protect users from harm. Each website or platform will have its own community of users and its own priorities in terms of community guidelines.
“Where there is an opportunity for the sharing of user-generated content, there is the potential for misuse. To keep returning to a platform or website, users need to feel a sense of trust. They need to feel safe.”
Content moderation can help to build that trust and safety by checking posts and flagging inappropriate content. Our survey of UK and US users showed that even on a good classified listing site, one-third of users still felt some degree of mistrust.
Secondly, ensuring users see the right content at the right time is essential for keeping them on a site. Again, in relation to the content of classified ads, our survey revealed that almost 80% of users would not return to the site where an ad lacking relevant content was posted – nor would they recommend it to others. In effect, this lack of relevant information was the biggest reason for users clicking away from a website. Content moderation can help with this too.
Say you run an online marketplace for second-hand cars: you don’t want it to suddenly be flooded with pictures of cats. In a recent example from the social media site Reddit, the subreddit r/worldpolitics was flooded with inappropriate pictures by users frustrated that it was dominated by posts about American politics and that moderators frequently ignored posts deliberately designed to gain upvotes. Moderating and removing the inappropriate pictures isn’t censorship; it’s directing the conversation back to what the community was originally about.
Thirdly, content moderation can help to mitigate against scams and other illegal content. Our survey also found that 72% of users who saw inappropriate behavior on a site did not return.
A prime example of inappropriate behavior is hate speech. Catching it can be a tricky business due to coded language and imagery. However, our blog about identifying hate speech on dating sites gives three great tips for dealing with it.
Three ways to regulate content
A good way to imagine content moderation is to view it as one of three forms of regulation. This is a model that’s gained a lot of currency recently and it really helps to explain the role of content moderation.
Firstly, let’s start with discretion. In face-to-face interactions, most people will tend to pick up on social cues and contexts, which causes them to self-regulate – for example, not swearing in front of young children. This is personal discretion.
When a user posts or shares content, they’re making a personal choice to do so. Hopefully, for many users discretion will also come into play: will what I’m about to post cause offense or harm to others? Do I want others to feel offended?
Discretion tells you not to do or say certain things in certain contexts. We all get it wrong sometimes, but self-regulation is the first step in content moderation.
Secondly, at the other end of the scale, we have censorship. By definition, censorship is the suppression or prohibition of speech or materials deemed obscene, politically unacceptable, or a threat to security.
Censorship has government-imposed law behind it and carries the message that the censored material is unacceptable in any context because the government and law deem it to be so.
Thirdly, in the middle of both of these, we have content moderation.
“Unlike censorship, content moderation empowers private organizations to establish community guidelines for their sites and demand that users seeking to express their viewpoints are consistent with that particular community’s expectations.”
This might include things like flagging harmful misinformation, eliminating obscenity, removing hate speech, and protecting public safety. Content moderation is discretion at an organizational level – not a personal one.
Content moderation is about saying what you can and can’t do in a particular online social context.
So what can Besedo do to help moderate your content?
- Keep your community on track
- Facilitate the discussion you’ve built your community for (your house, your rules)
- Allow free speech, but not hate speech
- Protect monetization
- Keep the platform within legal frameworks
- Keep a positive, safe, and engaging community
All things considered, content moderation is a safeguard. It upholds the ‘trust contract’ users and site owners enter into. It’s about protecting users and businesses, and maintaining relevance.
The internet’s a big place and there’s room for everyone.
To find out more about what we can do for your online business contact our team today.
If you want to learn more about content moderation, take a look at our handy guide. In the time it takes to read, another 4,000 YouTube videos will have been uploaded!