    Have you ever been overheard and misunderstood by a stranger? It’s not, thankfully, an everyday occurrence, but it is something that most of us have probably experienced at least once. Imagine you’re talking to a friend somewhere public, perhaps in a café or on public transport. Maybe the conversation turns to discussing a film you both like or joking about a recent political event. Suddenly, you realize that, a few meters away, someone has caught a few words midway through your chat and doesn’t look at all happy about how you’re speaking.

    [Illustration: a woman and a robot communicating]

    Words don’t have to mean anything offensive to cause concern when taken out of context – the context of a fictional story, for instance, or of an inside joke between friends. Language is always social, and viewing it from a different social vantage point can lead to serious errors of interpretation.

    The social question in content moderation

    Being mistakenly overheard is a very small version of something that happens all the time on a much larger scale. Particularly in an age where cultural ideas spread virally, it’s not unusual for people to struggle to keep up with what the conversations around them mean. A new hit crime series on Netflix, for example, may leave someone who hasn’t watched it confused, at least for a day or two, about why everyone around them keeps describing gruesome murders.

    If this kind of event can temporarily wrong-foot human beings, it is a far more persistent problem for content moderation systems. After all, while an office worker can ask their colleagues what is going on, content moderation generally can’t ask users directly what they mean, even when human moderators are involved. Automated systems, meanwhile, can maintain full awareness of everything happening on-platform – but often have little capacity to interpret it in terms of the wider world.

    In one way, this situation is unsurprising: content moderation systems have evolved to meet specific business needs, such as protecting brand reputation, maintaining revenue, and keeping users safe, and the state of the art is extremely effective at meeting those needs.

    Tools ranging from filter lists to machine learning models are powerful when the aim is to draw clear boundaries around acceptable speech.

    They are far less well suited, however, to ambiguous, context-dependent situations – the ones that cause the greatest friction with users, when seemingly normal interactions are penalized without any apparent explanation. No matter how well trained and refined a business’s content moderation is, a system focused only on the platform will always have the potential to be surprised by the world changing around it. The simple sketch below shows why.
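    To make that limitation concrete, here is a minimal, hypothetical sketch of a filter-list check in Python. The BLOCKLIST set and flag_message function are illustrative names rather than a description of any real moderation product; the point is only that a rule of this kind draws a hard boundary yet has no notion of context.

        import re

        # Hypothetical blocklist -- real systems use far larger, curated lists.
        BLOCKLIST = {"murder", "kill"}

        def flag_message(text: str) -> bool:
            """Return True if the message contains any blocklisted word."""
            words = set(re.findall(r"[a-z']+", text.lower()))
            return bool(words & BLOCKLIST)

        # The rule draws a hard boundary, but it cannot see context:
        print(flag_message("I am going to kill you"))  # True -- a genuine threat
        print(flag_message("The murder in episode 3 was so well written"))  # True -- harmless chat about a crime series

    Both messages trip the same rule, even though only one of them is a problem; telling them apart requires exactly the kind of off-platform, cultural context discussed above.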

    The cultural stakes of moderation

    As user-generated content becomes increasingly central to people’s daily lives, content moderation takes on ever more responsibility for encouraging healthy behavior, and the potential for disagreement between businesses and their user bases grows more severe. In extreme cases, this can mean international media attention on a system trying to deal with content it was never specifically designed for.

    To put it another way, while protecting (for example) brand reputation may once have meant strictly enforcing rules for user interaction, expectations around user content are evolving, and the demands on businesses that rely on it are becoming more nuanced.

    Automated moderation that doesn’t keep pace with cultural shifts is becoming less palatable.

    One consequence is that we still rely on humans to help decide which side of the line to err on in ambiguous situations. While they can’t ask users what they mean either, human moderators outperform their machine counterparts at making subtle distinctions. That still leaves the problems of building teams large enough to cope with the volume of content and of ensuring that manual review is directed at the right content in the first place.

    In the longer term, the expertise of the content moderation community will have to be seriously applied to thinking about how to help create healthier, more human conversations – not just to limiting negative ones. We think user-generated content presents a far greater opportunity for sustainable growth than it does a risk of brand damage; as we consult with our customers (and speak to you through blogs like this), we’re always keen to hear more about how the next, more subtle generation of content moderation tools might best be designed.
