In parts one and two of this blog post series about the evolution of language, we talked about how moderating user-generated content (UGC) echoes the long history of how people communicate – and how the rapid evolution of language online is now making that job harder.
In short: there’s nothing new about setting rules for acceptable speech, but we have to get faster about how we do it.
However, it’s also worth considering how online communication doesn’t just build on how offline communication works but offers something new and different. The fact that email was created to be a digital equivalent of postal mail, for example, is right there in the name – but today, email offers much more than the post ever could, from uniquely personalized content to embedded video.
Across the internet, there’s a wealth of communication options, ranging from adding simple emoji to broadcasting yourself live to millions of viewers, which don’t have a direct offline equivalent. In a way, of course, pointing this out is stating the obvious; those otherwise impossible options are, to a large extent, precisely why the internet is so powerful and popular.
And yet, from a business perspective, it would be easy to look at UGC through the lens of cybersecurity. Cyber attackers are locked in an arms race with security professionals, each trying to identify weaknesses first and develop more robust tactics than the other. By the same logic, every new communication option is a potential way around the policies a platform wants to enforce – whether that means stopping people from taking business off-platform or monitoring for much graver issues like abuse and hate speech.
Giving shoppers the power to post videos of products they purchase, for example, has clear benefits in building credibility. But users can also abuse that feature to publish irrelevant or even maliciously untrue content. Likewise, building reaction GIFs into an online dating messaging platform might enrich conversations – but it can also become an avenue for guerilla marketing.
The sheer variety at play here marks a real difference from the offline reality of (mostly) speech and writing.
While these concerns are well-founded, thinking about UGC purely in these terms risks missing how vital it is as an engine of growth for online businesses: the perception of danger can blind a platform to the benefits.
The most successful moderation approaches are about enabling interactions as much as blocking them; not an arms race, but teamwork.
New moderation for new communication
It’s becoming more widely understood that encouraging positive behaviors on platforms – through guidance, clear examples, and rewards – is ultimately more effective than punishing negative behavior. Research supports this, and it’s a method that large online platforms are increasingly turning to.
Here we might be looking at something fundamentally different from the long offline history of moderating speech, which has typically relied on limiting certain expressions and interactions.
When businesses make themselves open to users and customers communicating in richer ways, we think that the best approaches will focus on how moderation can empower users in ways that enable growth. An entirely conservative approach will only stifle the potential of audiences, customers, and users.
These new worlds of content will not be effectively moderated using tools and methods designed for purely text-based interactions. As users’ interactions become more complex, we will need human input to oversee and understand how those interactions work.