
3 Ways to Improve Content Moderation Consistency


What makes a well-performing content moderation team? Is it speed? Low error rates? How well they communicate with each other and the rest of the company? All of this matters, but from a user's point of view, the most important thing is probably consistency.

If a user gets their content rejected while 10 pieces of the same type of content are live on the site the next day, they will not be happy.

    They will question the policies and the competency of the moderation team, diminishing their trust in the site. And as we all know, building and maintaining user trust is the crux of a successful site.

    Moderation mistakes can happen due to a lack of focus or inexperience. But more often than not, they are related to unclear policies.

Here we are going to look at some of these policies and how you can make them less abstract and easier for your agents to enforce consistently.

    How do you know if a user is underage?

    Tinder recently banned people under 18 from using their site.  But how are they going to enforce that realistically?

While Tinder's sign-up uses the age from your Facebook profile, creating a fake Facebook profile is just a couple of clicks away. This means that enforcing an age limit will require content moderation efforts.

    If you are running a dating site or app, you most likely have some sort of age restriction in place. This policy also likely makes your moderating team want to pull their hair out.

    Spotting if someone is too young to use your site can be incredibly difficult, especially if you only have a profile picture to go by.

    The best thing to do is to set up a procedure for handling profiles suspected of being under the age limit.

1. Check the profile description. Is an age mentioned there?
2. Check messages to other members (set a limit here so agents don't spend too much time). Is an age mentioned in any of them?
3. Run a reverse image search on the profile picture. Are there any social media accounts where you might find the correct age?

If nothing is found, you can add second-party verification: if two agents independently believe the person is underage based on the available information, the suspicion can be acted on.
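To make the procedure concrete, here is a minimal sketch of how the first two checks and the second-party verification could be wired together. Everything in it (field names, the message limit, the "underage" vote label) is a hypothetical example rather than part of any specific moderation tool, and the reverse image search stays a manual step.

```python
import re

# A minimal sketch of the escalation procedure above. All names are
# hypothetical examples, not the API of any real moderation tool.

MESSAGE_CHECK_LIMIT = 20   # step 2: cap the time spent reading messages
MINIMUM_AGE = 18

AGE_PATTERN = re.compile(r"\b(?:i am|i'm|im)\s+(\d{1,2})\b", re.IGNORECASE)


def stated_age(text):
    """Return an age the user states in free text, or None if none is found."""
    match = AGE_PATTERN.search(text)
    return int(match.group(1)) if match else None


def review_suspected_underage(profile_description, recent_messages, agent_votes):
    """Walk checks 1 and 2, then fall back to two-agent agreement."""
    # Step 1: the profile description.
    age = stated_age(profile_description)

    # Step 2: a limited number of messages to other members.
    if age is None:
        for body in recent_messages[:MESSAGE_CHECK_LIMIT]:
            age = stated_age(body)
            if age is not None:
                break

    # Step 3 (reverse image search) stays manual and is not modelled here.
    if age is not None and age < MINIMUM_AGE:
        return "remove"

    # Nothing conclusive found: act only if at least two agents agree.
    if sum(1 for vote in agent_votes if vote == "underage") >= 2:
        return "remove"
    return "keep"
```

The automated part only surfaces a stated age when there is one; the final judgment still rests with the agents.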

The steps can look different, but ensuring an easy-to-follow procedure for your content moderation agents is important. That way, they know exactly what to do when they come across profiles belonging to potentially underage users.

    If you’re into the technical side, we keep an engineering blog over at Medium, where we deep-dive into topics like this.

    Is this really offensive?

    Inappropriate and offensive content can be hard to handle consistently.

    The reason?

    The definition of offensive can differ wildly from one person to the next.

    There is a way to improve your moderation efforts, and it only takes two steps:

    Step 1

    First, create a list of words that are not allowed on your site. This can be everything from swear words to racist slurs or drug names. Once you have this list, make sure it’s available to all agents or, even better, set up an automated rule in a tool like Implio ensuring that content containing these words is immediately removed from your site.
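If you want to prototype such a rule before moving it into a tool like Implio, a simple whole-word match against your list goes a long way. The sketch below assumes a plain text file with one banned term per line; the file name and the commented-out reject() call are placeholders for your own pipeline, not any real API.

```python
import re

# A minimal, tool-agnostic sketch of a banned-word rule.
# Assumes a plain text file with one banned term per line.


def load_banned_terms(path="banned_terms.txt"):
    with open(path, encoding="utf-8") as handle:
        return [line.strip().lower() for line in handle if line.strip()]


def build_rule(terms):
    """Compile one pattern that matches any banned term as a whole word."""
    joined = "|".join(re.escape(term) for term in terms)
    return re.compile(rf"\b(?:{joined})\b", re.IGNORECASE)


def should_remove(content_text, rule):
    return bool(rule.search(content_text))


# Usage (all names hypothetical):
# rule = build_rule(load_banned_terms())
# if should_remove(listing["description"], rule):
#     reject(listing)  # whatever "remove from site" means in your pipeline
```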

    Step 2

Ensure your agent team is trained to think about intent when reviewing content. Instead of judging whether something is inappropriate at face value, they should consider the content creator's intent. This helps catch hidden racism, sexism, and other inappropriate content that a surface-level check would miss.

The second step can also involve second-party verification, so that another agent's opinion is always required before action is taken.

    The too-good-to-be-true rule

The third area we will cover in this article is non-obvious scams. Many scams are painfully obvious as soon as you have spent a day or two in a content moderator's chair. But from time to time, you will come across some that are more subtle.

To combat these, the best weapon is the "too good to be true" rule. The agent looks at what the listing offers and decides whether it is plausible or just too good to be true. For this rule to work, the agent must have a good feel for a fair price for the item or service listed.

This is where price databases come into play. For items where scams are frequent, it's a great idea to build a price database. Cars and electronics, specifically smartphones, are good places to start, as these categories are often targeted by scammers and have fairly stable prices.

Once you have a good database, you can automate a big part of the process. Automation could forward all items priced below the reference price in your database to be scrutinized by a skilled content moderator while allowing the rest to go live.
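As a rough illustration of that routing step, the sketch below compares a listing's asking price against a reference price and escalates anything suspiciously cheap to a human reviewer. The example categories, reference prices, and the 60% threshold are made up; tune them to your own data.

```python
# A minimal sketch of price-based routing. The reference table and the
# threshold are illustrative placeholders, not real data.

REFERENCE_PRICES = {
    ("smartphones", "iphone 13"): 450,
    ("cars", "vw golf 2018"): 14000,
}

SUSPICION_THRESHOLD = 0.6  # flag listings priced below 60% of the reference


def route_listing(category, model, asking_price):
    """Decide whether a listing can go live or needs a human reviewer."""
    reference = REFERENCE_PRICES.get((category, model))
    if reference is None:
        return "manual_review"   # no reference price: let an agent decide
    if asking_price < reference * SUSPICION_THRESHOLD:
        return "manual_review"   # too good to be true: escalate
    return "publish"             # plausible price: let it go live
```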

    Just remember to keep the database up to date.

    Summary and key takeaways

We have only scratched the surface of grey-area policies, but most of them can be tackled with this advice: always set up a process for how to handle them, and make that process as simple as possible.

    If you ensure that your moderation team can lean on simple processes for most of what they will encounter daily, you will cut down on the number of errors that occur.

    Your customers will be happier, your moderation team will be more efficient and consistent with better work satisfaction, and your website or app will gain and maintain user trust through reliable policy enforcement.

Chances are you should talk to a content moderation expert. Ahem, that's us, by the way.

