3 Ways to Improve Content Moderation Consistency

What makes a well-performing moderation team? Is it speed? Low error rates? Or how well they communicate with each other and the rest of the company? All of these matter, of course, but from a user's point of view the most important thing is probably consistency.

If a user gets their content rejected while ten items of the same type are live on the site the next day, they are not going to be happy.

They are going to question the policies and the competency of the moderation team, diminishing their trust in the site. And as we all know, building and maintaining user trust is the crux of a successful site.

Moderation mistakes can happen due to lack of focus or inexperience, but more often than not they stem from unclear policies.

Here we are going to look at some of these policies and how you can make them less abstract and easier for your agents to enforce consistently.


How Do You Know If a User Is Underage?

Tinder recently banned people under 18 from using their service. But how are they realistically going to enforce that?

While Tinder's sign-up uses the age from your Facebook profile, creating a fake profile is just a couple of clicks away. This means that enforcing an age limit will require moderation effort.

If you are running a dating site or app, you most likely have some sort of age restriction in place. It is also likely that this is one of the policies that makes your moderation team want to pull their hair out.

Spotting whether someone is too young to use your site can be incredibly difficult, especially if you only have a profile picture to go by.

The best thing to do is to set up a procedure for handling profiles suspected of being under the age limit.

It could go like this:

  • Check the profile description: is age mentioned there?
  • Check messages to other members (set a limit here so agents don't spend too much time): is age mentioned in any of them?
  • Run a reverse image search on the profile picture: are there any social media accounts where you might find the correct age?


If nothing is found so far, you can add second-party verification: if two agents believe that the person is underage based on the information available, then the suspicion can be acted upon.
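To make this concrete, here is a minimal Python sketch of how such an escalation flow might be encoded. The age limit, the message cap and the two callbacks are illustrative assumptions standing in for your own tooling, not a prescribed implementation:

from dataclasses import dataclass, field

AGE_LIMIT = 18            # assumption: an 18+ service, as in the Tinder example
MESSAGE_CHECK_LIMIT = 20  # assumption: cap on messages reviewed per profile

@dataclass
class Profile:
    description: str
    messages: list = field(default_factory=list)
    picture_url: str = ""

def review_suspected_underage(profile, age_from_text, age_from_image_search):
    """Walk the checklist in order and decide as soon as evidence is found.

    `age_from_text` and `age_from_image_search` are hypothetical callbacks
    for your own tooling (text parsing, reverse image search). Each returns
    an int age or None if no age could be determined.
    """
    # Step 1: check the profile description.
    age = age_from_text(profile.description)

    # Step 2: check a limited number of messages to other members.
    if age is None:
        for message in profile.messages[:MESSAGE_CHECK_LIMIT]:
            age = age_from_text(message)
            if age is not None:
                break

    # Step 3: reverse image search the profile picture.
    if age is None:
        age = age_from_image_search(profile.picture_url)

    if age is not None:
        return "remove" if age < AGE_LIMIT else "keep"

    # No conclusive evidence: escalate to second-party verification,
    # where two agents must agree before the profile is actioned.
    return "escalate_to_second_agent"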

The steps can look different, but the important thing is to give your moderation agents an easy-to-follow procedure. That way they know exactly what to do when they come across profiles belonging to potentially underage users.


Is This Really Offensive?

Inappropriate and offensive content can be hard to handle consistently.

The reason?

The definition of what is offensive can differ wildly from one person to the next.

There is, however, a way to improve your moderation efforts, and it only takes two steps:

First, create a list of words that are not allowed on your site. This can be everything from swear words to racial slurs or drug names. Once you have this list, make sure it's available to all agents, or better yet, set up an automated rule in a tool like Implio to ensure that content containing these words is immediately removed from your site.
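As a rough illustration of what such a word-list rule does (a generic Python sketch, not Implio's actual rule syntax; the banned words are placeholders):

import re

# Assumption: your policy team maintains this list; these are placeholders.
BANNED_WORDS = {"exampleslur", "exampledrugname"}

# Word boundaries (\b) avoid false positives on substrings, e.g. a ban on
# "ass" should not reject "assistance".
BANNED_PATTERN = re.compile(
    r"\b(" + "|".join(map(re.escape, sorted(BANNED_WORDS))) + r")\b",
    re.IGNORECASE,
)

def should_auto_remove(content: str) -> bool:
    """Return True if the content contains any banned word."""
    return BANNED_PATTERN.search(content) is not None

print(should_auto_remove("A perfectly fine listing"))      # False
print(should_auto_remove("This contains exampleslur..."))  # True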

The second step is to train your agent team to think about intent when they review content. Instead of judging whether something is inappropriate at face value, they should consider the intent of the content creator. This helps catch hidden racism, sexism and other inappropriate content. The second step can also involve second-party verification, so another agent's opinion is always required before action is taken.


The Too Good to Be True Rule

The third area we will cover in this article is non-obvious scams. A lot of scams are painfully obvious once you have spent a day or two in a moderator's chair, but from time to time you will come across some that are more subtle.

To combat these, the best weapon is the "too good to be true" rule: the agent looks at what the listing offers and decides whether it is plausible or just too good to be true. For this rule to really work, however, the agent needs a good feel for what a fair price is for the item or service listed.

This is where price databases come into play. For items where scams are frequent, it's a great idea to spend some time building a price database. Cars and electronics, specifically smartphones, are a good place to start, as these categories are often targeted by scammers and have fairly stable prices.

Once you have a good database, you can even automate a big part of the process. Automation could, for instance, forward all items priced below the reference price in your database to be scrutinized by a skilled moderator, while allowing the rest to go live.
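Here is a minimal sketch of what that routing could look like in Python, assuming a simple reference-price table and an illustrative 30% threshold (both placeholders you would tune for your own site):

# Hypothetical reference prices (e.g. median asking prices you have collected).
REFERENCE_PRICES = {
    "iphone_13": 450.0,
    "samsung_galaxy_s22": 400.0,
}

# Assumption: more than 30% below the reference price counts as suspicious.
SUSPICION_THRESHOLD = 0.70

def route_listing(item_key: str, asking_price: float) -> str:
    """Send suspiciously cheap listings to manual review; let the rest go live."""
    reference = REFERENCE_PRICES.get(item_key)
    if reference is None:
        # No reference price yet: fail safe and let a human decide.
        return "manual_review"
    if asking_price < reference * SUSPICION_THRESHOLD:
        return "manual_review"  # too good to be true?
    return "approve"

print(route_listing("iphone_13", 120.0))  # manual_review
print(route_listing("iphone_13", 430.0))  # approve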

Just remember to keep the database up to date.


Complex Issues Require Simple Processes

We have only scratched the surface of grey-area policies, but most of them can be tackled with one piece of advice: always set up a process for how to handle them, and make that process as simple as possible.

If your moderation team can lean on simple processes for most of what they encounter on a daily basis, you will cut down on the number of errors that occur.

Your customers will be happier, your moderation team will be more efficient and more consistent with better job satisfaction, and your site or app will gain and maintain user trust through reliable policy enforcement.


Find out how you can easily set up Implio to improve your moderation consistency.

Want to learn more?
Join the crowd who receive exclusive content moderation insights.