Video games are a great escape from reality, allowing people to find solace in an imaginary world. But what about the nasty side of gaming, when bullying or grooming occurs?

Gaming has changed drastically over the last decade. For example, back in the mid-2000s, a player could just start a new game and go off on their own for as long as they wanted. If they wanted to talk to other people online, there were IRC channels or message boards to find others with similar interests. But today? Today, in-game messaging is the norm, and we have instant messaging apps like Discord and voice chat apps such as TeamSpeak, Mumble, and Element.

Why Is Content Moderation Important?

Content moderation is important for many reasons. It helps keep your gaming platform safe by ensuring that only appropriate content is shared. Additionally, it can enhance the user experience by making sure that users are only seeing content that is relevant and interesting to them. Ultimately, content moderation helps create a positive and enjoyable experience for all gaming platform users.

All types of content should be moderated. This includes but is not limited to video, text, and images. Content moderation includes identifying inappropriate content, such as nudity, hate speech, and illegal activity.

It all boils down to giving everyone the best possible user experience.


What are the advantages of a solid moderation program? A solid moderation program helps protect your gaming platform from harmful trends that can disrupt your customer base and diminish customer trust in your company and product, and it shields you from potential lawsuits by both users and third parties.

Bullying and Grooming in Online Gaming

There are many ways that bullying and grooming can happen on online gaming platforms. These can include:

  • Players sending abusive or threatening messages to other players.
  • Players creating offensive avatars or in-game content.
  • Players engaging in hate speech or harassment in chat rooms or forums.
  • Players manipulating game mechanics to harass other players.

Grooming in games starts with a simple question from another player, such as “are you home alone?”

Content moderation is the best way to combat these issues and keep your gaming platform safe. By moderating user-generated content, you can remove offensive material, stop trolls and bullies from ruining the experience for everyone, and create a safe and welcoming environment for all players.

What Are the Effects of Bullying/Grooming in Online Gaming?

Many established gaming platforms have some form of content moderation to protect users from inappropriate or harmful content. Left unattended, however, your game risks becoming a breeding ground for bullying and grooming. Suffice it to say, this harms victims’ self-esteem and mental health, most likely causing them more harm than missing out on gaming would.

Victims of bullying and grooming will, and rightfully so, speak to others about their experiences. Once word spreads about these user experiences, well, you can stick a fork in your game’s reputation. Gamers will speak out in reviews, subreddits, and social media, leaving you facing an incredible uphill battle to save your platform’s credibility and reputation.

Keep your users safe and engaged with the game.

Imagine starting every morning by reviewing a few hundred NSFW images. After your first cup of coffee, you have some grooming to review and possible CSAM activity to talk to the police about.

Sounds awful.

Set yourself up to focus on game development rather than spending endless hours reviewing inappropriate behavior.

Content Moderation to Prevent Abuse in Gaming

In-app messaging and chats need supervision and moderation to be safe. That means that if you’re running a gaming platform, you need to have someone monitoring the conversations that take place within it. This will help keep things safe for all users, and it will also enhance the user experience by ensuring that people can communicate without being harassed or subjected to inappropriate content.

Not to toot our own horn, but companies like ourselves review and approve content before it is made public. While this sounds like it might be a slow process, it actually happens with just milliseconds of latency, meaning people are unlikely to even notice.

Suppose a user abuses an in-game messaging app. In that case, technology like Besedo’s will ensure others don’t get a chance to see the offensive messages or images being sent. Real-time moderation for profanity or nudity will ensure that we can keep chats clean and civilized.

In this way, Besedo also works to keep the app user-friendly and respectful by enforcing the app’s terms of use. If you’re into podcasts, we highly recommend episode 120 of Darknet Diaries, called ‘Voulnet.’


Content moderation is a vital step in keeping your gaming platform safe and user-friendly. By taking the time to review and approve all content before it goes live, you can avoid potential problems down the road.

Not only will this save you time and money, but it will also improve the overall quality of your user experience. So if you’re looking for a way to keep your gaming platform safe and improve your user experience, content moderation is the answer.

Get exclusive insights and guidance on how to tackle your content moderation challenges with Implio. Moderation expert and Besedo customer success manager Alessio Coco will take you through our AI capabilities, automated filters, and manual interface in the tool.

Download Webinar

Fill out your email below to get your free copy of the webinar.


Moderation is a must for quality content and user safety, but it can feel like a resource drain.

22% of classifieds sites spend more than 50 hours a week developing and building their own content moderation tools.

Imagine if that time was spent on building new features instead.

Implio was built to solve content moderation challenges for start-ups and at scale. It comes ready with AI, automated filters, an easy-to-use manual interface, and insights to further improve your moderation procedures.

We want to give you a tour of Implio to show how it can free up developer resources while optimizing your content moderation process.

In this webinar, you will learn:

  • How AI can optimize your moderation without decreasing user experience.
  • How you can automate up to 80% of your content moderation using customizable filters.
  • How AI and filter automation can support your manual efforts for even better accuracy.
  • How to scale seamlessly with Implio.

Written by

Alessio Coco

Customer Success Manager at Besedo

Alessio loves to assist customers and is dedicated to helping them reach their goals by providing solutions to their day-to-day problems. He possesses extensive experience in e-commerce, classified marketplaces, and hi-tech. Alessio is passionate about technology, eager to learn, and continuously looking to overcome the new and exciting challenges he faces.

Alessio has a strong technical and managerial background, and a strong drive to meet targets and objectives. His passion for problem-solving combined with an open mindset and high flexibility is his recipe for sustained success.

Content moderation is the practice of screening and monitoring user-generated content against platform-specific rules and guidelines to determine whether or not the content should be published on the online platform.


In other words, when a user submits content to a website, that content will undergo a screening process (what is known as the moderation process) to ensure that the content upholds the website’s regulations and is not illegal, inappropriate, harassing, etc.

Content moderation as a practice is common across online platforms that heavily rely on user-generated content, such as social media platforms, online marketplaces, sharing economy services, dating sites, communities, and forums.

There are many different methods by which companies can decide how content should be moderated. These are often referred to as:

  • Pre-moderation
  • Post-moderation
  • Reactive moderation
  • Distributed moderation
  • Automated moderation

These methods require a combination of automation and a human touch to secure the best result.

Let’s look at how these methods are put into practice by looking at human and automated content moderation. If you’re curious to learn more about the other types of content moderation, check out this article about the 5 moderation methods.

What is human content moderation?

Human moderation, or manual moderation, is when humans manually monitor and screen user-generated content submitted to an online platform. The human moderator follows platform-specific rules and guidelines to protect online users by keeping unwanted, illegal, inappropriate content, scams, and harassment away from the website.

What is automated content moderation?

Automated content moderation means that any user-generated content submitted to an online platform will automatically be accepted, refused, or sent to human moderation based on the platform’s specific rules and guidelines. Automated moderation is the ideal solution for online platforms that want to ensure quality user-generated content goes live instantly and that users are safe when interacting on their websites.
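To make the accept/refuse/escalate flow concrete, here is a minimal sketch of such a routing step in Python. Everything here is an illustrative assumption rather than Besedo’s actual implementation: the `score_content` heuristic, the blocklist terms, and the threshold values are all invented for the example.

```python
# Illustrative sketch of an automated moderation routing step.
# The scoring heuristic, blocklist, and thresholds are invented
# placeholders, not a real platform's model or policy values.

BLOCKLIST = {"scam", "fraud"}  # hypothetical banned terms

def score_content(text: str) -> float:
    """Toy risk score: the fraction of words that hit the blocklist."""
    words = text.lower().split()
    if not words:
        return 0.0
    hits = sum(1 for word in words if word in BLOCKLIST)
    return hits / len(words)

def route(text: str, refuse_at: float = 0.5, escalate_at: float = 0.1) -> str:
    """Accept, refuse, or escalate a submission based on its risk score."""
    score = score_content(text)
    if score >= refuse_at:
        return "refuse"
    if score >= escalate_at:
        return "escalate"  # ambiguous content goes to a human queue
    return "accept"
```

In a production setup, the score would come from a trained model or a filter engine, and the thresholds would be tuned per platform so that only genuinely ambiguous content reaches the human queue.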

According to a study by Microsoft, humans only stay attentive for 8 seconds on average. Therefore, online platforms cannot afford slow time-to-site for user-generated content, or they risk losing their users. At the same time, users who encounter poor-quality content, spam, scams, inappropriate content, and the like are likely to leave the site instantly.

So, where does that leave us?

For online platforms not to jeopardize quality or time-to-site, they need to consider automated content moderation.

We often refer to machine learning AI (AI moderation) and automated filters when talking about automated content moderation. But what are they really?

What is AI content moderation?

AI content moderation, or tailored AI moderation, is a machine learning model built from online platform-specific data to efficiently and accurately catch unwanted user-generated content. An AI moderation solution will automatically make highly accurate moderation decisions – refusing, approving, or escalating content.

One example that showcases the power of AI moderation is the Swiss online marketplace, Anibis, which successfully automated 94% of its moderation whilst achieving 99.8% accuracy.

AI moderation will be great for routine decisions if you have a high-quality dataset on which models can be built. It excels at dealing with cases that almost always look identical or similar. This usually includes the vast majority of items that are posted to online marketplaces, and as such most platforms can benefit from using AI moderation.

It should also be mentioned that AI moderation can be built on generic data. These models can be effective but are not as accurate as a tailored AI solution, as they don’t consider site-specific rules and circumstances.

What is automated filter content moderation?

Automated filter content moderation is a set of rules that highlight and catch unwanted content automatically. The filters (or rules) efficiently catch content that can’t be misinterpreted, such as obvious scams.

Filters are also great for covering sudden rule changes where the AI has not gotten up to speed yet (training takes some time and a quality dataset). This was well illustrated when the coronavirus pandemic suddenly made masks and toilet paper problematic. This makes filters a solid complementary automation tool for your moderation setup.

Automated filters can easily be created, edited, and deleted in our all-in-one tool, Implio – learn how to create filters here.
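To make the idea concrete, here is a rough sketch of keyword filters as named regular-expression rules. The rule names and patterns are invented for illustration and do not reflect Implio’s actual filter syntax:

```python
import re

# Hypothetical filter rules mapping a rule name to a regex pattern.
# Word boundaries (\b) avoid false positives on substrings.
FILTERS = {
    "pandemic-goods": re.compile(r"\b(face\s*masks?|toilet\s*paper)\b", re.IGNORECASE),
    "payment-scam": re.compile(r"\bwire\s+transfer\b", re.IGNORECASE),
}

def apply_filters(text: str) -> list[str]:
    """Return the names of every filter rule that matches the text."""
    return [name for name, pattern in FILTERS.items() if pattern.search(text)]
```

A listing that trips one or more rules can then be refused outright or routed to manual review, depending on how severe the rule is considered to be.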

Do’s and don’ts of content moderation

Determining what to do and not to do in content moderation may vary from site to site. Many elements and factors need consideration to get the moderation setup best suited to your specific needs.

However, regardless of whether you’re running an online marketplace, social media platform, sharing economy site, etc., some things hold true about what to do and not to do when it comes to content moderation.

Do’s of content moderation

Do select the method that’s right for your needs

Start by looking at what kind of content your site hosts and who your users are. This will help you clearly understand your content moderation method and setup requirements. For example, the user-generated content found on Medium differs greatly from that found on Facebook, as does their users’ behavior. As a result, their moderation methods and setups look different, each fitted to the platform’s specific needs.

Do create clear rules and guidelines

Rules and guidelines must be clear for everyone directly involved with your online platform’s content moderation, from the data scientist developing your AI moderation to the human moderator reviewing content, regardless of whether they sit in-house or at an outsourcing partner. Uncertainty in your rulebook can set your moderation efforts back from both a financial and a user experience perspective.

Do moderate all types of content

Whether you’re running an online marketplace, dating site, or social media platform, your users are key contributors to your platform. Making sure they enjoy pleasant experiences and are met with quality website content should be in your interest. To achieve this, you must ensure your content moderation is done right. In a perfect world, moderating all types of content on your site, from text and images to videos and 1-to-1 messages, would be ideal.

The reality is that this approach is not possible for all online platforms, for financial and technical reasons. If that’s your case, as a minimum, make sure to identify your high-risk categories and content and start your moderation efforts there.

Don’ts of content moderation

Don’t misinterpret what good content is

Quality content is key to building user trust and achieving a splendid user experience on your online platform, but it’s important to understand what good content actually is. Don’t make the mistake of misinterpreting good content and rejecting user-generated content simply because it’s negative.

For example, a negative comment or review following a transaction can still be good content as long as no harsh language is used. Genuine content is what you want, as it enhances quality and user trust.

Don’t wait too long before you get started with moderation

If you’re in the early stages of establishing your online platform, getting started with content moderation might feel like it’s miles away. It’s not. Don’t get us wrong, perhaps it shouldn’t be your main priority right out of the gate, but you need to plan how to handle user-generated content from a moderation perspective when you scale.

As you’re growing and the network effect kicks in, you often see a rapid increase of content flooding into your site. You need to be prepared to handle that; if not, your big break might actually end up hurting you in the long run.

Don’t waste resources

Don’t reinvent the wheel. With multiple content moderation tools and solutions available in the market, it’s important that you prioritize your resources carefully. Besedo can help you with any content moderation need; send us a message.

Innovation and growth are what will boost your online platform to success, and this is where your dev resources will give you the most competitive advantage. Find your way to free up your resources for innovation without risking falling behind with your moderation efforts.

This is Besedo

Global, full-service leader in content moderation

We provide automated and manual moderation for online marketplaces, online dating, sharing economy, gaming, communities and social media.


Every feature we include in Implio has been carefully chosen based on feedback from stakeholders (internal and external) and after careful analysis of current and future needs within the industry (read more about how we plan our roadmap). As such it is always exciting when we launch something new since we know it is anticipated by our users and will increase their efficiency and quality of life when working in our tool.

Our developers work hard to ensure regular updates and feature additions to Implio. Here are the biggest improvements we released in 2017.

Debuting almost an entire year ago, this particular feature helps manual moderators create a customized UI template in Implio. This allows users to display the necessary moderation information whichever way suits them best. For example, they could configure the layout to prioritize the image shown, user details, customer data, and moderation feedback – among other information.

Our second big feature of last year was the new Implio search tool. Never underestimate the power, speed, and usefulness of a good search function! The always-visible search bar is found at the top of each page within Implio. Users can search by keywords, exact quotes, and specific contact information – including email addresses and phone numbers.

The results can be ordered by relevance, newest first, or oldest first, and displayed as a list or using images. We think this feature is going to be particularly useful for moderators as they review posts, or monitor accounts and items coming into Implio.

Implio search function



In May we launched Implio’s updated manual interface. It was the culmination of months of hard work from our developers, especially our front-end team.

We spent a lot of time performing usability tests and getting client feedback, fine-tuning the new interface to make sure it benefits everyone.

Key improvements added to this version include:

  • Data is organized to follow the API’s structure, to make things more consistent.
  • Revisions of a single item are grouped together so that the moderator only reviews the latest version and can disregard previous ones.
  • Content can be edited directly within the page. Plus type and category can be changed using a simple drop-down menu.
  • A status bar helps you track your progress on the page.
  • It’s also much easier (for a developer) to configure a number of settings for each client. These include the number of items displayed per page, and the ability to enable or disable pre-approved items in the queue.

Our fourth biggest Implio feature involved the rollout of different user role permissions. Each user role now comes with a specific list of permissions, allowing admins, automation specialists, moderators, and spectators full or restricted access to certain functionalities. As you’d expect, admins have the greatest level of authority, but being able to manage rules and search items will undoubtedly make moderators’ jobs a lot easier.

Implio user administration

Our final feature for 2017 launched just before Christmas: our geolocation filter, which we’ve covered in a dedicated blog post.

Essentially, it’s used to detect inconsistencies between where users say they’re based and where their IP address actually shows them to be, making it ideal for helping online marketplace owners protect their users from scammers.

Geolocation is fully integrated into Implio and is visible in the manual moderation interface. Users can also create their own rules to quickly compare information, making it easier for moderators to detect discrepancies.
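Conceptually, the check boils down to comparing the country a user claims against the country resolved from their IP address. The sketch below is a toy illustration with an invented lookup table; a real system would query an IP-geolocation database instead:

```python
# Toy geolocation consistency check. The lookup table and the
# example addresses are invented; a production system would use
# an IP-geolocation database service.
IP_COUNTRY = {
    "203.0.113.7": "SE",
    "198.51.100.4": "NG",
}

def location_mismatch(claimed_country: str, ip_address: str) -> bool:
    """Flag when the user's stated country differs from the IP's country."""
    ip_country = IP_COUNTRY.get(ip_address)
    if ip_country is None:
        return False  # unknown IP: don't flag, let other signals decide
    return ip_country != claimed_country.upper()
```

A flagged mismatch would not block a user on its own; it is one signal among several that a moderator can weigh when reviewing a listing.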

So… what does 2018 hold? Don’t worry, there’s a whole lot more where these came from! We already have a number of features, functions, and updates planned for the next 12 months.

Watch. This. Space.


Content moderation experts analyze global classifieds to identify a “swear word index” to help online marketplaces protect users from common cyberbully attacks.

Besedo, content moderation experts, analyzed the top five most common swear words on global online classifieds (translated to English) and discovered the following top results:

  1. B****
  2. F*****g/Idiot (shared 2nd place)
  3. A**hole/F*** (shared 3rd place)
  4. F*g/Sh*t/Stupid (shared 4th place)
  5. D*ck/Slave/Son of a b*** (shared 5th place)

The percentage of individuals experiencing cyberbullying at some point in their lives nearly doubled between 2007 and 2016. Besedo works to protect users and owners of online forums, dating websites, and online marketplaces, where it is common for internet trolls to host cyberbullying attacks.
“Our bottom line is to create a safe, trustworthy online presence for our clients and their users, which takes what we describe as a content moderation SWAT team,” said Maxence Bernard, Head of Research & Development at Besedo. “Simple integrations like our filter automation tool are a great stepping stone to creating that safe place and keeping common swear words from appearing online,” added CEO Patrik Frisk.

While some online attacks are more obvious (including the top five most common swear words in classifieds), other inappropriate comments are only easily spotted by a trained moderator’s eye. To maintain a controlled, professional brand image and, more importantly, to protect your users from harassment, Besedo experts suggest site owners handle these cyberbully attacks as follows:

  • Remove swear words: Filter automation tools give site owners a sturdy base when getting started with content moderation practices. Filter automation allows site owners to build custom filters to easily target and reject unwanted content.
  • Moderate comment sections: On top of filter automation, multi-lingual human moderators are the eyes in the room for comment sections. Trained moderators are a 24/7/365 operation equipped with cultural sensitivity, native level language skills and industry insights for high quality moderation outcomes. Agents are tested on their logic and linguistic skills to ensure proper delivery of ongoing quality moderation.
  • Review one-to-one chat conversations: Chat conversations can either be moderated by applying AI moderation models or be reviewed by a trained moderator when reported by users. When the latter happens, the conversation will be carefully examined to see if site rules have been broken. This way, site owners can protect their users from harassment even in private one-to-one conversations, without being intrusive.
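As a small illustration of the first point, a word-boundary filter can mask listed terms while leaving ordinary text untouched. The word list below is a placeholder, not the index from the study above:

```python
import re

# Placeholder profanity list; real deployments use curated,
# per-language lists maintained by moderation teams.
PROFANITY = ["idiot", "stupid"]

_PATTERN = re.compile(
    r"\b(" + "|".join(map(re.escape, PROFANITY)) + r")\b",
    re.IGNORECASE,
)

def mask_profanity(text: str) -> str:
    """Replace each listed word with its first letter plus asterisks."""
    return _PATTERN.sub(
        lambda m: m.group(0)[0] + "*" * (len(m.group(0)) - 1), text
    )
```

The word boundaries matter: without them, innocent words containing a listed term as a substring would be masked as well.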

“We take cyberbullying very seriously at Besedo, which is why we work tirelessly to ensure our clients are well-educated on best content moderation practices so their users don’t feel the heat of the attack,” said CEO Patrik Frisk.

Implio is offered to smaller communities as a free tool, providing users with premade, ready-to-go filters in five different languages. Learn more about how to get started for free:
For inquiries on how Besedo can keep your users safe from cyberbully attacks and help create a trusted online marketplace, dating site or classifieds community, contact Sigrid Zeuthen: sigrid.zeuthen(at)besedo(dot)com.
About Besedo:
Besedo, leading content moderation experts, empowers online marketplaces to grow with trust by enabling their users to engage fearlessly with one another. Since 2002, Besedo has partnered with online marketplaces of all sizes, across the globe to help them create user trust, better quality content and better user experience in the digital world. Besedo is based in Stockholm, Sweden, and has offices in Colombia, Malaysia, Malta, and France. With over 500 employees of more than 20 different nationalities, Besedo is a truly global and multicultural company with real localization capabilities. Visit the website to learn more about services and solutions available:
