
Automating moderation decisions? What should your AI models look at to be accurate?


    AI automation can help you optimize your moderation processes immensely, but as with all new technology, there are good and bad ways of implementing automated moderation.

    One of the things you must pay attention to when investing in or developing automation is which parameters the solution looks at before taking a decision.

    One obvious benefit of AI is its speed when taking decisions. To put it into perspective, an experienced moderator can handle around 400–600 decisions per hour, whereas AI decisions are near-instantaneous.

    A less commonly recognized strength is AI’s ability to include a multitude of parameters in its decision-making, at a fraction of the time it would take a moderator to review the same information.

    For this strength to be properly utilized, it’s imperative that the team developing the AI model knows which parameters are relevant and could be important. If they don’t, the accuracy, precision, and recall of your AI moderation may be a lot worse than expected, and the user trust and experience of your site will suffer as a result.

    Important decision parameters for marketplace content moderation AI

    So, we’ve established that the parameters your AI takes into account are very important. But which parameters in particular matter when you are creating an AI for marketplace content moderation?

    Let’s begin by saying that, as with all truly efficient AI, these parameters should be customized to fit your particular service and setup. This means that to achieve the best possible AI moderation solution, you, or the partner developing the solution for you, need to spend time investigating which parameters provide relevant information that helps in decision making.

    That being said, at Besedo we have, over the years, identified a number of parameters that consistently help the AI do a better job. We’ll share some of the most general ones here, but with the caveat that for the best results, an expert should take a look at your unique case and build the AI models around it.

    User attributes

    These can vary widely depending on the purpose and setup of your platform. For marketplaces, user attributes could be restricted to as little as user name and area, but there’s really no limit to the number of attributes; AI could look at anything from subscription status to gender, as long as this information is available. The more information the AI gets to work with, the more accurate the decisions it can take.
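As a rough illustration of how such attributes reach a model, they are typically flattened into a numeric feature vector first. This is a minimal sketch with invented field names (`subscribed`, `area`, `account_age_days`), not an actual marketplace schema:

```python
# Minimal sketch: encoding user attributes as numeric features for a model.
# All field names here are hypothetical, not a real marketplace schema.

def encode_user_attributes(user: dict) -> list[float]:
    """Flatten a user record into a fixed-length numeric feature vector."""
    known_areas = ["north", "south", "east", "west"]
    area = user.get("area")
    return [
        1.0 if user.get("subscribed") else 0.0,           # subscription status
        float(user.get("account_age_days", 0)),           # account age in days
        float(known_areas.index(area)) if area in known_areas else -1.0,  # area as an index
    ]

print(encode_user_attributes({"subscribed": True, "area": "north", "account_age_days": 30}))
```

Missing attributes fall back to neutral defaults, so the vector length stays fixed regardless of how complete the user record is.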

    User history

    What has the user done on the site before? Has their behavior changed, or do they have a history of taking unwanted actions? AI will take user history into account on a broader scale, comparing a user’s behavior to that of other users and using that comparison to take a decision on the item it’s reviewing. If users who behave in a certain way usually post good content, then chances are that content from a user who behaves similarly is also good. And vice versa.
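One simple way to use history as a signal, sketched under assumed numbers: compare a user’s own rejection rate to a site-wide baseline (the 5% baseline here is invented for illustration):

```python
def history_risk(rejections: int, total_posts: int, baseline_rate: float = 0.05) -> float:
    """Ratio of this user's rejection rate to the site-wide baseline.

    Values above 1.0 mean the user gets rejected more often than average;
    users with no history are scored as average risk.
    """
    if total_posts == 0:
        return 1.0  # no history: assume average risk
    return (rejections / total_posts) / baseline_rate

print(history_risk(1, 10))  # a 10% rejection rate against a 5% baseline
```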

    Ad history

    Has the ad been rejected previously? Which queue was it originally sent to and was it edited by the user? All this information helps the AI understand the context of the ad and take better decisions.

    Publishing time

    When was the ad published? If it’s published outside the regular publish time patterns it may be an indicator that the ad isn’t genuine.

    Ad publishing frequency

    How often does the user publish ads? Are they publishing a large number of ads within a short timeframe, or does the publishing pattern feel more organic? Some scammers use bots to publish a big chunk of ads at the same time across multiple platforms. Reviewing the ad publishing frequency will help determine whether it’s a genuine user.
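A bot-driven burst can be flagged with a simple sliding-window count over recent posting timestamps. The thresholds below (5 ads per 10 minutes) are assumptions for illustration, not recommended values:

```python
from datetime import datetime, timedelta

def looks_like_burst(post_times: list[datetime], max_posts: int = 5,
                     window_minutes: int = 10) -> bool:
    """Return True if more than max_posts ads fall in any sliding window."""
    times = sorted(post_times)
    for i, start in enumerate(times):
        window_end = start + timedelta(minutes=window_minutes)
        count = sum(1 for t in times[i:] if t <= window_end)
        if count > max_posts:
            return True
    return False

burst = [datetime(2024, 1, 1, 12, 0, s) for s in range(8)]   # 8 ads in 8 seconds
organic = [datetime(2024, 1, 1, h, 0) for h in (9, 13, 18)]  # 3 ads spread over a day
print(looks_like_burst(burst), looks_like_burst(organic))
```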

    Possible duplicates

    Has this exact ad been published before? In general, duplicate content isn’t great for the user experience or SEO so if the AI finds duplicate ads it can easily take a decision on it.
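A hedged sketch of one common approach to duplicate detection: hash a normalized version of the ad text, so that trivially re-edited copies (changed casing, extra whitespace) still collide:

```python
import hashlib
import re

def ad_fingerprint(title: str, body: str) -> str:
    """Hash of the ad text with case and whitespace normalized away."""
    text = re.sub(r"\s+", " ", f"{title} {body}".lower()).strip()
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

seen_fingerprints: set[str] = set()

def is_duplicate(title: str, body: str) -> bool:
    """Check an incoming ad against previously seen fingerprints."""
    fp = ad_fingerprint(title, body)
    if fp in seen_fingerprints:
        return True
    seen_fingerprints.add(fp)
    return False

print(is_duplicate("Bike for sale", "Barely used road bike"))     # first copy
print(is_duplicate("  BIKE for Sale", "barely  used ROAD bike"))  # re-edited copy
```

Exact hashing only catches near-identical copies; real systems typically add fuzzier matching (e.g. similarity over image or text embeddings) on top.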

    IP address

    Where was the ad posted from? Is the IP address consistent with the one usually used on the account? Is it consistent with the IP address used when the user account was created?
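For illustration, a crude consistency check might compare the network of the posting IP against addresses previously seen on the account. The /24 prefix comparison below is an assumed simplification; real systems would also use geolocation and more robust matching:

```python
import ipaddress

def ip_consistent(current_ip: str, known_ips: set[str], prefix: int = 24) -> bool:
    """True if the posting IP shares a /24 network with any known account IP."""
    current_net = ipaddress.ip_network(f"{current_ip}/{prefix}", strict=False)
    return any(ipaddress.ip_address(ip) in current_net for ip in known_ips)

account_ips = {"192.168.1.7", "192.168.1.22"}
print(ip_consistent("192.168.1.50", account_ips))  # same /24 network
print(ip_consistent("10.0.0.1", account_ips))      # different network entirely
```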

    Contact details

    What phone number and other contact details are listed on the account? Is everything consistent with previous information and with other details on the account such as IP and email? Have other users with similar contact details behaved as they should?

    Email

    What email address is associated with the ad? Have other similar email addresses been used to publish unwanted content?

    Ad title

    What’s the title or headline of the ad? Is it relevant to the rest of the content? Does it contain words usually found in unwanted content (such as profanity, references to illegal goods, or phrases often used by scammers)?

    Ad body text

    How is the ad described in the body text? Is the text relevant? And as with ad title, does it contain words that are associated with unwanted or bad content? Does the language of the body text correspond to one accepted by the site?

    Based on the information provided by these parameters, and often several hundred more, the Besedo AI takes a complex decision and boils it down to an easily actionable output. The result is a simple yes or no answer for each item sent through the content moderation API.
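To illustrate that final step, here is a toy linear scorer that collapses a handful of the signals above into a single approve/reject answer. The signal names and weights are invented for illustration; a production model would learn them from labeled moderation data rather than hard-code them:

```python
# Toy example only: signal names and weights are invented, not learned.
WEIGHTS = {
    "is_duplicate": -3.0,      # strong signal against duplicate content
    "burst_posting": -2.0,     # bot-like publishing frequency
    "off_hours": -0.5,         # posted outside normal publishing times
    "account_age_days": 0.01,  # older accounts earn a little trust
}

def moderate(signals: dict[str, float]) -> str:
    """Combine weighted signals into a single approve/reject answer."""
    score = sum(WEIGHTS[name] * value
                for name, value in signals.items() if name in WEIGHTS)
    return "approve" if score >= 0 else "reject"

print(moderate({"account_age_days": 200.0}))                      # prints "approve"
print(moderate({"is_duplicate": 1.0, "account_age_days": 10.0}))  # prints "reject"
```

The point is the shape of the interface: however many parameters go in, the output the moderation pipeline consumes is a single actionable label.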

    Maintaining high moderation quality with AI models

    A good AI moderation model is built on experience and knowledge. Make sure that the AI solution you are using, in-house or outsourced, is developed for its purpose and that it’s constantly improving. User behavior on online marketplaces is constantly changing, which means you may have to update your AI model regularly to maintain high moderation quality.

    Stay on top of your AI’s performance, and make sure to make use of the data generated by your manual moderation as well. As your site evolves and user behavior changes, you will need new, high-quality labeled data, and your manual team is the best source for that.

    Use the data generated by manual moderation to improve and optimize your AI models on a continuous basis.

    Do you want to learn more about how we build our AI models for moderation? Get in touch with a content moderation expert, or read about the 6 reasons why our moderation AI is unique.

    This is Besedo

    Global, full-service leader in content moderation

    We provide automated and manual moderation for online marketplaces, online dating, sharing economy, gaming, communities and social media.

