User content is the lifeblood of marketplaces. It can feel counterproductive to remove listings, posts, or profiles, but the reality is that some content just isn’t good for your site or your users. Whether it’s spam, scams, or simply low-quality content, you need to curate what users post. However, while you need to manage and remove inappropriate content to protect your users and maintain a high-quality inventory, the goal should always be to limit the volume you reject.
What’s the main purpose of refusal reasons?
Refusal reasons play a huge role in decreasing the number of items you must reject by giving you better insights into where and why issues arise on your site, and which policies users most often clash with when publishing content.
If properly implemented, refusal reasons also provide on-point feedback that educates your users, increasing the chance of their content being approved the next time they post. An example of this feedback could be an email stating: “We do not accept the publication of ads referring to counterfeit items or piracy, such as streaming apps.”
The success of marketplaces relies on good quality content and good user experiences. This cannot be achieved without proper feedback via refusal reasons.
Is there a template for the best refusal reason implementation?
The ideal refusal reason setup varies from marketplace to marketplace depending on the audience, location, and inventory. This means that each setup will be unique. However, there are some elements that marketplaces must universally consider when it comes to refusal reasons. Having worked with players of all sizes across the globe for more than 15 years, we’ve been able to build a framework of best practices that serves as a guideline for solid refusal reason management.
To help you implement best practices for refusal reasons, we spoke to our head of filters, Kevin Martinez. He has worked with most of our clients, advising them on how best to manage and implement their refusal reasons. Here are his four tips for better refusal reasons.
What should you think about when deciding a refusal reason?
A refusal reason must be easy to understand both for the user attempting to post their ad and for the agents doing manual moderation. If the refusal reason is too convoluted, the user will not understand the feedback they receive, and you run the risk of them repeating the offense and having their content rejected again. This causes a very negative user experience, most likely resulting in increased churn.
If the refusal reason isn’t clear to the agents doing manual moderation, it will both impact the quality of feedback your users receive and decrease the value of the insights you’re generating through refusal reason analysis. Furthermore, unclear refusal reasons will likely cause confusion and reduce the efficiency of your agents by preventing them from making fast decisions.
Another piece of advice for building good refusal reasons is to approach them from a content-quality perspective rather than focusing on item categories or services.
Create refusal reasons that can be applied broadly while still representing the same underlying issue, so that the educational email sent to users makes sense regardless of which item they were trying to post.
For instance, a modified PS4 that can run illegally downloaded games, fake Nike shoes, and a cloned smartphone could all be refused with the same refusal reason: counterfeits and piracy. The feedback email sent to users whose ad is refused for this reason will need to be general enough to cover all cases included in the refusal reason category, yet informative enough to properly educate the user. It can take a bit of tweaking before you get it right, but it’s an important exercise if you want an efficient refusal reason setup.
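To make the grouping idea concrete, here is a minimal sketch of one broad refusal reason paired with a single general feedback template. The reason name, data structure, and feedback text are illustrative assumptions, not an actual implementation:

```python
# Illustrative sketch: one broad refusal reason covering several item types,
# with a single educational feedback template general enough for all of them.
# Names and wording here are hypothetical examples.

REFUSAL_REASONS = {
    "counterfeits_and_piracy": {
        "feedback": (
            "We do not accept ads for counterfeit goods or items that "
            "enable piracy, such as modified consoles or cloned devices."
        ),
        # Examples of items this single reason should cover:
        "covers": ["modified PS4", "fake Nike shoes", "cloned smartphone"],
    },
}

def feedback_for(reason: str) -> str:
    """Return the educational email text for a given refusal reason."""
    return REFUSAL_REASONS[reason]["feedback"]

print(feedback_for("counterfeits_and_piracy"))
```

The point of the single `feedback` field is the tweaking exercise described above: one message that stays accurate whether the refused item was a console, a shoe, or a phone.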
How many refusal reasons should you have?
The fewer the better.
A high-performing and well-trained agent working in an optimized content moderation tool can handle about 600+ ads per hour. Even the best agent, however, will be significantly slowed down by a bad refusal reason setup. Too many, or too vague, refusal reasons cause confusion and efficiency loss. For example, if you expect cannabis to be refused as drugs, paracetamol as medicines, and fat burners as pharmaceutical products, you make it confusing for agents to pick the proper refusal reason for each item.
Grouping items that represent the same content issue reduces the seconds an agent needs to spend deciding the action for each content piece.
On the other hand, it’s also important not to have too few refusal reasons. Otherwise, the data you collect will be too broad to give you any meaningful insights. For instance, if all bad items are refused as “forbidden items” without any distinction (weapons, drugs, profanity, etc.), you cannot take proper action to improve or maintain the quality of your site, as you will be unable to pinpoint the main issues to focus on.
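A quick sketch of why granularity matters for the data side: with distinct reasons you can count where issues concentrate, while a single catch-all bucket hides everything. The rejection log below is made up purely for illustration:

```python
from collections import Counter

# Hypothetical moderation log: one refusal reason per rejected item.
rejections = [
    "weapons", "drugs", "weapons", "scam",
    "drugs", "drugs", "profanity",
]

# Distinct reasons let you pinpoint where problems concentrate...
by_reason = Counter(rejections)
print(by_reason.most_common(2))  # drugs and weapons top the list

# ...whereas a single catch-all reason yields one undifferentiated number:
catch_all = Counter("forbidden_items" for _ in rejections)
print(catch_all)
```

The same seven rejections produce either an actionable breakdown or a meaningless total, which is exactly the trade-off described above.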
Any refusal reasons that are universal to all marketplaces?
In general, refusal reasons are unique and tailored to the specific marketplace or community. However, there’s one refusal reason that’s sadly universal to all marketplaces: scam. The refusal reason setup for scams can be replicated across marketplaces because it should practically always follow the same structure. Anything refused as a scam should not trigger an educational feedback email to the user, and the content piece should never go live.
Scam should also always be the top-priority refusal reason in cases where a content piece breaks the rules in multiple ways. For instance, a scammer trying to sell something illegal should not be refused with the illegal refusal reason, but always with the scam category. Using the wrong category for scam content educates scammers on how to circumvent your system and get their content published next time.
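This priority rule can be expressed as a simple selection: when a content piece matches several refusal reasons, apply the highest-priority one, with scam always on top. The priority table and reason names below are illustrative assumptions:

```python
# Illustrative priority table: lower number = higher priority.
# Scam always outranks every other refusal reason.
PRIORITY = {
    "scam": 0,
    "counterfeits_and_piracy": 1,
    "weapons": 2,
    "duplicate": 3,
}

def pick_refusal_reason(matched: list[str]) -> str:
    """Given all refusal reasons a content piece matched, return the one to apply."""
    return min(matched, key=PRIORITY.__getitem__)

# A scammer selling illegal goods matches both reasons, but scam wins:
print(pick_refusal_reason(["weapons", "scam"]))  # -> scam
```

Encoding the rule this way also keeps the no-feedback behavior consistent: whenever scam is among the matches, it is the reason applied, so no educational email is sent.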
Another example of a universal refusal reason is duplication. We have yet to see a marketplace with any legitimate reason to allow duplicate content, and the educational email is always straightforward and easy to understand.
What is an example of a good or bad refusal reason?
A bad refusal reason is too generic. An example could be a refusal reason called “forbidden items/services.” A reason like that doesn’t give the user detailed feedback on why their content wasn’t approved and, as such, doesn’t allow them to improve.
Good refusal reasons, on the other hand, target specific bad content and group it by family, for instance drugs with medicines, or weapons with fireworks. The title of each refusal reason should also be very explicit, making it logical for agents to use and easy for end-users to understand.
This is the approach we use in the Besedo Layers. Apart from helping agents be more efficient and improving the user experience for those adding content to your site, it also makes it easy to narrow down your site’s biggest content threats when analyzing refusal reason data.
Getting more out of your refusal reasons
Whether you are looking to improve your refusal reasons, or need to build the framework from scratch, your first step should be listing your overall needs. Look at what you want to get out of the data, what information your users need, and then balance that with what your agents realistically can handle while maintaining high efficiency.
Once you’ve pinpointed these needs, you can start building the framework. List all the refusal reasons you believe you’ll need, then start trimming it by grouping similar ones.
Finally, test the new structure with all involved stakeholders: agents, end-users, and those who will use the data gathered from the refusal reasons. As with most things in content moderation, refusal reasons need to be tweaked from time to time, but it’s important that they don’t change too often. Otherwise, you risk reducing agent efficiency, confusing end-users, and collecting data that can’t be compared with previous samples, making long-term data comparison and content moderation strategy near impossible.
If you need help or advice with your refusal reason setup, or with your content moderation strategy in general, feel free to reach out for an informal chat about how Besedo can help solve your needs.