
    When Facebook CEO Mark Zuckerberg recently came under fire for the company’s admittedly tepid approach to political fact-checking (as well as some revelations about just what constitutes ‘impartial press’), it became clear that where content moderation is concerned, there’s still a big learning curve – for companies large and small.

    So given that a company like Facebook – with all of the necessary scale, money, resources, and influence – struggles to keep on top of moderation activities, what chance do smaller online marketplaces and classified sites have?

    When the stakes are so high, marketplaces need to do everything they can to detect and remove negative, biased, fraudulent, or just plain nasty content. Not doing so will seriously damage their credibility, popularity, and ultimately, their trustworthiness – which, as we’ve discussed previously, is a surefire recipe for disaster.

    However, we can learn a lot from the mistakes of others and put the right moderation measures in place. Let’s take a closer look at the cost of bad content and at ways to keep it off your online marketplace.

    The cost of fraudulent ads

    Even though we live in a world in which very sophisticated hackers can deploy some of the most daring and devastating attacks out there – from spearphishing to zero-day exploits – there can be little doubt that the most common scams still come from online purchases.

    While there are stacks of advice out there for consumers on what to watch for, marketplace owners can’t rely solely on their customers to take action. Being able to identify the different types of fraudulent ads – as covered in our previous article – is a great start, but for marketplace owners, awareness goes beyond mere common sense. They too need to take responsibility for the content on their platform – otherwise, it’ll come at a cost.

    Having content moderation guidelines or community standards that give your employees clear advice on how to raise the alarm on everything from catfishers to Trojan ads is crucial too. However, beyond any overt deception or threatening user behavior, the very existence of fraudulent content negatively impacts online marketplaces by gradually eroding the sense of trust they have worked so hard to build – resulting in lower conversion rates and, ultimately, reduced revenue.

    One brand that seems to be at the center of this trust quandary is Facebook. It famously published a public version of its moderation handbook last year, following a leak of the internal version. While these guidelines take a clear stance on issues like hate speech and sexual and violent content, there’s little in the way of guidance on user behavior on its Marketplace feature.

    The fact is, classified sites present a unique set of moderation challenges – challenges that must be addressed in a way that’s sympathetic to the content formats being used. A one-size-fits-all approach doesn’t work, and it’s too easy to assume that common sense and decency prevail where user-generated content is concerned. The only people qualified to determine what’s acceptable – and what isn’t – on a given platform are the owners themselves, whether that relates to ad formats, content types, or the products being sold.

    Challenging counterfeit goods

    With the holiday season fast approaching, and two of the busiest shopping days of the year – Black Friday and Cyber Monday – just a few weeks away, one of the biggest concerns online marketplaces face is the sale of counterfeit goods.

    It’s a massive problem: one that’s projected to cost $1.8 trillion by 2020. And it’s not just dodgy goods that sites should be wary of; there’s a very real threat of being sued for millions of dollars by the brands themselves if sites enable vendors to use their name on counterfeit products – as was the case when Gucci sued Alibaba in 2015.

    However, the financial cost is compounded by an even more serious one – particularly where fake electrical items are concerned.

    According to a Guardian report, research by the UK charity Electrical Safety First shows that 18 million people have mistakenly purchased a counterfeit electrical item online. As a result, there are hundreds of thousands of faulty products in circulation. Some faults may be minor – glitches in Kodi boxes and game consoles, for example. Others, however, are a potential safety hazard, such as the unbranded mobile phone charger which caused a fire at an apartment in London last year.

    The main issue is fraudulent third-party providers setting up shop on online marketplaces and advertising counterfeit products as the genuine article.
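    To make this concrete, here’s a minimal sketch of one common screening heuristic: flagging listings that mention a well-known brand at a price far below anything plausible for the genuine article. The brand names, reference prices, and threshold below are illustrative assumptions, not a real catalogue or any site’s actual rules.

```python
# Hypothetical reference prices for protected brands (illustrative only).
REFERENCE_PRICES = {"gucci": 800.0, "apple": 900.0, "dyson": 400.0}
SUSPICION_RATIO = 0.3  # a price below 30% of the reference looks suspicious

def flag_possible_counterfeit(title: str, price: float) -> bool:
    """Return True if a listing mentions a brand at an implausibly low price."""
    title_lower = title.lower()
    for brand, reference in REFERENCE_PRICES.items():
        if brand in title_lower and price < reference * SUSPICION_RATIO:
            return True
    return False

print(flag_possible_counterfeit("Genuine Gucci handbag, brand new", 50.0))  # True
print(flag_possible_counterfeit("Handmade leather handbag", 50.0))          # False
```

    In practice, a flag like this wouldn’t remove a listing outright; it would simply route the listing to a moderator for a closer look.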

    Staying vigilant on issues affecting consumers

    It’s not just counterfeit products that marketplaces need to counter; fake service providers can be just as tough to crack down on.

    Wherever there’s misery, there’s opportunity – and you can be sure someone will try to capitalize on it. Consider the collapse of package holiday giant Thomas Cook a couple of months ago, which saw thousands of holidaymakers stranded and thousands more have their vacations canceled.

    Knowing consumer compensation would be sought, a fake service calling itself thomascookrefunds.com quickly set to work gathering bank details, promising to reimburse those who’d booked holidays.

    While not an online marketplace example per se, cases like this demonstrate how readily scammers exploit others’ misfortune to their own advantage.

    Similarly, given the dominance of major online marketplaces as trusted brands in their own right, criminals may even pose as company officials to dupe consumers. Case in point: the Amazon Prime phone scam, in which consumers received a phone call telling them their bank account had been hacked and that they were now paying for Amazon Prime – and were then persuaded to hand over their bank details to claim a non-existent refund.

    While this was an offline incident, Amazon was swift to respond with advice on what consumers should be aware of. In this situation, there was no way that moderating site content alone could have indicated any wrongdoing.

    However, it stands to reason that marketplaces should have a broader awareness of the impact of their brand, and should align their moderation efforts with the issues affecting their consumers.

    Curbing illegal activity & extremism

    One of the most effective ways of ensuring the wrong kind of content doesn’t end up on an online marketplace or classifieds site is to use a combination of AI moderation and human expertise to accurately detect criminal activity, abuse, or extremism.
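    As a rough illustration of how such a combination can work, the sketch below pairs a hard blocklist for unambiguous violations with a watchlist of ambiguous signals that are escalated to a human queue rather than silently approved. The terms and verdicts are hypothetical examples, not an actual rule set.

```python
# Unambiguous policy violations: rejected automatically.
BLOCKLIST = {"escort", "counterfeit", "replica"}
# Ambiguous signals: a person decides, because context and intent matter.
WATCHLIST = {"discreet", "unlocked"}

def screen_listing(text: str) -> str:
    """Return 'reject', 'human_review', or 'approve' for a listing's text."""
    words = set(text.lower().split())
    if words & BLOCKLIST:
        return "reject"
    if words & WATCHLIST:
        return "human_review"
    return "approve"

print(screen_listing("Replica designer watch"))         # reject
print(screen_listing("Unlocked phone, discreet sale"))  # human_review
print(screen_listing("Wooden dining table"))            # approve
```

    The point of the middle verdict is that judging context and intent – the hardest part of moderation – stays with a person, while the obvious cases are handled instantly.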

    However, in some cases, it’s clear that those truly intent on making their point can still find ways around these restrictions. In the worst cases, site owners themselves will unofficially enable users and advise them on ways to circumvent the site’s policies for financial gain.

    This was precisely what happened at the classifieds site Backpage. It transpired that top executives at the company – including the CEO, Carl Ferrer – didn’t just turn a blind eye to the advertisement of escort and prostitution services, but actively encouraged the rewording and editing of such ads to give Backpage ‘a veneer of plausible deniability’.

    As a result – and following Ferrer’s admission of guilt to these crimes, along with money laundering charges and the hosting of child sex trafficking ads – not only was the site taken down for good, but officials were jailed.

    While this was all conducted knowingly, sites that are totally against these kinds of actions, but don’t police their content effectively enough, are putting themselves at risk too.

    Getting the balance right

    Given the relative ease with which online marketplaces can be infiltrated, can’t site owners just tackle the problem before it happens? Unfortunately, that’s not how these platforms were set up. User-generated content has long been regarded as a bastion of free speech, consumer-first commerce, and individual expression, and trying to quell that would completely negate these sites’ reason for being. A balance is needed.

    The real problem may be that ‘a few users are ruining things for everyone else’, but ultimately marketplaces can only distinguish between intent and context after content has been posted. And creating a moderation backlog, given the sheer volume of content involved, isn’t a viable option either.

    Combining man & machine in moderation

    While solid moderation processes are crucial for marketplace success, relying on human moderation alone is unsustainable. For many sites, it’s simply not physically possible to review every single piece of user-generated content in real time.

    That’s why online content moderation tools and technology are critical in helping marketplace owners identify anything suspicious. By combining AI moderation with human moderation, you can strike an efficient balance between time-to-site and user safety – which is what we offer here at Besedo.
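    As a simple sketch of what that balance can look like in practice – assuming an AI model that scores each new listing with a probability of violating policy – threshold-based triage auto-rejects near-certain violations, publishes low-risk content immediately for fast time-to-site, and routes everything in between to human review. The thresholds below are placeholders that a real site would tune per category; they are not a specific product’s settings.

```python
# Placeholder thresholds: raising REVIEW_ABOVE speeds up time-to-site,
# lowering it prioritizes user safety by sending more items to humans.
REJECT_ABOVE = 0.90   # near-certain violations never reach the site
REVIEW_ABOVE = 0.40   # uncertain cases wait for a human moderator

def triage(violation_probability: float) -> str:
    """Route a scored listing to the cheapest safe decision path."""
    if violation_probability >= REJECT_ABOVE:
        return "auto_reject"
    if violation_probability >= REVIEW_ABOVE:
        return "human_review"
    return "auto_publish"   # low risk: published immediately

for p in (0.95, 0.55, 0.05):
    print(p, triage(p))     # auto_reject, human_review, auto_publish
```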

    Ultimately, the cost of bad content – or more specifically, not moderating it – isn’t just a loss of trust, customers, and revenue. Nor is it just a product quality or safety issue. It’s also the risk of enabling illegal activity, distributing abusive content, and giving extremists a voice. Playing a part in perpetuating this comes at a much heavier price.

