
    Reviews can make or break a business – and the same goes for online marketplaces, classifieds, and even dating sites. Reviews don't just impact the platforms themselves; they shape how people see the brands that advertise on them, the individual vendors who sell there, and those looking for love and companionship.

    However, in a world where User-Generated Content (UGC) is so prevalent, anyone from anywhere can leave a good or bad review – and have it seen in a very public way.

    While bad reviews can hurt businesses and brands, fake positive ones can damage reputations too.

    Confused? It’s a tricky area to navigate.

    Let’s consider how reviews can build trust and how online marketplaces can address these moderation challenges.


    Reviews build consumer trust

    As discussed in previous articles, trust is at the heart of the digital economy. As consumers, we take trust leaps when deciding whether a particular online product or service is right for us. This is why reviews matter so much – they help us form opinions.

    In a practical sense, many of these ideas – which can largely be attributed to author and TED speaker Rachel Botsman – are grounded in our search for social proof, a cornerstone of the ‘Trust Stack’, which encompasses trust in the idea, trust in the platform, and (as is the case here) trust in the user.

    The three are interdependent and reinforce one another: user trust feeds trust in the platform and the idea, and vice versa.

    If it sounds improbable that consumers would put so much faith in strangers, consider the numbers. Surveys show that 88% of consumers trust online reviews as much as personal recommendations, and 76% say they trust them as much as recommendations from family and friends.

    Needless to say, they factor in a great deal. Customer reviews are essential indicators of trust – which is why bad reviews can hit businesses so hard.


    Brand backlash

    While a 3.5 out of 5 might be deemed acceptable on some marketplaces, for many businesses even a small slip in how they’re reviewed is perceived to have disastrous consequences.

    Some companies have fought back against negative reviews – but instead of challenging customers over their comments or trying to figure out where they could do better, they’ve actively tried to sue their critics.

    One hotel in New York State, US, even stated in its small print that visitors would be charged $500 for negative Yelp reviews. Other service providers have slated – and even looked to sue – Yelp itself over the way it prioritizes reviews, surfacing the most favorable first.

    Yikes!

    But why are overly positive reviews so detrimental? Surely a positive review is what every company strives for? The issue is inauthenticity: a true reflection of any experience rarely commands 5 stars across the board, and businesses, marketplaces, and consumers alike are wise to it.

    Authenticity means “no astroturfing”

    Many companies want to present themselves in the best possible light. There’s absolutely nothing wrong with that. However, when it comes to reviews of their products and services, consumers would be forgiven for being suspicious if every rating is overwhelmingly positive.

    In many cases, those suspicions are probably justified. Creating fake reviews – a practice known as astroturfing – has been relatively widespread since the dawn of online marketplaces and search engines. But many platforms are now wise to it and are actively doing more to prevent the practice.

    Google has cracked down hard on companies buying fake Google reviews designed to positively influence online listings – removing businesses that do so from local search results. Similarly, Amazon has pledged to stop the practice of testers being paid for reviews and reimbursed for their purchases.

    Astroturfing isn’t just frowned upon – it’s also illegal. The UK’s Competition and Markets Authority (CMA) and the US Federal Trade Commission (FTC) have strict rules against misleading customers.

    In Britain, the CMA has taken action against social media agency Social Chain for failing to disclose that a series of posts were part of a paid-for campaign, and it has taken issue with an online knitwear retailer posting fake reviews.

    While some may consider astroturfing a victimless crime, when you consider shoppers’ faith in online reviews and the fact that their favorite sites may be deliberately trying to mislead them, it’s clear that there’s a major trust issue at stake.

    For classified sites, dating apps, and online marketplace owners – who have spent years building credibility, gaining visibility, and getting users and vendors on board – a culture where fake reviews persist can be disastrous.

    But when so many sites rely on User-Generated Content, monitoring and moderating genuine, bad, and fake reviews alike is an enormous undertaking – and often a costly one.

    Manual vs. automated content moderation

    While many fake reviews are easy to spot (awkwardly put together, with bad spelling and grammar), manually moderating them becomes unsustainable once they appear at scale – even for a small team of experts.

    That’s why new ways to detect and prevent fake reviews are gaining traction. For example, many sites and marketplaces now limit review posting to users who have actually bought something from the vendor in question – the kind of check sketched below. However, as the Amazon example above shows, this practice is easy to circumvent.
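    To make the idea concrete, here’s a minimal sketch of such a verified-purchase gate. Every name in it (Order, can_review) is hypothetical – it implies no particular platform’s API.

```python
# Minimal sketch of a verified-purchase gate (hypothetical names throughout).
from dataclasses import dataclass

@dataclass
class Order:
    buyer_id: str
    vendor_id: str

def can_review(buyer_id: str, vendor_id: str, orders: list[Order]) -> bool:
    """Allow a review only if the buyer has an order with this vendor."""
    return any(o.buyer_id == buyer_id and o.vendor_id == vendor_id
               for o in orders)

orders = [Order(buyer_id="alice", vendor_id="shop-42")]
print(can_review("alice", "shop-42", orders))  # True: verified purchase
print(can_review("bob", "shop-42", orders))    # False: no purchase on record
```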

    A more reliable method is automated moderation – machine learning models that can be trained to detect fake reviews and other forms of unwanted or illegal content on a particular classifieds site or marketplace. Using filters, the model is continually fed examples of good and bad content so that it learns to distinguish between the two.
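    As a rough illustration – the article names no specific tooling, so scikit-learn and the toy dataset below are our assumptions – a fake-review filter of this kind might start life as a simple text classifier:

```python
# A minimal sketch of "feed the model good and bad examples":
# TF-IDF features plus logistic regression via scikit-learn (an assumption;
# production systems train on far larger datasets and richer signals).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labelled examples; a real system trains on thousands of moderated reviews.
reviews = [
    "Best product ever!!! Five stars, buy now, amazing deal!!!",
    "Incredible!!! Perfect!!! Everyone must purchase immediately!!!",
    "Delivery took a week, but the jacket fits well and feels sturdy.",
    "Decent value. The zip feels flimsy, though customer service helped.",
]
labels = ["fake", "fake", "genuine", "genuine"]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(reviews, labels)

# Score an incoming review before it goes live.
new_review = "Amazing!!! Best deal ever, buy now!!!"
print(model.predict([new_review])[0])           # likely "fake" on this toy data
print(model.predict_proba([new_review]).max())  # the model's confidence
```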

    It’s a process that pairs well with manual moderation efforts. When a review is flagged as suspicious, a notification can be sent to the moderation team, allowing them to make the final judgment call on its authenticity.
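    One common way to combine the two – sketched here with arbitrary, illustrative thresholds – is to act automatically only when the model is confident and queue everything else for a human:

```python
# Hybrid moderation routing (illustrative thresholds, not a standard).
def route_review(fake_probability: float) -> str:
    """Decide what happens to a review given the model's fake-probability."""
    if fake_probability >= 0.95:
        return "auto-reject"    # near-certain fake: remove automatically
    if fake_probability <= 0.05:
        return "auto-approve"   # near-certain genuine: publish immediately
    return "manual-review"      # uncertain: notify the moderation team

for p in (0.99, 0.50, 0.02):
    print(f"{p:.2f} -> {route_review(p)}")
```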

    In a world where online truths can often be in short supply, companies – brands or marketplaces – that are open enough for customers to leave honest, reasonable reviews stand a better chance of building trust among their users.

    While businesses clearly have a right to encourage positive online reviews as part of their marketing efforts, any activity that attempts to obscure the truth (no matter how scathing) or fabricate a rose-tinted fake review can do far more damage than a humdrum review ever could.

