Reviews can make or break a business. The same applies to online marketplaces, classifieds, and even dating sites. And they don’t just impact these platforms – they affect how people see the brands that advertise on them, as well as individual vendors and those looking for love and companionship.

However, in a world where User-Generated Content (UGC) is so prevalent, anyone from anywhere can leave a good or bad review – and have it seen in a very public way.

While bad reviews can hurt businesses and brands, fake positive ones can damage reputations too.

Confused? It’s a tricky area to navigate.

Let’s consider how reviews can build trust and how online marketplaces can address these moderation challenges.


Reviews build consumer trust

As discussed in previous articles, trust is at the epicenter of the digital economy. As consumers, we take trust leaps when deciding if a particular online product or service is suitable for us. This is why reviews matter so much – they help us form opinions.

In a practical sense, many of these sentiments (which can largely be attributed to economist and TED speaker Rachel Botsman) are grounded in our search for social proof – one of the key cornerstones of the ‘Trust Stack’, which encompasses trust in the idea, trust in the platform, and (as is the case here) trust in the user.

Because the three have an interdependent relationship, they reinforce each other – meaning that user trust leads to trust in the platform and the idea, and vice versa.

If it sounds improbable that consumers would trust complete strangers, consider the numbers: 88% of consumers trust online reviews as much as personal recommendations, and 76% say they trust them as much as recommendations from family and friends specifically.

Needless to say, they factor in a great deal. Therefore, customer reviews are essential indicators of trust – which is why bad reviews can negatively impact businesses.

“One particular hotel in New York State, US, even stated in its small print that visitors would be charged $500 for negative Yelp reviews”

Brand backlash

While a 3.5 out of 5 for average service might be deemed acceptable on some marketplaces, for many businesses even a small slip in how they’re reviewed is perceived to have disastrous consequences.

Some companies have fought back against negative reviews – but instead of challenging customers over their comments or trying to figure out where they could do better, they’ve actively tried to sue their critics.

One particular hotel in New York State, US, even stated in its small print that visitors would be charged $500 for negative Yelp reviews. Meanwhile, some service providers have slated – and even looked to sue – Yelp over how it prioritizes reviews, with the most favorable shown first.

Yikes!

But why are overly positive reviews so detrimental? Surely a positive review is what all companies are striving for? The issue is inauthenticity. A true reflection of any experience rarely commands 5 stars across the board – and businesses, marketplaces, and consumers are wise to it.

Authenticity means “no astroturfing”

Many companies want to present themselves in the best possible light. There’s absolutely nothing wrong with that. However, when it comes to reviews of their products and services, if every single rating is overwhelmingly positive, consumers would be forgiven for being suspicious.

In many cases, it seems they probably are. Creating fake reviews – a practice known as astroturfing – has been relatively widespread since the dawn of online marketplaces and search engines. But many are now wise to it and actively doing more to prevent the practice.

Google has massively cracked down on companies buying fake Google reviews designed to positively influence online listings – removing businesses that do from local search results. Similarly, Amazon has pledged to stop the practice of testers being paid for reviews and reimbursed for their purchases.

Astroturfing isn’t just frowned upon – it’s also illegal. The UK’s Competition and Markets Authority (CMA) and the US Federal Trade Commission (FTC) have strict rules against misleading customers.

In Britain, the CMA has taken action against social media agency Social Chain for failing to disclose that a series of posts were part of a paid-for campaign; and took issue with an online knitwear retailer posting fake reviews.

While some may consider astroturfing a victimless crime, when you consider shoppers’ faith in online reviews and the fact that their favorite sites may be deliberately trying to mislead them, it’s clear that there’s a major trust issue at stake.

For classified sites, dating apps, and online marketplace owners – who have spent years building credibility, gaining visibility, and getting users and vendors on board – a culture where fake reviews persist can be disastrous.

But when so many sites rely on User-Generated Content, the task of monitoring and moderating real reviews, bad reviews, and fake reviews is an enormous undertaking – and often costly.

Manual vs. automated content moderation

While many fake reviews are easy to spot (awkwardly put together, with bad spelling and grammar), manually moderating them becomes unsustainable when they appear at scale – even for a small team of experts.

That’s why new ways to detect and prevent them are starting to gain traction. For example, many sites and marketplaces are starting to limit review posting to those who’ve bought something from a specific vendor. However, as per the Amazon example above, this is a practice that is easy to circumvent.
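
As a rough illustration, verified-purchase gating can be as simple as checking the reviewer against the platform’s order records before accepting a review. Here is a minimal sketch in Python – the data structures and names are purely hypothetical:

```python
# Hypothetical sketch of "verified purchase" gating: a review is accepted
# only if the reviewer has an order on record with that vendor.
purchases = {("alice", "vendor_42"), ("bob", "vendor_7")}  # (buyer, vendor) pairs

def can_review(user_id: str, vendor_id: str) -> bool:
    return (user_id, vendor_id) in purchases

print(can_review("alice", "vendor_42"))    # True - purchase on record
print(can_review("mallory", "vendor_42"))  # False - never bought from this vendor
```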

A more reliable method is automated moderation – using machine learning algorithms that can be trained to detect fake reviews and other forms of unwanted or illegal content on a particular classified website or marketplace. Using filters, the algorithm is continually fed examples of good and bad content until it can automatically distinguish between the two.
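
To make that concrete, here is a minimal sketch of how such a classifier might be trained, assuming the scikit-learn library. The sample reviews and labels are invented placeholders – a production system would learn from a large, human-labeled corpus:

```python
# A minimal sketch, assuming scikit-learn; the review texts and labels
# below are invented placeholders, not real training data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Human-labeled examples of genuine (0) and likely-fake (1) reviews.
reviews = [
    "Item arrived late and the box was dented, but support sorted it out.",
    "Decent jacket for the price, though the zipper feels flimsy.",
    "BEST SELLER EVER!!! 5 stars!!! Amazing, perfect, incredible, buy now!!!",
    "Five stars. Great product. Great seller. Great shipping. Great.",
]
labels = [0, 0, 1, 1]

# Turn text into word-frequency features, then fit a simple classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(reviews, labels)

# Probability that an unseen review is fake.
print(model.predict_proba(["Amazing!!! Perfect!!! Best purchase ever!!!"])[0][1])
```

A real deployment would typically also weigh metadata signals – reviewer history, posting velocity, IP patterns – alongside the text itself.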

It’s a process that works well alongside manual moderation efforts. When a review is flagged, a notification can be sent to the moderation team, allowing them to make the final judgment call on its authenticity.
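
In practice, that hand-off is often driven by the model’s confidence score. A hypothetical sketch of the routing logic (the thresholds are illustrative, not recommendations):

```python
# Hypothetical human-in-the-loop hand-off: the model's confidence score
# decides whether a review is published, rejected, or queued for a human.
def route_review(fake_probability: float) -> str:
    if fake_probability < 0.20:
        return "publish"        # confidently genuine: goes live automatically
    if fake_probability > 0.90:
        return "reject"         # confidently fake: blocked automatically
    return "moderation_queue"   # uncertain: notify the team for a final call

print(route_review(0.55))  # -> "moderation_queue"
```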

Ultimately, in a world where online truths can often be in short supply, companies – whether they’re brands or marketplaces – that are open enough for customers to leave honest, reasonable reviews stand a better chance of building trust among their users.

While it’s clear businesses have a right to encourage positive online reviews as part of their marketing efforts, any activity that attempts to obscure the truth (no matter how scathing) or fabricate a rose-tinted fake review can have an even more negative impact than a humdrum review itself.


The biggest challenge facing technology today isn’t adoption, it’s regulation. Innovation is moving at such a rapid pace that the legal and regulatory implications are lagging behind what’s possible.

Artificial Intelligence (AI) is one particularly tricky area for regulators to reach consensus on; as is content moderation.

With the two becoming increasingly crucial to all kinds of businesses – especially to online marketplaces, sharing economy and dating sites – it’s clear that more needs to be done to ensure the safety of users.

But to what extent are regulations stifling progress? Are they justified in doing so? Let’s consider the current situation.

AI + Moderation: A Perfect Pairing

Wherever there’s User-Generated Content (UGC), there’s a need to moderate it – whether we’re talking about policing videos on YouTube or netting catfish on Tinder.

Given the vast amount of content uploaded daily – and the sheer volume of usage on a popular platform like eBay – it’s clear that while action needs to be taken, it’s unsustainable to rely on human moderation alone.

Enter AI – but not necessarily as most people know it (we’re still a long way from sapient androids). Where content moderation is concerned, AI mainly involves machine learning algorithms, which platform owners can configure to filter out words, images, and video content that contravenes policies, laws, and best practices.
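
At its simplest, such a configuration can be pictured as a rule-based filter sitting alongside the learned models. A minimal sketch in Python – the banned terms and patterns below are placeholders, not a real policy list:

```python
# A minimal sketch of a configurable rule-based filter; the banned terms
# and patterns are placeholders that a platform owner would define.
import re

BANNED_TERMS = {"counterfeit", "replica watch"}        # policy-specific words
BANNED_PATTERNS = [re.compile(r"\b\d{10,}\b")]         # e.g. raw phone numbers

def violates_policy(text: str) -> bool:
    lowered = text.lower()
    if any(term in lowered for term in BANNED_TERMS):
        return True
    return any(pattern.search(lowered) for pattern in BANNED_PATTERNS)

print(violates_policy("Genuine replica watch, call 07123456789"))  # True
```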

AI not only offers the scale, capacity, and speed needed to moderate huge volumes of content; it also limits the often-cited psychological effects many people suffer from viewing and moderating harmful content.

Understanding The Wider Issue

So what’s the problem? Issues arise when we consider content moderation on a global scale. Laws governing online censorship (and the extent to which they’re enforced) vary significantly between continents, nations, and regions.

What constitutes ‘harmful’, ‘illicit’ or ‘bad taste’ isn’t always as clear cut as one might think. And from a sales perspective, items that are illegal in one nation aren’t always illegal in another. A lot needs to be taken into account.

But what about the role of AI? What objections could there be to software that’s able to provide huge economies of scale and operational efficiency, and protect people – both users and moderators – from harm?

The broader context of AI as a technology needs to be better understood. AI presents several key ethical questions over its use and deployment – questions that vary from country to country, much like efforts to regulate content moderation.

To understand this better, we need to look at ways in which the different nations are addressing the challenges of digitalisation – and what their attitudes are towards both online moderation and AI.

The EU: Apply Pressure To Platforms

As an individual region, the EU is arguably leading the global debate on online safety. However, the European Commission continues to voice concerns over (a lack of) efforts made by large technology platforms to prevent the spread of offensive and misleading content.

Following the introduction of its Code Of Practice on Disinformation in 2018, numerous high profile tech companies – including Google, Facebook, Twitter, Microsoft and Mozilla – voluntarily provided the Commission with self-assessment reports in early 2019.

These reports document the policies and processes these organisations have undertaken to prevent the spread of harmful content and fake news online.

While a thorough analysis is currently underway (with findings to be reported in 2020), initial responses show significant dissatisfaction with the progress being made – and with the fact that no additional tech companies have signed up to the initiative.

AI In The EU

In short, expectations continue to be very high – as evidenced by the European Parliament’s vote (covered in a previous blog) to give online businesses one hour to remove terrorist-related content.

Given the immediacy, frequency, and scale that these regulations require, it’s clear that AI has a critical and central role to play in meeting these moderation demands. But, as an emerging technology itself, the regulations around AI are still being formalised in Europe.

However, the proposed Digital Services Act (set to replace the now outdated eCommerce Directive) goes a long way to address issues relating to online marketplaces and classified sites – and AI is given significant consideration as part of these efforts.

Last year the EU published its guidelines on ethics in Artificial Intelligence, citing a ‘human-centric approach’ as one of its key concerns – as it deems that ‘AI poses risks to the right to personal data protection and privacy’, as well as a ‘risk of discrimination when algorithms are used for purposes such as to profile people or to resolve situations in criminal justice’.

While these developments are promising – in that they demonstrate the depth and seriousness with which the EU is tackling these issues – problems will no doubt arise when adoption and enforcement by 27 different member states are required.

Britain Online Post-Brexit

One nation that no longer needs to participate in EU-centric discussions is the UK – following its departure in January this year. However, rather than deviate from regulation, Britain’s stance on online safety continues to set a high bar.

An ‘Online Harms’ whitepaper produced last year (pre-Brexit) sets out Britain’s ambition to be ‘the safest place in the world to go online’ and proposes a revised system of accountability that moves beyond self-regulation, including the establishment of a new independent regulator.

Included in this is a commitment to uphold GDPR and Data Protection laws – including a promise to ‘inspect’ AI and penalise those who exploit data security. The whitepaper also acknowledges the ‘complex, fast-moving and far-reaching ethical and economic issues that cannot be addressed by data-protection laws alone’.

To this end, a Centre for Data Ethics and Innovation has been established in the UK – complete with a two-year strategy setting out its aims and ambitions, which largely involves cross-industry collaboration, greater transparency, and continuous governance.


Expansion Elsewhere

Numerous other countries – from Canada to Australia – have expressed a formal commitment to addressing the challenges facing AI, data protection, and content moderation. However, on a broader international level, the Organisation for Economic Co-operation and Development (OECD) has established some well-respected Principles on Artificial Intelligence.

Set out in May 2019 as five simple tenets designed to encourage successful ‘stewardship’ of AI, these principles have since been co-opted by the G20 in their stance on AI.

They are defined as:

  • AI should benefit people and the planet by driving inclusive growth, sustainable development and well-being.
  • AI systems should be designed in a way that respects the rule of law, human rights, democratic values and diversity, and they should include appropriate safeguards – for example, enabling human intervention where necessary – to ensure a fair and just society.
  • There should be transparency and responsible disclosure around AI systems to ensure that people understand AI-based outcomes and can challenge them.
  • AI systems must function in a robust, secure and safe way throughout their life cycles and potential risks should be continually assessed and managed.
  • Organisations and individuals developing, deploying or operating AI systems should be held accountable for their proper functioning in line with the above principles.

While not legally binding, the hope is that the level of influence and reach these principles have on a global scale will eventually encourage wider adoption. However, given the myriad of cultural and legal differences the tech sector faces, international standardisation remains a massive challenge.

The Right Approach – Hurt By Overt Complexity

All things considered, while the right strategic measures are no doubt in place for the most part – helping perpetuate discussion around the key issues – the effectiveness of many of these regulations largely remains to be seen.

Outwardly, many nations seem to share the same top-line attitudes towards AI and content moderation – and their necessity in reducing harmful content. However, applying policies from specific countries to global content is challenging and adds to the overall complexity, as content may be created in one country and viewed in another.
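
One way platforms handle this is to check content against the rules of every jurisdiction involved – both where it was created and where it is viewed. The sketch below is purely illustrative; the countries and restricted categories are hypothetical:

```python
# Illustrative jurisdiction-aware check: a listing must satisfy the rules
# of both the country where it was created and where it is being viewed.
RESTRICTED_BY_COUNTRY = {
    "DE": {"weapons", "nazi_memorabilia"},
    "UK": {"weapons", "ivory"},
    "US": {"ivory"},
}

def listing_allowed(category: str, created_in: str, viewed_from: str) -> bool:
    for country in (created_in, viewed_from):
        if category in RESTRICTED_BY_COUNTRY.get(country, set()):
            return False
    return True

print(listing_allowed("ivory", created_in="DE", viewed_from="UK"))  # False
```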

This complexity is why machine learning is so critical in moderation – algorithms can be trained to do all of the hard work at scale. But it seems the biggest stumbling block in all of this is a lack of clarity around what artificial intelligence truly is.

As one piece of Ofcom research notes, there’s a need to develop ‘explainable systems’ as so few people (except for computer scientists) can legitimately grasp the complexities of these technologies.

The problem posed in this research is that some aspects of AI – namely neural networks which are designed to replicate how the human brain learns – are so advanced that even the AI developers who create them cannot understand how or why the algorithm outputs what it does.

While machine learning moderation doesn’t delve as far into the ‘unknowable’ as neural networks, it’s clear to see why discussions around regulation persist at great length.

But, as is the case with most technologies themselves, staying ahead of the curve from a regulatory and commercial standpoint is a continuous improvement process. That’s something that won’t change anytime soon.

New laws and legislation can be hard to navigate. Besedo helps businesses like yours get everything in place quickly and efficiently to comply with them.


What online marketplace trends are we going to see in 2020? Here’s what 8 industry experts predict for the coming year.

2019 is quickly coming to an end, and what an exciting year it has been for online marketplaces!

Marketplaces have been disrupting the way we shop with the rise of the conscious consumer looking for more ways to be sustainable.

As users’ expectations grow and user experience has become highly prioritized, marketplaces have been creative in finding ways to add value and services to their platforms. Convenience has been the keyword for online marketplaces to acquire and retain their customers.

Which trends will shape the marketplace industry in 2020? What can you expect to change in the online marketplace industry in the coming year? We gathered eight predictions from marketplace experts and professionals to see what’s coming up in the industry.

Classified sites and marketplaces face ongoing disruption: transactional models, metasearch and aggregation competitors, programmatic advertising, iBuying in property, and more. It doesn’t matter if you’re an automotive, property, recruitment or general site – you face changes and challenges.

As Bill Gates said: “We always overestimate the change that will occur in the next two years and underestimate the change that will occur in the next 10. Don’t let yourself be lulled into inaction.”

In 2020, we’ll see more exploration of new business models, research and development, investments in tech initiatives and tech businesses, and acquisitions. Consolidation and spinoffs are likely.

Marketplace and classified sites will spend time and money trying to figure out what’s next, and how to participate.

The biggest challenge may be to spend just enough to learn what’s necessary; implement services and tools to increase revenue and position for the future, but avoid overcommitting.

Jonathan Turpin, Principal, AIM Group



In 2020, I expect to see continued global expansion of regulations that restrict or prohibit problematic user-generated content in ways that cannot be effectively implemented by the regulated services. To respond to these impossible regulations, services will turn off or curtail their user-generated content functionality. In its place, some services will migrate towards offering professionally-produced content to readers. To fund that professionally-produced content, the services will institute paywalls. Thus, 2020 will see a continued broad retrenchment of UGC, which will be partially replaced by an expansion of paywalled content databases.

Eric Goldman, Professor, Santa Clara University School of Law, Co-Director, High Tech Law Institute & Supervisor, Privacy Law Certificate



The recent overhyped technology revolutions (VR/AR, AI, chatbots, crypto) haven’t quite panned out the way many had expected and I don’t expect anything revolutionary in 2020.

On the other hand, the technologies are steadily evolving – and so is the marketplace industry. We see new and exciting marketplaces being built all over the world and I expect this trend to continue.

Here are a few things I’m looking forward to in 2020:

1. Marketplaces are getting more complex from the operational perspective. Simple Craigslist-style platforms are in the past (and have been for quite a while) and the users expect much more from marketplaces today than they did even a few years ago. More features, powerful automation, better algorithms across the board.

2. The omnichannelization of ecommerce continues. The competition and the alternative sales channels aren’t going anywhere and marketplaces will need to plan for it. This may mean setting yourself apart or embracing the alternatives and offering your vendors a convenient cross-platform selling toolset.

3. More B2B marketplaces overall. The B2C space is where all the hype is, and the space has been pretty crowded, but there are many more untapped opportunities in B2B. We’ll be seeing more B2B platforms emerge in 2020.

4. Smaller payment processors consolidating and getting into marketplace payments. The payment processing industry has been hyper-competitive for quite a while now, but the platform payment space is still relatively open, especially on a regional level where the large marketplace processors like Stripe aren’t yet available.

Martin Boss, Founder, MultiMerch systems


In 2019, more than 20,000 people have reached out to Sharetribe to create their own online marketplace. We’re seeing increasing demand for service marketplaces for highly skilled workers. A particularly interesting trend in this area is B2B service marketplaces focusing on extremely specific industries. A good recent example of a highly successful unicorn in this field is RigUp, an Austin-based marketplace for on-demand services and skilled labor in the energy industry, which recently raised a $300 million Series D round led by Andreessen Horowitz. Some of Sharetribe’s customers are working on concepts like matching information security specialists with big companies or making it easy for construction companies to make their staff available for other construction companies while they are between projects. There are tons of industries that rely heavily on contractors who are highly skilled in very specific tasks. Marketplaces can provide lots of value in these industries by aggregating and verifying these contractors and making it easy for companies to access this pool of talent.

Juho Makkonen, co-founder and CEO, Sharetribe


As users’ expectations keep moving ahead of what marketplaces have traditionally been able to offer, users these days expect to be able to instantly spot what they’re looking for in one convenient click.

Marketplaces will need to ensure a smooth and seamless user experience across their platforms to satisfy users. That means providing value-added services, improving user safety, and delivering a highly user-friendly experience.

I expect to keep witnessing niche verticals and regional players popping up to compete with the more established platforms, answering more specific needs with personalized platforms.

Finally, the rise of more conscious consumerism in the aftermath of the ecological crisis is a vast opportunity for marketplaces to seize as they continue to expand.

Patrik Frisk, CEO Besedo



2019 was the year that saw the rise of truly global change in how we think and behave, mainly because the ecological crisis gained significant traction. There is a new and growing generation of people who prefer to “share” and “re-use” instead of “buy new”. At the same time, this same generation has been completely online from birth. Online is their natural habitat.

These are the main reasons contributing to the continuous growth of digital marketplaces that we will see in 2020 – growth in terms of visitors, goods offered, and transactions performed.

However, those “born to be online” visitors will not accept any failure in user experience. On the contrary, they expect the web (or an app) to match their nature and provide a multimodal (text, voice, image), seamless, and (most importantly) personalized search and product discovery experience.

Be ready and work on these topics if you do not want to see these new visitors bounce elsewhere.

Michal Barla, Co-founder & CPO at Luigi’s Box


With the marketplace battlefield quickly shifting from desktop to mobile, fluid and mobile-optimized UX is already a dominant trend and will only strengthen in the following year. Millennials and Gen-Z generations are now used to the paradigm of “Now or Never”. They expect to get a taxi, watch a film, or buy stuff online almost instantaneously. This puts a heavy emphasis on an optimal mobile interface.

Classifieds and marketplaces have, ever since the dawn of the Internet, relied on text and filter search as the primary search mechanisms. Cluttered mobile screens and small keyboards frustrate users. This is bound to change as AI-based technologies quickly gain traction.

AI-powered Visual Search & Browsing, Visual Recommending, and Automatic Image Tagging simplify and speed up the ad search for users, as well as enrich the content and make standard search tools more effective. The bottlenecked mobile search is looking at a complete transformation where, instead of text, images or even video become the dominant way of navigating a website.

Marketplaces that manage to implement AI solutions in the right way are seeing large improvements in user engagement and matchmaking between sellers and buyers, while at the same time increasing time-on-site and the number of ads viewed. Implementing AI technologies is not an easy task, but if done right, the pay-off is immediate.

Davor Anicic, Co-founder & CEO at Velebit AI



Last year I said that 2019, in terms of the online real estate market, would be as action-packed as the finale of Game of Thrones. Little did I know how right I would be in that analogy. 2019 turned out to be simply… more of the same. Most of the trends started in 2018 simply grew to the next level in 2019, when most players accepted that those are the things to focus on.

  • The loud trends for “big data” settled down, and presentations about the endless possibilities of big data mining grew into discussions about how hard it is to consistently collect and keep data relevant before one can turn it into usable insights or revenue.
  • Verticalisation continued, as big players continued to either purchase or launch their own verticals – predominantly in the car space, though real estate is coming more and more onto the radar. A lot of attention was devoted to the new-home segment too.
  • The discussion about the potential move into transactions shifted from the “discussion” to the “action” phase. Most players seem to realize that there is no “if” – the question of whether transactional marketplaces are the future of real estate seems to have earned its answer. The outstanding questions remain:
    • How? Is the fixed-fee broker or iBuyer the answer? Or some other model?
    • Who? Can anyone emerge who will be able to deliver a transactional model repeatedly across multiple markets? So far, Purplebricks launched ahead and seemed to hit an international wall that forced it to step back. Who will be able to repeat transactions across multiple markets is yet to be seen.
  • I mentioned a trend in which classifieds seem to realize that they need to be distributors of traffic to verticals, rather than destinations in themselves. That trend seems to continue, with a number of players seemingly realizing it’s easier said than done. Redistributing traffic requires in-depth knowledge about the user behind it – knowledge that businesses which never before needed to know who’s behind a click don’t always have. This realization seems to be driving another wave of “data collection” projects among the big players with a lot of traffic.

Nevertheless, I believe 2019 was a very interesting year with a lot of developments. Maybe they weren’t too loud – mostly under the surface, not yet visible to end-users. But they are happening, and the battle for dominance of the transactional online real estate space is only beginning. Stay tuned…

Andrzej Olejnik, Founder & CEO at Homsters.com


This is Besedo

Global, full-service leader in content moderation

We provide automated and manual moderation for online marketplaces, online dating, sharing economy, gaming, communities and social media.
