Every month we collect insights from the clients we work with, through external audits, and through mystery shopping on popular marketplaces around the world. The goal is to understand current global trends in online marketplace scams, fraud, and other content challenges, and to track how they evolve over time.

The information is shared with clients and internally with our operations teams. Recurring trends are also used to train new content moderation specialists, to build new generic filters for Implio, and to support the training of AI models.

Here’s an overview of some of February’s moderation trends:


Courier fraud increased by 107%

In February we saw a concerning increase in “courier fraud”, up 107% compared to normal levels. In a courier fraud, a scammer pretends to be interested in buying an item, then asks the seller to register on a fake courier site. Once the victim has registered, they’re asked to share their credit card information. To circumvent moderation, scammers often redirect the conversation off the marketplace, and the scam is carried out over off-platform communication channels like WhatsApp. However, with good moderation processes and awareness of how the fraudsters operate, users can be protected.
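A simple first line of defense against this off-platform redirect pattern is a keyword filter run over listing text and chat messages. The sketch below is purely illustrative: the patterns and function name are our own hypothetical examples, not Implio’s actual rules.

```python
import re

# Hypothetical patterns that often signal an attempt to move a
# conversation off the marketplace -- illustrative only.
OFF_PLATFORM_PATTERNS = [
    r"\bwhats\s?app\b",
    r"\btelegram\b",
    r"\bemail me\b",
    r"\bcourier (site|fee|link)\b",
]

def flag_off_platform(message: str) -> bool:
    """Return True if the message matches any off-platform redirect pattern."""
    text = message.lower()
    return any(re.search(pattern, text) for pattern in OFF_PLATFORM_PATTERNS)

print(flag_off_platform("Contact me on WhatsApp and register at this courier site"))  # True
print(flag_off_platform("Is the bike still available?"))  # False
```

In practice a filter like this would only be a first pass: flagged items would go to AI models or human moderators for review, since scammers quickly vary their spelling to evade static keyword lists.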


New console releases are still a major scam driver

Together with cell phones, which remain the top targeted category for scammers at 39% of all scams, consoles are still a leading challenge, constituting 24.66% of fraudulent cases. Most scams in these two categories are tied to the release of the new iPhone and the launch of the PlayStation 5. After rumors started floating around of a new Nintendo Switch release in 2021, we’ve also begun seeing scams related to the popular console.


Marketplaces now a hub for exam cheats

As lockdowns make physical tests an impossibility, we’ve seen a surge of offers to take tests and exams on behalf of others.

While the offers themselves may be genuine, the practice is unethical and, if discovered, could lead to students being expelled and to a devaluation of the educational system. As such, we generally recommend removing listings advertising these sorts of services.


Valentine’s Day scammers tried to be extra cuddly

During the lead-up to Valentine’s Day, we audited six popular, non-client marketplaces and saw a worrying number of scams. Puppy scams in particular were abundant: in one instance, 90% of all puppy listings were fraudulent. The issue isn’t limited to Valentine’s Day either.

[Chart: Google search trend for “buy puppy”]

Due to pandemic-enforced social distancing and recurring lockdowns, there’s been a rise in pet purchases over the past year, and scammers are taking advantage. As such, it pays to stay vigilant and keep an extra focus on pet-related listings and categories.

With this quick overview of current trends, we hope to provide you with the tools needed to focus your content moderation efforts where they’re most needed. If you would like input specifically for your site, feel free to reach out.

Every year people flock to online marketplaces to look for presents for their significant others. This year’s ‘lockdown’ Valentine’s Day will be no different; in fact, more people than ever are likely to shop online. As such, it’s important that marketplaces remain vigilant and have the right processes in place to protect their end-users from fraudulent products and sellers during a spike in sales. This is especially true this year, when there will be an influx of buyers who aren’t accustomed to online shopping and may more easily fall victim to scammers.

To give a snapshot of the risks and how dangerous it is to purchase your Valentine’s gift online this year, we’ve investigated listings of popular items on online marketplaces. After analyzing nearly three thousand listings in the run-up to this year’s Valentine’s Day, we’ve found that:

  • 13% of items reviewed showed signs of being fraudulent or dangerous
  • Of particular concern were the newly launched PlayStation 5 and offers of puppies, where 30% and 25% of listings across all marketplaces were found to be fraudulent, respectively
  • 22% of listings for popular consumer tech, such as the iPhone 12 were also deemed fraudulent
  • 1 in 10 Louis Vuitton Perfume listings were found to be either scams or counterfeit goods
  • 17% of Creed perfume listings on one popular auction website were found to be fraudulent

The research shows that, even after any filtering and user protection measures these marketplaces have in place, a significant number of the products for sale leave shoppers open to losing their money or receiving fake goods.

When it comes to typical romantic gifts such as perfumes and beauty products, the buyer’s inability to touch and see items while shopping online makes it easier for scammers to get away with selling fake items. 14% of Louis Vuitton perfume listings were found to be either scams or counterfeit goods. Creed perfume is also a popular target: on one popular auction website, we found that up to 17% of Creed perfume listings were fraudulent.

Perhaps most worrying, however, are the listings for puppies. A pet is always a purchase that calls for care, and far more meaningful to the couples who get one for Valentine’s Day than any consumer product. Out of the 250 listings for puppy purchases that Besedo reviewed in January and February, 25% were found to be scams.


How to protect your users during Lockdown Valentine’s Day and beyond

In our experience, Valentine’s Day scams start picking up 2-3 weeks before the 14th of February and taper off on the day.

While many of the items targeted by scammers during the Valentine’s period overlap with the goods they’d generally use to prey on unsuspecting victims, there are things you can do to increase security for users for the duration of the event.

  • Monitor popular electronics extra vigilantly
  • Publish targeted guidelines teaching users how to spot and avoid scams
  • Put extra focus on onsite chat messages between users. Scammers may use one-to-one messages to send fake online greeting cards that link to malicious programs, or to flirt with lonely and vulnerable users for personal information or monetary gain.
  • Keep an eye on rentals for romantic getaways, although with many countries in lockdown this year we expect to see less activity here

On top of scammers, there’s also a risk of a rise in services or goods you may not want on your marketplace.

Keep an eye out for:

  • A rise in sex toys and adult movies and services
  • Detective services aimed at catching significant others in the act of cheating

Whether you want to allow these services on your site depends on your audience, but it’s worth monitoring to maintain control.

To learn more, read the article 4 ways to keep your users safe online this Valentine’s Day.

Download the Valentine’s Day Scam Infographic

If you need help improving your content moderation processes, get in touch with our team.

From dating sites and online marketplaces to social media and video games – content moderation has a huge remit of responsibility.

It’s the job of both AI and human content moderators to ensure the material being shared is not illegal or inappropriate, always acting in the best interest of the end-users.

And if you’re getting the content right for your end-users, they’re going to want to return and hopefully bring others with them. But is content moderation actually a form of censorship?

If every piece of content added to a platform is checked and scrutinized – isn’t ‘moderation’ essentially just ‘policing’? Surely, it’s the enemy of free speech?

Well actually, no. Let’s consider the evidence.


Moderating content vs censoring citizens

Content moderation is not a synonym for censorship; they’re two distinct concepts.

Back in 2016, we looked at this in-depth in our Is Moderation Censorship? article – which explains the relationship between content moderation and censorship. It also gives some great advice on empowering end-users so that they don’t feel censored.

But is it really that important in the wider scheme of things?

Well, content moderation continues to make headline news due to the actions taken by high-profile social media platforms, like Twitter and Facebook, against specific users – including, but not limited to, the former US President.

There’s a common misconception that the actions taken by these privately-owned platforms constitute censorship. In the US, this is often framed as a violation of First Amendment rights to free speech. However, the key thing to remember here is that the First Amendment protects citizens against government censorship.

That’s not to say privately-owned platforms have an inalienable right to censorship, but it does mean that they’re not obliged to host material deemed unsuitable for their community and end-users.

The content moderation being enacted by these companies is based on their established community standards and typically involves:

  • Blocking harmful or hate-related content
  • Fact-checking
  • Labeling content correctly
  • Removing potentially damaging disinformation
  • Demonetizing pages by removing paid ads and content

These actions have invariably impacted individual users because that’s the intent – to mitigate content which breaks the platform’s community standards. In fact, when you think about it, making a community a safe place to communicate actually increases the opportunity for free speech.

“Another way to think about content moderation is to imagine an online platform as a real world community – like a school or church. The question to ask is always: would this way of behaving be acceptable within my community?”

It’s the same with online platforms. Each one has its own community standards. And that’s okay.


Content curators – Still culpable?

Putting it another way, social media platforms are in fact curators of content – as are online marketplaces and classified sites. When you consider the volume of content being created, uploaded, and shared, monitoring it is no easy feat. Take, for example, YouTube. As of May 2019, Statista reported that in excess of 500 hours of video were uploaded to YouTube every minute. That’s nearly three weeks of content per minute!
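That figure is easy to sanity-check with a quick back-of-the-envelope calculation (a throwaway sketch of the arithmetic, not part of Statista’s report):

```python
# Sanity-check: 500 hours of video uploaded to YouTube every minute
hours_per_minute = 500
days_per_minute = hours_per_minute / 24   # ~20.8 days of footage per minute
weeks_per_minute = days_per_minute / 7    # ~3.0 weeks of footage per minute

print(f"{days_per_minute:.1f} days, {weeks_per_minute:.2f} weeks of content per minute")
```

In other words, for every minute that passes, almost three weeks of viewing time is added to the platform, which gives a sense of why moderation at that scale is impossible without automation.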

These content-sharing platforms actually have a lot in common with art galleries and museums. The items and artworks in these public spaces are not created by the museum owners themselves – they’re curated for the viewing public and given contextual information.

That means the museums and galleries share the content but they’re not liable for it.

However, an important point to consider is, if you’re sharing someone else’s content there’s an element of responsibility. As a gallery owner, you’ll want to ensure it doesn’t violate your values as an organization and community. And like online platforms, art curators should have the right to take down material deemed to be objectionable. They’re not saying you can’t see this painting; they’re saying, if you want to see this painting you’ll need to go to a different gallery.


What’s the benefit of content moderation to my business?

To understand the benefits of content moderation, let’s look at the wider context and some of the reasons why online platforms use content moderation to help maintain and generate growth.

Firstly, we need to consider the main reason for employing content moderation. Content moderation exists to protect users from harm. Each website or platform will have its own community of users and its own priorities in terms of community guidelines.

“Where there is an opportunity for the sharing of user-generated content, there is the potential for misuse. To keep returning to a platform or website, users need to feel a sense of trust. They need to feel safe.”

Content moderation can help to build that trust and safety by checking posts and flagging inappropriate content. Our survey of UK and US users showed that even on a good classified listing site, one-third of users still felt some degree of mistrust.

Secondly, ensuring users see the right content at the right time is essential for keeping them on a site. Again, in relation to the content of classified ads, our survey revealed that almost 80% of users would not return to the site where an ad lacking relevant content was posted – nor would they recommend it to others. In effect, this lack of relevant information was the biggest reason for users clicking away from a website. Content moderation can help with this too.

Say you run an online marketplace for second-hand cars; you don’t want it to suddenly be flooded with pictures of cats. In a recent example from the social media site Reddit, the subreddit r/worldpolitics was flooded with inappropriate pictures because the community was tired of the forum being dominated by posts about American politics and of moderators frequently ignoring posts deliberately designed to farm upvotes. Moderating and removing the inappropriate pictures isn’t censorship; it’s directing the conversation back to what the community was originally about.

Thirdly, content moderation can help to mitigate against scams and other illegal content. Our survey also found that 72% of users who saw inappropriate behavior on a site did not return.

A prime example of inappropriate behavior is hate speech. Catching it can be tricky due to coded language and imagery. However, our blog post about identifying hate speech on dating sites gives three great tips for dealing with it.


Three ways to regulate content

A good way to imagine content moderation is to view it as one of three forms of regulation. This is a model that’s gained a lot of currency recently and it really helps to explain the role of content moderation.

Firstly, let’s start with discretion. In face-to-face interactions, most people pick up on social cues and contexts, which cause them to self-regulate. For example, not swearing in front of young children. This is personal discretion.

When a user posts or shares content, they’re making a personal choice to do so. Hopefully, for many users discretion will also come into play: will what I’m about to post cause offense or harm to others? Do I want others to feel offended?

Discretion tells you not to do or say certain things in certain contexts. We all get it wrong sometimes, but self-regulation is the first step in content moderation.

Secondly, at the other end of the scale, we have censorship. By definition, censorship is the suppression or prohibition of speech or materials deemed obscene, politically unacceptable, or a threat to security.

Censorship has government-imposed law behind it and carries the message that the censored material is unacceptable in any context because the government and law deem it to be so.

Thirdly, in the middle of both of these, we have content moderation.

“Unlike censorship, content moderation empowers private organizations to establish community guidelines for their sites and demand that users seeking to express their viewpoints are consistent with that particular community’s expectations.”

This might include things like flagging harmful misinformation, eliminating obscenity, removing hate speech, and protecting public safety. Content moderation is discretion at an organizational level – not a personal one.

Content moderation is about saying what you can and can’t do in a particular online social context.


So what can Besedo do to help moderate your content?

  • Keep your community on track
  • Facilitate the discussion you’ve built your community for (your house, your rules)
  • Allow free speech, but not hate speech
  • Protect monetization
  • Keep the platform within legal frameworks
  • Keep a positive, safe, and engaging community

All things considered, content moderation is a safeguard. It upholds the ‘trust contract’ that users and site owners enter into. It’s about protecting users and businesses, and maintaining relevance.

The internet’s a big place and there’s room for everyone.

To find out more about what we can do for your online business contact our team today.

If you want to learn more about content moderation, take a look at our handy guide. In the time it takes to read, another 4,000 YouTube videos will have been uploaded!

Self-regulation is never easy. Most of us have, at some point, set ourselves New Year’s resolutions, and we all know how hard it can be to impose effective rules on our own behavior and stick to them consistently. Online communities and platforms founded in the ever-evolving digital landscape may find themselves in a similar predicament: permitted to self-regulate, yet struggling to consistently protect their users. Governments have noticed. Differing standards and approaches to online user safety over the last two decades have left them scratching their heads, wondering how to protect users without compromising ease of use and innovation.

Yet, with the pandemic driving more consumers to these platforms to shop, date, and connect in a socially distanced world, the opportunity for fraudulent, harmful, and upsetting content has also risen. As a result, the era of self-regulation – and specifically the ability to choose one’s own degree of content moderation – is coming to an end. During the first lockdown in 2020, the UK fraud rate alone rose by 33%, according to research from Experian.

In response, legislation such as the Online Safety Bill and the Digital Services Act is set to change the way platforms are allowed to approach content moderation. These actions have been prompted by a rapid growth in online communities which has come with a rise in online harassment, misinformation, and fraud. This often affects the most vulnerable users: statistics from the British government published last year, for example, suggest that one in five children aged 10-15 now experience cyberbullying.

Some platforms have argued that they are already doing everything they can to prevent harmful content and that the scope for action is limited. Yet there are innovative new solutions, expertise, and technologies, such as AI, that can help platforms ensure such content does not slip through the net of their moderation efforts. There is an opportunity to get on the front foot in tackling these issues and safeguarding their reputations.

And getting ahead in the content moderation game is important. For example, YouTube only sat up and took notice of the issue when advertisers such as Verizon and Walmart pulled adverts because they were appearing next to videos promoting extremist views. Faced with reputational and revenue damage, YouTube was forced to get serious about preventing harm, disabling some comment sections and protecting kids with a separate, more limited app. This is a cautionary tale: when platforms are focused on other priorities, such as improving search, monetization, and user numbers, it can be easy to leave content moderation as an afterthought until it’s too late.

The Online Safety Bill: new rules to manage social media chaos

In the UK, the Online Safety Bill will hold big tech responsible on the same scale at which it operates. The legislation will be social media-focused, applying to companies which host user-generated content that can be accessed by British users, or which facilitate interactions between British users. The duties that these companies will have under the Online Safety Bill will likely include:

  • Taking action to eliminate illegal content and activity
  • Assessing the likelihood of children accessing their services
  • Ensuring that mechanisms to report harmful content are available
  • Addressing disinformation and misinformation that poses a risk of harm

Companies failing to meet these duties will face hefty fines of up to £18m or 10% of global revenue.

The Digital Services Act: taking aim at illegal content

While the Online Safety Bill targets harmful social content in the UK, the Digital Services Act will introduce a new set of rules to create a safer digital space across the EU. These will apply more broadly, forcing not just social media networks, but also e-commerce, dating platforms, and, in fact, all providers of online intermediary services to remove illegal content.
What counts as illegal content, however, is yet to be defined: many propose that this will cover not only harmful content but also content that is fraudulent, that offers counterfeit goods, or even that seeks to mislead consumers, such as fake reviews. This means that marketplaces may become directly liable if they do not correct the wrongdoings of third-party traders.

How to get ahead of the legislation

Online communities might be worried about how to comply with these regulations, but ultimately the new rules should be seen as an opportunity to protect their customers while also building brand loyalty, trust, and revenue. Finding the right content moderation best practices, processes, and technology, in addition to the right expertise and people, will be the cornerstone of remaining compliant.

Businesses often rely on either turnkey AI solutions or entirely human teams of moderators, but as the rules of operation are strengthened, bespoke solutions that use both AI and human intervention will be needed to achieve the scalability and accuracy that the new legislation demands. In the long term, the development of more rigorous oversight for online business – in the EU, the UK, and elsewhere across the world – will benefit companies as well as users.

In the end, most, if not all, platforms want to enable consumers to use services safely, all the time. Browsing at a toy store in Düsseldorf, purchasing something from Amazon, making a match on a dating app, or connecting on a social network should all come with the same level of protection from harm. When everyone works together, a little bit harder, to make that happen, it turns from a complex challenge into a mutual benefit.

The Christmas season is here, and while the festivities kick off, online retailers hold their breath to see whether all the preparations they have diligently made will pay off in revenue and sales during this ‘Golden Quarter’. Will the website be able to handle the extra demand? Will all orders ship before Christmas?

Yet the National Cyber Security Centre (NCSC) has highlighted another pressing concern, one that can have a lasting impact on revenue. Last week it launched a major awareness campaign, Cyber Aware, advising potential customers to watch for an increase in fraud on online platforms this year. Millions of pounds are stolen from customers through fraud every year – including a loss of £13.5m from November 2019 to the end of January 2020 – according to the National Fraud Intelligence Bureau.

Fraud is a major concern for marketplaces, which are aware of the trust and reputational damage that such nefarious characters on their platform can create. While consumer awareness and education help, marketplaces know that keeping only one eye on the ball when it comes to fraud, especially within user-generated content (UGC), is not enough. Fraudulent activity deserves full attention and careful monitoring. Tackling fraud is not a one-off activity but a dedication to constant, consistent, rigorous, and high-quality moderation in which learnings are continuously applied, for the ongoing safety of the community.

With that in mind, our certified moderators investigated nearly three thousand listings of popular items on six popular UK online marketplaces to understand whether marketplaces have content moderation pinned down, or whether fraudulent activity is still slipping through the net. After conducting the analysis during the month of November, including the busy Black Friday and Cyber Monday shopping weekend, we found that:

  • 15% of items reviewed showed signs of being fraudulent or dangerous; this rose to 19% on Black Friday and Cyber Monday
  • Pets and popular consumer electronics were particular areas of concern, with 22% of PlayStation 5 listings likely to be scams, rising to more than a third of PS5 listings flagged over the Black Friday weekend
  • 19% of listings for the iPhone 12 were also found to show signs of being scams
  • Counterfeit fashion items are also rife on popular UK marketplaces, with 15% of listings found to be counterfeits

The research demonstrates that, even after any filtering and user protection measures marketplaces have in place, a significant number of the products for sale are leaving customers open to having their personal details stolen or receiving counterfeit goods. We know that many large marketplaces already have a solution in place but are still letting scams slip through the net, while smaller marketplaces may not yet have thought about putting robust content moderation practices and processes in place.

Both situations are potentially dangerous if not tackled. While quickly identifying and removing problematic listings is certainly challenging, it is deeply concerning to see such high rates of scams and counterfeiting in this data. Powerful technological approaches, using AI in conjunction with human analysts, can very effectively mitigate against these criminals. Ultimately, the safety of the user should be placed at the heart of every marketplace’s priorities. It’s a false economy to treat fail-safe content moderation as too expensive a problem to deal with: in the longer term, addressing even the small amounts of fraud that slip through the net can have a large, positive, long-term impact on the financial health of the marketplace through increased customer trust, acquisition, and retention.

2020 was a year we would not want to repeat from a fraud perspective – we have not yet won the battle against criminals. As we move into 2021, we hope to help the industry work towards a zero-scam future, one where together we take the lessons of 2020 and provide a better, safer community for users and customers, both for their safety and for the long-term, sustainable financial health of marketplaces.

2020 was a year no one could have predicted. E-shopping has been accelerated by years, and whole new segments of customers have been forced online.

Meanwhile, marketplaces have had to deal with new, unpredictable moderation challenges, as illustrated very poignantly by the spike in this Google Trends graph showing the past year’s search volumes for face masks.

[Chart: spike in the “face mask” search term]

The entire world has had to get used to a market that was very volatile and extremely sensitive to any new development in the pandemic, good as well as bad.

Now, however, at the end of 2020, there’s light at the end of the tunnel. The vaccine is finally within reach, and hopefully social distancing and face masks will soon be curious terms in a bizarre chapter of our history books.

A return to “normality” will, however, not erase the incredible leap the world has made towards a fully digitized society.

What will the new world order mean for marketplaces going forward? What can eCommerce, sharing economy and online platforms in general expect from 2021?

We’ve asked eight industry experts to give their trend predictions for the new year.

The Coronavirus pandemic has forced marketplaces to adapt, almost instantaneously in some cases. When traffic collapsed, some companies extended listings and dropped all of their fees. Others launched new products to bring them closer to their customers or users.

After the initial shock in Q2, horizontals are setting records in listings, and growth again in traffic. Many auto sites have rebounded quickly, and are seeing a V-shaped recovery. In real estate, there’s a move toward transactions, especially in rentals. Recruitment sites have been hit hardest and face an uphill battle.

The AIM Group is seeing a lot of new digital marketplaces launch. While many are focusing on b-to-b, verticals are evolving fast as well.

Manufacturers are offering used-car marketplaces, and digital dealers like Carvana and Cazoo are threatening traditional dealerships and marketplaces. In real estate, reducing friction and supporting transactions is a must, especially in the rental field. Recruitment is using AI and better networking tools to improve. And horizontals are fast moving into all aspects of transactions: payment, escrow, delivery, and new e-commerce tools.

Marketplaces need to evolve. Quickly. Payment, finance, insurance, delivery, and other consumer-focused services are becoming critical elements of a full-service marketplace. The pandemic is accelerating digitization to a previously unimaginable extent. And those who focus on the pain points of their customers will win.

Katja Riefler, Principal and Managing Director, AIM Group

What challenges will the marketplace industry face in 2021 and how can they solve them?

The biggest problem for marketplaces in 2021, during the pandemic and in the likely economic downturn, will be to convert people who have a longer research/browsing period than before. People might have the intent to buy, but priorities and financial pressure might postpone those purchases.

If your product can be replaced by other options, the increased research period will increase the risk of losing customers.

There are a number of things you can do to overcome this challenge. For instance: offer delayed payment; offer additional value, such as three for the price of two; increase or establish referral commissions; lower the price of your main product and compensate by selling more to the same customer later; create a community between buyers; and so on.

What technology will be a must in 2021 and why?

People have more options and less time to look at each one, so things need to be fast and they need to work. There will need to be a stronger focus on AI and on new ways of discovering the right products. Nail the complete user journey, not just the product sold.

What opportunities will there be in 2021 and what mindset or approach will companies need to benefit from them?

Building companies will be easier and faster for everyone, so find ways to stay agile and test many things as you go. The world is constantly changing, so your way of doing business must always be challenged and optimized before your competitors force you to change. Digitizing something analog is not enough anymore; we need to find new ways of doing business, ways that weren’t previously possible. Creating new ways of doing business between parties will win the new age of companies.

What could be a potential game changer in 2021 for the marketplace industry?

International payments have always been hard to pull off, but I believe some players are now working very hard to capture this market. Adding guarantees and insurance when selling products and services could also add trust to a marketplace, and I foresee some players becoming successful doing this.

Daniel Beck, Co-founder & CEO at Coliving

What challenges will the marketplace industry face in 2021 and how can they solve them?

Let’s hope 2021 won’t be as challenging as 2020 for businesses in general. Marketplaces haven’t had as hard a time as many others, though – a lot of them didn’t have a ‘bricks and mortar’ element to their structure anyway, and technology is embedded in what they do, which came in handy in a year when everything went online overnight.

However, in 2021, marketplaces will now be competing with traditional businesses who were forced to finally innovate during the pandemic. There also might be a degree of a return to normality, which they will need to brace themselves for too. But the good news is, a lot of the advancements and changes in behavior we’ve seen in the last year that benefit the marketplace industry will never be undone.

What technology will be a must in 2021 and why?

Fraudsters had a bumper year in 2020, with many people inexperienced with online transactions being forced to shop, work and enjoy their leisure time online. Anti-fraud technology has always been advisable; in 2021 it will be a must.

What opportunities will there be in 2021 and what mindset or approach will companies need to benefit from them?

It’s as though everything has been thrown up in the air and we’re waiting to see where it lands. But wherever things do land, the marketplaces that will benefit – and in many cases survive – will not be the ‘fittest’, as many people say when they misquote Darwin, but those best able to adapt and adjust to the changing environment. That’s the mindset that’s important: not being tied to a company, a name, a URL – but a purpose, and because of that being able to be more agile.

What could be a potential game changer in 2021 for the marketplace industry?

The same game changer for all of us – a vaccine. And that’s suddenly, at the end of 2020 looking a lot more likely. Hopefully we will be able to look back and see 2020 as a dark cloud, but one which had a silver lining. For marketplaces, that silver lining is rapid, mass adoption of digital technology. We’ve moved years in months, so from now on, it’s only going to get more interesting.

Elle Tucker, Sharing and Gig Economy Consultant

Thanks to technology and digitalization we have come seven years in seven months during the current pandemic. People are more comfortable working remotely and in virtual teams. The world of work has changed and the status quo ante which saw people crowding into city centers between 9am and 5pm every day is not coming back. Both employers and employees are demanding more flexibility and all the tools are there to make it possible. The genie is not going back into the bottle.

At the same time, physical meetings will take on even more importance and value as we exit the current COVID-19 crisis.

Companies who have managed to focus on what customers really need, while stripping away unnecessary bureaucracy, personnel and costs will be the winners as we enter the new year. 2021 will see the growth of the project economy: organizations will hire the specific skills that they need in order to carry out specific – often time limited – jobs. At the same time, automation will pose an increasing threat to those who do not have the knowledge, flexibility or abilities to navigate the new landscape.

Glen Hodgson, Secretary General, Plattformsföretagen and CEO, Free Trade Europa

COVID-19 has accelerated digitalization levels by years. As a result, there has been an influx of new marketplace users. The new users can particularly be found among an older demographic and in emerging markets.  

Within this group there’s overall a very low level of tech comprehension which means we have seen an increase in vulnerable internet users who are exposed to potential fraud and abuse. Keeping this user group safe is a challenge that will continue far into 2021 and beyond.  

Additionally, it will take time before the COVID-19 vaccine has reached high penetration. This means that there will be a continued risk for various fraudulent offers around medical protection and treatments.  

Even with a vaccine it will take time for the hospitality industry to recover. As such, marketplaces with a high focus on travel and holiday homes could still face large challenges and will need to think creatively to push through until the world returns to “business as usual”.

In 2021 we’ll continue to see a convergence between B2C and B2B. Unfortunately, many marketplaces mistakenly think that B2B is risk free and does not face trust and safety issues. This is not at all the case, and to be successful, platforms will need to put measures in place to safeguard their users in this space as well.

In the past years we’ve seen automated text moderation become quite widespread.

In 2021 I predict image automation will be used by more and more marketplaces in various verticals. We will even see an uptick in demand for automated video moderation as video becomes an even bigger factor in selling on marketplaces.

2021 will continue to be challenging to many businesses, but there’s also a range of opportunities for agile companies who dare to move fast.

Young demographics are trending toward a more sustainable approach in their consumer behavior, benefiting many marketplaces and sharing economy ventures. Offering and integrating payments and delivery services is becoming a hygiene factor. Finally, including various insurances and guarantees in your service offerings is becoming more common (for used cars, for instance).

Additionally, short video is trending strongly in popularity among young consumers. This could have a huge impact on parts of the industry, opening up new opportunities, but also triggering many new trust and safety concerns.

How will you ensure that videos are appropriate, accurate and of the quality you need? 

Petter Nylander, CEO at Besedo

In 2021, one of the biggest challenges the marketplace industry will continue facing is how to deal with the implications of COVID-19. On the other hand, this new world order also presents tremendous new opportunities for marketplaces. Can services earlier offered offline be brought online instead? Can innovations be made in delivery? What kind of previously neglected opportunities are there in domestic (or even virtual) travel? And so on.

Technology solving these dilemmas will be a must in 2021. This means integrations to solutions like video conferencing tools and new delivery options. However, the most significant opportunities likely lie in business model innovation.

Juho Makkonen, Founder of Sharetribe

What challenges will the marketplace industry face in 2021 and how can they solve them?

As the entire ecommerce industry is re-inventing itself for a post-COVID reality, we are likely to see both challenges and opportunities emerge. In our experience, the defining challenge shall be the industry’s approach to the development of authentic customer experiences. This will touch on both the principles of service (and how to manage communication at scale) and that of personalization (and how to build trust and value from vast datasets). The solution to these challenges is to blend the capabilities of technologies (such as the adoption of AI) with an agile business mentality which is willing to experiment and innovate.

What technology will be a must in 2021 and why?

Technology is a tool. In 2021 it will be a ‘must’ to have a resilient mindset, a set of nimble processes, and technology that will allow the business to reduce operating expenditure whilst improving customer experience across multiple touch points. Artificial Intelligence will allow marketplaces to personalize the shopping experience but also to offer unique services such as conversational shopping and support. Increasingly, AI will help marketplaces predict behavior and take better, data-led decisions on pricing, stock and customer targeting.

What opportunities will there be in 2021 and what mindset or approach will companies need to benefit from them?

The biggest opportunity in 2021 will be to use big data sets from the marketplace to continuously improve and evolve the marketplace itself. This will happen in two ways: (a) autonomously by having AI processes which re-build, re-price, re-target and re-organize stores based on real-time trends, and (b) manually, by providing new levels of data insights to management teams who can now take more informed decisions to improve their offering.

What could be a potential game changer in 2021 for the marketplace industry?

The marketplace of tomorrow will be AI led. It will offer customer-centric search results that are personalized on the fly. It will retarget potential customers interested in the products for sale and create a more efficient sales process. It will ensure that all conversational communication is personalized and consistent across multiple channels and devices. The latter will be led by Virtual Agents who will curate the shopping experience and serve the customer without human interaction 60% of the time.

Dr. Gege Gatt at EBO

2021 is the year we all enter with the biggest uncertainty in decades. The unfortunate events of 2020 will continue to change users’ habits and behavior rapidly, so predicting trends becomes about as reliable as reading a crystal ball – not really something you want to base your business plans on. The future is no longer an extrapolation of past trends, as radical shifts and changes occur.

When short and mid-term predictions become unreliable, it is best to focus on long-term trends, and I see several that play in favor of the marketplace industry.

The rapid move of our lives online makes users less afraid of doing almost anything online, so marketplaces oriented toward providing full transaction services will have an opportunity to provide a trusted environment and guarantee it with the quality of their services and the value of their brands.

New niches appear, and small niches grow extremely fast as people are forced to meet in person less often. This especially changes the jobs market, where online candidate interviewing becomes a must and companies care much less about where their employees work from. This gives the strongest players in the field a chance to use their economies of scale and provide advanced services that will bind both the demand and supply sides to their ecosystems.

The fast transition to online brings scale, both in terms of economics and of data, to many marketplaces. This allows faster implementation of AI-powered technologies such as advanced Cognitive Search, Visual Search, and AI-based Recommending and Matchmaking of buyers and sellers.

All of the above plays in favor of marketplaces. On the other side, there is the inevitable trend of the economic cycle and a forthcoming downturn that will make things harder. Still, marketplaces that focus on long-term trends, use the opportunity to build core technologies, and stick to their promises to customers will benefit when economic growth restarts.

Davor Anicic, Co-founder & CEO at Velebit AI

7% of reviewed listings for popular items across six UK marketplaces are found to be scams.

This Christmas, British shoppers will be turning to online shopping more than ever before as Internet retailers offer quick and convenient services. Consumers increasingly rely upon these services as the UK faces its second lockdown of the year, with many shoppers preferring not to visit physical stores. The opportunity is for online marketplaces to gain new customers, especially as many consumers turn to shopping for goods online for the first time. In fact, according to research by O2 Business and Retail Economics, 44% of consumers believe their shift to online shopping during the last COVID peak will stay the same going forward.

The good news does not stop there. In a saturated industry where monoliths such as Amazon dominate, consumers have become much more aware of where they shop and are much more willing to try a new service – over a third (39%) of respondents to a survey undertaken by Bazaarvoice said that they had bought from a new brand during the first quarantine. For small to medium sized marketplaces, this shift in shopping habits has opened up an opportunity to appeal to these newly adventurous shoppers, willing to discover a new platform.

Yet, while there is a huge opportunity for marketplaces, there is also considerable threat. The Association of Certified Fraud Examiners found that, in August this year, 77% of anti-fraud experts said that the pandemic has created an increase in fraud, while the New York Times reports that more than 200,000 coronavirus-related scam complaints have already been filed in the US this year.

To uncover the scale of this problem in the UK and help online marketplaces to understand how to keep consumers safe in the run up to Christmas, our certified content moderators started tracking six popular UK marketplaces over the peak shopping period. By reviewing listings of the items most associated with fraudulent selling, they are identifying how many show tell-tale signs of risk despite having made it through the marketplaces’ safety measures.

Having so far reviewed over 1000 listings during the first two weeks of November, we have found that:

  • 7% of reviewed listings are likely to be scams
  • Puppies are particularly risky, with 23% being found to be scams
  • 14% of fashion listings are for counterfeit items
  • Smaller marketplaces are particularly rife with scam products

The data demonstrates that no matter how large the marketplace, all marketplaces need to take precautions to prevent fraud on their platforms in the lead up to Christmas and onwards – as these scams and counterfeit posts are certainly not one-offs. Marketplaces need to rid their platforms of fraud. Larger marketplaces may see a smaller percentage of fraudulent posts but considering that they have a much larger user base, even small percentages can lead to thousands of users becoming victims. If marketplaces – of all sizes – do not act now, then they could risk long-term reputational damage and the potential for fraud on their platform to spiral out of control.

Recent research undertaken by Next Consulting on our behalf also demonstrates that:

  • Up to 73% of users would never return to a platform after seeing fraudulent content
  • 76% of users would not recommend a marketplace after having seen fraudulent content
  • 80% would not buy from a brand that they had seen fraudulent posts for previously

Tackling fraudulent activity on marketplace platforms is not an easy challenge. The scam posts and counterfeit goods listed above were post-reviewed, meaning these were posts that had either slipped through a content moderation process or there was no process and analysis in place to begin with. Marketplaces need to review their content moderation strategy and explore a holistic approach: one that layers AI and filter automation with manual moderation in order to provide high-quality review and prevention. AI and automation are effective as a first line of defense against harmful content, but alone they are not a panacea. AI is fantastic for routine health checks, such as flagging suspicious content that follows a consistent pattern. However, some scam posts venture outside the routine and break the patterns that AI typically picks up. That is when human moderation needs to step in to ensure rigorous analysis of the context.
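The layered approach described above can be sketched as a simple pipeline. This is a minimal illustration under assumed function names and thresholds, not any vendor's actual implementation:

```python
def moderate(listing, blocklist, ml_score, approve_below=0.2, reject_above=0.9):
    """Route a listing through filters, then a model, then humans.

    `blocklist` and `ml_score` are hypothetical stand-ins for real
    keyword filters and a trained risk model; the thresholds are
    illustrative, not recommended values.
    """
    text = listing["text"].lower()
    # First line of defense: deterministic filters catch known patterns.
    if any(term in text for term in blocklist):
        return "reject"
    # Second layer: model score; only clear-cut cases are automated.
    score = ml_score(listing)
    if score >= reject_above:
        return "reject"
    if score <= approve_below:
        return "approve"
    # Anything ambiguous goes to a human moderator for context review.
    return "manual_review"
```

The key design point is that automation handles only the confident ends of the score distribution, while the ambiguous middle, where scams break the usual patterns, is escalated to people.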

Working with a trusted partner who can provide guidance, best practice approaches and support for content moderation will be crucial for many marketplaces in the lead up to Christmas and beyond.

Do you need help with your content moderation setup? Get in touch.

When British journalist, Sali Hughes, found herself the victim of online trolls, she took it upon herself to understand why. As part of these efforts, she actually met one of them in person.

While it’s vital that victims, like Ms. Hughes, speak out against the responsible individuals – what of the sites where all of these comments are posted? Negative User Generated Content (UGC) continues to be problematic for all kinds of online marketplaces and classified sites – as well as chat forums. But to what extent can they be held accountable for enabling these kinds of viewpoints to be aired – and to what extent should they be punished?

While the human impact is a critical concern that many businesses, governments, and advocacy groups are striving to curb, there’s a similarly disastrous impact for the sites themselves – not just in terms of reputation or bad user experience; there’s a financial impact too.

Let’s consider the ways in which negative UGC can affect the business bottom line and look at ways companies can put a stop to it.

Bad Content: The Bigger Picture

The need for UGC platforms to respond swiftly and decisively to negative content is a given. Legislators have continually weighed in on these issues. For example, last year (as mentioned in a previous blog), the EU voted to give online businesses one hour, from being contacted by law enforcement authorities, to remove terrorist-related content identified on a site. Failure to comply could incur these businesses a fine of up to 4% of their global revenue.

Similar efforts would have governments taking a more proactive stance – such as British media watchdog, Ofcom’s proposals to police social media earlier this year. But given concerns over freedom of speech and expression, such moves are bound to provoke a backlash – from businesses and individuals.

Another solution is for businesses to regularly audit their own sites, which larger platforms like Facebook and YouTube already do (with varying degrees of success, given Facebook’s continued fines over under-reporting illegal activity). In addition, organizations like GARM (the Global Alliance for Responsible Media) bring advertisers, agencies, media companies, platforms and industry organizations together to improve digital safety.

While the vast majority of online platforms do everything they can to ensure their sites remain safe places for all of their users and customers, the issue is that all of these combined actions don’t stop trolls and cyber criminals.

The Business Impact

In addition to the ongoing regulatory maelstrom, the urgency to respond is exacerbated by a myriad of business concerns. These include retention, conversion, engagement, reputation, and customer satisfaction – all of which can be easily damaged and disrupted by bad or harmful User Generated Content. This in turn can pave the way for other types of negative online behaviours: from scams to fake ads.

Retention, Conversion, & Engagement
When customers lose faith in an online platform – be it a service or a marketplace – there’s a negative impact.

It follows that if users leave, revenues will drop. Lower engagement leads to fewer conversions. But that’s not all: costs increase too, as it becomes increasingly expensive to win back old customers or entice new ones.

Lower user retention stemming from a negative experience pushes up the cost of user acquisition. Similarly, higher user leakage means that the lifetime value of users will drop too.

Given that a bad UGC experience can contribute to a fairly rapid downward spiral, the case for prevention rather than reaction is stronger than most site owners realize.

A company’s reputation online matters just as much as – if not more than – how it’s perceived offline. After all, content tends to have a habit of lingering online. That’s why bad UGC can be so damaging.

It can often be hard for brands to shake the stigma of bad content published about them. By the same token, their reputation can be damaged by (unknowingly) hosting it as well. This can be disastrous for online marketplaces, classifieds, and chat sites, who often need to rebuild trust from the bottom up.

Then of course there are legal and liability issues that can stem from unauthorized UGC as well as harmful content. Take the 2017 case of Kayla Kraft vs Anheuser-Busch. When an image of the claimant was supposedly submitted as part of a campaign to crowdsource advertising images, she filed a lawsuit asserting the image had been used without her consent.

Customer Support
While many businesses focus on providing multichannel support and making it as easy as possible for customers to access their business and support channels, reducing the number of support requests doesn’t always factor in as highly.

This is a mistake. Ultimately, the more calls, emails, and support tickets there are, the higher the cost of customer service – as better trained staff are needed to deal with incoming queries. But, with a more robust, preventative solution in place, the need for a bigger support offering reduces significantly.

Take our client, Connected2Me, a social media platform where users can chat with each other anonymously. While the idea itself is intended to be fun, the team was experiencing more negative User Generated Content than they could handle. As a result, they were getting an increased number of support tickets, which were proving difficult for the in-house team to keep on top of.

When they contacted us in 2018, Connected2Me had tried adding automation to their content moderation workflow, but had not been able to find a solution which could live up to their required accuracy. The team was manually moderating content but needed to ensure 24/7 monitoring in order to reduce the amount of support tickets and provide the best possible user experience.

With our help, Connected2Me now has an accurate moderation solution in place covering numerous languages. They can now move forward confidently, meeting user expectations and providing the experience they were originally aiming for. These efforts are also helping them attract new investment and develop a loyal and happy user base.

You can read the case study in full here.

User Safety = Sales Success

Task most people with introducing themselves to a crowd of strangers in person, and the chances are you’ll see them do their very best to present a positive version of themselves.

But transpose this to an online environment, add a degree of anonymity, allow people to share content, and all kinds of intriguing behaviors can manifest themselves.
Ultimately, online platforms can unwittingly play host to a torrent of negativity – which is why preventative action is wholly necessary at a site level.

From a company perspective, the need to counteract it is as much a business concern as it is a user-centric one. But, when you think about it, the two are in fact one and the same. A trusted site that’s known for quality content, reliable customers, and a great user experience will attract more prospects than a platform in which they’re likely to be subject to scams and abuse.

And as for Sali Hughes? She was surprised by the person she met. It wasn’t some bitter, twisted, housebound hacktivist – it was a well-dressed, professional woman in her mid 30s; the kind of person she might even be friends with in other circumstances.

It just goes to show: you can never second-guess when, where, or from whom online abuse will come – which is why a moderation strategy that can be applied at scale and is specifically designed to uphold your site’s rules and procedures is a safer bet for all.

If, like the companies mentioned here, your online business relies on User Generated Content then you need to make sure that every single customer gets the best experience possible.

Here at Besedo, it’s our goal to help companies do exactly that.

Interested in learning more about how we can help counter the effects of bad UGC 24/7? Then talk to our team today.

Having quality tools is key to performing well, and that’s true for content moderation as well. An outdated or feature-incomplete platform affects everyone from the CEO to the content moderator, and the consequences can be dire, ranging from lost user and operational insights to decreased productivity or even the inability to perform important tasks.

If you are looking for your first content moderation tool, or considering replacing your in-house platform, it’s a good idea to take a step back and consider the options available. It might be tempting to hand the task and your specifications to your dev team and await delivery. On the other hand, it might be better to buy an off-the-shelf solution that’s plug and play. In short: should you build or buy?

The answer is of course very dependent on your situation, your company and your current requirements. We’ve created a list of questions that you should ask yourself and an overview of pros and cons to help you make a more informed decision.

  1. Is content moderation business defining or business critical?
  2. What are the time constraints for implementing a moderation solution?
  3. What developer resources do you have available?
  4. What is your budget?


Is content moderation business defining or business critical?

10 years ago, most online platforms didn’t prioritize content moderation, as there was far less understanding of the negative impact of bad content. Today, most agree that content moderation is business critical for ensuring user trust and conversion. However, business critical isn’t the same as business defining.

For most customer facing businesses, a help-desk tool is critical, but unless customer support and how it’s delivered is a core part of your USP the software you use for customer queries doesn’t have to be unique to your business.

What you do want, though, is a platform that is stable, has all required features and is updated regularly as requirements shift.


What are the time constraints for implementing a new moderation solution?

What is the timeline for implementing your new solution? Building a moderation platform from scratch can be a yearlong project depending on how many developers you can throw at the task.

On the other side of the coin, even out-of-the-box solutions usually require some level of integration with your current systems. It’s good to get an idea of requirements early in the decision process so you understand the timeline for either option. If you do decide to buy, shop around a bit to understand the difference between vendors and how much effort they will put into supporting you during the integration step.


What developer resources do you have available?

Before committing to building or buying, you should figure out what developer resources you realistically have at your disposal. Keep in mind that content moderation tools are almost living entities that need to evolve with your product and ongoing trends.

When you evaluate your developer need for an in-house solution, remember to include ongoing maintenance and new feature development.

For bought solutions, you should, as mentioned above, do a discovery call with potential vendors to understand how much time integration will take.


What is your budget?

Budget is always an important aspect when deciding whether to build or buy. It can be really hard to estimate the cost of a content moderation solution either way. Many vendors have onboarding fees, a monthly fee, and a price per item, which can make the actual monthly bill hard to predict.

For in-house projects, on the other hand, it’s easy to forget the cost of project management, salaries, meetings, and ongoing maintenance and feature updates.

Most importantly many companies forget to keep opportunity cost in mind. What unique features could our developers have created for our core product instead of building a moderation platform?

Whether you decide to build or buy, spend some time investigating potentially hidden costs to avoid unpleasant surprises.
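To make the vendor pricing point concrete, here is a toy calculation. The fee structure and all figures are illustrative assumptions, not any vendor's actual pricing:

```python
def monthly_bill(items_reviewed, monthly_fee, price_per_item,
                 onboarding_fee=0.0, first_month=False):
    """Estimate one month's bill under an assumed fee structure:
    a flat platform fee plus a per-item charge, with a one-off
    onboarding fee added in the first month."""
    total = monthly_fee + items_reviewed * price_per_item
    if first_month:
        total += onboarding_fee
    return total

# With assumed figures, doubling volume greatly raises the bill:
# compare monthly_bill(100_000, 500, 0.01) with monthly_bill(200_000, 500, 0.01)
```

The point is that per-item pricing ties the bill to your volumes, so growth quietly raises costs; ask vendors for projections at several volume levels before signing.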

If you buy, go with a partner that is transparent and straightforward.

If you build, map out every cost, not just direct developer costs, but also adjacent expenditures, especially after the project has been delivered.

Building a Content Moderation Platform:

Pros:

  • Greater feature control (within the project and budget scope)
  • You can build the exact tool you need and ensure that it fits with all your other in-house tools

Cons:

  • Even well-planned projects are subject to scope creep, going over budget, or missing deadlines
  • There is likely already a product on the market that matches most of your requirements; developing from scratch might waste money and time
  • Significant upfront and hidden ongoing costs
  • The timescale for building an in-house tool is significantly longer than for buying an off-the-shelf solution


Buying a Content Moderation Platform:

Pros:

  • Ongoing feature additions from a vendor whose main focus is content moderation software
  • Low upfront cost
  • Ongoing support, training and maintenance
  • Fast go-live date
  • Protection against tech knowledge loss: you are not dependent on in-house developers with specific knowledge and won’t have to worry about development delays or the cost of rehiring and training if they leave

Cons:

  • Less customization to your particular needs; if a solution is missing features, check with the vendor to see if they’re on the roadmap
  • While the initial cost may not be high, make sure you understand the ongoing monthly cost and any additional fees in case your volumes grow
  • While you will be able to deploy a purchased tool faster than building one, you should still factor in implementation time

The decision to buy or build is never easy and depends heavily on the business and the use case. In most cases, it comes down to asking yourself whether building an in-house solution will give you a competitive edge, or whether your needs can be sufficiently covered by an existing content moderation platform.

If you want to better understand your options, feel free to reach out to us and get a transparent offer for a tailored content moderation solution.

Or check out Kaidee’s experience with our content moderation tools.

Most online platforms would agree that images are among the most important elements of a user profile. When it comes to building trust between strangers, having an actual picture of the person you are about to engage with is vital. Whether you are looking for a date, booking a ride or renting a holiday home, interacting with a total stranger can be daunting. In the real world, visual cues like facial expression and body language are intuitively used to decode intent. In the digital world we must emulate these trust markers, and images are a crucial part of this.

There’s one problem though. While good images can put a face on a stranger and boost user trust, bad images can have the exact opposite effect. This is one of the reasons we advocate for image quality and why we’re continuously expanding Implio’s capabilities for catching and managing bad, inappropriate or low-quality images.

The latest tool in Implio’s image moderation toolbox is an AI module for detecting misoriented images.

Why should you care about misoriented images?

The use case is straightforward: misoriented images (e.g. wrongly rotated or upside down) are caught by the model and sent for manual moderation.
Catching misoriented images is important for the overall impression of your site. A bunch of upside-down faces will make browsing time-consuming and confusing at best, or make your platform look unprofessional and scammy at worst.
As more users access and create their profiles using mobile phones, the number of misoriented images increases, and the need to deal with the issue efficiently grows accordingly.

Which is why we’re excited to announce that Implio can now help you automatically identify misoriented images.

How to automatically detect misoriented images

The misoriented module will soon be available to all Implio users out of the box. For now, to gain access, just reach out to us and we’ll activate it for you. When the module is active, all images will be scanned by the AI and tagged as misoriented if they are wrongly rotated.
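One common way to build this kind of detector (a sketch of the general technique, not necessarily how Implio's model works internally) is to score each of the four 90° rotations of an image with a detector, such as a face detector, and keep the rotation that scores highest:

```python
def rotate90cw(image, times=1):
    """Rotate a 2D pixel grid 90 degrees clockwise `times` times."""
    for _ in range(times % 4):
        image = [list(row) for row in zip(*image[::-1])]
    return image

def best_rotation(image, confidence):
    """Return the clockwise rotation (in degrees) at which `confidence`
    scores highest. A non-zero result suggests the image is misoriented.

    `confidence` is a hypothetical stand-in for a real model: any
    function scoring how 'upright' the image looks."""
    return max((0, 90, 180, 270),
               key=lambda angle: confidence(rotate90cw(image, angle // 90)))
```

In practice the scorer would be a face or object detector whose confidence drops sharply on rotated inputs, which is why such models tend to work best on human subjects.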


This tag can then be used in Implio’s powerful rule creator, where you can decide to send matches to manual moderation, reject them outright (not recommended), or take no action if you want to use the rule for tracking purposes only.

Here’s an example of an image caught by the new misoriented module. As you can see, the picture is upside down and has been tagged by the AI with “face” and “misoriented”.

rule match for misoriented images

To the right you can see that it has matched the misoriented rule.

If you decide to send misoriented images to the manual queue, moderators will be able to fix the issue. Here’s a view of Implio’s image editing tool, where you can crop and rotate images as you see fit.

rotating misoriented images

This version of the misoriented-image model works best with human subjects, but we’re hard at work expanding it, and soon we’ll add capabilities that allow the model to tag items with the same level of accuracy.

If you’re looking for a way to optimize how you handle misoriented images on your site or platform, get in touch. We can help you with the setup and take a look at your site for other low-hanging content quality issues that can easily be resolved with a good moderation setup.