Let’s take a look at job scams. What are they, and why are they surging? In this blog post, we will look into the reasons behind the surge, dig a little deeper into the challenges job boards face and the scope of the problem, and explain how a hybrid content moderation solution can help mitigate the risk.

Undoubtedly, rapid advances in Artificial Intelligence (AI) and technology have fuelled increased productivity and efficiency on a global scale. On the flip side, though, as technology evolves, so does fraud.

Job board scams are phishing for your personal details.

A closer look at the data

The pandemic generated a record-high unemployment rate and an ever-growing pool of applicants searching for new job opportunities. In the wake of Covid-19, job scams rose at an unprecedented rate: according to a CNBC report, Americans lost a whopping $68 million to fraudulent job offers in the first quarter of 2022 alone.

The Federal Trade Commission (FTC) states that job fraud tripled over the last three years, while a study published by the Better Business Bureau (BBB) revealed that 14 million victims fell prey to job scams in 2022, with financial losses amounting to $2 billion.

Breaking down the anatomy of job scams

Scams have existed for a long time. They take various forms and can affect big corporations and less tech-savvy individuals alike. Even high-profile companies have been targeted: Google and Facebook lost more than $100 million to business email compromise (BEC) fraud.

Fraudsters are exploiting the turbulent job landscape, using various deceitful tactics to defraud victims and to steal data and sell it on the dark web. Scraping is one method they use to obtain personally identifiable information (PII), which can then be used to create fake passports, driving licenses, or even new bank accounts.

Scammers pose as legitimate employers on popular job boards using a variety of sophisticated scam schemes, ranging from cloned company websites to spear phishing attacks that spread malware, which is then used to commit identity fraud or extract large sums of money.

Fake job listings on social media

Besides job boards, fraudsters share fake job offers on popular and trusted social media networks, capitalizing on the fact that 59% of the world’s population uses social media, spending an average of 2 hours and 29 minutes a day on social networks.

Let’s take a look at how this affects companies of all sizes.

LinkedIn

LinkedIn boasts more than 700 million users across the globe, and with that reach comes enormous potential for fake job ads. Alarmingly, over half a billion LinkedIn users have had their data scraped by fraudsters. Scraping gives them access to publicly viewable data such as:

  • job titles
  • emails
  • former colleagues
  • accolades
  • names
  • phone numbers

This data is then sold to hackers for phishing scams.

They can also attach fake job descriptions as PDFs or share links that contain malware.

Another technique recently seen on LinkedIn is impersonation fraud. Fraudsters use spoofing, stealing companies’ logos and branding while hiding the webmail accounts they actually send from.

“You appeared in [number] searches this week” is one common LinkedIn phishing lure, designed to steal users’ login credentials through fake LinkedIn landing pages.

A 232% spike in email phishing attacks that impersonate LinkedIn has been reported since February 2022.

Twitter

Twitter is another social media platform that is not immune to fake job offers. In particular, scammers use shortened URLs (e.g., Bitly) that lead users off the platform to unverified web pages.

The bottom line is that fake accounts can be created quite easily, whether built on a real identity or an invented one, and social media platforms still struggle to verify user profiles that appear legitimate but are, in fact, padded with fake connections.

Content moderation matters

In-house moderation often lacks the technological know-how and expertise to detect fake job listings, cloned company websites, or malicious URLs, among other phishing methods.

To successfully mitigate the ever-increasing risk of scams, job platforms need a highly sophisticated hybrid moderation solution powered by artificial intelligence (AI), machine learning, and a human workforce.

At Besedo, we pride ourselves on adapting to your platform’s moderation needs and goals. Our filter specialists are committed to supporting you in creating customized rules and filters that align with your platform’s specific guidelines.

Implio is the ultimate hybrid solution: it automates moderation of the bulk of fake job listings and flags any other questionable listings for manual review. Besedo offers a pool of highly trained and experienced moderators who work as an extension of your in-house team, saving time and freeing up resources.

Besedo’s offering is unique because it leverages AI and human intelligence to protect your job platform from fraudulent listings and identify fake employer profiles, spoofed company websites, trojan horse attacks, and malicious links in real time.

Written by

Anamela Agrodimou

Sales And Marketing Specialist at Besedo

Anamela is currently based in Athens, Greece where she works with marketing and sales. She speaks many languages and is keen to learn even more. She got her Master’s degree in marketing at Jönköping University in Sweden and she maintains she liked the snow during the cold Nordic winter.

Start using content moderation today


So you have a product, or you’re developing one, leveraging user-generated content (UGC). Great! Would you rather spend time designing and developing your product – and business – or start your mornings reviewing inappropriate content? 

That’s a somewhat leading question.

An illustration displaying the various layers of content moderation

As a startup, it’s important to focus on your core idea, product development, and design to create a successful product. Those efforts will set your product apart from the competition and make it successful. However, distractions and clutter can shift your focus from what you need to do to what you merely feel you must do.

The expression ‘hire your weakness’ means doubling down on your strengths while leveraging the benefit of others’ expertise. In some cases, you hire for marketing and customer support. In others, you outsource, as with accounting or content moderation. Outsourcing these tasks frees up your time to focus on product development and design.

5 tips for early days development

  1. Set rules: Your content will only be as good as the rules you set up as a business. What type of content do you want to allow, and what don’t you like? Set these rules early and put them clearly in your Terms of Service. Your users will have a much better experience if everyone obeys the house rules.
  2. Be proactive: Be clear about your standards and expectations for content posted on your site or apps. Start moderating content from the get-go and set the tone for what is and isn’t acceptable. Being reactive is much harder, and you will always be playing catch-up.
  3. Be transparent: If users see that you’re being proactive about moderation, they’ll be more likely to trust the process and feel comfortable submitting their own content. Use user-generated content to improve your product and reach new customers. Take feedback seriously, use it to inform your marketing strategy, and always look for ways to turn negative experiences into positive ones.
  4. Be smart and nice: Comedian Jimmy Carr said on Mike Birbiglia’s podcast Working It Out, “Never tell a joke if you gotta look around before telling it.” The same applies to apps and websites; if you need to take a quick look around and say, “it’s just us here – look at what is on this website I run,” that’s an indicator you’ve been neither smart nor nice.
  5. Ask for help: Hiring your weakness also means asking for help when needed. When developing a product, there’s no shame in not being the best accountant or lawyer in the world. You ask for help. The same goes for content moderation. You’ll be surprised by what you can learn from an email or a phone call with an expert.

“Never tell a joke if you gotta look around before telling it.”

— Jimmy Carr

5 reasons why you should outsource content moderation

There are several ways to outsource content moderation for your startup. We are, of course, somewhat biased on the matter *ahem* but here are five tips on outsourcing content moderation for your startup. 

  1. It costs less: When you outsource content moderation, you only pay for the service itself. There are no additional costs, such as recruitment advertising or hiring full-time moderators. Doing this in-house would require staff working around the clock to keep up with everything, and you also have to consider all the languages they need to speak and the time zones they need to cover during their shifts.
  2. It’s faster and more efficient: An outsourced, automated content moderation setup can handle a large volume of content quickly and efficiently. The best content moderation partners provide a hybrid solution where a human touch is added where needed. How are you supposed to know the difference between bullying and banter? And how do you train your staff to understand leet speak? Besedo uses Natural Language Processing, and our filters can detect millions of variations for each unwanted word (see the sketch after this list).
  3. The quality is higher: A hybrid model combining automation and manual moderation is the only way to ensure the highest possible quality. AI is scalable, hard to deceive, and delivers high automation levels. Combine that with filters, a control panel, and manual moderation with extremely high accuracy. Manual moderators also understand context, something an AI does not always grasp.
  4. You don’t have to worry about it: Doing content moderation in-house can be a logistical nightmare. But when you outsource content moderation, you don’t have to plan for holidays, sick leave, vacations, overtime, etc. 
  5. It’s a good investment: Outsourcing content moderation allows businesses to regulate all the posts, comments, and reviews that users submit, across websites, apps, and other digital platforms where fake reviews, NSFW material, and disinformation are rampant. Monitoring scams and threats is a time-consuming process that requires constant attention.
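Tip 2 above mentions filters that catch leet-speak variations of unwanted words. As a rough, hypothetical illustration of that idea (not Besedo’s actual implementation), the Python sketch below normalizes common character substitutions before matching words against a blocklist; the substitution map and word list are invented for the example.

```python
import re

# Hypothetical map of common leet-speak substitutions folded back to letters.
LEET_MAP = str.maketrans({"0": "o", "1": "i", "3": "e", "4": "a",
                          "5": "s", "7": "t", "@": "a", "$": "s"})

# Placeholder blocklist; a real filter would hold far larger, per-platform lists.
BLOCKLIST = {"scam", "idiot"}

def normalize(text: str) -> str:
    """Lower-case the text and undo common leet-speak substitutions."""
    return text.lower().translate(LEET_MAP)

def contains_unwanted(text: str) -> bool:
    """Return True if any normalized word appears on the blocklist."""
    words = re.findall(r"[a-z]+", normalize(text))
    return any(word in BLOCKLIST for word in words)

print(contains_unwanted("what an 1d10t"))  # True: "1d10t" normalizes to "idiot"
print(contains_unwanted("great seller!"))  # False
```

A production system goes far beyond exact matches, of course, adding machine-learning models and context, but normalization is a big part of why simple obfuscation fails.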

The consequences of harmful user-generated content

Harmful user-generated content can have several consequences for your startup. First and foremost, it can damage your brand reputation. Users posting negative comments or reviews about your product or service can dissuade other potential customers from using your business. Harmful user-generated content can also lead to decreased traffic and engagement on your website or social media platforms. If users constantly see negative content about your company, they may be less likely to return to your site or interact with your brand online.

Finally, harmful user-generated content will damage your relationship with the people who contribute the good content. No one wants to see trolls run rampant in the comment section or inappropriate content constantly uploaded to a platform they generally like. If you continually let others ruin a platform they invest time and effort into helping grow, they may become frustrated and stop submitting. Strike a balance between giving users a platform to share their thoughts and maintaining the quality of your site.

You can also download our checklist “Set up your content moderation operations in 6 simple steps” for marketplaces. That’s an excellent start. We’d love to hear what your biggest challenges are right now. 

Video games are a great escape from reality, allowing people to find solace in an imaginary world. But what about the nasty side of gaming, when bullying or grooming occurs?

Gaming has changed drastically over the last decade. Back in the mid-2000s, for example, a player could just start a new game and go off on their own for as long as they wanted. If they wanted to talk to other people online, there were IRC channels or message boards for finding others with similar interests. But today? In-game messaging is the norm, and we have instant messaging apps like Discord and voice chat apps such as TeamSpeak, Mumble, and Element.

Why Is Content Moderation Important?

Content moderation is important for many reasons. It helps keep your gaming platform safe by ensuring that only appropriate content is shared. Additionally, it can enhance the user experience by making sure that users are only seeing content that is relevant and interesting to them. Ultimately, content moderation helps create a positive and enjoyable experience for all gaming platform users.

All types of content should be moderated. This includes but is not limited to video, text, and images. Content moderation will include identifying inappropriate content, such as nudity, hate speech, and illegal activity.

It all boils down to giving everyone the best possible user experience.


What are the advantages of a solid moderation program? It can help protect your gaming platform from trends that disrupt your customer base, prevent the erosion of customer trust in your company and product, and shield you from potential lawsuits from both users and third parties.

Bullying and Grooming in Online Gaming

There are many ways that bullying and grooming can happen on online gaming platforms. These can include:

  • Players sending abusive or threatening messages to other players.
  • Players creating offensive avatars or in-game content.
  • Players engaging in hate speech or harassment in chat rooms or forums.
  • Players manipulating game mechanics to harass other players.

Grooming in games starts with a simple question from another player, such as “are you home alone?”

Content moderation is the best way to combat these issues and keep your gaming platform safe. By moderating user-generated content, you can remove offensive material, stop trolls and bullies from ruining the experience for everyone, and create a safe and welcoming environment for all players.

What Are the Effects of Bullying/Grooming in Online Gaming?

Many established gaming platforms have some form of content moderation to protect users from inappropriate or harmful content. Left unattended, however, your game risks becoming a breeding ground for bullying and grooming. Suffice it to say, this harms victims’ self-esteem and mental health, likely causing them far more harm than missing out on gaming would.

Victims of bullying and grooming will, and rightfully so, speak to others about their experiences. Once word spreads about these experiences, well, you can stick a fork in your game’s reputation. Gamers will speak out in reviews, subreddits, and social media, leaving you facing an incredible uphill battle to save your platform’s credibility and reputation.

Keep your users safe and engaged with the game.

Imagine starting every morning by reviewing a few hundred NSFW images. After your first cup of coffee, you have some grooming to review and possible CSAM activity to talk to the police about.

Sounds awful.

Set yourself up to focus on game development rather than spending endless hours reviewing inappropriate behavior.

Content Moderation to Prevent Abuse in Gaming

In-app messaging and chats need supervision and moderation to be safe. That means that if you’re running a gaming platform, you need to have someone monitoring the conversations that take place within it. This will help keep things safe for all users, and it will also enhance the user experience by ensuring that people can communicate without being harassed or subjected to inappropriate content.

Not to toot our own horn, but companies like ourselves review and approve content before it is made public. While this sounds like it might be a slow process, it actually happens with just milliseconds of latency, meaning people are unlikely to even notice.

If a user abuses an in-game messaging app, technology like Besedo’s will ensure others don’t get a chance to see the offensive messages or images being sent. Real-time moderation for profanity or nudity ensures that chats stay clean and civilized.

That way, Besedo also works to keep the app user-friendly and respectful by enforcing the app’s terms of use. If you’re into podcasts, we highly recommend episode 120, called ‘Voulnet,’ of Darknet Diaries.
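To make the ‘review before it goes public’ idea a little more concrete, here is a minimal, hypothetical sketch of a pre-send gate for in-game chat: every message passes a moderation check before it is delivered, and anything flagged is held back for review. The is_allowed rule below is just a stand-in for a real moderation service (AI models, filters, and a manual review queue).

```python
from dataclasses import dataclass

@dataclass
class ChatMessage:
    sender: str
    text: str

# Placeholder phrases; a real gate would call a full moderation service instead.
BLOCKED_PHRASES = {"are you home alone", "send me a photo of yourself"}

def is_allowed(message: ChatMessage) -> bool:
    """Crude stand-in for a real-time moderation decision."""
    text = message.text.lower()
    return not any(phrase in text for phrase in BLOCKED_PHRASES)

def deliver(message: ChatMessage, recipients: list[str]) -> None:
    """Deliver messages that pass moderation; hold the rest for manual review."""
    if is_allowed(message):
        for user in recipients:
            print(f"to {user} | {message.sender}: {message.text}")
    else:
        print(f"held for review: message from {message.sender}")

deliver(ChatMessage("player1", "good game, rematch?"), ["player2"])
deliver(ChatMessage("stranger", "Are you home alone?"), ["player2"])
```

The important design choice is that the check sits in the delivery path, so flagged content never reaches other players, rather than being cleaned up after the fact.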

Conclusion

Content moderation is a vital step in keeping your gaming platform safe and user-friendly. By taking the time to review and approve all content before it goes live, you can avoid potential problems down the road.

Not only will this save you time and money, but it will also improve the overall quality of your user experience. So if you’re looking for a way to keep your gaming platform safe and improve your user experience, content moderation is the answer.

With the rise of online dating comes the inevitable problem of fake profiles. These are profiles created by people who are not interested in dating but are looking to scam others out of their money or boost their own ego. Learn to spot a fake profile and protect yourself from being scammed, fooled, or harassed.

Photo by Victoria Heath on Unsplash

The problem with fake profiles is bigger than you think

The problem with fake profiles is more widespread than you think. Some sites estimate that as many as 10% of dating profiles are fake. That means that for every 10 people you see on a dating site, one of them is likely not even a real person.

So why do people create fake profiles?

There are a few reasons. Some people do it to scam others out of money. They create a profile, build up a relationship with someone, and then ask for money. Others do it to boost their ego. They create a profile with photos that make them look much more attractive than they really are and then message people they know they will never meet.

Whatever the reason, fake profiles are a problem because they ruin the user experience for everyone else. That is just as bad for customers as it is for businesses.

There are a few things you can do to spot a fake profile, but the best thing you can do is to be aware of the problem and be cautious when you’re interacting with people online. If something seems too good to be true, it probably is. Trust your gut and be careful out there!

5 ways to detect fake profiles

How do you detect fake profiles? It’s actually not as difficult as you might think. There are a few key things to look for that can help you spot a fake profile pretty easily.

  1. The first thing to look for is a lack of personal information. If a profile doesn’t have much information about the person, it’s likely fake.
  2. Look for inconsistencies in the information that is provided. If a person’s age, location, or other details don’t seem to match up, it’s probably because they’re not being truthful.
  3. Ask to see their social media accounts. If they can’t provide you with their Instagram account in this day and age, well, we don’t know what to tell you.
  4. Have a look at the follower/following ratio. A fake profile typically has zero or fewer than 10 followers. Come on, how many people do you know with fewer than five friends on Instagram or Facebook?
  5. Take a look at the photos that are posted on the profile. Fake profiles will often use stock photos or photos clearly not of the person claiming to be behind the profile. If something looks too good to be true, it probably is!

Also, be wary if someone you match with seems to get you all too quickly. You have the same interests, they mention music or a TV show you like; that’s not a red flag on its own, but if you look at the big picture, it might be someone you know who is catfishing you.
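For illustration only, the checks above could be rolled into a simple risk score. This is a hypothetical scoring scheme with invented thresholds, not how any particular dating platform actually works.

```python
from dataclasses import dataclass

@dataclass
class Profile:
    bio_length: int              # characters of personal information provided
    details_consistent: bool     # age, location, and other details line up
    has_social_links: bool       # linked Instagram or other social accounts
    follower_count: int
    stock_photos_detected: bool  # e.g. flagged by a reverse image search

def fake_profile_risk(profile: Profile) -> int:
    """Hypothetical risk score from 0 to 5; higher means more likely fake."""
    score = 0
    if profile.bio_length < 50:
        score += 1  # 1. little or no personal information
    if not profile.details_consistent:
        score += 1  # 2. inconsistent details
    if not profile.has_social_links:
        score += 1  # 3. no social media accounts to show
    if profile.follower_count < 10:
        score += 1  # 4. near-zero follower count
    if profile.stock_photos_detected:
        score += 1  # 5. photos that look too good to be true
    return score

suspect = Profile(bio_length=12, details_consistent=False, has_social_links=False,
                  follower_count=3, stock_photos_detected=True)
print(fake_profile_risk(suspect))  # 5: worth flagging for manual review
```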

How do we stop harassment in dating apps?

We’re all too familiar with the scenario. You mind your own business, chatting away on your favorite dating app, when suddenly the conversation turns left, and you’re bombarded with messages from someone you don’t know. It’s annoying, it’s intrusive, and it can even be scary.

Well, this is where content moderators play an important role in keeping chat apps safe for everyone. We help to stop harassment and bullying by enforcing the policies set by the company behind the platform. Most dating apps have a chat functionality in place that you can use once you have matched with someone. That’s all great.

Say what you want, but be nice and obey the house rules.

Besedo offers chat moderation in real-time

When someone doesn’t obey the house rules, technology like Besedo’s will ensure you don’t get a chance to see the offensive messages sent to you. Real-time filtering for profanity or nudity will ensure that we can keep chats clean and civilized.

That way Besedo also works to keep the app user-friendly and respectful by enforcing the app’s terms of use.

I have been harassed on a dating app

You can do a few things to protect yourself from harassment in chat apps.

  1. Make sure you have the latest version of the app installed. Many chat apps include features that allow you to block or report users who are being abusive.
  2. If someone is harassing you, use these reporting tools to protect yourself. Always report the user and block them.
  3. Take screenshots to provide proof. It can also help protect others.
  4. Be careful about who you add to your chat app contacts list. If you don’t know someone well or if they seem sketchy, it’s best not to add them. This will help reduce the chances of being harassed by someone you don’t know.
  5. Finally, remember that you can always leave a chat app if it’s making you uncomfortable. There’s no shame in doing so, and it’s often the best way to protect yourself from abuse. If someone makes you feel unsafe or uncomfortable, just hit the exit button and move on.

At a time when people are looking for love and connection more than ever, it’s important to be aware of the risks of fake dating profiles. While most dating platforms do their best to keep users safe, there are always going to be some bad actors who slip through the cracks.

That’s why it’s important to be vigilant in online dating. If something seems too good to be true, it probably is. Be wary of anyone who asks for money early on, or who wants to move too fast without getting to know you first.

Scammers will hurt your brand with poor reviews

Dating apps have been accused of promoting a hook-up culture and fostering an environment where users endlessly swipe in search of someone better. But another side to dating apps that can be just as problematic is the proliferation of fake profiles.

Fake profiles are not only a problem for users but also pose a risk to the app itself. If users come across too many fake profiles, they may start to question the legitimacy of the app and its users. This can lead to them deleting the app and leaving negative reviews, which will damage the app’s reputation.

If you’re running a dating app, it’s important to ensure that you’re taking steps to prevent fake profiles from being created. This includes things like requiring verification for new users, using artificial intelligence to identify suspicious activity, and monitoring user reviews for feedback about fake profiles. 

Hi, we’re Besedo; we should talk! Nudge, nudge 😉

Taking these measures can help protect your app from being tainted by scams and poor reviews.

Don’t waste your time with dishonest people

With the prevalence of dating apps and websites, it’s no surprise that there are fake profiles out there. The important thing to remember is to be vigilant and do your research before meeting anyone in person. If you suspect that someone you’re talking to is a fake, report them to the site or app so that they can be removed. And most importantly, don’t waste your time on someone who isn’t being honest with you. 

There are plenty of fish in the sea, so keep swimming!


William Singam

Sales Director APAC

William is the Besedo Sales Director for APAC and you can meet him at the GDI Singapore Conference in July 2022 where he is one of the speakers. When he is not on stage, he’ll be happy to share his wealth of experience about 1-2-1 chat moderation, user experience, app reviews, and just about anything content related.

A good website loads fast, boasts a beautiful design, is search engine friendly, and offers a brilliant user experience. In fact, a website with a poor design can make users feel like your brand is low quality or untrustworthy.

*record scratch*

But if you peel off that top layer of design elements – what is a user experience, really? 

Nielsen Norman Group probably says it best that “user experience encompasses all aspects of the end-user’s interaction with the company, its services, and its products.”

All your design efforts will come up short if your website, or app, is not supporting your users’ goals. To most business owners, these goals are so fundamental that they risk being forgotten when you’re focused on all aspects of your business. With user-generated content platforms such as dating apps, marketplaces, video streaming, etc., you’re essentially handing over a massive chunk of your user experience to your community.

Consider this: You are interested in buying a bike, so you hop on your favorite marketplace app and search for bikes. The search result shows hundreds of postings near you. Great! The only thing is, first you must wade through four pages of inappropriate images, scams, and harassment.

Two apps showing content with and without content moderation
Moderated content is a big part of creating a great user experience

To quote Donald Miller, “a caveman should be able to glance at it and immediately grunt back what you offer.” This is referred to as the Grunt Test; it’s a real thing.

Many marketing reports show that poor design decisions are a key reason customers leave your site. That’s a given. One report says that 88% of online consumers are unlikely to return to a website after a poor experience.

With user-generated content platforms you’re essentially handing over a massive chunk of your user experience to your community.

Most likely, those numbers would be closer to 99% if we removed content moderation from the user experience equation.

The User Experience Honeycomb

At the core of UX is ensuring that users find value in what you provide. Peter Morville presents this magnificently through his User Experience Honeycomb.

The user experience honeycomb as presented by semanticstudios.com

One of the 7 facets of his honeycomb is “credible,” as Morville notes that for there to be a meaningful and valuable user experience, information must be:

Credible: Users must trust and believe what you tell them.

So what if your information and content are user-generated? Then you aren’t the one providing the credibility.

User Experience in user-generated content

We would argue that Credible (or Trust) serves best as the base for your user experience when it comes to user-generated content apps and websites. After all, the user experience is more than just something intuitive to use.

When User Experience Fails Despite Good Design

Few things will hurt your users’ confidence in your app faster than harassment or irrelevant content. In-game chats and, to some extent, dating apps are breeding grounds for trolling. Flame wars can create an unfriendly online environment, making other users feel compelled to respond to abusers or leave your platform entirely. 

Harassment still happens, and no one is immune, despite your platform’s fantastic design.

The emphasis on trust and credibility cannot be overstated when your platform relies on user-generated content.

Online reviews and comments on social media are the new word-of-mouth advertising. With a growing pool of information available online to more consumers, this form of content can either become an effective branding tool or the undoing of a brand.

Trust user reviews, images, and videos

If handing over a big part of your customers’ user experience to largely unknown users feels like a scary ordeal, you’re in for a rollercoaster when it comes to reviews.

Fake online reviews are more prevalent than you might think and could lead you to purchase a product you would not have otherwise. Fake customer reviews are usually glowing, even over-the-top, reading more like infomercials than reviews. One MIT study found that fake reviews typically contained more exclamation points than genuine reviews. Fake reviewers seem to believe that adding these marks emphasizes the emotion behind their feedback.

Conversely, it is not uncommon for sellers to purchase fake, one-star reviews to flood competitors’ pages.

According to research, 91% of people read online reviews regularly or occasionally, and 84% trust online reviews as much as personal recommendations.

Building trust into the user journey

Your online business aims to attract, retain, and engage users; creating an experience that turns them off is definitely not a smart step in that direction. Keep in mind that users should have an accessible and user-friendly experience throughout this journey with you. We even published a webinar about building trust into a user journey if you’re interested.

Find out which online marketplaces are the biggest in various countries, categories, and much more in our definitive list of marketplaces worldwide.

There are obvious advantages for sellers on the top-rated marketplaces. With so many visitors, Amazon offers significant benefits when you choose to sell online, and the established audience is one of the biggest reasons to trade on online marketplaces. If you are a seller just getting started online, marketplaces can be a fantastic way to earn some income and establish your brand while you work on driving traffic to a new e-commerce site.

Just how big are the biggest marketplaces in the world?

What is an online marketplace?

First of all, we need to understand what defines an online marketplace. It boils down to two key features:

  1. Sellers and buyers are trading through the same website (or app).
  2. The buyer can complete their purchase on the website (or app).

This excludes price comparison sites like PriceRunner or Google Shopping. They are essentially advertising channels rather than online marketplaces.

The buyers are mainly consumers, not businesses. The marketplace sells physical products, not just downloads, streaming, or other services.

We start with approximately 200 marketplaces with more than one million monthly visits. Then we look at the most popular product categories and the break-out stats for a few countries.

In short, we are looking at actual online marketplaces where you can sell physical products to consumers.

The world’s top online marketplaces

#  | Name          | Category             | Visits/month
1  | Amazon        | General              | 4.81B
2  | eBay          | General              | 1.18B
3  | Rakuten       | General              | 542.7M
4  | Mercado Libre | General              | 511.8M
5  | Zalando       | Fashion              | 420.0M
6  | Shopee        | General              | 415.7M
7  | AliExpress    | General              | 390.9M
8  | Walmart       | General              | 387.3M
9  | Etsy          | Arts, Craft & Gifts  | 373.2M
10 | Taobao        | General              | 277.9M
11 | Wildberries   | General              | 232.7M
12 | Trendyol      | General              | 222.1M
13 | Allegro       | General              | 189.5M
14 | Flipkart      | General              | 186.9M
15 | Pinduoduo     | General              | 183.8M
16 | Target        | General              | 165.9M
17 | JD            | General              | 164.7M
18 | Ozon          | General              | 164.5M
19 | Tokopedia     | General              | 158.8M
20 | Mercari       | General              | 132.6M
21 | Olx           | General              | 102.1M
22 | Tmall         | General              | 113M
23 | Wayfair       | Homewares            | 98.98M
24 | Americanas    | General              | 97.67M
25 | Alibaba       | General              | 90.76M

Estimated monthly visits for April 2022, from SimilarWeb. Traffic to different domains (e.g., amazon.com, amazon.co.uk, amazon.de, etc.) is combined.

All of these except one (Amazon) are pure marketplaces without any retail operations of their own, and all sell general goods except Zalando (fashion), Etsy (arts, crafts, and gifts), and Wayfair (homewares).

“Wayfair and Amazon account for 63% of furniture sales online”

Only Amazon and eBay break the one billion visits mark. However, Rakuten and Mercado Libre aren’t too far behind, with over 500 million per month. And Zalando is hot on their heels with 420 million visits per month.

Amazon is the most well-known retailer to have its own marketplace, with more than 50% of sales now made through Marketplace sellers. In addition, the biggest players in online furniture marketplaces, Wayfair and Amazon, account for 63% of furniture sales online.

US-based e-commerce giant Amazon, the top-ranked e-commerce company worldwide by market capitalization, is also the third-largest online furniture market. Amazon.com is the most visited e-commerce domain, averaging over 2.3 billion monthly visits.

If you merge the visits to the biggest Amazon domains, the visits are almost 5 billion per month.

It is not surprising to see Amazon and eBay in the top three; eBay alone receives 1.2 billion monthly visits. Add up Amazon, eBay, and Etsy, and you are looking at 500 million+ monthly active visitors, an enormous amount of real estate on the internet.

How did the pandemic change our online adoption rate?

According to a study by McKinsey, COVID-19 has pushed companies over the technology tipping point and transformed business forever. Digital adoption has taken a quantum leap at both the organizational and industry levels. Of course, this affects our consumer behavior on marketplaces worldwide.

Graph with illustration of leap in digitization

Markets in Europe

The most popular European market is Amazon, which gets 1.6B visits per month. At the same time, eBay receives less than half that traffic, at 634M visits.

Another American-based, general-purpose global marketplace, eBay, received 255 million monthly visits in the United Kingdom.

With these impressive numbers, eBay is the only marketplace that comes anywhere near matching Amazon’s numbers in the UK for visitors. 

Amazon is the largest market in the US, with over 300 million customers on Amazon, 100 million of whom are Prime members. The best-known online marketplace is also Amazon, thanks to Amazon’s strong delivery and fulfillment capabilities and its seamless shopping experience. 

Walmart offers various categories of products that draw large volumes of visitors every month, making it one of the leading online markets in 2022. 

Since the rise of the titans such as Amazon, eBay, and Alibaba, brands have been racing to thrive, compete, or get beat in the online markets. Companies like Walmart added marketplaces to their existing retailers’ websites, giving shoppers more product choices while creating price competition among sellers. Of these major online markets, three are in China, and two are based in the U.S., the two biggest drivers of e-commerce sales growth.

The same is true for the sales from online retailers, which are expected to also increase significantly over the next few years. 

There are also niche online markets such as Bonanza, Fruugo, and Hollar, fashion-focused markets like Zalando and Fullbeauty, and deal-focused markets such as Tophatter and Tanga – the list goes on. 

European consumers are using the greatest number of different marketplaces – 63 have more than one million visits per month, generating more than 3.6 billion visits in total.

Let’s have a look in-depth at some categories.

Fashion online marketplaces

#  | Name     | Country/Region | Visits/month
1  | Zalando  | Europe         | 420M
2  | Shein    | Global         | 148.3M
3  | ASOS     | Global         | 64.64M
4  | Myntra   | India          | 53.46M
5  | Zozo     | Japan          | 48.95M
6  | Ajio     | India          | 31.55M
7  | StockX   | Global         | 30.51M
8  | Vinted   | France         | 29.81M
9  | DSW      | USA            | 24.97M
10 | Farfetch | Global         | 23.06M

Clothing and fashion is one of the most popular online marketplace niches. Popularized by many influencers online, the fashion and clothing industry has really found its market on popular apps like TikTok and Instagram.

Fashion marketplaces are spread worldwide, with Europe, the USA, and India in the top-5 spots.

Electronics online marketplaces

#  | Name             | Country/Region | Visits/month
1  | Bestbuy.com      | USA, Canada    | 46.46M
2  | Gearbest.com     | USA, Canada    | 52.33M
3  | Offerup.com      | USA            | 20.15M
4  | Newegg.com       | USA, Canada    | 13.25M
5  | Bhphotovideo.com | Global         | 12.56M
6  | G2A.com          | USA, Canada    | 10.18M
7  | Digitec.ch       | Switzerland    | 9.34M
8  | Shutterfly.com   | USA, Canada    | 6.65M
9  | CDON             | Europe         | 5.78M
10 | Game.co.uk       | UK             | 2.12M

For anyone outside of the USA, a surprise inclusion on this list is probably Offerup, in 3rd place.

Electronics are typically commodities – easily available and extremely price-sensitive.

Travel and Tourism

Travel and tourism fall outside our scope since we only look at physical goods. But just for comparison, we thought we’d have a look at the top 10. Booking.com is three times bigger than the runner-up, Tripadvisor. Booking is also bigger than Tripadvisor, Airbnb, Expedia, Uber, and Jalan (positions 2–6) combined.

#  | Name               | Country/Region | Visits/month
1  | Booking.com        | Global         | 490.5M
2  | Tripadvisor.com    | Global         | 148.1M
3  | Airbnb.com         | Global         | 89.20M
4  | Expedia.com        | Global         | 88.19M
5  | Uber.com           | Global         | 75.85M
6  | Jalan.net          | Japan          | 52.38M
7  | Hotels.com         | Global         | 51.17M
8  | Agoda.com          | India          | 48.68M
9  | Travelersdream.com | USA            | 47.36M
10 | Vrbo.com           | USA            | 44.89M

“Booking is bigger than Tripadvisor, Airbnb, Expedia, Uber, and Jalan combined.”

Top online marketplaces by country and region

#  | Region         | Marketplaces* | Visits/month
1  | North America  | 55            | 4.5B
2  | Europe         | 69            | 4.1B
3  | East Asia      | 19            | 2.7B
4  | Latin America  | 19            | 1.5B
5  | Southeast Asia | 15            | 820M

* Includes only marketplaces with more than one million visits per month.

North American consumers generate the most traffic to online marketplaces, with a little more than 4.5 billion visits per month and more than 50 different marketplaces receiving one million or more visits each.

Europe is the runner-up in traffic but has the highest number of marketplaces with over one million monthly visits each.

The third is East Asia, and that is primarily China and Japan, with an estimated 2.7 billion visits.

Online marketplaces by country

Breaking down the top online marketplaces by country is a little trickier because we can’t find data for many countries, so here we cover the United States and the United Kingdom.

United States

#  | Name         | Category             | Visits/month
1  | Amazon       | General              | 1.46B
2  | eBay         | General              | 665.5M
3  | Etsy         | Arts, Crafts & Gifts | 371.3M
4  | Walmart      | General              | 363.1M
5  | Target.com   | General              | 147.5M
6  | Wayfair      | Homewares            | 98M
7  | Poshmark     | Fashion              | 42.3M
8  | Bestbuy      | Electronics          | 41.4M
9  | Samsclub.com | General              | 35.8M
10 | Overstock    | General              | 23.8M

United Kingdom

#  | Name          | Category             | Visits/month
1  | Amazon        | General              | 350.1M
2  | eBay          | General              | 238.1M
3  | Etsy          | Arts, Crafts & Gifts | 33.39M
4  | ASOS          | Fashion              | 19.6M
5  | JohnLewis.com | General              | 16.74M
6  | Very          | Fashion              | 11.1M
7  | Discogs       | Music                | 5.1M
8  | ManoMano      | Homewares            | 4.6M
9  | Depop         | Fashion              | 3.1M
10 | Homebargains  | Homewares            | 2.3M

About the data

The lists are ranked by estimated website visits based on SimilarWeb and Statista data for April 2022. Please note that traffic to different domains for the same marketplace (amazon.com, amazon.de, amazon.jp, etc.) has been combined. Gross Merchandise Value would arguably be a better measure of size, but that data is not available for most marketplaces.

Due to the lack of reliable traffic data from the sources, we have not included app-only marketplaces.
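As a small illustration of what ‘traffic to different domains has been combined’ means in practice, the sketch below sums per-domain visit estimates into one figure per marketplace. The domain list and numbers are made up for the example; they are not the SimilarWeb figures used in the tables above.

```python
# Hypothetical per-domain monthly visit estimates (illustrative numbers only).
domain_visits = {
    "amazon.com": 2_300_000_000,
    "amazon.co.uk": 350_000_000,
    "amazon.de": 450_000_000,
    "ebay.com": 700_000_000,
    "ebay.co.uk": 250_000_000,
}

def combine_by_brand(visits: dict[str, int]) -> dict[str, int]:
    """Group domains by the brand part of the hostname and sum their visits."""
    totals: dict[str, int] = {}
    for domain, count in visits.items():
        brand = domain.split(".")[0]  # "amazon.co.uk" -> "amazon"
        totals[brand] = totals.get(brand, 0) + count
    return totals

print(combine_by_brand(domain_visits))
# {'amazon': 3100000000, 'ebay': 950000000}
```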

This is Besedo

The all-in-one platform for content moderation

We provide automated and manual moderation for online marketplaces, online dating, sharing economy, gaming, communities and social media.


One of the surprising and amazing things about the internet, especially when we’re talking about things involving user-generated content, is how many ideas become well-established parts of society before we even fully understand them.

Online dating is a perfect example. While the first big dating site was launched more than 25 years ago, Tinder – which broke new ground for dating apps – only just turned 10. And yet, it’s already perhaps the main way of meeting a partner, with a Stanford study in 2019 finding that 39% of heterosexual couples met online.

Despite its massive popularity, though, we’re not necessarily wise to everything that dating apps have to throw at us or aware of all the ways they’re changing society or us. Unusual and potentially dangerous trends are emerging from online dating all the time, and never fail to claim headlines in the media: Stylist, for example, recently described voicefishing as “the freaky new dating trend we’re trying to get our heads around.”

The Challenge For Apps

On the one hand, then, there’s frequent discussion of the dangers that online dating can pose, but, on the other hand, people clearly keep coming back anyway to enjoy the benefits of digital matchmaking. Dating apps clearly have a job to do to make sure that people are empowered to be flirty, but not dirty; daring, but not dangerous.

That work all comes down to steering user interactions in the right direction, and that’s not always easy. The difficulty is illustrated by a study that recorded how dating app users speak about their experiences. Direct quotes from study participants show how dating apps – like many online venues – develop their own language, codes, and norms:

  • “I get him a different picture and I make his profile his “buyer” – he didn’t have a buyer. I made his profile a buyer, and said “You can always go back” and it blew up!”
  • “…sometimes they’ll write “say orange if you’ve read this.” And so you’re expected if you match, the first thing you say to them is orange to show that you’ve actually read through it.”

If you don’t know what some of these phrases mean, well, don’t worry: that’s exactly the point! We know that dating app users successfully find partners and have fun doing it – but while they’re doing so, they also have to navigate a fast-changing environment of language and interaction.

The moderation response

What is a challenge for users, here, is just as much of a challenge for content moderation strategies: both human moderators and automated tools need to constantly learn, adapt, and evolve in order to keep users safe. At the same time, though, the freedom to be playful and inventive in how you speak and interact with others is an important part of how dating apps work for people. While the medium might be digital, it’s still (hopefully) a real person on the other side of the screen – and, just as with bumping into someone on the street or meeting them in a bar, the element of surprise and fun is essential.

There’s a fine line to tread, then, when moderating dating apps. We need to react quickly to new threats, but intervene gently; we need to be strong in safeguarding users, but permissive about how they act; and we need to listen and learn attentively about how people interact. Needless to say, it’s a job that takes real expertise.

What do you think? Do dating apps get it right when protecting and helping their users? How can we respond to the ways language is evolving online?

And what is a “buyer”, anyway?

If you want to talk about it, reach out today.

Axel Banér

Sales Director – EMEA

In parts one and two of this blog post series about the evolution of language, we talked about how moderating user-generated content (UGC) echoes how people communicate. And also how the rapid evolution of language online is now making that job harder.

In short: there’s nothing new about setting rules for acceptable speech, but we have to get faster about how we do it.

However, it’s also worth thinking about how online communication doesn’t just build on how offline communication works but offers something genuinely new and different. The fact that email was created to be a digital equivalent of postal mail, for example, is right there in the name – but today, email offers much more than the post ever could, from uniquely personalized content to embedded video.

Across the internet, there’s a wealth of communication options, ranging from adding simple emoji to broadcasting yourself live to millions of viewers, which don’t have a direct offline equivalent. In a way, of course, pointing this out is stating the obvious; those otherwise impossible options are, to a large extent, precisely why the internet is so powerful and popular.

Risk-reward?

And yet, from a business perspective, it would be easy to look at this UGC and see it as something quite similar to cybersecurity. Cyber attackers are often locked in a kind of arms race with security professionals, each trying to identify weaknesses first and develop more robust tactics than the other. A wide variety of communication options also means a range of potential ways to get around the policies that a platform might want to impose – whether that is stopping people from conducting business through other channels or monitoring for much graver issues like abuse or hate speech.

Giving shoppers the power to post videos of products they purchase, for example, has clear benefits in building credibility. But, conversely, users can use that feature to publish irrelevant or even maliciously untrue content. Or, building reaction gifs into an online dating messaging platform might enrich conversations but could also be used as an avenue for guerilla marketing.

The sheer variety at play here marks a real difference from the offline reality of (mostly) speech and writing.

While these concerns are well-founded, thinking about this kind of UGC in these terms runs the risk of missing how vital it is as an engine of growth for online businesses: the perception of danger might cloud sight of the benefits.

The most successful moderation approaches are about enabling interactions as much as they are about blocking them; not an arms race, but teamwork.

New moderation for new communication

It’s becoming more widely understood that offering advice about, examples of, and benefits in return for positive behaviors on platforms is ultimately more effective than punishing negative behavior. This is something that research has shown, and it’s a method that large online platforms are increasingly turning to.

Here we might be looking at something fundamentally different from the long offline history of moderating speech, which has typically relied on limiting certain expressions and interactions.

When businesses make themselves open to users and customers communicating in richer ways, we think that the best approaches will focus on how moderation can empower users in ways that enable growth. An entirely conservative approach will only stifle the potential of audiences, customers, and users.

These new worlds of content will not be effectively moderated using tools and methods adopted to deal with purely text-based interactions. As users’ interactions become more complex, we will need human input to oversee and understand how those interactions are working.

Petter Nylander

CEO

Dating apps are once again preparing to be abuzz with activity for Valentine’s Day. Even though attitudes toward dating apps have become increasingly positive over the past few years, with platforms gaining in both popularity and users, they have, throughout their short existence, continued to attract a great deal of attention for the risks they pose to users from a personal safety perspective.

Any dating app user will be familiar with the anxiety involved in moving from digital to in-person interactions, and unfortunately, that anxiety has a legitimate source. According to the Pew Research Center, one in two online dating users in the US believes that people setting up fake accounts to scam others is very common.

The financial details back them up, too: the FTC recently highlighted that, with $1.3b in losses over the last five years, romance scams are now the biggest fraud category they track.

And people who strike up online relationships between Christmas and Valentine’s Day might be at particular risk of romance fraud. Last March, for example, the UK’s National Fraud Intelligence Bureau experienced a spike in romance fraud reports. It’s little wonder, then, that Netflix chose the start of February to release its true-crime documentary The Tinder Swindler.

Online dating apps are now entirely mainstream, one of the default ways of meeting people, with over 300m active users, so it is more important than ever that the businesses running them take strong steps to protect user safety. This is a moral imperative, of course, in terms of working for users’ best interests – but, as the market matures, it’s also quickly becoming a potentially existential problem for dating platforms.

Challenges faced by those looking for love

When it comes to managing a company’s online reputation, user experience and business outcomes are often one and the same, which makes moderation an important measure to consider. Disgruntled customers, for instance, often use social media to publicly criticize companies, leading to a backlash that can rapidly spiral out of control.

It’s not easy, however: online dating is, understandably, a highly sensitive and personal area. Users who might otherwise be highly cautious online are more likely to let their guard down when it comes to looking for love. Platforms have a duty of care to put a stop to fraudulent behavior and to support and protect their users in a way that does not feel ‘intrusive’.

Effective moderation in this space demands a range of approaches. A well-moderated dating app generates a more seamless and convenient user experience which in turn reduces spam content and unhappy user feedback. Keeping users safe, creating the right brand experience, and building loyalty and growth go hand in hand.

How it works in practice

As we enter a peak season for online dating, a moderation strategy that brings users closer to the people they want to connect with, with less spam and a clearer sense of safety, will be a real competitive differentiator. Ensuring a safe and positive user experience should be at the heart of dating sites’ content moderation strategy.

AI-enabled content moderation processes are essential to catch and remove these fraudulent profiles before they target vulnerable end-users. Online dating app Meetic improved its moderation quality and speed, reaching 90% automation at 99% accuracy, through an automated moderation platform.

With dating apps relying so heavily on user trust, it is essential that platforms are able to detect and remove scammers, whilst maintaining a low false-positive rate to ensure minimal impact on genuine users. Content moderation teams must also be continuously trained and updated on the ever-evolving tricks of romance scammers.

A content moderation partner can be a great way to ensure high accuracy and automated moderation to maintain a smooth customer experience. Only with a team of highly trained experts coupled with precise filters and customized AI models will online dating sites be truly efficient at keeping end-users safe.

Platforms cannot afford to make this a ‘non-issue’ – even if users do not experience it themselves, many will see others being harassed online and experience negative feelings towards the brand and platform. For platforms, everything is at stake for both their reputation and ultimately, the wellness of their users.

Update October 31, 2022: Thank you to Bedbible for reaching out. We have updated our link reference to their site. You should check them out, they are amazing.

Martin Wahlstrand

Regional Sales Director Americas

Martin is Besedo’s Regional Sales Director Americas. While you can’t swipe right on anyone here at Besedo, Martin and his team would love to give you a demo of how content moderation can help your users be safer and have a great user experience.

Evolution of language, part two: flexing with the speed of conversation

Have you ever been overheard and misunderstood by a stranger? It’s not, thankfully, an everyday occurrence, but it is something that most of us have probably experienced at least once. Imagine you’re talking to a friend somewhere public, perhaps in a café or on public transport. Maybe the conversation turns to discussing a film you both like or joking about a recent political event. Suddenly, you realize that, a few meters away, someone has caught a few words midway through your chat, and doesn’t look at all happy about how you’re speaking.

Words don’t have to mean anything offensive in order to cause concern when taken out of their context – of being a fictional story, for instance, or an inside joke between friends. Language is always social, and seeing it from a different social vantage point can cause serious errors in interpretation.

The social question in content moderation

Being mistakenly overheard is a very small version of something which happens all the time on a much larger scale. Particularly in an age where cultural ideas spread virally, it’s not unusual for people to find it hard to keep up with what the conversations around them mean. For example, a new hit crime series on Netflix may leave someone confused, at least for a day or two, as to why they keep hearing people describing gruesome murders.

If this kind of event can temporarily wrong-foot human beings, though, it’s a much more persistent problem for content moderation systems. After all, while an office worker can ask their colleagues what is going on, content moderation generally can’t directly ask the user what they mean, even when human moderators are involved. Automated systems, meanwhile, can maintain full awareness of anything happening on-platform – but often have little scope to understand that in terms of the wider world.

In one way, this situation is unsurprising: content moderation systems have evolved to meet specific business needs, such as protecting brand reputation, maintaining revenue, and protecting user safety, and the state of the art is extremely effective at achieving this. Tools from filter lists to machine learning are very powerful when the aim is to create clear boundaries for acceptable speech.

They are less well-suited, however, to the situations that can cause the greatest friction with users, when seemingly normal interactions are punished without any apparent explanation. No matter how well trained and refined a business’s content moderation is, a system focused on the platform will always have the potential to be surprised by the world changing around it.

The cultural stakes of moderation

As user-generated content takes on an ever more central position in people’s daily lives, content moderation likewise takes on ever more responsibility to behave effectively, and the potential for disagreement between businesses and userbases grows ever more severe. In the extreme end, this can lead to international media attention on a system trying to deal with content it was not specifically designed for.

To put it another way, while protecting (for example) brand reputation may once have meant strictly enforcing rules for user interaction, expectations of user content are evolving and the demands on businesses that rely on user content are becoming more subtle. Automated moderation which doesn’t keep pace with cultural shifts, therefore, is becoming less palatable.

One consequence of this is that we still rely on humans to help decide which side of the line to err on in ambiguous situations. While they can’t directly address users, human moderators nonetheless outperform their machine colleagues when it comes to making subtle distinctions. This still leaves problems, however, of establishing large enough teams to cope with the volume of content and ensuring that the manual review process engages with the right content in the first place.

In the longer term, the expertise of the content moderation community will have to be seriously applied to thinking about how to help create healthier, more human conversations – not just limiting those which are clearly negative. We think that user-generated content presents a far greater opportunity for sustainable growth than it does a risk factor for brand damage; as we consult with our customers (and speak to you through blogs like this) we’re always keen to hear more about how the next, more subtle generation of content moderation tools might best be designed.

Petter Nylander

CEO