From dating sites and online marketplaces to social media and video games – content moderation has a huge remit of responsibility.

It’s the job of both AI and human content moderators to ensure the material being shared is neither illegal nor inappropriate – always acting in the best interest of end-users.

If you’re getting the content right for your end-users, they will want to return and hopefully bring others with them. But content moderation is not a form of censorship.


Even when every piece of content added to a platform is checked and scrutinized, content moderation is not censorship – nor is it policing.

Come along, and we’ll show you the evidence.

Moderating content vs. censoring citizens

Content moderation is not a synonym for censorship. In fact, they’re two different concepts.

In 2016, we looked at this in depth in our article Is Moderation Censorship?, which explains the relationship between content moderation and censorship. It also gives some great advice on empowering end-users so that they don’t feel censored.

But is it really that important in the wider scheme of things?

Content moderation continues to make headline news due to the actions taken by high-profile social media platforms, like Twitter and Facebook, against specific users – including, but not limited to, the former US President.

There’s a common misconception that the actions taken by these privately-owned platforms constitute censorship. In the US, this is often read as a violation of First Amendment rights concerning free speech.

The key is that the First Amendment protects citizens against government censorship.

That’s not to say privately-owned platforms have an inalienable right to censor, but it does mean they’re not obliged to host material deemed unsuitable for their community and end-users.

The content moderation being enacted by these companies is based on their established community standards and typically involves:

  • Blocking harmful or hate-related content
  • Fact-checking
  • Labeling content correctly
  • Removing potentially damaging disinformation
  • Demonetizing pages by removing paid ads and content

These actions have invariably impacted individual users because that’s the intent – to mitigate content that breaks the platform’s community standards. In fact, when you think about it, making a community a safe place to communicate actually increases the opportunity for free speech.

Another way to think about content moderation is to imagine an online platform as a real-world community – like a school or church. The question to ask is always: would this way of behaving be acceptable within my community?

It’s the same with online platforms. Each one has its own community standards. And that’s okay.

Content curators – still culpable?

Put another way, social media platforms are, in fact, curators of content – as are online marketplaces and classified websites. When you consider the volume of content being created, uploaded, and shared, monitoring it is no easy feat. Take YouTube, for example. As of May 2019, Statista reported that more than 500 hours of video were uploaded to YouTube every minute. That’s nearly three weeks of content per minute.

These content-sharing platforms actually have a lot in common with art galleries and museums. The items and artworks in these public spaces are not created by the museum owners themselves – they’re curated for the viewing public and given contextual information.

That means the museums and galleries share the content but are not liable for it.

An important point to consider is that if you’re sharing someone else’s content, there’s an element of responsibility.

As a gallery owner, you’ll want to ensure the content you display doesn’t violate your values as an organization and community. And like online platforms, art curators should have the right to take down material deemed objectionable. They’re not saying you can’t see this painting; they’re saying, if you want to see this painting, you’ll need to go to a different gallery.

The benefits of content moderation for your business

To understand the benefits of content moderation, let’s look at the wider context and some of the reasons why online platforms use content moderation to help maintain and generate growth.

First, we need to consider the main reason for employing content moderation. Content moderation exists to protect users from harm. Each website or platform will have its own community of users and its own priorities in terms of community guidelines.

Content moderation can help to build that trust and safety by checking posts and flagging inappropriate content. Our 2021 survey of UK and US users showed that one-third of users still felt some degree of mistrust, even on a good classified listings site.

Second, ensuring users see the right content at the right time is essential for keeping them on a site. Again, looking at classified ads, our survey revealed that almost 80% of users would not return to a site where an ad lacking relevant content was posted – nor would they recommend it to others. This lack of relevant information was the biggest reason users clicked away from a website. Content moderation can help with this too.

Say you run an online marketplace for second-hand cars – you don’t want it to suddenly be flooded with pictures of cats. In a recent example from the social media site Reddit, the subreddit r/worldpolitics was flooded with inappropriate pictures because the community was tired of it being dominated by posts about American politics, and of moderators frequently ignoring posts that were deliberately designed to farm upvotes.

Moderating and removing inappropriate pictures isn’t censorship. It directs the conversation back to what the community originally was about.

Third, content moderation can help to mitigate scams and other illegal content. Our survey also found that 72% of users who saw inappropriate behavior on a site did not return.

A prime example of inappropriate behavior is hate speech. Catching it can be a tricky business due to coded language and imagery – something we’ve covered in several blog posts about identifying hate speech on dating apps.

Three ways to regulate content

A good way to think about content moderation is to view it as one of three forms of regulation. This model has gained a lot of currency recently because it helps explain the role content moderation plays.

Firstly, let’s start with discretion. In face-to-face interactions, most people will tend to pick up on social cues and social contexts, which causes them to self-regulate. For example, not swearing in front of young children. This is personal discretion.

When a user posts or shares content, they’re making a personal choice to do so. Hopefully, discretion will also come into play for many users: will what I’m about to post cause offense or harm to others? Do I want others to feel offended?

Discretion tells you not to do or say certain things in certain contexts. Sometimes, we all get it wrong, but self-regulation is the first step in content moderation.

Secondly, at the other end of the scale, we have censorship. By definition, censorship is the suppression or prohibition of speech or materials deemed obscene, politically unacceptable, or a threat to security.

Censorship has government-imposed law behind it and conveys that censored material is unacceptable in any context because the government and law deem it to be so.

Thirdly, we have content moderation, sitting in the middle between these two.

This might include things like flagging harmful misinformation, eliminating obscenity, removing hate speech, and protecting public safety. Content moderation is discretion at an organizational level – not a personal one.

Content moderation is about saying what you can and can’t do in a particular online social context.

Summary and key takeaways

Okay, so what can Besedo do to help moderate your content?

  • Keep your community on track
  • Facilitate the discussion you’ve built your community for (your house, your rules)
  • Allow free speech, but not hate speech
  • Protect monetization
  • Keep the platform within legal frameworks
  • Keep a positive, safe, and engaging community

All things considered, content moderation is a safeguard. It upholds the trust contract that users and website owners enter into. It’s about protecting users and businesses and maintaining relevance.

The internet’s a big place, and there’s room for everyone.

Contact our team today to learn more about what we can do for your online business.


From ancient Greek merchants attempting to claim insurance by sinking their ships to Roman soldiers selling off the Emperor’s throne – since time began, fraudsters have been willing to try their luck whenever a system is open to exploitation.

And succeeding.

However, most historical crimes were essentially isolated events. A single fraudster could be a repeat offender, but compared with the number of people who can now be duped across different digital channels, the scale was small. Today, fraud is much more of an everyday concern for all of us.

But how did we get to this point? What risks do we need to be aware of right now? What can we do about it?

A history of digital deviance

Similar to other forms of fraud, digital scams date back further than you might think. Email phishing allegedly first took place in the early 1970s, although it’s generally accepted that the term was coined – and the practice became commonplace – in the mid-1990s.

Since then, the online world has seen con artists try their hand at everything from fake email addresses to using information gleaned from massive data breaches – with $47 million being the largest amount a single victim has lost to an email scam.

Incidentally, the most famous email scam, the 419, aka Advance-fee, aka The Nigerian Prince scam, surfaced as mail fraud some 100 years ago.

But email isn’t the only digital channel that’s been hijacked. The very first mobile phone scams came about during the high-flying ’80s, when handsets first became available – way before they were popular.

Text messages purporting to come from a family member and requesting funds be “quickly sent” to a specific account began soon after, though again they didn’t surge in number until well into the ’90s, when uptake soared.

Of course, these aren’t the only forms of online fraud that surfaced at the start of the Internet’s popularity. Password theft, website hacks, and spyware – among others – proliferated at an alarming rate worldwide at a similar time.

One of the biggest problems we face today is the ease with which online fraud can take place.

Hackers continue to evolve their skills in line with advances in tech. But when you consider the number of websites anyone can access – and the marketplaces and classified websites and apps that rely on user-generated content – pretty much anyone can find a way to cheat these systems and the people who use them.

Fraud follows trends

Scammers operate with alarming regularity all year round. However, they’re much more active around specific retail events.

2020 was shaping up to be a landmark year for fraudsters, given the many sporting and cultural events planned – the Euro 2020 and Copa América football tournaments and, of course, the Summer Olympics. But fate had very different plans for all of us, in the form of the COVID-19 pandemic.

But true to form, scammers are not above using an international healthcare crisis to cheat others. COVID-19 has given rise to different challenges and opportunities for online businesses. For example:

  • Video conferencing services
  • Delivery apps
  • Dating websites
  • Marketplaces

These have largely benefited financially, given that they are digital services.

However, given the knock-on economic factors of coronavirus, there may be wider-reaching behavioral shifts to consider.

Fraudulent behavior simply seems to adapt to any environment we find ourselves in. In the UK, research shows that over a third (36%) of people have been the target of scammers during the lockdown. Nearly two-thirds said they were concerned that someone they knew could be targeted.

Examples playing on the fear of contamination include the sale of home protection products, while other more finance-focused scams include fake government grants (requesting personal information), help with credit applications (for a fee), and even investment opportunities promising recession-proof returns for those with enough Bitcoin to put into such schemes.

The gap is closing

It’s clear that online fraud is closely aligned with wider trends. In fact, the newer something is, the more likely scammers are to step in. Comparing the timelines of many of these scams against when the technology was first invented, it’s clear that the gap between the two is closing as time progresses.

There’s a very good reason for this: the pace of adoption. Basically, the more people there are using a particular device or piece of software, the more prolific the scams targeting it become.

Consider the release of the latest iPhone: an event that gives fraudsters plenty of incentive to target innocent users.

Scams around a new iPhone launch prey upon consumer desire, manipulating Apple fans with the promise of getting their hands on the latest technology. As with most eCommerce scams, this often comes with promises of delivery before the official launch date.

Not only that, but the hype surrounding a launch provides the perfect opportunity for fraudsters to run other mobile phone-oriented scams.

Catch ’em all

In general, new tech and trends lead to new scams. For instance, Pokémon Go introduced new scams such as the Pokémon Taxi: so-called expert drivers were literally taking users for a ride to locations that rare Pokémon were said to frequent.

The fact that it was so new – and that its popularity surged in such a short period of time – made it a whole lot easier for fraud to materialize. Essentially, there was no precedent set – no history of usage. No one knew exactly what to anticipate. As a result, scams materialized as quickly as new users signed up.

In one case, players were tricked into paying for access on the pretext that new server space was needed; not paying the $12.99 requested would supposedly result in their Pokémon Go accounts being frozen. While that might be a small price to pay for one person, it adds up to a significant amount at scale.

Moderation methods

Regardless of their methods, fraudsters are ultimately focused on one thing. Whether they’re using ransomware to lock users out of their data, posting too-good-to-be-true offers on an online marketplace, or manipulating lonely and vulnerable people on a dating app – the end result they’re after is cold, hard cash, extracted through scalable methods.

Hackers will simply barge their way into digital environments. Scammers using online marketplaces and classifieds sites, by contrast, need to worm their way into their chosen environment. In doing so, they often leave behind a lot of clues that experienced moderators can detect.

For example, the practice of ad modification or the use of Trojan ads on public marketplaces follows particular patterns of user behavior that can cause alarm bells to ring.
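
To make that concrete, here’s a minimal sketch of the kind of pattern a filter might watch for. The field names, thresholds, and regex are illustrative assumptions, not a description of any specific moderation tool.

```python
import re

# Illustrative heuristics only; real systems combine many more signals.
LINK_OR_CONTACT = re.compile(r"(https?://|www\.|\+?\d[\d\s-]{7,})")

def edit_looks_suspicious(approved: dict, edited: dict) -> list[str]:
    """Compare an approved ad with its edited version and collect warning signs."""
    warnings = []
    # A steep price drop after approval is a classic Trojan-ad pattern.
    if edited["price"] < 0.5 * approved["price"]:
        warnings.append("price dropped by more than half after approval")
    # Contact details or links added only after the ad passed review.
    if LINK_OR_CONTACT.search(edited["description"]) and not LINK_OR_CONTACT.search(approved["description"]):
        warnings.append("external link or phone number added post-approval")
    return warnings

approved = {"price": 900.0, "description": "Well-kept road bike, collection only."}
edited = {"price": 300.0, "description": "Well-kept road bike, order at www.cheap-bikes.example"}
print(edit_looks_suspicious(approved, edited))  # both warnings fire
```

Items that trip checks like these wouldn’t be removed automatically – they’d simply be routed to a human moderator for a closer look.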

Take appropriate steps

So what can marketplace and classified site owners do to stay ahead of fraudsters? A lot, in fact. Awareness is undoubtedly the first step to countering scams, but on its own it will only raise suspicion rather than act as a preventative measure.

Data analysis is another important step. But, again, the biggest issue is reviewing and moderating at scale. So how can a small moderation team police every single post or interaction when thousands are created daily?

This is where moderation technology – such as filters – can help weed out suspicious activity and flag possible fraud.

To stay ahead of fraudsters, you need a combination of human expertise, AI, and filters. While it’s possible for marketplace owners to train AI to recognize these patterns at scale, completely new scams won’t be picked up by AI (as it relies on being trained on a dataset). This is where experienced and informed moderators can really add value.

People who follow scam trends and spot new instances of fraud quickly are on full alert during big global and local events. They can very quickly create and apply the right filters and begin building the dataset for the AI to be trained on.
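
As a rough illustration of how filters and manual review fit together, here’s a small sketch. The phrases, categories, and price floors are made-up examples, not real Besedo or Implio rules.

```python
import re
from dataclasses import dataclass

@dataclass
class Listing:
    title: str
    description: str
    price: float
    category: str

# Made-up examples of signals a moderation team might encode as filters.
SUSPICIOUS_PHRASES = [r"western union", r"pay outside the (site|platform)", r"shipping agent will handle"]
TYPICAL_MIN_PRICE = {"smartphones": 150.0, "cars": 1500.0}

def flag_for_review(listing: Listing) -> list[str]:
    """Return the reasons, if any, that a listing should go to manual review."""
    reasons = []
    text = f"{listing.title} {listing.description}".lower()
    for pattern in SUSPICIOUS_PHRASES:
        if re.search(pattern, text):
            reasons.append(f"matched suspicious phrase: {pattern}")
    floor = TYPICAL_MIN_PRICE.get(listing.category)
    if floor is not None and listing.price < 0.3 * floor:
        reasons.append("price far below the typical range for this category")
    return reasons

ad = Listing("iPhone 11 Pro, brand new", "Pay via Western Union only", 40.0, "smartphones")
print(flag_for_review(ad))  # too-good-to-be-true price plus a risky payment method
```

Confirmed scams caught this way become labeled examples – which is exactly the dataset a machine learning model can later be trained on.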

Conclusion

Ultimately, as tech advances, so too will scams. And while we can’t predict what’s around the corner, adopting an approach to digital moderation that’s agile enough to move with the demands of your customers – and with fast intervention when new scam trends appear – is the only way to future-proof your site.

Prevention, after all, is much better than a cure. But a blend of speed, awareness, and action is just as critical where fraud is concerned.


Sometimes small features can have a big impact. With our newly implemented user counter, you get a whole new level of insight about your users.

What it does

The user counter shows you how many items the user has had approved and how many they’ve had rejected. You can also quickly access an overview of the actual listings that were approved or rejected, giving insight into user behavior and listing habits.

How it works

Click an item in the Item log.


This brings up the item overview window. Here, next to the User ID, you’ll find the user counter. The number in green shows how many listings this user has had approved; the one in red, how many they’ve had rejected.
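
Under the hood, a counter like this simply aggregates past decisions per user. The snippet below is a minimal, hypothetical sketch of that idea – it is not Implio’s actual data model or API.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class ItemDecision:
    user_id: str
    item_id: str
    decision: str  # "approved" or "rejected"

def user_counter(log: list[ItemDecision], user_id: str) -> dict:
    """Aggregate a user's approved/rejected totals from an item log."""
    counts = Counter(d.decision for d in log if d.user_id == user_id)
    return {"approved": counts.get("approved", 0), "rejected": counts.get("rejected", 0)}

log = [
    ItemDecision("u42", "i1", "approved"),
    ItemDecision("u42", "i2", "rejected"),
    ItemDecision("u42", "i3", "approved"),
]
print(user_counter(log, "u42"))  # {'approved': 2, 'rejected': 1}
```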

Use cases for user counter

If you have experience with content moderation you’ve probably already thought of several use cases for the user counter.

Here are a couple of examples of how it can be used in Implio.

1. Qualifying returning users

Need to understand the quality of a user? Check their listings history. If they have only rejections, this user may cause problems going forward as well.

2. Assistance in grey area decisions

When manually moderating items, you sometimes come across grey area cases where it’s hard to judge whether the listing is genuine or problematic. In those cases where you have to make a snap decision either way, having the user’s previous history to lean on can be helpful. A user with only approved listings in the past is unlikely to have suddenly turned abusive. Be cautious, though: some scammers turn this logic to their advantage through Trojan Horse scams – they first post a couple of benign listings, then once their profile looks good, they start posting scams.

3. Spotting users in need of education

Have you found a user who consistently gets their listings rejected for non-malicious reasons? A quick educational email might help them out and cut down on your moderation volumes.

4. Identify new users

It’s always good to pay extra attention to new users, as you don’t yet know whether they are bad actors. Knowing that a user has no previous history of listing items is a sign to be extra thorough when moderating. On the flip side, seeing a user with only approved listings allows you to speed up moderation of the item in question, as it’s likely OK too. Just keep an eye out for the aforementioned Trojan Horse scammers – the sketch below shows one way this history could feed a simple triage rule.
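
Here’s that sketch: a toy triage rule that turns the counter into a review priority. The thresholds are invented for illustration and would need tuning against your own data.

```python
def review_priority(approved: int, rejected: int, account_age_days: int) -> str:
    """Rough triage based on a user's moderation history; thresholds are illustrative only."""
    total = approved + rejected
    if total == 0:
        return "new user: review thoroughly"
    if rejected / total > 0.5:
        return "high rejection rate: review thoroughly, consider an educational email"
    # Beware the Trojan Horse pattern: a young account with a few clean
    # listings can still be a scammer building up credibility.
    if account_age_days < 30:
        return "clean but new: light review, stay alert"
    return "established clean history: fast-track with spot checks"

print(review_priority(approved=0, rejected=0, account_age_days=1))
print(review_priority(approved=3, rejected=0, account_age_days=10))
print(review_priority(approved=120, rejected=2, account_age_days=400))
```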

To give a better understanding of how the user counter helps increase productivity and quality of moderation, we’ve asked a couple of our moderators for their experience working with the new feature.


“The user counter helps me get a perspective on the profile. If I see that a user has had listings refused more than two times, I access the profile to see the reason of the refusals. That allows me to make a better decision on the profile. It allows me to spot scammers quickly and make faster decisions.”

– Cristian Irreño, Content Moderator at Besedo

“The user counter has allowed me to see the trends on profile decisions. It makes me be more careful when I see accounts with a higher number of refusals. Also, when I am working on a new account, I know I must be more careful with my decision.”

– Diego Sanabria, Content Moderator at Besedo

“The counter helps me identify profiles that have frequent acceptance or refusals, and to spot new users.”

– Cristian Camilo Suarez, Content Moderator at Besedo

The user counter is available to all Implio users regardless of plan. Want to start using Implio for your moderation? Let us know and we’ll help you get started.


Manual moderation isn’t really rocket science. Take away the bad stuff and leave the good. Right?

For very basic moderation, sure. But the truth is that as soon as your site reaches any significant volume, moderation becomes a lot more complex. To handle that complexity professionally, you’ll need a very well-organized team of moderators – and to build one, you need to know the best practices for running an efficient moderation team.

We have dedicated other articles to KPIs and moderation methods, but once you have decided on your goals and methods, you need to look at your delivery framework to ensure that your team has the optimal conditions to consistently and efficiently carry out quality work.

Communicate, Communicate, Communicate!

Set up a communication procedure to make sure new policies and site decisions are communicated to your moderation agents as fast as possible. When agents are in doubt, they lose precious time debating or speculating about what to do with new issues – and mistakes get made.

Put in place a process for communicating new policies, and ensure that someone is in charge of collecting questions from the moderation team and communicating the answers back.

Also make sure someone in your organization is on top of current events that might pose new challenges. We have covered such an example in a previous blog post The Summer of Big Events. And Big Ticket Scams.

Setting up a structure for a communication flow between the moderation team and the rest of your organization is key to enabling your moderators to work at their top speed and for them to feel equipped to do their job properly.

When we, at Besedo, provide a client with a team of moderators, we start by setting up a clear framework for how questions from the agents on one side, and answers and new directions from the client on the other, are communicated.

Usually the structure will consist of the following:

  • A quarterly meeting where any adjustments to current guidelines or new focuses for the client business strategy are discussed. This allows the moderation team to give input on where and how their efforts are best applied to accommodate the client’s long-term vision.
  • A monthly meeting where our client informs about upcoming policy changes and new features.
  • A weekly meeting where current issues and challenges are raised by both parties. This is a great place to discuss any errors that have been made and request clarification on any policies that seem to cause a lot of grey areas.
  • Daily contact to touch base. This is usually not in the form of a meeting, but rather an ongoing conversation between a point of contact on the client side and one on Besedo’s side. This allows the moderation team to quickly receive answers and communicate new challenges that may pop up during the day. The key to success in this case is to have ONE clear point of contact on each side where all communication can be channeled.

After each meeting, the decisions will be emailed out and also cascaded to the team through team leaders or veteran agents, ensuring that all moderators, regardless of shift, are made aware.


Moderation superstars are like athletes. They need ongoing training to stay on top.

One hour spent training can save many more in the long term. It’s easy to think that moderation is straightforward, but it takes time, knowledge, and experience to spot bad content in seconds when reviewing hundreds of ads an hour.

While it can be tempting to throw people headfirst into the moderation job (especially if you are short on staff), it almost always pays to spend time equipping your moderators for it. You’ll have fewer mistakes, better speed, and a happier new colleague.

When we employ new moderators at Besedo, we take them through a very in-depth onboarding program. Not only do we introduce them to the client’s rules and platform, we also spend time teaching them about content moderation in general: the trends, the tools of the trade, and the reasons behind moderating.

But we don’t stop there. We have ongoing training, workshops, and knowledge-sharing forums. The industry is not static: laws change and scams are always evolving. This means our moderation team needs to constantly improve and keep up with current trends. And so should yours!

You want ROI on moderation? You have to work for it!

When we speak to owners of sites and apps that deal with user-generated content, one of the concerns we hear is that they have not seen the expected ROI from moderation in the past.

Digging into their previous experience we often see that while they have had moderation efforts in place, they have not dedicated time and resources to really structure and maintain it.

We cannot stress enough how important it is to set up processes, training, and retraining for your moderation team. Without them, your moderators will be working in the dark, catching some things but leaving too much else untouched. An approach like this can almost be more harmful than no moderation at all, as your customers won’t know what to expect or whether site policies are backed up.

If you want to see proper ROI from moderation, it will require a lot of work, resources and attention. Sit down, plan, structure, implement and keep iterating.  It isn’t going to happen by itself!


COVID-19 continues to create new challenges for all of us. Businesses and consumers are spending an increasing amount of time online – using different chat and video conferencing platforms to stay connected and combat social distancing and self-isolation.

We’ve also seen the resurgence of interaction via video games during the lockdown, as we explore new ways to entertain ourselves and connect with others. However, a sudden influx of gamers also brings a new set of content moderation issues – for platform owners, games developers, and gamers alike.

Let’s take a closer look.

Loading…

The video game industry was already in good shape before the global pandemic. In 2019, ISFE (Interactive Software Federation of Europe) reported a 15% rise in turnover between 2017 and 2018, to a combined €21bn. Another ISFE report shows that over half of the EU’s population played video games in 2018 – some 250 million players, gaming for an average of nearly 9 hours per week, with a pretty even gender split.

It’s not surprising that the fastest-growing demographic was the 25-34 age group – the generation who grew up alongside Nintendo, Sony, and Microsoft consoles. However, gaming has broader demographic appeal too. A 2019 survey conducted by AARP (American Association of Retired Persons) revealed that 44% of Americans aged 50+ enjoyed video games at least once a month.

According to GSD (Games Sales Data), in the week commencing 16th March 2020 – right at the start of the lockdown – video game sales increased by 63% on the previous week. Digital sales have outstripped physical sales too, and console sales rose by 155% to 259,169 units in the same period.

But stats aside, when you consider the level of engagement possible, it’s clear that gaming is more than just ‘playing’. In April, the popular game Fortnite held a virtual concert with rapper Travis Scott, which was attended by no fewer than 12.3 million gamers around the world – a record audience for an in-game event.

Clearly, for gaming the only way is up right now. But given the sharp increases, and the increasingly creative and innovative ways gaming platforms are being used as social networks – how can developers ensure every gamer remains safe from bullying, harassment, and unwanted content?

Ready Player One?

If all games have one thing in common, it’s rules. The influx of new gamers presents new content moderation challenges in a number of ways. Firstly, uninitiated gamers (often referred to as noob/newbie/nub) are likely to be unfamiliar with the established, pre-existing rules of online multiplayer games, or with the accepted social niceties and jargon of different platforms.

From a new user’s perspective, there’s often a tendency to carry over offline behaviours into the online environment – without consideration or a full understanding of the consequences. The Gamer has an extensive list of etiquette guidelines that online multiplayer gamers frequently break, from common courtesies such as not swearing in front of younger users on voice chat and not spamming chat boxes, to not ‘rage-quitting’ a co-operative game out of frustration.

However, when playing in a global arena, gamers might also encounter subtle cultural differences and behave in a way which is considered offensive to certain other groups of people.

Another major concern, which affects all online platforms, was outlined by Otis Burris, Besedo’s Vice President of Partnerships, in a recent interview: the need to “stay ahead of the next creative idea in scams and frauds or outright abuse, bullying and even grooming to protect all users” because “fraudsters, scammers and predators are always evolving.”

Multiplayer online gaming is open to exploitation by individuals with malicious intent – including grooming – simply because of the potential for anonymity and the sheer number of gamers taking part simultaneously around the globe.

The Gamer’s list spells out that kids (in particular) should never use someone else’s credit card to pay for in-game items. But when you consider just how open gaming can be from an interaction perspective, the fact that these details could easily be obtained by deception or coercion needs to be tackled.

A New Challenger Has Entered

In terms of multiplayer online gaming, cyberbullying and its regulation continue to be a prevalent issue. Some of the potential ways in which users can manipulate gaming environments in order to bully others include:

  • Ganging up on other players
  • Sending or posting negative or hurtful messages (using in-game chat-boxes for example)
  • Swearing or making negative remarks about other players that turn into bullying
  • Excluding the other person from playing in a particular group
  • Anonymously harassing strangers
  • Duping more vulnerable gamers into revealing personal information (such as passwords)
  • Using peer pressure to push others into performing acts they wouldn’t normally perform

Whilst cyberbullying amongst children is fairly well researched, negative online interactions between adults are less well documented and studied. The 2019 report ‘Adult Online Harms’ (commissioned by the UK Council for Internet Safety Evidence Group) investigated internet safety issues amongst UK adults, and even acknowledges the lack of research into the effect of cyberbullying on adults.

With so much to be on the lookout for, how can online gaming become a safer space to play in for children, teenagers, and adults alike?


Pause

According to a 2019 report for the UK’s converged communications regulator Ofcom: “The fast-paced, highly-competitive nature of online platforms can drive businesses to prioritize growing an active user base over the moderation of online content.

“Developing and implementing an effective content moderation system takes time, effort and finance, each of which may be a constraint on a rapidly growing platform in a competitive marketplace.”

The stats show that 13% of people have stopped using an online service after observing harassment of others. Clearly, targeted harassment, hate speech, and social bullying need to stop if games manufacturers want to minimize churn and avoid losing gamers to competitors.

So how can effective content moderation help?

Let’s look at a case study cited in the Ofcom report. As an example of effective content moderation, it refers to the online multiplayer game ‘League Of Legends’, which has approximately 80 million active players. Its publisher, Riot Games, explored a new way of promoting positive interactions.

Users who logged frequent negative interactions were sanctioned with an interaction ‘budget’ or ‘limited chat mode’. Players who then modified their behavior and logged positive interactions gained release from the restrictions.

As a result of these sanctions, the developers noted a 7% drop in bad language in general and an overall increase in positive interactions.
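
To illustrate the mechanism (this is not Riot’s actual implementation), here’s a toy version of an interaction ‘budget’ that tightens after repeated negative interactions and relaxes again as positive ones accumulate. All thresholds are assumptions made for the example.

```python
class ChatBudget:
    """Toy model of a 'limited chat mode': the allowance shrinks after logged
    negative interactions and recovers as positive ones accumulate."""

    BASE_MESSAGES_PER_MATCH = 60
    RESTRICTED_MESSAGES_PER_MATCH = 5

    def __init__(self):
        self.negative = 0
        self.positive = 0

    def log_interaction(self, positive: bool) -> None:
        if positive:
            self.positive += 1
        else:
            self.negative += 1

    def allowance(self) -> int:
        # Restriction kicks in once negatives clearly outweigh positives.
        if self.negative >= 3 and self.negative > self.positive:
            return self.RESTRICTED_MESSAGES_PER_MATCH
        return self.BASE_MESSAGES_PER_MATCH

player = ChatBudget()
for _ in range(4):
    player.log_interaction(positive=False)
print(player.allowance())  # 5 -> limited chat mode
for _ in range(6):
    player.log_interaction(positive=True)
print(player.allowance())  # 60 -> restrictions lifted
```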

Continue

Taking ‘League Of Legends’ as an example, a combination of human and AI (Artificial Intelligence) content moderation can encourage more socially positive content.

For example, a number of social media platforms have recently introduced helpful ways of offering users alternatives to UGC (user-generated content) that is potentially harmful or offensive, giving users a chance to self-regulate and make better choices before posting. In addition, offensive language within a post can be translated into non-offensive forms, with users presented with an optional ‘clean version’.
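
A stripped-down version of that ‘clean version’ idea might look like the sketch below. The word list is a tiny placeholder; real systems rely on curated multilingual lists and models that understand context.

```python
import re

BLOCKLIST = {"idiot", "loser"}  # placeholder terms for illustration only

def clean_version(message: str) -> str:
    """Offer a masked alternative the user can choose to post instead."""
    def mask(match: re.Match) -> str:
        word = match.group(0)
        return word[0] + "*" * (len(word) - 1)
    pattern = re.compile(r"\b(" + "|".join(map(re.escape, BLOCKLIST)) + r")\b", re.IGNORECASE)
    return pattern.sub(mask, message)

draft = "Nice one, you total idiot"
suggestion = clean_version(draft)
if suggestion != draft:
    print(f"Did you mean to post this instead? -> {suggestion}")
```

Crucially, the user still chooses what to post – the system only suggests.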

Nudging is another technique that can be employed to encourage users to question – and delay – posting something potentially offensive, by creating subtle incentives to make the right choice and thereby helping to reduce the overall number of negative posts.

Chatbots, disguised as real users, can also be deployed to make interventions in response to specific negative comments posted by users, such as challenging racist or homophobic remarks and prompting an improvement in the user’s online behavior.

Finally, applying a layer of content moderation to ensure that inappropriate content is caught before it reaches other gamers will help keep communities positive and healthy – ensuring higher engagement and less user leakage.

Game Over: Retry?

Making good from a bad situation, the current restrictions on social interaction offer a great opportunity for the gaming industry to draw in a new audience and broaden the market.

It also continues to inspire creative innovations in artistry and immersive storytelling, offering new and exciting forms of entertainment, pushing the boundaries of technological possibility, and generating new business models.

But the gaming industry also needs to take greater responsibility for the safety of gamers online by incorporating robust content management strategies – even if doing so at scale, especially when audience numbers are so great, takes a lot more than manual player intervention or reactive strategies alone.

This is a challenge we remain committed to at Besedo – using technology to meet the moderation needs of all digital platforms. Through a combination of machine learning, artificial intelligence, and manual moderation techniques we can build a bespoke set of solutions that can operate at scale.

To find out more about content moderation and gaming, or to arrange a product demonstration, contact our team!


Reviews can make or break a business. The same applies to online marketplaces, classifieds, and even dating sites. And they don’t just impact these platforms – they affect how people see the brands that advertise on them, as well as individual vendors and those looking for love and companionship.

However, in a world where User-Generated Content (UGC) is so prevalent, the fact is anyone from anywhere can leave a good or bad review. And have it seen in a very public way.

While bad reviews can hurt businesses and brands, fake positive ones can damage reputations too.

Confused? It’s a tricky area to navigate.

Let’s consider how reviews can build trust and how online marketplaces can address these moderation challenges.


Reviews build consumer trust

As discussed in previous articles, trust is at the epicenter of the digital economy. As consumers, we take trust leaps when deciding if a particular online product or service is suitable for us. This is why reviews matter so much – they help us form opinions.

In a practical sense, many of these sentiments (which can largely be attributed to author and TED speaker Rachel Botsman) are grounded in our search for social proof, one of the key cornerstones of the ‘Trust Stack’ – trust in the idea, trust in the platform, and (as is the case here) trust in the other user.

Because the three have an interdependent relationship, they reinforce each other – meaning that user trust leads to trust in the platform and idea; and vice versa.

If it sounds improbable that consumers are more likely to trust complete strangers, consider the numbers. Stats show that 88% of consumers trust online reviews as much as personal recommendations – with 76% stating that they trust online reviews as much as recommendations from family and friends specifically.

Needless to say, they factor in a great deal. Therefore, customer reviews are essential indicators of trust – which is why bad reviews can negatively impact businesses.


Brand backlash

While on some marketplaces a 3.5 out of 5 might be deemed acceptable for average service, for many businesses a slip in the way they’re reviewed is perceived to have disastrous consequences.

Some companies have fought back at negative reviews, but instead of challenging customers over their comments or trying to figure out where they could do better, they’ve actively tried to sue their critics.

One particular hotel in New York State, US, even stated in its small print that visitors would be charged $500 for negative Yelp reviews. Meanwhile, some service providers have slated – and even looked to sue – Yelp over how it prioritizes reviews, with the most favorable shown first.

Yikes!

But why are overly positive reviews that detrimental? Surely a positive review is what all companies are striving for? The issue is inauthenticity. A true reflection of any experience rarely commands 5 stars across the board, and businesses, marketplaces, and consumers are wise to it.

Authenticity means “no astroturfing”

Many companies want to present themselves in the best possible light. There’s absolutely nothing wrong with that. However, when it comes to reviews of their products and services, if every single rating is overwhelmingly positive, consumers would be forgiven for being suspicious.

In many cases, it seems they probably are. Creating fake reviews – a practice known as astroturfing – has been relatively widespread since the dawn of online marketplaces and search engines. But many are now wise to it and actively doing more to prevent the practice.

Google has massively cracked down on companies buying fake Google reviews designed to positively influence online listings – removing businesses that do from local search results. Similarly, Amazon has pledged to stop the practice of testers being paid for reviews and reimbursed for their purchases.

Astroturfing isn’t just frowned upon, it’s also illegal. The UK’s Competition and Markets Authority (CMA) and the US Federal Trade Commission have strict rules over misleading customers.

In Britain, the CMA has taken action against social media agency Social Chain for failing to disclose that a series of posts were part of a paid-for campaign; and took issue with an online knitwear retailer posting fake reviews.

While some may consider astroturfing a victimless crime, when you consider shoppers’ faith in online reviews and the fact that their favorite sites may be deliberately trying to mislead them, it’s clear that there’s a major trust issue at stake.

For classified sites, dating apps, and online marketplace owners who have spent so long building credibility, gaining visibility, and getting users and vendors on board, a culture where fake reviews persist can be disastrous.

But when so many sites rely on User-Generated Content, the task of monitoring and moderating real reviews, bad reviews, and fake reviews is an enormous undertaking – and often costly.

Manual vs. Automated content moderation

While many fake reviews are easy to spot (awkwardly put together, with bad spelling and grammar), manually moderating them becomes unsustainable when they appear at scale – even for a small team of experts.

That’s why new ways to detect and prevent them are starting to gain traction. For example, many sites and marketplaces are starting to limit review posting to those who’ve bought something from a specific vendor. However, as per the Amazon example above, this is a practice that is easy to circumvent.

A more reliable method is automated moderation – using machine learning algorithms that can be trained to detect fake reviews and other forms of unwanted or illegal content on a particular classified website or marketplace. Using filters, the algorithm is continually fed examples of good and bad content until it can automatically distinguish between the two.
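
As a minimal sketch of that training loop – assuming scikit-learn and a tiny hand-labeled sample rather than the large, curated datasets a production system would use – a simple text classifier might look like this:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hand-labeled examples of "fake" vs "genuine" reviews (illustrative only).
reviews = [
    "Best product ever!!! Amazing seller, buy now, 5 stars, life changing!!!",
    "Incredible!!! Perfect!!! Everyone must purchase this today!!!",
    "Delivery took a week, item matches the photos, packaging was slightly dented.",
    "Decent value for the price, though the manual is hard to follow.",
]
labels = ["fake", "fake", "genuine", "genuine"]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(reviews, labels)

new_review = "Absolutely perfect, best seller, must buy, 5 stars!!!"
print(model.predict([new_review])[0])           # predicted label
print(model.predict_proba([new_review]).max())  # confidence score
```

The confidence score is exactly the signal you’d use to decide which reviews can be handled automatically and which should be routed to a human moderator.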

It’s a process that works well alongside manual moderation efforts. When a questionable review is detected, a notification can be sent to the moderation team, allowing them to make the final judgment call on its authenticity.

Ultimately, in a world where online truths can often be in short supply, companies – whether they’re brands or marketplaces – that are open enough for customers to leave honest, reasonable reviews stand a better chance of building trust among their users.

While it’s clear businesses have a right to encourage positive online reviews – as part of their marketing efforts – any activities that attempt to obscure the truth (no matter how scathing) or fabricate a rose-tinted fake review can have an even more negative impact than a humdrum review itself.


The biggest challenge facing technology today isn’t adoption, it’s regulation. Innovation is moving at such a rapid pace that the legal and regulatory implications are lagging behind what’s possible.

Artificial Intelligence (AI) is one particularly tricky area for regulators to reach consensus on; as is content moderation.

With the two becoming increasingly crucial to all kinds of businesses – especially to online marketplaces, sharing economy and dating sites – it’s clear that more needs to be done to ensure the safety of users.

But to what extent are regulations stifling progress? Are they justified in doing so? Let’s consider the current situation.

AI + Moderation: A Perfect Pairing

Wherever there’s User-Generated Content (UGC), there’s a need to moderate it – whether we’re talking about upholding YouTube’s content policies or netting catfish on Tinder.

Given the vast amount of content that’s uploaded daily and the volume of usage – on a popular platform like eBay – it’s clear that while action needs to be taken, it’s unsustainable to rely on human moderation alone.

Enter AI – but not necessarily as most people will know it (we’re still a long way from sapient androids). Mainly, where content moderation is concerned, the use of AI involves machine learning algorithms – which platform owners can configure to filter out words, images, and video content that contravenes policies, laws, and best practices.
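
For the text side of that configuration, a heavily simplified sketch might look like the following. The blocked terms and market keys are placeholders, not statements about any platform’s rules or any country’s law.

```python
import re

# Placeholder configuration: each market can extend the default rule set.
POLICY = {
    "default": {"blocked_terms": ["counterfeit", "replica watch"]},
    "strict_market": {"blocked_terms": ["counterfeit", "replica watch", "unlicensed ticket resale"]},
}

def violates_policy(text: str, market: str = "default") -> list[str]:
    """Return the configured terms found in the text for the given market."""
    rules = POLICY.get(market, POLICY["default"])
    return [term for term in rules["blocked_terms"]
            if re.search(re.escape(term), text, re.IGNORECASE)]

print(violates_policy("Genuine replica watch, ships worldwide"))                         # ['replica watch']
print(violates_policy("Unlicensed ticket resale, cheap seats", market="strict_market"))  # ['unlicensed ticket resale']
```

Images and video call for different techniques (hashing, computer vision models), but the principle is the same: the policy lives in configuration the platform owner controls, and the automation enforces it consistently.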

AI not only offers the scale, capacity, and speed needed to moderate huge volumes of content; it also limits the often-cited psychological effects many people suffer from viewing and moderating harmful content.

Understanding The Wider Issue

So what’s the problem? Issues arise when we consider content moderation on a global scale. Laws governing online censorship (and the extent to which they’re enforced) vary significantly between continents, nations, and regions.

What constitutes ‘harmful’, ‘illicit’ or ‘bad taste’ isn’t always as clear cut as one might think. And from a sales perspective, items that are illegal in one nation aren’t always illegal in another. A lot needs to be taken into account.

But what about the role of AI? What objections could there be for software that’s able to provide huge economies of scale, operational efficiency, and protect people from harm – both users and moderators?

The broader context of AI as a technology needs to be better understood. It presents several key ethical questions over its use and deployment, and attitudes vary from country to country in much the same way as efforts to regulate content moderation do.

To understand this better, we need to look at ways in which the different nations are addressing the challenges of digitalisation – and what their attitudes are towards both online moderation and AI.

The EU: Apply Pressure To Platforms

As an individual region, the EU arguably is leading the global debate on online safety. However, the European Commission continues to voice concerns over (a lack of) efforts made by large technology platforms to prevent the spread of offensive and misleading content.

Following the introduction of its Code Of Practice on Disinformation in 2018, numerous high profile tech companies – including Google, Facebook, Twitter, Microsoft and Mozilla – voluntarily provided the Commission with self-assessment reports in early 2019.

These reports document the policies and processes these organisations have undertaken to prevent the spread of harmful content and fake news online.

While a thorough analysis is currently underway (with findings to be reported in 2020), initial responses show significant dissatisfaction relating to the progress being made – and with the fact that no additional tech companies have signed up to the initiative.

AI In The EU

In short, expectations continue to be very high – as evidenced by (and as covered in a previous blog) the European Parliament’s vote to give online businesses one hour to remove terrorist-related content.

Given the immediacy, frequency, and scale that these regulations require, it’s clear that AI has a critical and central role to play in meeting these moderation demands. But, as an emerging technology itself, the regulations around AI are still being formalised in Europe.

However, the proposed Digital Services Act (set to replace the now outdated eCommerce Directive) goes a long way to address issues relating to online marketplaces and classified sites – and AI is given significant consideration as part of these efforts.

Last year the EU published its guidelines on ethics in Artificial Intelligence, citing a ‘human-centric approach’ as one of its key concerns – as it deems that ‘AI poses risks to the right to personal data protection and privacy’ – as well as a ‘risk of discrimination when algorithms are used for purposes such as to profile people or to resolve situations in criminal justice’.

While these developments are promising – in that they demonstrate the depth and seriousness with which the EU is tackling these issues – problems will no doubt arise when adoption and enforcement by 27 different member states are required.

Britain Online Post-Brexit

One nation that no longer needs to participate in EU-centric discussions is the UK – following its departure in January this year. However, rather than deviate from regulation, Britain’s stance on online safety continues to set a high bar.

An ‘Online Harms’ whitepaper produced last year (pre-Brexit) sets out Britain’s ambition to be ‘the safest place in the world to go online’ and proposes a revised system of accountability that moves beyond self-regulation, including the establishment of a new independent regulator.

Included in this is a commitment to uphold GDPR and Data Protection laws – including a promise to ‘inspect’ AI and penalise those who exploit data security. The whitepaper also acknowledges the ‘complex, fast-moving and far-reaching ethical and economic issues that cannot be addressed by data-protection laws alone’.

To this end, a Centre for Data Ethics and Innovation has been established in the UK – complete with a two-year strategy setting out its aims and ambitions, which largely involves cross-industry collaboration, greater transparency, and continuous governance.


Expansion Elsewhere

Numerous other countries – from Canada to Australia – have expressed a formal commitment to addressing the challenges facing AI, data protection, and content moderation. However, on a broader international level, the Organisation for Economic Co-operation and Development (OECD) has established some well-respected Principles on Artificial Intelligence.

Set out in May 2019 as five simple tenets designed to encourage successful ‘stewardship’ of AI, these principles have since been co-opted by the G20 in their stance on AI.

They are defined as:

  • AI should benefit people and the planet by driving inclusive growth, sustainable development and well-being.
  • AI systems should be designed in a way that respects the rule of law, human rights, democratic values and diversity, and they should include appropriate safeguards – for example, enabling human intervention where necessary – to ensure a fair and just society.
  • There should be transparency and responsible disclosure around AI systems to ensure that people understand AI-based outcomes and can challenge them.
  • AI systems must function in a robust, secure and safe way throughout their life cycles and potential risks should be continually assessed and managed.
  • Organisations and individuals developing, deploying or operating AI systems should be held accountable for their proper functioning in line with the above principles.

While not legally binding, the hope is that the influence and reach these principles have on a global scale will eventually encourage wider adoption. However, given the myriad cultural and legal differences the tech sector faces, international standardisation remains a massive challenge.

The Right Approach – Hurt By Overt Complexity

All things considered, while the right strategic measures are no doubt in place for the most part – helping perpetuate discussion around the key issues – the effectiveness of many of these regulations largely remains to be seen.

Outwardly, many nations seem to share the same top-line attitudes towards AI and content moderation – and their necessity in reducing harmful content. However, applying policies from specific countries to global content is challenging and adds to the overall complexity, as content may be created in one country and viewed in another.

This is why the use of AI machine learning is so critical in moderation – algorithms can be trained to do all of the hard work at scale. But it seems the biggest stumbling block in all of this is a lack of clarity around what artificial intelligence truly is.

As one piece of Ofcom research notes, there’s a need to develop ‘explainable systems’ as so few people (except for computer scientists) can legitimately grasp the complexities of these technologies.

The problem posed in this research is that some aspects of AI – namely neural networks which are designed to replicate how the human brain learns – are so advanced that even the AI developers who create them cannot understand how or why the algorithm outputs what it does.

While machine learning moderation doesn’t delve as far into the ‘unknowable’ as neural networks, it’s clear to see why discussions around regulation persist at great length.

But, as is the case with most technologies themselves, staying ahead of the curve from a regulatory and commercial standpoint is a continuous improvement process. That’s something that won’t change anytime soon.

New laws and legislation can be hard to navigate. Besedo helps businesses like yours get everything in place quickly and efficiently to comply with new legislation.


Scammers are unrelenting. And smart. They’re active right throughout the year. This means there’s no particular season when online marketplace and classified site owners need to be extra vigilant. The pressure’s always on them to maintain user safety.

However, scammers know when and how to tailor their activities to maximise opportunities. That’s why they’ll often latch onto different events, trends, seasons, sales, and other activities throughout the year – using a variety of techniques to lure in users, under the guise of an offer or piece of information.

With so much going on in 2020 – from the Tokyo Olympics to the US election – scammers will almost certainly be more active than usual. Here’s what consumers and marketplaces need to be aware of this year.

If you want to learn more about the specific scam spikes, visit our scam awareness calendar where we predict spikes on a month-by-month basis.

Holiday Bookings

When the nights draw in and temperatures drop, many begin to dream of sunnier climes and set about searching for their next holiday.

But whether it’s a summer booking or winter getaway, price is always an issue. Cue thousands of holiday comparison sites, booking portals, and savings sites. While many of these are legitimate outfits, often the convoluted online booking experience – as a consequence of using aggregation sites – can confuse would-be travellers.

They’re right to be cautious. As with buying any other goods or services online, even the most reputable travel sites can fall victim to scammers – who advertise cheap flights and luxury lodgings at 2-star prices, and offer ‘free’ trips that lure victims into attending a pressured timeshare sales pitch.

If in doubt, customers should always book through the best-known travel sites, pay using their verified portal (rather than via a link sent by email or a direct bank transfer), and check that the company they’re actually paying for their holiday is accredited by an industry body (such as ATOL in the UK).

Seasonal Scams

From Valentine’s Day to Easter; Halloween to Hanukkah – seasonal scams return with perennial menace year-after-year. Designed to capitalise on themed web searches and impulse purchases, fraudsters play the same old tricks – and consumers keep falling for them.

Charity scams tend to materialise around gift-focused holidays, like Thanksgiving in the US, as well as at Christmas. Anyone can fall victim to them – such as the recent case of NFL player Kyle Rudolph, who gave away his gloves after a high-scoring game for what he thought was a charity auction, only to discover they were being sold on eBay a few days later.

Other popular seasonal scams include phishing emails offering limited-time discounts from well-known retailers, and romance scams, in which catfishers are prepared to cultivate entire relationships online simply to extract money from their victims.
The general rule with any of these is to be wary of anyone offering something that seems too good to be true – whether it’s a 75% discount or unconditional love. Scammers prey on the vulnerable.

Football Fever

A whole summer of soccer is scheduled for June and July this year, thanks to the upcoming UEFA European Football Championship (Euro 2020) and the Copa America – both of which will run at the same time, on opposite sides of the world.

You’d expect fake tournament tickets and counterfeit merchandise to be par for the course where events like these are concerned – and easily detectable. The reality, however, is that many fraudulent third-party sites are so convincing that buyers keep falling for the same scams seen in previous years.

If in doubt, customers should always purchase from official websites – such as UEFA’s and the Copa America’s. While Euro 2020 tickets are sold out for now (over 19 million people applied for tickets), they’ll go back on sale in April for fans whose teams qualified through the playoffs.

While third-party sites are the biggest culprits, marketplace owners should be extra vigilant wherever users are offering surplus or cheap tickets to any games at all. Although, given the prices at which the tickets sell, you’d be forgiven for thinking the real scammers are the official vendors themselves.

Olympic Obstacles

The Summer Olympic Games is no stranger to scandal – of the sporting variety. However, just as with the soccer tournaments referenced above, fake tickets tend to surface in the run-up to the games themselves, on ‘pop-up’ sites as well as marketplaces.

Telltale signs of a scam include vendors asking to be paid in cryptocurrencies (such as Bitcoin), official-sounding domain names (that are far from official), as well as phishing emails, malware, and ransomware – all designed by scammers looking to cash in on the surrounding media hype and immediate public interest that high-profile events bring.
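
To make the idea concrete, here’s a minimal, illustrative sketch of how a marketplace might flag some of these telltale signs automatically. The listing fields and the domain allow-list are hypothetical placeholders invented for the example – they’re not part of any real platform’s data model or of Besedo’s tooling.

```python
import re
from urllib.parse import urlparse

# Illustrative scam signals only; real moderation rules would be far richer.
CRYPTO_TERMS = re.compile(r"\b(bitcoin|btc|ethereum|crypto(currency)?)\b", re.IGNORECASE)
OFFICIAL_DOMAINS = {"uefa.com", "copa-america.com", "tokyo2020.org"}  # assumed allow-list

def scam_signals(listing: dict) -> list[str]:
    """Return the telltale signs found in a single ticket listing."""
    signals = []
    text = f"{listing.get('description', '')} {listing.get('payment_info', '')}"
    if CRYPTO_TERMS.search(text):
        signals.append("asks to be paid in cryptocurrency")

    domain = urlparse(listing.get("seller_url", "")).netloc.lower()
    if domain and domain not in OFFICIAL_DOMAINS and "official" in domain:
        signals.append("official-sounding domain that isn't on the allow-list")
    return signals

# Example usage with a made-up listing:
listing = {
    "description": "Opening ceremony tickets, limited stock!",
    "payment_info": "Bitcoin only, no refunds",
    "seller_url": "https://official-olympic-tickets.example/offer",
}
print(scam_signals(listing))
# ['asks to be paid in cryptocurrency', "official-sounding domain that isn't on the allow-list"]
```

In practice, signals like these would only flag a listing for closer review rather than reject it outright, since legitimate sellers can trip simple keyword rules.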

In addition to scams preceding the games, advice issued ahead of the 2016 Rio Olympics urged visitors to be wary of free public WiFi – at venues, hotels, cafes, and restaurants – and to take other online security precautions, such as using a Virtual Private Network (VPN) alongside antivirus software.

Lessons learned from the 2018 Winter Olympics in Pyeongchang shouldn’t be ignored either. Remember the ‘Olympic Destroyer’ cyberattack that shut down the event’s entire IT infrastructure during the opening ceremony? There was little anyone could do to prevent it (so advanced was the attack and so slick its coordination), but it raised plenty of questions around cybersecurity in general – questions that have no doubt informed best practice elsewhere.

Also, visitors should avoid downloading unofficial apps or opening emails relating to Olympics information – unless they’re from an official news outlet, such as NBC, the BBC, or the Olympic Committee itself.


Probing Political Powers

With Brexit upon us and the US general election set for November, many people are more aware than ever of misinformation campaigns, high-profile email hacks, and the hacking of electronic voting booths.

While those in the public eye may seem to be the most at risk, ordinary citizens are too. We have Facebook and Cambridge Analytica to thank for that.

Despite this high-profile case, and even though political parties must abide by campaigning rules and data protection laws such as GDPR exist to protect our data, it seems more work needs to be done – by both social media companies and governments.

But what can people do? There are ways to limit the reach that political parties have, such as opting out of practices like micro-targeting and being more stringent with social media privacy settings. Beyond that, good old-fashioned caution and data hygiene are encouraged.

To help spread this message, marketplaces and classified sites should continue to remind users to change their passwords routinely, to exercise caution when dealing with strangers, and to avoid sharing personal data with other users off-platform – regardless of their assumed intent.

Sale Of The Century?

From Black Friday to the New Year Sales – the end of one year and the early part of the next is a time when brands of all kinds slash the prices of excess stock – clearing inventory or paving the way for the coming season’s collection. It’s also a time when scammers prey upon online shoppers’ frenzied search for a bargain or last-minute gift purchase.

As we’ve discussed in previous blogs, scammers operating in online marketplaces are becoming ever more creative – posting multiple listings for the same items, changing their IP addresses, or simply advertising normally expensive items at low prices to dupe those looking to save.
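
As a rough illustration of how a marketplace might catch the patterns described above, the sketch below groups near-identical listings and flags items priced far below their category’s going rate. The listing fields and the 50% threshold are assumptions made for the example, not a production rule set.

```python
from collections import defaultdict
from statistics import median

def find_duplicate_listings(listings):
    """Group listings that share the same normalised title text."""
    groups = defaultdict(list)
    for listing in listings:
        key = " ".join(listing["title"].lower().split())  # collapse case and whitespace
        groups[key].append(listing)
    return {title: group for title, group in groups.items() if len(group) > 1}

def flag_underpriced(listings, threshold=0.5):
    """Flag listings priced below a fraction of their category's median price."""
    prices_by_category = defaultdict(list)
    for listing in listings:
        prices_by_category[listing["category"]].append(listing["price"])
    return [
        listing
        for listing in listings
        if listing["price"] < threshold * median(prices_by_category[listing["category"]])
    ]

# Example usage with made-up data:
ads = [
    {"title": "iPhone 11 Pro  64GB", "category": "phones", "price": 900},
    {"title": "iphone 11 pro 64gb", "category": "phones", "price": 150},  # suspicious duplicate
    {"title": "Samsung Galaxy S10", "category": "phones", "price": 650},
]
print(list(find_duplicate_listings(ads)))        # ['iphone 11 pro 64gb']
print([a["price"] for a in flag_underpriced(ads)])  # [150]
```

Simple heuristics like these are easy to evade on their own, which is exactly why they tend to feed into broader moderation workflows rather than act as the final word.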

Prioritising Content Moderation

The worrying truth is that scammers are becoming increasingly sophisticated in the techniques they use. For online marketplace owners, not addressing these problems can directly impact their site’s credibility, user experience, safety, and the trust that users have in their service.

Most marketplaces are only too well aware of these issues, and many are doing a great deal to inform customers of what to look out for and how to conduct more secure transactions online.

However, actions speak louder than words, which is why many are now actively exploring content moderation using dedicated expert teams and machine learning AI – the latter being especially valuable for larger marketplaces.

Keeping customers informed around significant events and holidays – like those set out above – ensures that marketplaces are seen as transparent and active in combating fraud online.
This also paints sites in a favourable light when it comes to attracting new users, who may stumble upon a new listing in their search for seasonal goods and services.

Ultimately, the more a site does to keep its users safe, the more trustworthy it’ll be seen as.

Want to know more about optimizing your content moderation? Get in touch with one of our content moderation solution experts today or test our moderation tool, Implio, for free.


When Facebook CEO Mark Zuckerberg recently came under fire for the company’s admittedly tepid approach to political fact-checking (as well as some revelations about just what constitutes ‘impartial press’), it became clear that where content moderation is concerned, there’s still a big learning curve – for large and small companies alike.

So if a company like Facebook – with all the necessary scale, money, resources, and influence – struggles to keep on top of moderation, what chance do smaller online marketplaces and classified sites have?

When the stakes are so high, marketplaces need to do everything they can to detect and remove negative, biased, fraudulent, or just plain nasty content. Not doing so will seriously damage their credibility, popularity, and ultimately, their trustworthiness – which, as we’ve discussed previously, is a surefire recipe for disaster.

However, there’s a lot to learn from the mistakes of others when putting the right moderation measures in place. Let’s take a closer look at the cost of bad content and at ways to keep it off your online marketplace.

The cost of fraudulent ads

Even though we live in a world in which highly sophisticated hackers can launch some of the most daring and devastating attacks out there – from spear phishing to zero-day exploits – there can be little doubt that the most common scams still stem from online purchases.

While there are stacks of advice out there for consumers on what to be aware of, marketplace owners can’t rely solely on their customers to take action. Being able to identify the different types of fraudulent ads – as per our previous article – is a great start, but for marketplace owners, awareness goes beyond mere common sense. They too need to take responsibility for what appears on their platforms – otherwise, it’ll come at a cost.

Having content moderation guidelines or community standards that give your employees clear advice on how to raise the alarm on everything from catfishers to Trojan ads is crucial too. However, beyond any overt deception or threatening user behavior, the very existence of fraudulent content harms online marketplaces because it gradually erodes the sense of trust they have worked so hard to build – resulting in lower conversion rates and, ultimately, reduced revenue.

One brand that seems to be at the center of this trust quandary is Facebook. It famously published a public version of its moderation handbook last year, following a leak of the internal version. While these guidelines take a clear stance on issues like hate speech and sexual or violent content, there’s little in the way of guidance on user behavior on its Marketplace feature.

The fact is, classified sites present a unique set of moderation challenges that must be addressed in a way that’s sympathetic to the content formats being used. A one-size-fits-all approach doesn’t work, and it’s too easy to assume that common sense and decency prevail where user-generated content is concerned. The only people qualified to determine what’s acceptable – and what isn’t – on a given platform are the owners themselves, whether that relates to ad formats, content types, or the products being sold.

Challenging counterfeit goods

With the holiday season fast approaching, and two of the busiest shopping days of the year – Black Friday and Cyber Monday – just a few weeks away, one of the biggest concerns online marketplaces face is the sale of counterfeit goods.

It’s a massive problem – one projected to cost $1.8 trillion by 2020. And it’s not just dodgy goods sites should be wary of; there’s a very real threat of being sued by a brand for millions of dollars if a site enables vendors to use its name on counterfeit products, as was the case when Gucci sued Alibaba in 2015.

However, the financial cost is compounded by an even more serious one – particularly where fake electrical items are concerned.

According to a Guardian report, research by the UK charity Electrical Safety First shows that 18 million people have mistakenly purchased a counterfeit electrical item online. As a result, there are hundreds of thousands of faulty products in circulation. Some faults may be minor – glitches in Kodi boxes and game consoles, for example. Others, however, are a potential safety hazard, such as the unbranded mobile phone charger that caused a fire at a London apartment last year.

The main issue is fraudulent third-party providers setting up shop on online marketplaces and advertising counterfeit products as the genuine article.


Staying vigilant on issues affecting consumers

It’s not just counterfeit products that marketplaces need to counter; fake service providers can be just as tough to crack down on.

Wherever there’s misery, there’s opportunity – and you can be sure someone will try to capitalize on it. Consider the collapse of package holiday giant Thomas Cook a couple of months ago, which left thousands of holidaymakers stranded and thousands more with canceled vacations.

Knowing consumer compensation would be sought, a fake service calling itself thomascookrefunds.com quickly set to work gathering bank details, promising to reimburse those who’d booked holidays.

While not an online marketplace example per se, cases like this demonstrate the power of the fake fronts planted by those intent on turning others’ misfortune to their own advantage.

Similarly, given the dominance of major online marketplaces as trusted brands in their own right, criminals may even pose as company officials to dupe consumers. Case in point: the Amazon Prime phone scam, in which consumers received a call telling them their bank account had been hacked and they were now paying for Amazon Prime – and were then tricked into handing over their bank details to claim a non-existent refund.

While this was an offline incident, Amazon was swift to respond with advice on what consumers should be aware of. In this situation, there was no way that moderating site content alone could have indicated any wrongdoing.

However, it stands to reason that marketplaces should have a broader awareness of the impact of their brand, and a handle on how the issues affecting consumers align with their moderation efforts.

Curbing illegal activity & extremism

One of the most effective ways of ensuring the wrong kind of content doesn’t end up on an online marketplace or classifieds site is to use a combination of AI moderation and human expertise to accurately detect criminal activity, abuse, or extremism.

However, in some cases, those truly intent on making their point can still find ways around these restrictions. In the worst cases, site owners themselves will unofficially enable users – even advising them on ways to circumvent the site’s policies – for financial gain.

This was precisely what happened at the classifieds site Backpage. It transpired that top executives at the company – including the CEO, Carl Ferrer – didn’t just turn a blind eye to the advertising of escort and prostitution services but actively encouraged the rewording and editing of such ads to give Backpage ‘a veneer of plausible deniability’.

As a result of this – along with money laundering charges and the hosting of child sex trafficking ads – not only was the site taken down for good, but officials were jailed, following Ferrer’s admission of guilt to these crimes.

While this was all conducted knowingly, sites that are totally against these kinds of actions, but don’t police their content effectively enough, are putting themselves at risk too.

Getting the balance right

Given the relative ease with which online marketplaces can be infiltrated, can’t site owners just tackle the problem before it happens? Unfortunately, that’s not the way they were set up. User-generated content has long been regarded as a bastion of free speech, consumer-first commerce, and individual expression. Trying to quell that would completely negate their reason for being. A balance is needed.

The real problem may be that ‘a few users are ruining things for everyone else’, but ultimately marketplaces can only distinguish between intent and context after content has been posted. Creating a moderation backlog when there’s such a huge amount of content isn’t a viable option either.

Combining man & machine in moderation

While solid moderation processes are crucial for marketplace success, relying on human moderation alone is unsustainable. For many sites, it’s simply not possible to review every single piece of user-generated content in real time.

That’s why content moderation tools and technology are critical in helping marketplace owners identify anything suspicious. By combining AI moderation with human moderation, you can efficiently strike the balance between time-to-site and user safety – which is exactly what we offer here at Besedo.
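
To show what that combination can look like in practice, here’s a simplified sketch of a hybrid moderation flow: an automated model scores each item, clear-cut cases are handled instantly, and anything ambiguous goes to a human review queue. The thresholds and the `score_item` function are placeholder assumptions for illustration, not a description of Implio’s internals.

```python
# Thresholds are illustrative; real systems tune them per category and market.
AUTO_APPROVE_THRESHOLD = 0.10   # below this risk score, publish immediately
AUTO_REJECT_THRESHOLD = 0.90    # above this risk score, reject automatically

def route_item(item, score_item, human_review_queue):
    """Route one piece of user-generated content through hybrid moderation."""
    risk = score_item(item)  # assumed ML model output: 0.0 (safe) to 1.0 (violating)
    if risk <= AUTO_APPROVE_THRESHOLD:
        return "published"            # fast time-to-site for clearly safe content
    if risk >= AUTO_REJECT_THRESHOLD:
        return "rejected"             # clearly violating content never goes live
    human_review_queue.append(item)   # the ambiguous middle gets human judgment
    return "pending_review"

# Example usage with a dummy scoring function:
queue = []
print(route_item({"text": "Brand new sofa, local pickup"}, lambda item: 0.05, queue))   # published
print(route_item({"text": "Cheap tickets, pay in Bitcoin"}, lambda item: 0.55, queue))  # pending_review
print(len(queue))  # 1
```

The design point is that automation absorbs the high-volume, obvious decisions, while human moderators spend their time on the cases where context and intent actually matter.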

Ultimately, the cost of bad content – or more specifically, not moderating it – isn’t just a loss of trust, customers, and revenue. Nor is it just a product quality or safety issue. It’s also the risk of enabling illegal activity, distributing abusive content, and giving extremists a voice. Playing a part in perpetuating this comes at a much heavier price.


What is a content moderator? Why not ask one? We sat down with Michele Panarosa, Online Content Moderator Level 1 at Besedo, to learn more about a content moderator’s daily work, how to become one, and much more.

Hi Michele! Thank you for taking the time to sit down with us. Could you tell us a bit about yourself?

My name is Michele Panarosa, I’m 27 years old and I come from Bari, Puglia, Italy. I’ve been an online content moderator for nine months now, formerly an IT technician with a passion for technology and video games. In my spare time, I like to sing and listen to music. I’m a shy person at first, but then I turn into an entertainer because I like to have a happy environment around me. They call me “Diva” for a good reason!

What is a content moderator?

A content moderator is responsible for user-generated content submitted to an online platform. The content moderator’s job is to make sure that items are placed in the right category, are free from scams, don’t include any illegal items, and much more.

How did you become a content moderator?

I became an online content moderator by training with a specialist during my first weeks of work, but it’s a never-ending learning curve. At first, I was scared of accidentally accepting fraudulent content or not doing my job properly. My teammates, manager, and team leaders were nice and helped me throughout the process. As I kept learning, I started to understand fraud trends and patterns. That helped me spot fraudulent content with ease, and I could confidently escalate items to second-line moderation agents, who made sure they were refused.

Communication is essential in this case. There are so many items I didn’t even know existed, which is an enriching experience. The world of content moderation is very dynamic, and it has so many interesting things to learn.

What’s great about working with content moderation?

The great part of content moderation is the mission behind it. The internet can sometimes seem like a big, unsafe place where scammers are the rulers. I love this job because I get to make the world a better place by blocking content that’s not supposed to be online.

It’s a blessing to be part of a mission where I can help others and feel good about what I do. Besides, it makes you feel important and adds that undercover aspect of a 007 agent.

How do you moderate content accurately and fast?

Speed and accuracy can go hand in hand, but you need to stay focused and keep your eyes on the important parts of a listing. Even a small piece of information can be very revealing and tell you what your next step should be. On top of that, it’s crucial to stay updated on the latest fraud trends so you don’t fall into any traps. Some listings and users may appear very innocent, but it’s important to take each listing seriously, and it’s always better to slow down a bit before moving on to the next one.

What’s the most common type of content you refuse?

The most common type of item I refuse must be weapons – any kind of weapon. Some users try to make them seem harmless, but they’re not. It’s important to look at the listing images, and if the weapon isn’t shown in the image, we simply gather more information about the item. Usually, users who want to sell weapons try to hide them by not using images and keeping their descriptions very short (sometimes there’s no description at all).

It’s our task, as content moderators, to collect more details and refuse the item if it turns out to be a weapon – even if it’s a soft air gun or one used for sports.

What are the most important personal qualities needed to become a good content moderator?

The most important personal qualities needed to become a good content moderator are patience, integrity, and curiosity.

  • Patience: Moderating content is not always easy and sometimes it can be challenging to maintain a high pace while not jeopardizing accuracy. When faced with factors that might slow you down, it’s necessary to stay patient and not get distracted.
  • Integrity: It’s all about work ethic, and staying true to who you are and what you do. Always remember why you are moderating content, and don’t lose track of the final objective.
  • Curiosity: As a content moderator, you’re guaranteed to stumble onto items you didn’t even know existed. It’s important to stay curious and research the items to ensure they’re in the right category or should be refused – if they don’t meet the platform’s rules and guidelines.

Summary and main takeaways

At its core, a content moderator ensures that the content on a given website or service meets the company’s standards and guidelines. This can involve anything from reviewing and removing offensive or inappropriate content to monitoring user behavior and flagging potential rule violations. Content moderators play an important role in keeping online spaces safe and welcoming for all users, and we hope this article has helped you better understand what they do.

And one last thing…

Feel free to look at our career page for vacancies if you are interested in a job as a content moderator.

Michele Panarosa

Michele is an Online Content Moderator Level 1 and has worked in this role for nine months. Previously he worked as an IT technician. Michele is passionate about technology and video games, and in his spare time, he enjoys music, both singing and listening.
