To reach high accuracy and precision, your moderation team needs tools that provide enough insight to make the right decisions.

It is a capital mistake to theorize before one has data. Insensibly one begins to twist facts to suit theories, instead of theories to suit facts.

– Sir Arthur Conan Doyle through Sherlock Holmes.

With that in mind, we are developing Implio to present as much relevant data as possible to users when they are faced with a moderation decision. The keyword here is relevant. We want to ensure that key information isn’t drowned out by filler data, but also that moderators have easy access to earlier conclusions and historical data pertaining to the person or item they are reviewing.

Our most recent step on the road to full representation is Implio’s newest feature: moderation notes.

With Moderation Notes, moderators can share insights about end users and the content items they post. These insights then automatically appear next to items being reviewed, wherever they are relevant.

Moderation notes are, for instance, very powerful in fighting fraud. When a moderator rejects an item as fraudulent, they can leave a note stating exactly that, and even add additional information. The next time an item comes in from the same user, IP address, or email, the moderator in charge will see the note that was left behind and know to be extra diligent when reviewing the item.

 

How to leave notes

The more your team uses notes, the more powerful they become.

Once you start using the feature, ask your team to leave notes with insights that contributed to their moderation decisions or that may be useful in the future.

Notes can be left by clicking the note icon located in the top-right corner of an item.
Clicking that icon reveals a text field which allows you to leave a note, up to 2,000 characters long:

moderation notes icon

You can create as many notes as you need to, but notes cannot be edited or deleted. This is to ensure that important data isn’t accidentally removed.

 

How do moderation notes work?

For any incoming item in a moderation queue, Implio looks for relevant notes and displays them.

This happens for any note left on an item that shares one or more of the following attributes with the item currently being reviewed (a minimal sketch of this matching logic follows the list):

  • same item ID
  • same user ID
  • same IP address
  • same email address
  • same phone number
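To make the matching concrete, here is a minimal sketch of how such attribute matching could work. The data model and field names are our own illustration, not Implio’s actual implementation.

```python
# Minimal sketch of note-to-item matching (illustrative only; not Implio's
# actual implementation). A note surfaces when it shares at least one
# identifying attribute with the item under review.

MATCH_FIELDS = ["item_id", "user_id", "ip_address", "email", "phone_number"]

def matching_notes(item: dict, notes: list[dict]) -> list[dict]:
    """Return every note sharing one or more attributes with `item`."""
    matches = []
    for note in notes:
        shared = [
            f for f in MATCH_FIELDS
            if note.get(f) and note.get(f) == item.get(f)
        ]
        if shared:
            # `shared` drives which icons are highlighted vs. greyed out.
            matches.append({**note, "shared_attributes": shared})
    return matches
```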

Attributes shared between the note and the item being reviewed are indicated by icons displayed above the note itself.

moderation notes attributes

If an icon is greyed out, there’s no relation between that specific data point and the item being reviewed.

For instance, if the name and email are different but the IP and phone number are the same, you will see the former greyed out while the latter are highlighted.

The moderator who left the note and the date at which it was left are indicated below the note.

It’s important to remember that moderation notes are meant as additional information to help moderators make the right decision. On their own, they are not enough to give a full picture of a user and their actions. They are, however, an important piece of the puzzle when dealing with grey-area cases and a powerful complement to existing insights.

 

There’s more to come.

This is the first version of the moderation notes feature, but we have big plans for making it an even better tool in our ongoing effort to improve efficiency and accuracy.

A feature like moderation notes might sound simple, but used collaboratively in moderation teams, it can be incredibly powerful.

We’ve designed the feature around the needs of our customers, with a strong focus on ease of use. But we’ve also looked ahead, ensuring that notes can be leveraged by other parts of Implio to make them even more useful.

The next step is to have automation rules make use of moderation notes, for instance by automatically sending new content for manual review if the user has received a note containing a specific keyword like ‘fraud’ in the past.
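As a thought experiment, such a rule could boil down to something like the sketch below. This is purely hypothetical: Implio rules are configured in its rule creator, and the function and queue names here are our own.

```python
# Hypothetical sketch of a future automation rule that checks past notes
# (our own illustration, not Implio's rule syntax).

def route_item(item: dict, user_notes: list[str]) -> str:
    """Send content for manual review if the user has a past 'fraud' note."""
    if any("fraud" in note.lower() for note in user_notes):
        return "manual_review"
    return "default_queue"

# Example: a user previously flagged by a moderator.
print(route_item({"id": 123}, ["Rejected as fraudulent, fake payment link"]))
# -> manual_review
```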

– Maxence Bernard, Chief R&D Officer at Besedo

Reach out if you want a demo of moderation notes or to see how Implio as a whole can help speed up your moderation process.

Most online platforms would agree that images are among the most important elements of a user profile. When it comes to building trust between strangers, having an actual picture of the person you are about to engage with is vital. Whether you are looking for a date, booking a ride, or renting a holiday home, interacting with a total stranger can be daunting. In the real world, visual cues like facial expression and body language are intuitively used to decode intent. In the digital world, we must emulate these trust markers, and images are a crucial part of this.

There’s one problem though. While good images can put a face on a stranger and boost user trust, bad images can have the exact opposite effect. This is one of the reasons we advocate for image quality and why we’re continuously expanding Implio’s capabilities for catching and managing bad, inappropriate, or low-quality images.

The latest addition to Implio’s image moderation toolbox is an AI module for detecting misoriented images.

Why should you care about misoriented images?

The use case is straightforward: misoriented images (e.g. wrongly rotated or upside down) are caught by the model and sent for manual moderation.
Catching misoriented images is important for the overall impression of your site. A bunch of upside-down faces will make browsing time-consuming and confusing at best, or make your platform look unprofessional and scammy at worst.
As more users access and create their profiles on mobile phones, the number of misoriented images increases, and the need to deal with the issue efficiently grows accordingly.

Which is why we’re excited to announce that Implio can now help you automatically identify misoriented images.

How to automatically detect misoriented images

The misoriented module will soon be available to all Implio users off the shelf. For now, to gain access, just reach out to us and we’ll activate it for you. When the module is active, all images are scanned by the AI and tagged with “misoriented” if they are wrongly rotated.

 

This tag can then be used in Implio’s powerful rule creator, where you can decide to send the image to manual moderation, reject it outright (not recommended), or take no action if you want to use the rule for tracking purposes only.
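In spirit, such a rule is a simple condition on the image tags. The sketch below is our own illustration of that logic, and the action and queue names are assumptions, not Implio’s actual rule syntax.

```python
# Illustrative sketch of acting on the AI's "misoriented" tag
# (our own example, not Implio's rule syntax).

def apply_misoriented_rule(image_tags: list[str], action: str = "manual") -> str:
    """Route an image based on whether the AI tagged it as misoriented.

    action: "manual" (recommended), "reject" (not recommended),
            or "none" (use the rule for tracking purposes only).
    """
    if "misoriented" in image_tags:
        return {"manual": "manual_queue",
                "reject": "rejected",
                "none": "approved_with_tracking"}[action]
    return "approved"

print(apply_misoriented_rule(["face", "misoriented"]))  # -> manual_queue
```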

Here’s an example of an image caught by the new misoriented module. As you can see, the picture is upside down and has been tagged by the AI with “face” and “misoriented”.

rule match for misoriented images

To the right you can see that it has matched the misoriented rule.

If you decide to send misoriented images to the manual queue, moderators will be able to fix the issue. Here’s a view of Implio’s image editing tool, where you can crop and rotate images as you see fit.

rotating misoriented images

This version of the misoriented image model works best with human subjects, but we’re hard at work expanding it, and soon we’ll add capabilities that allow the model to tag other kinds of images with the same level of accuracy.

If you’re looking for a way to optimize how you handle misoriented images on your site or platform, get in touch. We can help you with the setup and take a look at your site for other low-hanging content quality issues that can easily be resolved with a good moderation setup.

 

At Besedo we’re always looking for ways to improve our offerings and ensure the best possible service and delivery. One thing we take pride in is our ability to help clients with specialized content moderation needs, from automating specialized and sensitive site rules to providing high-quality native-language support and moderation.

Over the past year we’ve seen increased demand for the latter, in particular a need for native German speakers. While we’ve managed to support our clients so far, we want to ensure that we can scale rapidly when they or new partners require it.

The solution? A new office in the bustling university city of Magdeburg.

Magdeburg is situated in the middle of Germany, less than two hours from Berlin by train. The city has seen explosive economic growth in recent years, with many interesting companies moving their business there. Soon Besedo will join them.

We’re currently busy preparing the new 200 m² office to make sure it’s ready to welcome our new employees at the beginning of November 2020.

With our new Magdeburg office, we have room to grow and onboard a sizable, qualified team of German speaking content moderators and customer support agents.

While our new office will focus on German speakers, we have offices across the globe and currently employ agents from more than 23 different countries, supporting sites across the world with expert native speakers.

If your marketplace, dating site, or sharing/gig economy platform is looking for high-quality native support or moderation, we’re here to help. Get in touch!

The newest feature to be released in Implio is “Quick Add to List”: a feature that lets moderators add information to automation lists while moderating, without breaking their flow.

When hovering over an IP address, a phone number, an email, or an item or user ID, moderators with the automation specialist or admin role will now see a small icon allowing them to add the information to an existing list.

Smoothly populate lists with the click of a button

This new feature makes it much easier to populate all kinds of black/white/grey lists. Use cases include auto-refusing select IP addresses (for instance, those confirmed to be abused by scammers) or auto-approving items from select user IDs (professional users, for example).

 

How to use the Quick Add to List Feature

To illustrate how “Quick Add to List” works, let’s assume you’d like a list of blacklisted IPs. If you already have one, you can import the current IPs; in this example, we’ll build a new list from scratch.

 

1. Create a list in Implio which will contain all your blacklisted IPs

create scam ip list in Implio

 

 

2. Create a rule that acts whenever an IP from that list is detected

create scam ip rule in implio

 

3. When moderating, hover over the IP to make the Quick Add to List icon appear, then click it to add the IP to a list. You can choose any list you have; in this case, we’ll add it to the Scam_IP list.

quick add to list in implio

 

Quick Add to List works with the following standard field values:

  • Item ID
  • User
    • ID
    • Email address
    • Phone number
    • IP address
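Conceptually, the list plus the rule amount to a simple membership check at moderation time. The sketch below is our own illustration of that flow; names such as scam_ips are hypothetical.

```python
# Illustrative sketch of a blocklist check (our own example, not Implio code).
# The Scam_IP list from the walkthrough is modeled as a simple set.

scam_ips: set[str] = {"203.0.113.7"}  # populated via "Quick Add to List"

def moderate(item: dict) -> str:
    """Auto-refuse items whose IP address is on the Scam_IP list."""
    if item.get("ip_address") in scam_ips:
        return "refused"
    return "manual_review"

# A moderator quick-adds a new IP, and the rule picks it up immediately.
scam_ips.add("198.51.100.23")
print(moderate({"id": 42, "ip_address": "198.51.100.23"}))  # -> refused
```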

The Quick Add to List feature is just one of many small “quality of life” improvements we’re building into Implio. The goal is to continue offering our users the most efficient content moderation platform on the market.

Do you want to see how Implio can optimize your moderation process? Get in touch.

Sometimes small features can have a big impact. With our newly implemented user counter, you get a whole new level of insight into your users.

What it does

The user counter shows you how many items a user has had approved and how many they’ve had rejected. You can also quickly access an overview of the actual listings that were approved or rejected, giving insight into user behavior and listing habits.

 

How it works

Click an item in the Item log.

Item listing

This brings up the item overview window. Here, next to the User ID, you’ll find the user counter. The number in green shows how many listings this user has had approved; the number in red, how many they’ve had rejected.

 

Implio user counter feature
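Under the hood, the counter is simply an aggregate over the user’s item history. Here is a minimal sketch of that aggregation, with field names of our own choosing:

```python
# Minimal sketch of the user counter aggregation (illustrative field names,
# not Implio's actual data model).
from collections import Counter

def user_counts(item_log: list[dict], user_id: str) -> tuple[int, int]:
    """Return (approved, rejected) listing counts for a given user."""
    counts = Counter(
        item["decision"] for item in item_log if item["user_id"] == user_id
    )
    return counts["approved"], counts["rejected"]

log = [
    {"user_id": "u1", "decision": "approved"},
    {"user_id": "u1", "decision": "rejected"},
    {"user_id": "u1", "decision": "approved"},
]
print(user_counts(log, "u1"))  # -> (2, 1)
```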

Use cases for user counter

If you have experience with content moderation you’ve probably already thought of several use cases for the user counter.

Here are a couple of examples of how it can be used in Implio.

1. Qualifying returning users

Need to understand the quality of a user? Check their listing history. If they have only rejections, this user may cause problems going forward as well.

2. Assistance in grey area decisions

When manually moderating items, you sometimes come across grey-area cases where it’s hard to judge whether a listing is genuine or problematic. In cases where you have to make a snap decision either way, having the user’s previous history to lean on can be helpful. A user with only approved listings in the past is unlikely to have suddenly turned abusive. Be cautious, though: some scammers turn this logic to their benefit through Trojan Horse scams, first posting a couple of benign listings and then, once their profile looks good, starting to post scams.

3. Spotting users in need of education

Have you found a user who consistently gets their listings rejected for non-malign reasons? A quick educational email might help them out and cut down on your moderation volumes.

4. Identify new users

It’s always good to pay extra attention to new users, as you don’t yet know whether they are bad actors. Knowing that a user has no previous history of listing items is a sign to be extra thorough when moderating. On the flip side, seeing a user with only approved listings allows you to speed up moderation of the item in question, as it’s likely OK too. Just keep an eye out for the aforementioned Trojan Horse scammers.

To give a better understanding of how the user counter helps increase moderation productivity and quality, we asked a couple of our moderators about their experience working with the new feature.


“The user counter helps me get a perspective on the profile. If I see that a user has had listings refused more than two times, I access the profile to see the reason for the refusals. That allows me to make a better decision on the profile. It allows me to spot scammers quickly and make faster decisions.”

– Cristian Irreño, content moderator at Besedo

“The user counter has allowed me to see the trends in profile decisions. It makes me more careful when I see accounts with a higher number of refusals. Also, when I am working on a new account, I know I must be more careful with my decision.”

– Diego Sanabria, content moderator at Besedo

“The counter helps me identify profiles that have frequent acceptance or refusals, and to spot new users.”

– Cristian Camilo Suarez, content moderator at Besedo

The user counter is available to all Implio users regardless of plan. Want to start using Implio for your moderation? Let us know and we’ll help you get started.

COVID-19 continues to create new challenges for all. Businesses and consumers are spending an increasing amount of time online, using chat and video conferencing platforms to stay connected and to combat social distancing and self-isolation.

We’ve also seen the resurgence of interaction via video games during the lockdown, as we explore new ways to entertain ourselves and connect with others. However, a sudden influx of gamers also brings a new set of content moderation issues – for platform owners, games developers, and gamers alike.

Let’s take a closer look.

Loading…

The video game industry was already in good shape before the global pandemic. In 2019, ISFE (Interactive Software Federation of Europe) reported a 15% rise in revenue between 2017 and 2018, with the industry turning over a combined €21bn. Another ISFE report shows that over half of the EU’s population played video games in 2018 – some 250 million players, gaming for an average of nearly 9 hours per week, with a pretty even gender split.

It’s not surprising that the fastest-growing demographic was the 25-34 age group – the generation who grew up alongside Nintendo, Sony, and Microsoft consoles. However, gaming has broader demographic appeal too. A 2019 survey conducted by AARP (American Association of Retired Persons) revealed that 44% of Americans aged 50+ enjoyed video games at least once a month.

According to GSD (Games Sales Data), in the week commencing 16th March 2020, right at the start of lockdown, video game sales increased by 63% over the previous week. Digital sales have outstripped physical sales too, and console sales rose by 155% to 259,169 units in the same period.

But stats aside, when you consider the level of engagement possible, it’s clear that gaming is more than just ‘playing’. In April, the popular game Fortnite held a virtual concert with rapper Travis Scott, which was attended by no fewer than 12.3 million gamers around the world – a record audience for an in-game event.

Clearly, for gaming the only way is up right now. But given these sharp increases, and the increasingly creative and innovative ways gaming platforms are being used as social networks, how can developers ensure every gamer remains safe from bullying, harassment, and unwanted content?

Ready Player One?

If all games have one thing in common, it’s rules. The influx of new gamers presents a number of new challenges where content moderation is concerned. Firstly, uninitiated gamers (often referred to as noobs/newbies/nubs) are likely to be unfamiliar with the established, pre-existing rules of online multiplayer games and the accepted social niceties and jargon of different platforms.

From a new user’s perspective, there’s often a tendency to carry offline behaviours over into the online environment, without consideration or a full understanding of the consequences. The Gamer has an extensive list of etiquette guidelines that are frequently broken by online multiplayer gamers, from common courtesies such as not swearing in front of younger users on voice chat and not spamming chat boxes, to not ‘rage-quitting’ a co-operative game out of frustration.

However, when playing in a global arena, gamers might also encounter subtle cultural differences and behave in ways that other groups of people consider offensive.

Another major concern, which affects all online platforms, was outlined by Otis Burris, Besedo’s Vice President of Partnerships, in a recent interview: the need to “stay ahead of the next creative idea in scams and frauds or outright abuse, bullying and even grooming to protect all users”, because “fraudsters, scammers and predators are always evolving.”

Multiplayer online gaming is open to exploitation by individuals with malicious intent, including grooming, simply because of the potential anonymity and the sheer number of gamers taking part simultaneously around the globe.

While The Gamer list spells out that kids (in particular) should never use someone else’s credit card to pay for in-game items, consider just how open gaming can be from an interaction perspective: the fact that these details could easily be obtained by deception or coercion needs to be tackled.

A New Challenger Has Entered

In terms of multiplayer online gaming, cyberbullying and its regulation continue to be a prevalent issue. Some of the potential ways in which users can manipulate gaming environments in order to bully others include:

  • Ganging up on other players
  • Sending or posting negative or hurtful messages (using in-game chat-boxes for example)
  • Swearing or making negative remarks about other players that turn into bullying
  • Excluding the other person from playing in a particular group
  • Anonymously harassing strangers
  • Duping more vulnerable gamers into revealing personal information (such as passwords)
  • Using peer pressure to push others into performing acts they wouldn’t normally perform

Whilst cyberbullying amongst children is fairly well researched, negative online interactions between adults are less well documented and studied. The 2019 report ‘Adult Online Harms’ (commissioned by the UK Council for Internet Safety Evidence Group) investigated internet safety issues amongst UK adults, and even acknowledges the lack of research into the effect of cyberbullying on adults.

With so much to be on the lookout for, how can online gaming become a safer space to play in for children, teenagers, and adults alike?

Pause

According to a 2019 report for the UK’s converged communications regulator Ofcom: “The fast-paced, highly-competitive nature of online platforms can drive businesses to prioritize growing an active user base over the moderation of online content.

“Developing and implementing an effective content moderation system takes time, effort and finance, each of which may be a constraint on a rapidly growing platform in a competitive marketplace.”

The stats show that 13% of people have stopped using an online service after observing the harassment of others. Clearly, targeted harassment, hate speech, and social bullying need to stop if games manufacturers want to minimize churn and avoid losing gamers to competitors.

So how can effective content moderation help?

Let’s look at a case study cited in the Ofcom report. As an example of effective content moderation, they refer to the online multiplayer game ‘League Of Legends’ which has approximately 80 million active players. The publishers, Riot Games, explored a new way of promoting positive interactions.

Users who logged frequent negative interactions were sanctioned with an interaction ‘budget’ or ‘limited chat mode’. Players who then modified their behavior and logged positive interactions gained release from the restrictions.

As a result of these sanctions, the developers noted a 7% drop in bad language in general and an overall increase in positive interactions.

Continue

Taking ‘League Of Legends’ as an example, a combination of human and AI (Artificial Intelligence) content moderation can encourage more socially positive content.

For example, a number of social media platforms have recently introduced ways of helpfully offering users alternatives to potentially harmful or offensive UGC (user-generated content), giving users a chance to self-regulate and make better choices before posting. In addition, offensive language within a post can be translated into a non-offensive form, with users presented with an optional ‘clean version’.
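As a toy illustration of that idea, offensive terms can be masked before the ‘clean version’ is offered back to the user. The word list and masking policy below are our own assumptions; real systems rely on far more sophisticated, context-aware models.

```python
# Toy sketch of generating an optional "clean version" of a post
# (our own assumptions; real systems use context-aware language models).
import re

OFFENSIVE = {"jerk", "idiot"}  # placeholder word list

def clean_version(text: str) -> str:
    """Mask offensive words so the user can opt into a clean version."""
    def mask(match: re.Match) -> str:
        word = match.group(0)
        return word[0] + "*" * (len(word) - 1) if word.lower() in OFFENSIVE else word
    return re.sub(r"[A-Za-z]+", mask, text)

print(clean_version("You total idiot"))  # -> "You total i****"
```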

Nudging is another technique that can be employed to encourage users to question and delay posting something potentially offensive, creating subtle incentives to make the right choice and thereby helping to reduce the overall number of negative posts.

Chatbots, disguised as real users, can also be deployed to make interventions in response to specific negative comments posted by users, such as challenging racist or homophobic remarks and prompting an improvement in the user’s online behavior.

Finally, applying a layer of content moderation to ensure that inappropriate content is caught before it reaches other gamers will help keep communities positive and healthy, ensuring higher engagement and less user leakage.

Game Over: Retry?

Making good from a bad situation, the current restrictions on social interaction offer a great opportunity for the gaming industry to draw in a new audience and broaden the market.

It also continues to inspire creative innovations in artistry and immersive storytelling, offering new and exciting forms of entertainment, pushing the boundaries of technological possibility, and generating new business models.

But the gaming industry also needs to ensure it takes greater responsibility for the safety of gamers online by ensuring it incorporates robust content management strategies. Even if doing so at scale, especially when audience numbers are so great, takes a lot more than manual player intervention or reactive strategies alone.

This is a challenge we remain committed to at Besedo – using technology to meet the moderation needs of all digital platforms. Through a combination of machine learning, artificial intelligence, and manual moderation techniques we can build a bespoke set of solutions that can operate at scale.

To find out more about content moderation and gaming, or to arrange a product demonstration, contact our team!

As part of our ongoing efforts to help our clients manage the challenges of user-generated content and improve the customer experience on their platforms, Besedo has recently hired Otis Burris as VP of Partnerships.

The goal is to build a vast network of quality-driven companies who all strive to add value to online platforms through tech or expertise.

 

Interviewer: Tell us a bit about yourself.

Otis: I recently joined Besedo, back in November 2019. I have a long working history in technology solution sales & partnerships – helping to drive client efficiencies, and transition from the traditional ways of working that may be a little less tech-driven, towards more innovative solutions.

From SaaS to PaaS, I have worked with early mobile application platforms long before the “mobile-first” approach, and with digital innovations in online services and AI solutions. Now I’m at Besedo and very excited about both the mission and the approach, which is combining technology with specialized knowledge to help platforms keep their users safe.

 

Interviewer: How does Besedo help protect and improve users’ online experiences?

Otis: Besedo has made its name by being skilled at detecting and preventing improper or inappropriate behavior online. Many companies rightly put a lot of effort into making sure only “good” users get access to their platform. But what happens when “good” users misbehave?

Besedo provides the technology and expertise needed to monitor and control this behavior and ensure that users are not abusing the platform policies or creating a negative experience for everyone else.

Fraudsters, scammers and predators are always evolving, and it’s difficult for many companies to keep up if Content Moderation is not their core business. Our goal has always been to stay ahead of the next creative idea in scams and frauds or outright abuse, bullying and even grooming to protect all users.

Besedo focuses heavily on content moderation, and as such, we’re able to deliver very high quality and focused service to our clients.

 

Interviewer: Which industries does Besedo direct its services towards?

Otis: Besedo has predominantly worked with online marketplaces, and we have a strong history of delivering to classifieds online marketplaces.

But UGC occurs in many other places, and our moderation knowledge can be transferred and applied to most digital platforms. We now moderate dating site profiles and support a lot of the up-and-coming sharing economy platforms. And then there are online communities, where content is continuously shared in a variety of channels, making content moderation a must.

 

Interviewer: Why is Besedo launching a partnership program?

Otis: Our clients’ and their users’ journey starts long before a user becomes active on the platform, runs for as long as they participate on the site, and extends even after they leave.

From onboarding to payments, or customer support and reviews, there is a set of activities that are complementary to a successful experience. To deliver a high level of satisfaction to all users, several components need to work seamlessly. Our goal is to connect those pieces into a single flow that enhances the overall perception of the client’s platform, and that’s where we need partners.

Building a reliable eco-system of partners creates value for all participants. The end result is an offering that attracts more users, increases engagement and reduces churn.

In short, we’re trying to create a one-stop-shop where platforms can go to pick from a portfolio of services that can help them improve their user experience.

 

Interviewer: What kind of partnerships is Besedo seeking?

Otis: There are four main types of partners that we believe will help us deliver a high-quality offering for the online experience, whether driven by our partners or by us.

Technology partners can be complementary when reviewing clients’ onboarding journeys.

Service partners are critical to scaling or supporting spikes during periods of heavy or unpredictable traffic – a great example is Covid-19 keeping people at home, with more time on their hands to create online content and complaints.

Our System Integration partners, who are great at delivering complete IT solution projects on a significant scale, sometimes lack competence in certain niche areas (content moderation, for example) that are outside their core deliveries. Besedo’s history and standing in the content moderation industry make us a credible partner for those large projects, particularly because we can deliver on a global scale in a range of languages.

Last but not least are our Industry advocates, who not only work continuously to educate and evangelize the different industries on the do’s and don’ts of content moderation, but also share the latest challenges, ideas, and innovations that can help companies stay ahead of fraudsters, scammers, and other online predators.

 

Interviewer: Who should partner with us?

Otis: User Verification companies – They cover the first hurdle. They are the bouncers at the doors who ensure good users arrive on the platform. We then monitor the user’s behaviors once they arrive. Consider us the surveillance and alert team that can remove any negative or harmful elements that managed to get past the first stage.

Payment Solutions – They complete the behavioral transactions and add value to the endpoint conversion cycle and provide essential attributes that can be tagged to user behavior.

Service providers / BPOs – Often, they utilize their clients’ systems, which are typically not purpose-built. Partnering with us allows service providers to introduce solutions that give their clients higher efficiency and greater control of the QA process, earning credibility by improving the client’s capabilities, as well as helping us scale when there is a sudden spike in demand for our services.

Industry Advocates & Marketing Events help us to stay connected with our eco-system and provide opportunities to discover new trends and technologies occurring in the space.

Finally, since starting this role, I’ve had a lot of conversations I struggle to classify, but these companies often have exciting technology or approaches that we can leverage. Having a partner space allows those types of companies to reach out to us as well, with their proposals on how we can collaborate.

 

Interviewer: How can partnering with Besedo help your business?

Otis: Partnering with Besedo can benefit your business in several ways.

It’s a great way to reduce risk for your clients and deliver an improved user experience at a higher value since, with our combined capabilities, we’ll cover more ground.

If content moderation is not your core business, partnering with us will help build competence and trust in your content moderation delivery projects in a scalable way.

Once you earn credibility within the Besedo eco-system, it will open more opportunities and new revenue streams.

You will gain access to industry-leading moderation expertise in multiple languages built on 18 years of experience. We’re happy to share our knowledge so you can succeed.

Finally, Besedo are friendly towards our partners in the commercial sense. We are happy to aggressively share revenue, especially when it comes to new business or existing business where you’ve proven your ability to add value.

We’re very keen on improving ourselves, both from a tech and service standpoint. Any partner, any organization, any technology that can deliver results, we’d like to make sure you’re rewarded for that. So please reach out if you feel your goals align with ours.

 

Interviewer: What are the options for partnering with us?

Otis: It depends on the approach of your company, but here are a couple of possible setups:

Partners can be set up on a revenue share, based on how active the partner is in the discovery, development, and delivery of the opportunity we collaborate on.

You can work as a referral partner, where you identify an opportunity but Besedo does all the meetings and so on. That’s considered a referral, and there’s a standardized referral bonus.

In a reseller scenario, you will receive training on Besedo’s platform Implio so you can demo it and handle the sales process up front, while we support you in the background. That warrants a bigger bonus, since you as the partner are taking on more of the work.

Typically, partners who are more involved in the process earn a larger share of the revenue.

We also want to include consulting and marketing partners. For marketing collaborations, we’d usually not have a monetary exchange. Instead, we work with blog exchanges and other types of content or campaign collaboration. Cost savings, increased reach, and exposure are just some of the benefits of presenting us to the market with a unified front.

 

Interviewer: How can partnerships bring value to companies for a safer internet?

Otis: If you go alone, you can go faster; if you go together, you can go further. If Besedo tries to make the Internet safer alone, we’re not going to be able to do it all. As I mentioned, our user verification partners do a great job of KYC (know your customer). They do a lot of homework before the user gets on the platform. That’s a critical step. But their software doesn’t extend onto the platform in the way Besedo’s does. So we have to do a good job as well to add to the safety stack, and the same goes for the payment solutions, delivery services, and so on. Each solution depends on the others being of high quality, and if one is missing or of questionable quality, the safety of your platform decreases significantly.

Data sharing is another area where partnerships can help improve Internet safety.

In the age of data regulation – GDPR, Patriot Acts & other local privacy requirements – it’s becoming harder to leverage or share datasets across platforms or continents. This means that we’re missing out on valuable knowledge and insights that could be used to catch bad actors faster.

Partnerships allow companies to benefit from each other’s complementary expertise without exposing their core secrets, data insights, or user data. If we can work together to create a safer internet, we will together attract more users to the eco-system, simply because they feel safe.

It’s human nature – more people fly than ever before because planes crash less. The same goes for online users. Fewer bad experiences will attract more users, which in turn increases the revenue opportunities for all members of the eco-system.

 

Interviewer: I want to partner with Besedo. What’s the next step?

Otis: Reach out to me directly; I’m the VP of Partnerships, and I’m always interested in hearing ideas on how we can collaborate: otis.burris@besedo.com

You can also fill in the form on our partner page. Add as much info as you can and I’ll be in touch to schedule a call.

And of course, grab us when you see us at any of the many industry events we attend. We’re always happy to have a chat about potential partnerships.

 

Interviewer: Any final thoughts you’d like to share?

Otis: It’s exciting times! New platforms are popping up everywhere. Content moderation is such a relevant space. It touches everyone, from parents to kids. Everyone is consuming content. Everyone is on a platform doing something, and they need to be protected. I don’t think we can be in a more relevant space at this time.

I’m really excited about Besedo having such a vast amount of experience and now taking it one step further by adding partnerships and technology to improve the solution moving forward.

I’m looking forward to having a lot of great conversations with potential partners.

The outbreak of COVID-19, or coronavirus, has thrown people all over the world into fear and panic about their health and economic situation. Many have been flocking to stores to stock up on essentials, emptying the shelves one by one. Scammers are taking advantage of the situation by maliciously playing on people’s fear. They target items that are hard to find in stores and make the internet – and especially online marketplaces – their hunting ground, exploiting desperate and vulnerable individuals and businesses. Price gouging (charging unfairly high prices), fake medicine, and non-existent loans are all ways scammers try to exploit marketplace users.

In this worldwide crisis, now is a great time for marketplaces to step up and show social responsibility by making sure that vulnerable individuals don’t fall victim to corona-related scams and that malicious actors can’t profit from stockpiling and selling medical equipment sorely needed by nurses and doctors fighting to save lives.

Since the start of the Covid-19 epidemic, we’ve worked closely with our clients to update moderation coverage to include coronavirus-related scams, and we’ve helped them put new rules and policies in place.

We know that all marketplaces are currently struggling to get on top of the situation, so to help, we’ve decided to share some best practices for handling moderation during the epidemic.

Here are our recommendations on how to tackle the Covid-19 crisis to protect your users and your brand, and to retain the trust users have in your platform.

Refusal of coronavirus-related items

Ever since the outbreak started, ill-intentioned individuals have made the prices of some items spike to unusually high levels. Many brands have already taken the responsible step of refusing certain items they wouldn’t usually reject, and some have set bulk-buying restrictions (just as some supermarkets have done) on ethical and integrity grounds.

Google stopped allowing ads for masks, and many other businesses have restricted the sale or price of certain items. Amazon removed thousands of listings for hand sanitizer, wipes and face masks and has suspended hundreds of sellers for price gouging. Similarly, eBay banned all sales of hand sanitizer, disinfecting wipes and healthcare masks on its US platform and announced it would remove any listings mentioning Covid-19 or the Coronavirus except for books.

In our day-to-day moderation work for clients all over the world, we’ve seen a surge of coronavirus-related scams, and we’ve developed guidelines based on the examples we’ve seen.

To protect your customers from being scammed or falling victim to price gouging, and to preserve user trust, we recommend you refuse ads for the following items or set up measures against stockpiling:

  • Surgical masks and face masks (types FFP1, FFP2, FFP3, etc.) have been scarce and have seen their price tags spike dramatically. Overall, advertisements for all kinds of medical equipment associated with Covid-19 should be refused.
  • Hand sanitizer and disposable gloves are also very prone to being sold by scammers at incredibly high prices. We suggest either banning these ads altogether or enforcing regular prices for these items.
  • Empty supermarket shelves have caused toilet paper, usually a cheap item, to be sold online at extortionate prices; we suggest you monitor and ban these ads accordingly.
  • Any ad that mentions Coronavirus or Covid-19 in its text should be manually checked to ensure that it wasn’t created with malicious intent.
  • The sale of ‘magic’ medicines pretending to miraculously cure the virus should be refused.
  • Depending on the country and its physical distancing measures, ads for home services such as hairdressers, nail technicians, and beauticians should be refused.
  • In these uncertain times, scammers have been selling loans or cash online, preying on the most vulnerable. Make sure to look for these scams on your platform.
  • Similarly, scammers have been targeting students with claims that interest rates are being adjusted.

Optimize your filters

Ever since the crisis started, scammers have grown more sophisticated by the day, finding loopholes to circumvent security measures. They find alternative ways to promote their scams, using different wordings such as Sars-CoV-2 or describing masks by their reference numbers, such as 149:2001, A1 2009, etc. Make sure your filters are optimized and your moderators are continuously briefed and educated to catch all coronavirus-related ads.
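To illustrate, a filter for these variants might normalize the text and match it against a continuously maintained pattern list. The sketch below is our own; the patterns are drawn from the examples mentioned above.

```python
# Illustrative keyword filter for coronavirus-related listings (our own
# sketch; production filters must be maintained and tuned continuously).
import re

PATTERNS = [
    r"corona\s*virus", r"covid[\s-]*19", r"sars[\s-]*cov[\s-]*2",
    r"ffp[123]\b",        # mask type references
    r"149\s*:\s*2001",    # standard reference numbers used to evade filters
]

def needs_review(ad_text: str) -> bool:
    """Flag ads mentioning the virus or known evasive mask references."""
    text = ad_text.lower()
    return any(re.search(p, text) for p in PATTERNS)

print(needs_review("EN 149:2001 A1 2009 masks, bulk"))  # -> True
```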

Right now, we suggest that you tweak your policies and moderation measures daily to stay ahead of the scammers. As the crisis evolves, malicious actors will no doubt continue to find new ways to exploit the situation, so it’s vital that you pay extra attention to your moderation efforts over the coming weeks.

If you need help tackling coronavirus-related scams on your platform, get in touch with us.

How can online marketplaces convert more? This question has many answers, but there is one undeniable consequence: if your users can’t find what they are looking for on your online marketplace, your conversion rates will plummet.

Internal search is a great tool to help users find what they are seeking on your online marketplace. Unfortunately, many sites still implement search as an afterthought rather than establishing it as a prerequisite for conversion optimization.

A common mistake – which you might be making on your site – is to hide the search tool behind a magnifying glass in the top right corner, which will undoubtedly be detrimental to your conversion rates. If your users can’t see it, they won’t use it, so why hide it?

According to Luigi’s Box, site searchers are 70% more likely to buy than non-searchers.

So, if you make search accessible to your users, imagine what that could mean for your conversion rates.

But it doesn’t end there.

In a recent webinar, we invited Michal Barla, Co-Founder and CPO at Luigi’s Box, to break down the steps you need to take if you want to turn your internal search into a powerful conversion tool.

Optimized site search: the solution to improved online marketplace conversion

Get started with our handy site search checklist, where we take you through the steps necessary to optimize your search experience and convert more.

Our new Implio feature enhances the quality of the images shared on your platform. Your manual moderation team can now crop and rotate user-generated images quickly and efficiently in the moderation tool.

For online platforms like online marketplaces and dating sites, creating a good user experience and trustworthy environment is essential; and high-quality pictures are crucial in that matter. In our user search study, users unanimously picked quality images as the reason to prefer one site over another.

Profile pictures and other images are crucial for users to trust the person on the other side of the screen, and what they want to sell or buy. And as a company, you want to create and maintain that trust for your users.
On dating sites and online marketplaces, the cropping and rotating feature helps you moderate pictures to comply with your company’s guidelines: for instance, cropping profile pictures so that only one person appears, or ensuring that the user’s face is distinctly visible. On top of this, images can be rotated so that pictures submitted upside down, or from the wrong angle, can easily be corrected.

The cropping and rotation feature in Implio helps you improve trust and user experience for both your sellers and buyers.

Here’s how the feature works:

Implio cropping and rotation

Curious to learn more about our new feature?

Have a look at our Implio features list, or sign up to our all-in-one content moderation tool Implio and try it out.