It’s 2018 and AI is everywhere. Every company and their grandmom is now offering AI-powered solutions. With so many options to pick from, does it really matter who you partner with for AI moderation?

When we started building our AI for moderation in 2008, machine learning had hardly been applied to content moderation. Since then, others have come to understand the value automation brings in keeping marketplace users safe.

Every time we go to a tradeshow or conference, we see new companies with AI offerings, and we understand that as the market gets more saturated, it can be hard to decide which vendor to bet on.

To help you navigate the AI jungle, we wanted to highlight some very specific areas where our AI is unique in the market.

It’s actionable

A lot of AI models work based on a sliding scale and the output you get is a probability score. The score gives you a picture of how likely the content piece is to be whatever the algorithm is looking for. So if a content piece receives a high probability score from a model looking to detect unwanted content, there’s a good chance that the content piece falls into that category.

However, a scoring system is often arbitrary. When should you reject an item as a scam? When the probability score is 100%? 99%? Or is 85% good enough?

Our AI doesn’t operate this way. We want to provide our clients with clear answers that they can apply straight away. As such, we don’t send back an abstract score; instead, our algorithm provides a concrete answer.

We operate with three different but clear answers that are easy to apply a moderation action to. The three values we expose are OK, NOK (not okay), and uncertain.

Let’s use the unwanted content model as an example. Our algorithms will look at the content and determine whether it’s unwanted. If it is, the model returns “NOK” and you should reject the content piece; if it isn’t, you get “OK” back and can accept it. If the model isn’t sure, it sends back “Uncertain”. This doesn’t happen often, but when it does you should send the content for manual review.

That’s how simple it is. There’s no grey zone, only a clear, actionable answer for each content piece you run through the model.
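
As an illustration, here is a minimal sketch of how a three-value verdict maps to concrete actions. It is written in Python with hypothetical function and field names; it is not our actual API.

```python
def publish(item):
    print(f"Published: {item}")

def reject(item):
    print(f"Rejected: {item}")

def send_to_manual_review(item):
    print(f"Queued for manual review: {item}")

def apply_moderation_action(item: str, verdict: str) -> None:
    """Map a three-value AI verdict to a concrete moderation action."""
    if verdict == "OK":
        publish(item)                # safe content goes live immediately
    elif verdict == "NOK":
        reject(item)                 # unwanted content is rejected outright
    elif verdict == "Uncertain":
        send_to_manual_review(item)  # rare edge cases go to human moderators
    else:
        raise ValueError(f"unexpected verdict: {verdict}")
```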

 

A holistic AI approach

We believe that the value of AI is often mistakenly judged on the accuracy of the models alone. The reality is more complex than that. To explain why, we need to get a bit technical and quickly outline some AI terminology. (If you are interested, you can read more about the basic concepts of AI moderation here.)

When evaluating an AI there are multiple KPIs you can look at; accuracy is just one of them. To determine if our AI is performing to our standards, we look at a wide array of metrics. We can’t cover them all in this article, but here are some of the most important ones.

Precision

Precision describes how often the model’s predictions were actually correct. If there are 100 content pieces and the model flags 10 of them as unwanted content, but only 8 of them actually are unwanted, then the model has a precision of 80%.

Recall

Recall shows how many of the actual unwanted content pieces the algorithm correctly identifies. Going back to our example with 100 content pieces: the AI correctly identified 8 unwanted content pieces, but there were actually 16. In this case, the recall of the model is 50%, as it only found half of the unwanted content present.

Accuracy

Accuracy describes the share of all decisions the model gets right. If we have 100 content pieces and 16 of them are unwanted content, the accuracy of the model will be negatively impacted both by the unwanted content it fails to identify and by any good content it wrongly identifies as bad.

This means that if, out of 100 content pieces, a model correctly identified 8 unwanted pieces when there were 16 present, and wrongly flagged 2 good content pieces as unwanted, it would have an accuracy of 90% (90 correct decisions out of 100).

Automation rate

Automation rate measures exactly how much of the total content volume is handled by AI. If you have 100,000 content pieces per day and 80,000 of them are dealt with by the models, then you have an automation rate of 80%.
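
To make these four metrics concrete, here is a small worked example in Python using the running numbers from this article (the variable names are ours, purely for illustration):

```python
# The article's running example: 100 content pieces, 16 actually unwanted;
# the model flags 10, of which 8 are truly unwanted (so 2 good pieces are
# wrongly flagged and 8 unwanted pieces slip through).
true_positives = 8     # unwanted content correctly flagged
false_positives = 2    # good content wrongly flagged
false_negatives = 8    # unwanted content the model missed
true_negatives = 82    # good content correctly left alone

precision = true_positives / (true_positives + false_positives)  # 0.80
recall = true_positives / (true_positives + false_negatives)     # 0.50
accuracy = (true_positives + true_negatives) / 100               # 0.90

# Automation rate is measured over volume, not correctness:
automated, total = 80_000, 100_000
automation_rate = automated / total                              # 0.80

print(f"precision={precision:.0%} recall={recall:.0%} "
      f"accuracy={accuracy:.0%} automation={automation_rate:.0%}")
```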

When judging how well an AI works, we believe the assessment needs to be based on how it performs across all four of these metrics, as that gives you a truer picture of how well the AI is dealing with your content challenges.

You can never have perfect accuracy, precision, recall, and automation at the same time. Our AI is unique in that it is calibrated to meet your business objectives and to find the right balance between all of these indicators.

 

Supervised and continuous learning

Machine learning models can be taught in different ways and the way they are taught has a huge impact on how well they perform.

Our AI is trained on structured and labeled data of high quality. What this means is that the data sets we train our models on have been reviewed manually by expert content moderators who have made a yes-or-no decision on every single piece of content.

We also retrain the models regularly, ensuring they stay current and adhere to new rules and to global changes or events that could impact moderation decisions.

 

A calibrated solution

One of the benefits of designing our AI with an eye on multiple metrics is that we can tailor-make a solution to ensure the perfect fit for your business.

We have multiple levers we can pull to adjust the output, allowing us to tweak accuracy and automation and ensuring everything is calibrated as your business requires.

With our solution, the accuracy and degree of automation are elastic, which makes our AI setup much more flexible than other available options.
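
As a rough illustration of what one such lever could look like (a simplified sketch, not our actual calibration logic), consider how widening or narrowing an “uncertain” band trades automation against accuracy:

```python
# Illustrative only: one calibration lever is how wide the "uncertain"
# band is. Narrowing it automates more content; widening it routes more
# borderline content to manual review, which raises the accuracy of the
# decisions that remain automated. The thresholds here are invented.

def verdict(probability: float, lower: float = 0.2, upper: float = 0.8) -> str:
    """Convert a raw model probability into OK / NOK / Uncertain."""
    if probability >= upper:
        return "NOK"        # confidently unwanted -> auto-reject
    if probability <= lower:
        return "OK"         # confidently fine -> auto-approve
    return "Uncertain"      # borderline -> manual review

print(verdict(0.75))                         # "Uncertain" with the default band
print(verdict(0.75, lower=0.3, upper=0.7))   # "NOK" with a narrower band
```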

 

Adaptive AI Solution

One of the few drawbacks of machine learning is that it’s rigid and static. To change a model, you need to retrain it on a quality dataset. This makes it hard for most AI setups to deal with sudden changes in policies.

We’ve solved this problem by deeply integrating our AI into our content moderation tool, Implio. Implio has a powerful filter feature which adds flexibility to the solution, so you can quickly adapt to change.

For example, when a new iPhone comes out, the AI models will not pick up the new scams until they have been retrained on a dataset that includes them, but you can add filters in Implio until there’s time to update the machine learning. The same is true for temporary events like the Olympic Games or global disasters, except that these are over so quickly that it’s likely not feasible to update the models at all. Instead, you can add Implio filters that ensure high accuracy even during times with special moderation demands.
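
For illustration only, the logic of such a stop-gap rule might boil down to something like the sketch below; the keyword pattern and price floor are invented, not a real Implio filter:

```python
import re

# Hypothetical stop-gap rule for a new-phone scam wave, used while the
# ML models await retraining. Pattern and threshold are illustrative.
NEW_PHONE = re.compile(r"\biphone\b", re.IGNORECASE)

def needs_review(ad: dict) -> bool:
    """Flag suspiciously cheap listings mentioning the just-released phone."""
    return bool(NEW_PHONE.search(ad["title"])) and ad["price"] < 300

print(needs_review({"title": "New iPhone, sealed box", "price": 150}))  # True
print(needs_review({"title": "New iPhone, sealed box", "price": 950}))  # False
```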

In addition, we have a team dedicated to studying moderation trends and best practices, and all our AI customers benefit from their knowledge and from our 16 years of experience to support and guide them.

 

ML Tailored to content moderation

Most of the AI solutions on the market were created to solve a general problem that occurs in multiple industries. This means that the AI works okay for most companies, but it’s never a perfect fit.

We took the other route and dedicated our efforts to creating an AI that’s perfect for content moderation.

When we develop our AI we do it based on the 16 years of experience we have helping companies of all sizes keep their users safe and the quality of their user-generated content high. That has made our stack uniquely tailored to content moderation ensuring unparalleled results in our field.

We also have a team of experts supporting our AI developers with insights, internal learnings from moderating global sites of all sizes and research into industry trends and the challenges faced by online marketplaces and classifieds in particular.

Our research team feeds their insights to Besedo as a whole, ensuring a high level of expertise at every level of our organization, from moderation agents to managers and developers. This ensures that our experience and expertise are infused into all our services and products.

 

Get an AI solution that fits your needs

There is no question about it, AI will play a huge role in marketplace growth over the next couple of years. However, to truly benefit from machine learning, make sure you get models that will work well for you.

We often talk to marketplace owners who have become slightly disillusioned after testing AI solutions that weren’t properly calibrated for their business. They have wasted time implementing a solution that didn’t properly solve their issue, and now they are wary of AI as a whole.

That’s a shame; applied correctly, AI is a great money saver and provides other benefits like fast time-to-site and user privacy protection.

To avoid spending money on the wrong AI, have a chat with our solution designers and they will give you a good idea of which setup would work for you and the results you can expect. Together you can tailor a solution that fits your exact needs.

Get in contact with us for our tailored AI moderation solution

Working with user-generated content moderation is not an easy task. Moderators need to be able to spot the slightest details to find fraud, scams, counterfeits, and more. Therefore, it’s important to provide your manual workforce with the best possible conditions to work efficiently. To help simplify work for both you and your moderators, we’re introducing multiple queues in our all-in-one content moderation tool, Implio. This new feature will help you streamline your manual moderation setup and the day-to-day work of your moderators.

How do multiple queues help my site?  

We’ve built multiple queues to be as flexible as possible. This means that you can decide how many different queues to create, edit their functions and names, or delete them whenever you want. Make sure to customize your queues so that daily operations run as smoothly as possible.

There are numerous ways multiple queues can be of use to your online marketplace. One way is to create a queue per language supported by your site. Utilize geolocation, available in Implio, to ensure that content is automatically placed in the correct queue, making it easier for your moderation teams to specialize and work with one language only.

This use of multiple queues is very valuable to multi-language marketplaces, but our new Implio feature can also help marketplaces that only support one language. Multiple queues can, for instance, be set up to automatically sort content based on price, category, or risk and funnel it into different queues, allowing you to direct it to specific expert teams or agents.

You can also create queues for items which are time-sensitive and need a shorter SLA; for example, funnel flagged content or new users to individual queues.

In Implio, we always have two predetermined queues, one default queue and one escalated queue. Your moderators can easily select which queue to work in from the manual interface. When working in a specific queue, your moderators can escalate an item to a supervisor at any time or send the item to another queue.  

Multiple queues help you enable specialized moderation teams, which will simplify your moderators’ day-to-day work and make your overall moderation setup more effective.    

How does it work?  

Begin by creating a new queue in Implio. Then navigate to automation and create a new rule. Set the rule to send matching content to the queue you just created. Here’s what a language-queue setup looks like:
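
In Implio itself this is configured through the automation UI, but as a rough illustration, the routing logic behind a language-queue rule boils down to something like this sketch (the queue and field names are hypothetical):

```python
# Hypothetical sketch of a language-routing rule; in Implio this is
# configured through the automation UI, not written as code.
QUEUES = {"fr": "french-queue", "es": "spanish-queue"}
DEFAULT_QUEUE = "default"

def route_to_queue(item: dict) -> str:
    """Send each item to the queue matching its detected language."""
    return QUEUES.get(item.get("detected_language", ""), DEFAULT_QUEUE)

print(route_to_queue({"title": "Vélo à vendre", "detected_language": "fr"}))  # french-queue
print(route_to_queue({"title": "Bike for sale", "detected_language": "en"}))  # default
```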

Try it out yourself.

Create your very own account in Implio; it’s free for up to 10,000 items per month. Follow the steps above to set up your unique queue. Make sure to use the CSV importer to test multiple queues and all the other features available in Implio with your very own data.

If you want to learn more about multiple queues and Implio, get in touch with one of our content moderation experts.

Every feature we include in Implio has been carefully chosen based on feedback from stakeholders (internal and external) and after careful analysis of current and future needs within the industry (read more about how we plan our roadmap). As such it is always exciting when we launch something new since we know it is anticipated by our users and will increase their efficiency and quality of life when working in our tool.

Our developers work hard to ensure regular updates and feature additions to Implio. Here are the biggest improvements we released in 2017.

 

1) Manual moderation interface image template

Debuting almost an entire year ago, this particular feature helps manual moderators create a customized UI template in Implio. This allows users to display the necessary moderation information whichever way suits them best. For example, they could configure the layout to prioritize the image shown, user details, customer data, and moderation feedback – among other information.

Implio interface template changer

 

2) Search function

Our second big feature of last year was the new Implio search tool. Never underestimate the power, speed, and usefulness of a good search function! The always-visible search bar is found at the top of each page within Implio. Users can search by keywords, exact quotes, and specific contact information – including email addresses and phone numbers.

The results can be ordered by relevance, newest first, or oldest first, and displayed as a list or with images. We think this feature is going to be particularly useful for moderators as they review posts or monitor accounts and items coming into Implio.

Implio search function

3) New manual interface

In May we launched Implio’s updated manual interface. It was the culmination of months of hard work from our developers, especially our front-end team.

We spent a lot of time performing usability tests and getting client feedback, fine-tuning the new interface to make sure it benefits everyone.

Key improvements added to this version include:

  • Data is organized to follow the API’s structure, to make things more consistent.
  • Revisions of a single item are grouped together, so the moderator only reviews the latest version and can disregard previous ones.
  • Content can be edited directly within the page, and type and category can be changed using a simple drop-down menu.
  • A status bar helps you track your progress on the page.
  • It’s also much easier (for a developer) to configure a number of settings for each client, including the number of items displayed per page and the ability to enable or disable pre-approved items in the queue.

 

4) User roles

Our fourth biggest Implio feature involved the rollout of different user role permissions. Each user role now comes with a specific list of permissions, allowing admins, automation specialists, moderators, and spectators full or restricted access to certain functionalities. As you’d expect, admins have the greatest level of authority, but being able to manage rules and search items will undoubtedly make moderators’ jobs a lot easier.

Implio user administration

 

5) Geolocation filter

Our final feature of 2017 launched just before Christmas: the geolocation filter, which we’ve covered in a dedicated blog post.

Essentially, it’s used to detect inconsistencies between where users say they’re based and where their IP address actually shows them to be – ideal for helping online marketplace owners protect their users from scammers.

Geolocation is fully integrated into Implio and is visible in the manual moderation interface. However, users can also create their own rules to quickly compare information, making it easier for moderators to detect discrepancies.
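
As a simplified illustration of the underlying idea (the field names and lookup table below are hypothetical stand-ins for a real geo-IP database):

```python
# Flag items where the user's stated country disagrees with the country
# their IP address resolves to. Lookup table and fields are illustrative.
IP_COUNTRY = {"81.2.69.142": "GB", "190.217.1.10": "CO"}

def location_mismatch(item: dict) -> bool:
    ip_country = IP_COUNTRY.get(item["ip"], "unknown")
    return ip_country not in ("unknown", item["claimed_country"])

ad = {"claimed_country": "GB", "ip": "190.217.1.10"}
print(location_mismatch(ad))  # True -> flag as suspicious for manual review
```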

So… what does 2018 hold? Don’t worry, there’s a whole lot more where these came from! We already have a number of features, functions, and updates planned for the next 12 months.

Watch. This. Space.

Sue Scheff is an author, parent advocate, and cyber advocate promoting awareness of cyberbullying and other online issues. She is the author of three books: Wit’s End, Shame Nation, and Google Bomb.

We had the opportunity to interview her, and she talked about victims’ experiences of online sexual harassment and online shaming, and shared her opinion on what sites can do to help fight the problem.

 

Interviewer: Hi Sue, thanks a lot for taking the time to share your knowledge, I know you are extremely busy! You’re the author of Shame Nation and Google Bomb, what were you hoping to achieve by releasing them?

Sue Scheff: Awareness. Most importantly, giving a voice to the voiceless.

After I wrote Google Bomb, I was stunned by the outpouring of people from all walks of life – from all over the world – who contacted me with their stories of Internet defamation, shaming, and harassment. People were silently suffering from cyber-bullets, like myself; some were on the verge of financial ruin, and all were emotionally struggling.

Google Bomb was the roadmap to helping people know there are legal ramifications and consequences of online behavior.

By 2012, I was taken aback by the constant headlines of bullycide. Names like Tyler Clementi, Amanda Todd, Rebecca Sedwick, Audrie Potts – I knew how they felt, like there was no escaping this dark hole of cyber-humiliation. At 40 years old, when this happened to me, I had the maturity to know it would eventually get better. These young people don’t.

Google Bomb was the book to help people understand their legal rights, but with the rise of incivility online, Shame Nation needed to be written to help people know they can survive digital embarrassment, revenge porn, sextortion, and other forms of online hate. I packed this book with over 25 contributors and experts from around the world, sharing their first-hand stories to help readers know they can overcome digital disaster. I also include digital wisdom for online safety and survival.

 

Interviewer: You’re a victim of online harassment and won a landmark case of internet defamation and invasion of privacy. Can you please try to explain your experience?

Sue Scheff: In 2003, I was attacked online by what I refer to as a disgruntled client – a woman that clearly didn’t like me. Once she started her attack, a gang-like mob of trolls joined in. Together they created a smear campaign that took an evil twist: calling me a child abuser, saying I kidnap kids, exploit families, that I’m a crook, and more. Things went towards the sexual side when they claimed to be auctioning my panties (of course they never met me – or had anything of mine), but to anyone reading this, how do you explain that these are malicious trolls out to destroy me?

As an educational consultant, I help families with at-risk teens find residential treatment centers. These online insults nearly destroyed me. I ended up having to close my office, hire an attorney and fight.

By 2006 I was both emotionally and financially crippled. In September 2006 I won the landmark case in Florida for Internet defamation and invasion of privacy, with an $11.3M jury verdict. Lady Justice cleared my name, but the Internet never forgets. Fortunately for me, the first online reputation management company opened its doors that summer, and I was one of their first clients. To this day I say my lawyer vindicated me – but it’s ORM that gave me my life back.

 

Interviewer: You’ve also met many other victims of online harassment, online shaming, revenge porn, and the like. How are victims affected, both in the short and the long term?

Sue Scheff: Trust and resilience.

I’ve spoken to many victims of online hate. The most common theme I hear is the initial lack of trust they (and I) have of others, both online and offline. In my case, I became very isolated and reserved. My circle of trusted friends became extremely small – the fact is, no one understands this pain unless they have walked in your shoes. When researching Shame Nation, others expressed feeling the same way.

The good news is, with time we learn to rebuild our trust in humanity through our own resilience. This doesn’t happen overnight. It’s about acceptance – understanding that the shame doesn’t define you and that it’s your opportunity to redefine yourself.

The survivors you will read about in Shame Nation have inspiring stories of hope. They all learned to redefine themselves – out of negative experiences. It’s what I did – and realized that many others have done the same.

Tweet this: “no one understands this pain unless they have walked in your shoes.”- Sue Scheff, about victims of online hate. #wetoo

 

Interviewer: Where do you see the biggest risk of being exposed to online sexual harassment?

Sue Scheff: Online reputation and emotional distress.

Today, the majority of businesses and universities will use the Internet to search your name prior to “interviewing” you. How your name survives a Google rinse cycle will dictate your financial future, career- or job-wise.

Just because you have a job doesn’t mean you’re out of hot water. More than 80% of companies have social media policies in place. If your name is involved in sexual misconduct (scandal) online, you could risk losing your job. Colleges are also implementing these social media policies.

Pew Research says the most common way for adults to meet is online. If you’re a victim of cyber-shame, online sexual harassment, revenge porn, or sextortion, this content could hinder your chances of meeting your soul mate.

The emotional distress is overwhelming. You feel powerless and hopeless. Thankfully today there are resources you can turn to for help.

 

Interviewer: Do you think this issue is growing or are we any closer to solving it?

Sue Scheff: Yes… and no.

In a 2017 Pew survey, over 80% of researchers predicted that online harassment will get worse over the next decade – this includes revenge porn and sexual harassment. This is a man-made disaster, and it can only be remedied by each of us taking responsibility for our actions online and educating others. Education is the key to prevention. I believe the #MeToo and Time’s Up movements have brought more awareness to this topic, but I fear not enough is being done about it in the online world. It’s too easy to use a keypad as a legal lethal weapon.

The good news is that we are seeing stronger revenge porn laws being put in place, and more social platforms are responding by removing content when it’s flagged as abusive. Years ago, we didn’t have this – so though progress may be slow, it’s moving in the right direction.

Tweet this: More than 80% of researchers predict that online harassment will get worse over the next decade. The time to act is now! #wetoo

 

Interviewer: What would be your advice to internet users today on how to avoid, prevent and fight harassment?

Sue Scheff: Digital wisdom.

I’m frequently asked, “how can I safely sext my partner?” I give the same answer every time. The Internet and social media were not, and are not, intended for privacy. We only have to think of the Sony email hack or the Ashley Madison leak to know that no one is immune to having their private habits exposed to the world wide web. You should have zero expectation of privacy if sending any sexual message, via text or otherwise. Several studies concur: a majority of adults will share personal and private messages and images of their partner without their partner’s consent.

Your friend today could quickly turn into a foe tomorrow. Divorce rates are climbing – what used to be offline revenge, like charging up your ex’s credit cards, now has longer-term consequences when your nudes or other compromising images or content can go viral. E-venge (such as revenge porn) is how exes will take out their anger. Don’t give them that power.

If you find you are a victim of online harassment or online hate, report it and flag it to the social platform. Be sure to fill out a form outlining how it violates their code of conduct, and email them professionally (never use profanity or a harsh tone).

I encourage victims not to engage with the harasser. Be sure to screenshot the content, then block them. If you feel this is a case that will get worse and needs to be monitored, you can ask a friend to monitor it for you so you don’t have to be emotionally drained by it. I also tell the friend not to engage, and to let you know if it gets to a point where it may need legal attention – if your life is in danger or your business is suffering.

 

Interviewer: What is your opinion on what sites can do to help fight this problem?

Sue Scheff: In a perfect world, there would be stricter consequences offline for the perpetrators, which would hinder them from doing this online in the first place.

Strengthen the gatekeepers: user-friendlier reporting and speedier response times.

Although sites such as Facebook, Twitter, and Instagram are stepping up and want to alleviate online harassment, many people still struggle to figure out the reporting methods – where are the forms? – and then the response time can be troubling, from what victims have shared with me. When you’re a victim of sexual harassment, these posts are extremely distressing – every minute feels like a year.

I personally had a good experience on Facebook – when I wrote about a cyber-stalker on my public page. It was addressed and handled within 48 hours.

Systems should be in place so that if a comment or image is flagged as abusive (harassment) by more than 3-5 unique visitors, it is taken down until it can be investigated by the social platform’s team. I think we can all appreciate that the volume of online abuse reported daily is likely overwhelming for social media platforms; however, I believe they should give us the benefit of the doubt until they can investigate our complaint.

 

Interviewer: What do you think about the idea of using computer vision (AI) to spot and block nude pictures before they are submitted on a dating site?

Sue Scheff: If dating sites were able to implement AI for suspicious content, it would be a great start to cutting back on sexual harassment and keeping users safer.

 

Interviewer: Where can victims turn for support?

Sue Scheff:

 

Are you a victim of online sexual harassment or cyberbullying?

Please heed Sue’s advice and reach out for support.

Are you a site looking to help in the fight?

Contact us to see how AI and content moderation can help keep your users safe.

Sue Scheff

Sue Scheff is a Nationally Recognized Author, Parent Advocate and Internet Safety Advocate. She founded Parents Universal Resources Experts, Inc. in 2001.

She has three published books: Wit’s End, Google Bomb, and her latest, Shame Nation: The Global Epidemic of Online Hate, with a foreword by Monica Lewinsky.

Sue Scheff is a contributor to Psychology Today, HuffPost, Dr. Greene, Stop Medicine Abuse, EducationNation, and others. She has been featured on ABC 20/20, CNN, Fox News, Anderson Cooper, Nightly News with Katie Couric, the Rachael Ray Show, Dr. Phil, and more. Scheff has also appeared in USA Today, the LA Times, The New York Times, the Washington Post, the Wall Street Journal, and AARP, just to name a few.

As product owner for our Implio service, it’s Olivier Vencencius’ job to make sure that our all-in-one content moderation tool evolves in the right way – for clients, moderators, and stakeholders. Just how does he manage to juggle all the different needs and wants while keeping the product vision on track?

Interviewer: Hi Olivier, thanks a lot for taking the time to share your knowledge, I know you are extremely busy! Could you start us off by telling us a little more about you and your time at Besedo.

Olivier: Sure. Well, I’m originally from Belgium (the French-speaking part!), but I’m now based in Besedo’s Malta office, where I’ve been working as product owner for Implio for the last two years. I’ve actually been with the company for the past six years though. I studied IT originally, but started life here as a content moderator for one of our clients. In my free time, I developed content moderation tools because I was fascinated by how even simple tools could help optimize the process.

This led to a role in IT support for our in-house teams, supporting the tools I had created, before I joined the newly set up development team, where I worked on the very first version of what has now become Implio. From there I joined our internal Centre of Excellence, specializing in process improvement. There, I began to oversee and manage the development of different tools and share knowledge and best practices about using them, before taking on my current role as product owner.

You could say that all the different hats I’ve worn at Besedo so far have perfectly prepared me for my current position!

Interviewer: What do you do day-to-day as Implio’s product owner?

Olivier: It’s quite a broad remit, but there are some key things I’m involved with. Essentially I’m in charge of how the product develops, so I work closely with the development team, helping them plan and implement features within Implio, in order to consistently evolve the product. We use Agile methodology, which means we work in an incremental and iterative way – updating and changing feature elements as required.

Work is organized into sprints, so we’ll focus on a particular feature within the product for a two-week period. We’ll have brief daily meetings to discuss progress and issues before getting on with assigned tasks and resolving any concerns.

I’m also responsible for defining the product vision – the why, what, where, and how.  It sets the scope for the product and gives us a base to validate our next objectives and ensure that we always deliver value to our users.

Interviewer: Can you talk us through the process of building a product roadmap and how this helps define what steps need to be taken?

Olivier: Certainly. The roadmap is a list of all the short and long term requirements we’ve gathered about Implio from all of the relevant stakeholders. This includes internal stakeholders from across the company – our content moderators, team leaders and managers, as well as feedback from external sources: our clients and prospective clients, so that we thoroughly understand the features they value and what their pain points are.

We begin the process of reviewing all the feedback with the R&D team; looking closely at the most frequent and important pain points and brainstorming ways to tackle them. Once we’ve established this list of possible improvements we prioritize them based on the value they give and their complexity. All of this goes into our roadmap which always remains tied to our product goals and objectives.

Interviewer: Could you give an example of a particular feature(s) you’ve implemented recently?

Olivier: We are currently focusing on creating a smoother onboarding process for our clients. As part of this, we have been working on a new set of slides that give new users a tour of the product on sign-up. We have also provided users with new customisable settings related to manual moderation.

Another focus point for us is expanding our existing automation capabilities. We recently did that by releasing a new geolocation tool, and we are close to releasing a new set of AI solutions that tackle common moderation problems, such as a language detection tool.

These latter two are specifically related to fraud and scam prevention; allowing us to detect suspicious terms in different languages and hone in on activity taking place in locations that don’t match with a user’s IP address. Our goal with Implio is ensuring that our clients have all the best solutions to catch and prevent scams within one tool.

Interviewer: How do you future-proof Implio? Is that even possible?

Olivier: It all comes from knowing what the current challenges are and taking time to anticipate what’s coming. From my time as a moderator and from our internal, ongoing knowledge sharing, I know the challenges in dealing with user profiles, behavior, and content for online marketplaces, which also apply to dating and sharing-economy sites. I add to this knowledge regularly through user research and interviews.

Thanks to our engineering team and my background in software development, I can easily identify what is involved and what the steps are in developing the best solution for tackling these challenges. Combined, that knowledge and experience give me a pretty good understanding of what we need to do in order to build the right tool for both current and future needs. Within our R&D division, we are also all encouraged to continuously be on the lookout for new solutions and to experiment with new things we believe could make a difference in the product and for our customers, particularly where automated AI and computer vision are concerned.

For specific content moderation needs and trends within trust and safety, we have a full team dedicated to research and internal knowledge sharing, so when a new moderation need surfaces I am informed immediately.

I also work closely with our sales and customer success team to identify the needs of our users. We spend time analyzing what they are trying to achieve and design our solutions so new features don’t just solve a specific problem for one client, but benefit our entire userbase and help them solve issues in a smart and innovative way.

Knowledge sharing and ensuring that all teams work closely together across the company is crucial for understanding what our challenges might be in six, 12, or 18 months’ time – or even further down the line. The timeline for implementation can take a similar amount of time, so understanding trends early is an important aspect of our work and crucial to ensuring that our tool is able to solve the challenges of tomorrow.

Interviewer: Speaking of challenges, what’s the biggest challenge in your job?

Olivier: There’s always a lot to do, which is exciting, but it also means that we need to stay focused and prioritize. The customer’s needs come first, so we need to action the things that are most valuable to them. However, we also need to make sure that what we do balances with the company’s objectives, which involves mapping each feature to the overall product vision so that everything fits together. It can often be a tough decision to make.

Interviewer: And what’s the best or most interesting part of being the product owner for Implio?

Olivier: Having a partnership with customers where we share ideas and discuss feedback. Seeing them be successful and happy with the product is one of the most exciting things about being a product owner!

Product owner Olivier Vencencius

Olivier Vencencius

Olivier has worked with Besedo since 2011. He has held a number of roles within the company and has played an integral part in the development and success of Implio.

Apart from his talent for organization and project leading he is known within the company for the incredible number of cat t-shirts he owns.

Whether you own an online marketplace or a dating site, or manage a sharing-economy platform, falsified information and fraud are an unfortunate part of the package – but they don’t have to interfere with the way your users interact. Do you know what it takes to create a safe and trustworthy experience for your users? Take a look at Besedo’s latest infographic to unpack the importance of it all, and what next steps to take to implement your ultimate content moderation strategy.

Tweet this: Are your content moderation efforts lacking? Take a look at this free approach by @besedo_official:

How Besedo protects people online info graphic

Take a look at how Besedo can help your content moderation strategy, for free! Try our all-in-one filter automation tool to get started.

We talk weapons, water heaters, the challenges of weeding out false positives, and how to create accurate filters with Besedo filter manager Kevin Martinez.

Interviewer: Great to meet you, could you tell us a bit about yourself?
Kevin: I’m Kevin Martinez; originally from Spain, but raised in France, now working out of Besedo’s Malta office. I’ve been with the company for five years. In 2016 I had the honor of setting up Besedo’s first client filter. And we still have the client – so I must have done something right!

Interviewer: Excellent! So, tell us more about what you do.
Kevin: The short answer is ‘I’m a filter manager’. I make sure that our clients’ filters are working as well as they should be – monitoring filter quality across all Besedo assignments.

I manage three filter specialists – two in Colombia and another in Malta. Being from different cultures, speaking different languages, and having a presence in different time zones means we can work with clients across the world.

The longer answer is that I assess decisions that our automated moderation tool Implio has made. Quality checks like these are done at random. I take a sample of processed content – items that have been filter-rejected and filter-approved – and identify whether any mistakes were made. I then learn from these mistakes and make appropriate adjustments to the filter. This way we maintain and improve the accuracy rate of our filters over time.

Quality checks take time, as we’re really thorough. A single one can take half a day! But tracking the quality day-by-day is vital to keeping the filters accurate and it allows us to provide a report with a quality rate for our clients at the end of each month.

Interviewer: That sounds like a complex task… What kind of things are you looking for?
Kevin: Typically, we’re looking for false positives in filters: terms that are correctly filtered according to the criteria set, but aren’t actually prohibited.

Take Italian firearms brand, Beretta, for example. Weapons are prohibited for sale online in some nations, but not in others. So, for many sites a filter rejecting firearms would make sense.

However, there’s another Italian brand called Beretta – but this company manufactures water heaters (!). There’s also a Chevrolet Beretta car, and an American wrestler who goes by Beretta too. The filter can’t tell these completely different things apart until we teach it that they need to be distinguished. So, lots of research is needed to ensure that, say, an ad for Beretta water heater parts isn’t mistakenly rejected from an online marketplace.
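
As a simplified illustration of how context keywords can rescue a brand-name filter (the pattern below is an invented example, not one of Besedo’s production filters):

```python
import re

# Only fire on "beretta" when firearm-related context appears after it;
# a plain keyword match would also catch water heaters, cars, and wrestlers.
# The context word list is illustrative and deliberately short.
FIREARM_CONTEXT = re.compile(
    r"\bberetta\b(?=.*\b(pistol|handgun|9mm|ammo|firearm)\b)",
    re.IGNORECASE | re.DOTALL,
)

print(bool(FIREARM_CONTEXT.search("Beretta 9mm pistol, barely used")))   # True
print(bool(FIREARM_CONTEXT.search("Beretta water heater spare parts")))  # False
```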

A good filter will reduce the time the moderators spend on the content queue and will also reduce the length of time it takes to get a piece of content live on the site. It’s an ongoing process, one that gets better over time: gradually improving automation levels and making the manual moderator’s job a lot easier.

Interviewer: What’s the overall effect of a ‘bad’ filter, then?
Kevin: It depends. If the filter is set up to auto-reject matched words and phrases, it leads to a bad user experience, as genuine ads might get rejected (as the water heater case illustrates). If the filter is set up to send matched content for manual moderation, the automation level decreases. We agree to a certain automation level when we sign a contract with a client, so if there are more items for the manual moderation team to approve, it puts pressure on us to reach our service level agreement.

Interviewer: Which rules are hardest to program into a filter?
Kevin: Scam filters are the most complex to implement; mostly because scams evolve and because scammers are always trying to mimic genuine user behavior. To solve this, we monitor a number of things in order to detect ‘suspicious’ behavior, including email addresses, price discrepancies, specific keywords, IP addresses, payment methods (like PayPal and Western Union) – among other things.

One of the biggest challenges is that on their own, elements like these aren’t suspicious enough to warrant further investigation; so we have to ensure the filter recognizes a combination of them for it to be effective. We perform a lot of research and collaborate closely with clients, to ensure each filter is as accurate as possible.
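
As a simplified illustration of that combination principle (all field names, keyword lists, and thresholds below are invented):

```python
# No single weak signal is damning on its own, but several together
# warrant manual review. Everything here is illustrative.
SCAM_KEYWORDS = {"western union", "shipping agent", "payment upfront"}
SUSPECT_DOMAINS = {"freemail.example", "tempmail.example"}

def scam_signal_count(ad: dict) -> int:
    signals = 0
    signals += ad["price"] < 0.5 * ad["typical_price"]              # suspiciously cheap
    signals += any(k in ad["text"].lower() for k in SCAM_KEYWORDS)  # risky phrases
    signals += ad["email_domain"] in SUSPECT_DOMAINS                # throwaway email
    signals += ad["ip_country"] != ad["claimed_country"]            # location mismatch
    return signals

ad = {"price": 200, "typical_price": 900, "text": "Pay via Western Union only",
      "email_domain": "tempmail.example", "ip_country": "NG", "claimed_country": "GB"}
if scam_signal_count(ad) >= 2:   # one signal alone doesn't trip the filter
    print("route to manual review")
```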

Interviewer: Sounds like you need a lot of expertise! What does it take to be a good filter manager? 
Kevin: You need to understand how moderation works, and most filter specialists have a good grasp of computer programming (particularly the concept of regular expressions) too. But equally, you need to have a curious, analytical, and creative mind.

Admittedly, filter quality checks can be a bit repetitive, but they are very important. Being able to investigate, test, and find ways to set up and improve filters is crucial. This means understanding how the filter will interact with words in practice, not just in theory. The most important thing is to have the drive to keep pushing; to find the perfect solution for the client’s needs.

Interviewer: What do you enjoy the most about your work?
Kevin: I love the beginning of every new project. I help onboard each new client from the very start, setting up the filters and creating a report for them. Each one is different, so lots of investigation is involved as there are different rules to consider: depending on who the client is, what they do, and where they’re based.

As mentioned, rules can differ between countries. For instance, in South America, you don’t need to apply a gender discrimination filter for something like jobs or housing – unthinkable in Europe, which has stringent equality laws.

Each day I look at the quality of the client data by opening a random filter, reviewing the ads going through it, and checking that everything’s working correctly. There are many parameters involved, and it means going over the finer details, but this is the stuff I’m passionate about. I can be quite obsessive about it!

Nothing is impossible. I aim to get the client what they want, and will try again and again to find a creative way to deliver it!

Kevin Martinez interview

Kevin Martinez

Kevin is a filter manager at Besedo. He combines creativity, perseverance, and in-depth research to create highly accurate filters in the all-in-one moderation tool, Implio.

His daily job is to ensure that filters are maintained, tweaked and continuously kept accurate so Besedo’s clients can enjoy a high automation rate without sacrificing user experience.

If you work in content moderation for a classifieds site or an online marketplace, you’ll have probably heard lots of talk about machine learning and tailored AI. No doubt you’ll have wondered about its features, cost, and value.

As a content moderation service provider we’d gladly shout out positive things about tailored AI all day long (!), but we also wanted to give some background into why we believe it works, to shed some light on alternatives, and give some insight into costs.

 

Cost and ROI comparison between tailored AI and generic ML models

In a nutshell, tailored AI is a machine learning algorithm that’s created using a client’s structured and labeled data. By inputting this data, you can teach your AI to learn very specific moderation patterns. It can handle complexity, is self-learning, and will give you a much higher accuracy rate and higher automation levels. It’s much more meaningful and offers better results than generic alternatives, which are less reliable and more error-prone.

At Besedo, for instance, tailored models have achieved automation rates of up to 90% with accuracy levels of up to 99%. That would not be possible using generic, one-size-fits-all models.

Generic AI, while useful when moderating something fixed – like language – can’t handle specific challenges, as it doesn’t learn in the same way as tailored AI. Say you want to set moderation criteria for profile pictures on a dating site. There are lots of things you need to do: ensure users are over 18, censor nudity, make sure there’s a face visible, that no weapons are shown, and that each picture is good quality. These are the requirements of a specific platform. Using several different generic AI models to try and moderate these criteria won’t work as well as a single tailored AI can. But you could always build your own model, right?

While it might seem simpler and less expensive to build your own tailored AI, it often ends up as a costly distraction. Companies can spend years pouring resources into a setup and still never get it exactly right. Creating powerful machine learning moderation models isn’t just a matter of putting a couple of developers on the task; it requires data scientists and semantic experts to make sure the AI keeps learning and performing better. Considering the ongoing cost and complexity, why create your own content moderation algorithm when there are expert companies offering tailor-made solutions? Unless you are a huge company with very specific needs, you wouldn’t develop your own helpdesk or customer service tool. So why go that route with content moderation?

 

The price of AI moderation

So how do you calculate the price of a tailored AI? At Besedo we look at it from a number of angles: volumes, complexity of the moderation actions needed, and languages. We build something bespoke for each client that we do not share with anyone else. The pricing has four components:

  • A setup fee to create the AI model for the client, which involves learning from available client data to build a specific model.
  • A monthly moderation fee, based on projected volumes (starting at a minimum of 200,000 moderated items per month), which covers hosting, software licenses, and maintenance.
  • A monthly professional fee, which includes updates, performance improvements, and rule updates to ensure that your automation rate and performance are always improving.
  • A fixed support fee that gives you 24/7 technical support.

A lot goes into creating a tailored AI, but it is still far more cost-effective than manual moderation, especially over time, and far less expensive than developing your own moderation model. You can’t really compare a tailored approach to a generic one at all, since that would be like comparing a chisel to a sledgehammer: you will not get the accuracy you need and will end up wasting money – as well as time and effort.

All factors considered, by our calculations companies that choose tailored AI can save anywhere between 50% and 90% on manual moderation pricing alone. Surely that’s a worthwhile investment of time and money?

Still not convinced? Get in touch!

What does it take to build a state-of-the-art Artificial Intelligence content moderation tool? We caught up with Besedo’s semantics expert and computational linguistics engineer, Evgeniya Bantyukova.

Interviewer: Nice to meet you! Tell us a little about yourself.

Evgeniya: I’m Evgeniya and I’m based in Besedo’s Paris office. I’m originally from Russia but I’ve been in France for the past five or so years. I started at ioSquare about a year and a half ago, and have continued to work there as part of Besedo since the two companies merged last year.

Interviewer: What do you do? What is your job title and what does it really mean?

Evgeniya: As a computational linguistics engineer, I guess you could describe me as part linguist and part computer programmer. The work I do bridges the gap between what people search for and post online and the way content is moderated.

I work with semantics. This means I spend a lot of time researching information and looking at the different ways words and phrases are presented and expressed. I also build filters to analyze and identify the information I’ve manually researched. It’s an iterative process of constant refinement that takes time to perfect.

The filters can then be used by us, on behalf of our clients, to identify when a certain piece of text using these terms and phrases is submitted to their site; before it gets posted. The ultimate aim is to ensure that incorrect, defamatory, or just plain rude information doesn’t get posted to our clients’ sites.

Interviewer: What kind of projects have you worked on? Could you give us an example? 

Evgeniya: Sure. Recently I was tasked with creating a filter for profanity terms in several different languages – not just the words themselves, but variations on them, like different ways to spell them or alternative phrasings.

This also involved analyzing them and creating a program or model that could detect their use. There was a lot of data capture and testing involved on millions of data points; which helped ensure the filters we built were as effective as possible.

One thing I’m working on right now is a project tackling fake profiles on dating sites: analyzing scam messages and extracting the expressions and words that are most frequently used. One thing I have discovered in this process is that those posting fake profiles often use sequences of adjectives – words like ‘nice’, ‘honest’, or ‘cool’ – so now I’m looking at creating a model that finds profiles fitting that description. That approach on its own would create many false positives, but with discoveries like these we get a much more precise idea of what fake profiles look like, and that helps us create filters that limit the number that go live on our clients’ sites.
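
As a simplified illustration of the adjective-sequence signal Evgeniya describes (the word list and the way it is counted are invented for this sketch), such a check would be used only as one weak signal among several:

```python
# Count occurrences of scammer-favored adjectives in a profile bio.
# On its own this would create many false positives, so it is combined
# with other signals. The seed list is illustrative.
SCAM_ADJECTIVES = {"nice", "honest", "cool", "caring", "loyal"}

def adjective_hits(profile_text: str) -> int:
    words = profile_text.lower().split()
    return sum(word.strip(".,!") in SCAM_ADJECTIVES for word in words)

bio = "I am a nice, honest and cool man looking for a loyal woman"
print(adjective_hits(bio))  # 4 -> one weak signal to weigh alongside others
```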

Interviewer: How does the work you do feed into AI moderation?
Evgeniya: Crafting filters involves working on a set amount of data. The more data we have, the more accurate we can make our filters. It’s an iterative and human-driven process, but engineered to be very precise.

Filters like these, when used as verification models, can help improve the precision and quality of manual content moderation. And when used in combination with our machine learning/deep learning pipeline, they improve our AI’s overall accuracy and efficiency.

The filters I build are quite generic, so they are used as a framework for multiple clients, depending on their moderation needs, and they can be tailored to specific assignments as needed. On top of that, to keep our filters “sharp”, we continuously update them as language evolves and new trends and words appear.

Interviewer: Do you have any heroes or role models that you admire in your field?

Evgeniya: Well, as you might imagine, role models in computational linguistics are kind of hard to come by. But I’m a big fan of theoretical linguists like Noam Chomsky.

Interviewer: What qualities do you need to succeed in your field?

Evgeniya: I think you need to be genuinely curious about the world in general. Every new trend and phenomenon should interest you as they will result in new tendencies and words and that will impact the filters you are crafting.

You also need to have a knack for languages or at least the structure of how different languages are built.

Finally, you need to be open-minded and able to stay objective. When working on a profanity filter, it doesn’t help if you are continuously offended. You need to stay neutral and focus on the endgame: keeping people safe online.

This is why I enjoy my job so much; it’s very rewarding knowing that you are making a difference – whether that’s ensuring that a site is secure for its users or, more generally, seeing the positive impact of something you’ve done. Take dating sites, for instance: the fact that the work I do can help someone find love is the greatest reward I can think of. I guess I’m something of a hopeless romantic!

Evgeniya Bantyukova – portrait

Evgeniya Bantyukova

Evgeniya is a linguistic engineer at Besedo.

She combines her programming and linguistic skills in order to automatically process natural languages.

Her work allows Besedo to build better and more accurate filters and machine learning algorithms.

Here at Besedo we’re thrilled to unveil details of the next steps following our recent merger with Paris-based ioSquare, a leader in automated content moderation!

To ring in the changes, our first step is rebranding the new company. You’ll see our new logo below and we’ll also be launching our fresh new website – complete with a colorful new look – very soon.

besedo logo

After much discussion, we’ve decided to keep ‘Besedo’ as our name and will extend it to all of ioSquare’s products and services. However, our moderation tool, Implio, will keep its name – for now. But the functionality will evolve as we fold lots of ioSquare’s excellent AI moderation products into the mix. The goal here is to create one ultimate tool that can tackle all content moderation challenges on its own.

What won’t change is our commitment to providing customers with a world class service; one that’s able to realistically meet the challenges of User Generated Content moderation as it evolves from being able to verify static text and images, to scrutinise more sophisticated content like videos, virtual reality, and in-app messaging.

Helping Marketplaces Grow with Trust

We’ve also taken the opportunity to take a good long look at our joint ambitions. Besedo will play a more active role in helping online marketplaces grow and build trust online, and we’ll continue to position ourselves at the heart of the content moderation ecosystem. To achieve this, our automated technology needs to continually evolve. The information fed into ioSquare’s deep learning algorithm — which powers the automation element — by Besedo’s human workforce will help us do this: ensuring our cutting-edge machine-learning AI technology reaches new levels of accuracy.

Build trust and brand throughout your company and marketplace with our eBook!

Click the photo to learn how to set up efficient moderation that really works!

moderation in marketplaces blog ebook cta

Trust is also a big part of what we’ll continue to offer: a secure moderation service, powered by both humans and machines, that helps you grow your business through safe and trustworthy user generated content. We’re focused on the long-term, and we are committed to our ambition of providing moderation tools, services and insights that enable a trust-based society online where everyone can engage fearlessly.

We see our new offering adding value right away. There’s a real opportunity to offer a complete moderation tool that combines all the great things about both automation and manual moderation. We know that one size doesn’t fit all and that businesses have varying degrees of moderation needs. Context, speed, and adaptability are key.

That’s why our solutions are scalable: able to handle big data and real time analytics; but agile enough to offer cost-effective moderation to startups and smaller businesses that don’t have enough data to run AI moderation.

Tweet This: How can trust help you build company trust within your #marketplace? Take a look! @besedo_official

Leveraging Human Expertise and AI

With the growth of sharing services and the increasing number of online marketplaces, there’s only going to be a rise in user generated content. Businesses need a trusted partner who can offer a bespoke service that caters to the needs of their company and innately understands the importance of content to their business model. Our intention is to offer that solution — one that’s data driven, high quality and cost effective.

Ultimately, by combining the expertise, talent, and ambition of two companies, we’ve created a strong, market-leading platform that fuses automated machine-learning AI with human-controlled manual moderation and over 15 years of experience within the industry. And that can only be good news for you.

 

moderation for marketplace and company trust CTA