Is your site suffering from ‘marketplace leakage’? If so, it’s because your customers are sharing personal details with each other to avoid paying site fees – and by doing so, they also put themselves at risk. Here’s how to protect both your business and your users from marketplace leakage.
Marketplace leakage (also referred to as ‘breakage’) is a real problem for many online businesses. According to the venture capital firm Samaipata, the term can be defined as ‘what happens when a buyer and seller agree to circumvent the marketplace and continue transacting outside the platform.’
Broadly speaking, there are several ways in which personal details are shared – via listings, embedded in images, and within one-to-one chats. Information shared typically includes phone numbers, email addresses, WhatsApp details, and money transfer account details.
From a user’s perspective, taking a transaction off-platform might seem to make sense. However, many don’t realize the wider ramifications of marketplace leakage and the negative impact it can have on the platforms they transact on – and on their own businesses.
Let’s look more closely at the impact of sharing personal details online via marketplaces and what can be done to prevent it.
How personal details do damage
As we see it, there are 3 key ways in which sharing personal details can have a negative impact.
1. Lost fees and conversions
From eBay to Airbnb, Amazon to Fiverr – the vast majority of marketplaces facilitate the trade of goods and services. As a result, a core part of each platform is its payment infrastructure.
But not only do these solutions offer a trusted way for users to transact, they can also be used to collect fees – a percentage paid for using the platform.
In the early days of a platform’s existence, many sites are free for both buyers and sellers while the marketplace tries to scale and attract as many users as possible. However, once it’s reached a certain threshold and network effects are visible, it’s common to begin charging, often through the transaction.
This is often when users – primarily those selling on these sites – will try to circumvent the platform and include their contact details in each post. It might be that they paste their email address in the product description itself, or create an image that has details included within it.
When this occurs, your marketplace loses out on conversions. It’s something that’s easy to overlook and – on the odd occasion – let slide. But in the long-term, activities like this will seriously dent your revenue generation.
2. Reduced customer loyalty
One of the major differentiating factors between online marketplaces is whether they’re commoditized or non-commoditized – particularly where service-focused platforms are concerned.
While commoditized service providers are about getting something specific fixed, delivered, or completed (think Uber or TaskRabbit), non-commoditized providers (e.g. Airbnb) take into account a number of determining factors – such as location, quality, and available amenities.
Due to the nature of these sorts of services, they are more likely to encourage personal interactions – particularly when repeat transactions with the same vendor are involved. Once trust and reliability are established, there’s little incentive for either party to remain loyal to the platform – meaning conversions are more likely to be forfeited.
Leakage of this nature was partly to blame for the demise of Homejoy – an on-demand home services recruitment platform. The nature of the work involved increased the likelihood of recurring transactions. However, it transpired that, in many cases, the features facilitated by the site – in-person contact, location proximity, and reliable workmanship – were of greater value than the incentives offered by the site itself.
As a result, more and more transactions began happening outside of the marketplace, meaning that the site lost out on recurring revenues.
3. User safety
Losing control of the conversation and having users operate outside of your marketplace increases the risk of them being scammed.
This is particularly prevalent in online dating, where even experienced site users can be duped into providing their personal details to another ‘lonely heart’ in order to take the conversation in a ‘different direction’.
eHarmony offers some great advice on what users should be wary of, but the general rule of thumb is to never disclose personal details of any kind until a significant level of trust between users has been established.
While similar rules apply to online marketplace users too, some telltale signs of a scammer are requests for alternative payment methods – such as bank or money transfers, or even checks.
An urgency to trade outside of the marketplace itself is also a sign to watch for. So it’s important to advise your users to be cautious of traders who share their personal details. Also, make a point of telling them to be wary of vendors who are ‘unable’ to speak directly to them – those who request funds before any arrangements have been made.
In all cases, marketplaces that don’t monitor and prevent this kind of activity put their customers at risk. And if their transaction is taken away from your site, they forfeit the protection and assurances your online marketplace provides.
But unless your users understand the value and security of your platform, they’ll continue to pursue conversations off your site and expose themselves to potential scammers.
Preventing marketplace leakage
The best way to overcome these issues and prevent marketplace leakage is to do all you can, as a marketplace owner, to keep buyer-seller conversations on your site – and to reinforce why it’s in your users’ interest (and, to some extent, yours) to remain on the platform rather than share personal details.
There are several ways to do this.
Strengthen on-site communication
The stronger the communication channels are within your platform, the less incentive there is for customers to navigate away from your site.
From eBay and Airbnb’s messaging functionality (which look and feel like email clients) to one-to-one chat platforms (similar to Facebook Messenger or WhatsApp), or even on-site reviews and ratings – the more user-friendly and transparent you make conversations between different parties, the greater the likelihood they’ll remain on your site. It’s a point we also covered in our webinar about trust building through UX design.
In addition, it’s always worth reinforcing exactly what your marketplace offers users – and reminding them of their place within it. For example, telling them they’re helping build a trust-based peer-to-peer network is a powerful message – one that speaks to each user’s role as part of a like-minded online community.
Provide added value services
If users feel as though there’s no real value to using your site – other than to generate leads or make an occasional purchase – there’s very little chance that you’ll establish any meaningful connection.
The best way to foster user loyalty is to make the experience of using your marketplace better than the alternative. In short, you need to give users a reason to remain on your site.
In addition to safety and security measures, consider incentives, benefits, and loyalty programs for both vendors and buyers.
Turo, the peer-to-peer car rental site, is an example of a company that does this very well – by offering insurance to both car owners and travelers: at once a perk and a security feature.
In a similar way, eBay’s money-back guarantee and Shieldpay’s ‘escrow’ payment service – which ensures all credible parties get paid, whether they’re buying or selling – demonstrate marketplaces acting in both their customers’ interests and their own.
Another way in which marketplaces offer better value is through the inclusion of back-end tools, which can help vendors optimize their sales. Consider OpenTable’s booking solution, for example. The restaurant reservation platform doesn’t just record bookings and show instant availability; it also helps its customers fill empty seats during quieter services.
Platforms that can see past their initial purpose and focus on their customers’ needs are those that thrive. They offer a holistic, integrated solution that addresses a wider range of pain points – a great way of ensuring customers remain loyal to your business, ultimately reducing leakage.
Filter and remove personal details
A relatively straightforward way to prevent marketplace leakage is to monitor and remove any personal details that are posted on your site.
However, this can become quite a task, especially as the amount of user-generated content increases.
The next logical step here would be to direct efforts towards improving your content moderation. Either improve your manual moderation and expand your team or look at setting up an automated moderation solution.
An automated filter is a great way to prevent personal details from being shared. Although the filter creation process can be complex, it’s definitely possible to create highly accurate filters that automatically detect and remove personal details in moderation tools like Implio.
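As a rough illustration of what such a filter does under the hood – this is a generic Python sketch, not Implio’s actual filter syntax – a pattern-based check for emails and phone numbers might look like this:

```python
import re

# Illustrative patterns only. Production filters (e.g. in Implio) need to be
# far more robust against obfuscation such as "john (at) gmail (dot) com".
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
# Seven or more digits, optionally separated by spaces, dashes, or dots.
PHONE_RE = re.compile(r"(?:\+?\d[\s\-.]?){7,}")

def contains_personal_details(text: str) -> bool:
    """Return True if the text appears to contain an email or phone number."""
    return bool(EMAIL_RE.search(text) or PHONE_RE.search(text))

print(contains_personal_details("Great bike! Email me at joe@example.com"))  # True
print(contains_personal_details("Call +46 70 123 45 67 to arrange pickup"))  # True
print(contains_personal_details("Great bike, barely used"))                  # False
```

In practice you would run every new listing and chat message through a check like this and route any matches to rejection or manual review.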
Machine learning is another great automated moderation option that helps prevent personal details from being shared – and much more. Built on your platform-specific data, a tailored AI moderation setup is developed to meet your marketplace’s unique needs. This is a great option for online marketplaces looking for a fully customized solution.
Added value and moderation – a mutual benefit
Trust, security, and accountability are the most valuable features that any marketplace or classifieds site can offer its users. However, they’re not always the most visible components.
But when they’re part of a broader benefit – such as an optimized user experience or a suite of useful features – the need to share personal details and transact away from a site is mitigated.
That said, shared personal details will always contribute to marketplace leakage. And without the right monitoring and moderation processes in place, the challenge is impossible for marketplace owners to overcome.
At Besedo, we work with online marketplaces and classifieds sites to help them make the right choices when it comes to safeguarding their businesses and users by removing personal details.
To learn more about how you can keep personal details off your marketplace, specifically through automated filters, check out our Filter Creation Masterclass on June 25th at 4:00 pm CEST.
If user-generated content is the lifeblood, then the portal is the spine of any successful marketplace. Without a quality portal, your entire business will quickly collapse.
Developing and maintaining the framework of your marketplace can be hard and expensive work that takes focus away from other critical business areas like monetization strategy and product evolution.
Luckily there are a lot of great partners out there providing platform solutions. Russmedia Solutions is one of the most experienced companies in the business, having completed more than 100 successful projects.
We had a chat with their CRO Adrian Daniel about common challenges, pitfalls and tips for marketplace owners setting up their portal.
What does Russmedia do?
Russmedia Solutions is an international company that offers software services for online businesses. Our solutions include job board software as well as dedicated solutions for real estate platforms, car portals, and other online classifieds. We apply a structured approach to development, supported by dedicated project managers and product owners. Today our expertise includes digital marketing, user experience, SEO, and conversion rate optimization.
Some of our key features include semantic search, machine learning-based matching and classification, strong filtering capabilities, user alerts, and very powerful analytics.
We began as the internal software development department of the Russmedia group, but soon started developing online portal solutions beyond our internal products, as requests for our solutions began coming in from external partners. We also took on the development and maintenance of news, job, car, and real estate portals. Since then it’s been 15 years and more than 100 successful projects.
Why should marketplace founders partner with Russmedia to build their marketplace instead of using an in-house team?
We’ve built many projects from scratch – first our own portals and then many for our partners. When you have an in-house team, its accumulated experience is rarely that rich and diverse. Finding a solution outside your company can often prove more effective, not only cost-wise but also from a human resources management perspective. Rather than a handful of people specialized in very specific technologies, you have an entire company at your disposal, with a lot of experience and expertise in multiple areas. You have support around the clock, and you also get to be part of a community.
Having a company that offers software as a service also saves the cost of developing new features or functionalities. Many times we develop these ourselves and then offer them to our clients, either as part of their subscription or at a better cost than if they had developed them in-house.
You’ve recently joined us in a webinar around migrating from one marketplace tech to another, could you give us one example of pitfalls you’ve seen marketplace owners fall into when going through this process?
For sure – we’ve assisted with and conducted many migration processes, some very successful, some with many challenges. From my experience, there is a huge risk if planning has not been done thoroughly. Transparency and setting clear goals are crucial, and unfortunately it happens that some of these checkpoints are not completed.
Here is a very short list:
- Schedule proper training for all users of the new tool
- Make a test migration with live data – it’s important to test with live data, to measure migration timings and uncover possible data issues before the actual migration. Once the data is there, more complete tests can be performed.
- Make sure all involved parties are present on migration date
What are some of the most common challenges marketplaces have when reaching out to Russmedia?
There are multiple situations, but 3 of the most common are:
- Undersized infrastructure – companies that did not plan their growth in advance and went for a lower-budget solution that isn’t scalable, and then end up needing to migrate to a different solution for this very reason.
- The platform they currently use is not flexible enough – we often have prospects tell us their current solution does not support multiple languages, cannot be integrated with certain apps, or simply won’t allow some desired design upgrades.
- Start-up in need of a solution
There are many vendors out there that can help build a marketplace. What sets Russmedia apart?
Indeed, many great companies and many good solutions are out there. I believe our key differentiator is the fact that our platforms are highly customizable. We are very flexible when it comes to integration or any other custom requests. We developed various projects for our clients from classic real-estate portals or general job portals to jobs in aviation or a marketplace for horses. We can adapt our platforms to nearly any language.
Last, but not least our partners become part of a community. We take on the innovation of new features, so the client no longer has to worry about this. Sometimes we give them new features that they did not even consider developing, but which end up being a great revenue stream for them.
Any examples of past projects that you’re really proud of? What made that project, in particular, a success?
We are proud of all our projects: Vol.at, CVOnline.hu, Laendleimmo.at, Simplysalesjobs.co.uk, Laendlejob.at, Immo.tt.com, Jobs.tt.com, just to name a few. If we had to pick one, maybe Laendlejob.at. It is also part of our group, it looks great, it works great, and it is highly profitable.
You’ve just recently partnered with Besedo. How does that fit into your strategy? Does it allow you to provide an even better or more complete offering?
Absolutely. Besedo has great tools. AI-powered moderation tools are something that every marketplace should have. This partnership allows us to give more to our clients and it takes the pressure off our team as we no longer need to invest in research to develop something like this.
Last but not least Besedo’s solutions, like ours, are tailor-made so every client gets their own customized solution.
If you could give one piece of advice to those setting up a marketplace portal what would it be?
Plan. Make some plans, and when you are done, revise those plans. You also need to plan for the long term: take into consideration at least the first three years of your existence, and make sure that the business you build will be scalable.
This should also reflect on the infrastructure you are setting up. It might seem cheaper to get an off-the-shelf solution at first, but after a few months you might realize it was not a great idea: higher costs and more effort are involved – migration, scouting for a different platform, all the back and forth… you lose time and money.
About Adrian Daniel
Adrian is Chief Revenue Officer at Russmedia Solutions. He’s worked his way up in the company, so he knows the business inside out. He started as a junior programmer at Russmedia more than 9 years ago and believes that technology is here to help us.
He mostly enjoys the fact that technology can create endless possibilities to help businesses thrive. Classifieds portals and their growth have been his focus for nearly a decade, and now, as CRO, he is looking at all the potential that Russmedia’s solutions can offer prospective partners.
He is highly motivated and believes that any problem has a solution.
It’s 2018 and AI is everywhere. Every company and their grandmom are now offering AI-powered solutions. With so many options to pick from, does it really matter who you partner with for AI moderation?
When we started building our AI for moderation in 2008, machine learning had hardly been applied to content moderation. Since then others have understood the value automation brings in keeping marketplace users safe.
Every time we go to a tradeshow or conference we see new companies with AI offers and we understand that as the market gets more saturated it can be hard to decide which vendor to bet on.
To help you navigate the AI jungle we wanted to highlight some very specific areas where our AI is unique in the market.
A lot of AI models work based on a sliding scale and the output you get is a probability score. The score gives you a picture of how likely the content piece is to be whatever the algorithm is looking for. So if a content piece receives a high probability score from a model looking to detect unwanted content, there’s a good chance that the content piece falls into that category.
However, a scoring system is often arbitrary. When should you reject an item as a scam? When the probability score is 100%? 99%? Or is 85% good enough?
Our AI doesn’t operate this way. We want to provide our clients with clear answers that they can apply straight away. As such, we don’t send back an abstract score; instead, our algorithm provides a concrete answer.
We operate with 3 different, but clear answers that are easy to apply a moderation action to. The 3 values we expose are OK, NOK (not okay) and uncertain.
Let’s use the unwanted content model as an example. Our algorithms will look at the content and determine whether it’s unwanted. If it is, the model returns “NOK” and you should reject the content piece; if it isn’t, you get “OK” back and you can accept it. If the model isn’t sure, it sends back “Uncertain”. This doesn’t happen often, but when it does you should send the content for manual review.
That’s how simple it is. There’s no grey zone, only clear actionable answers to each content piece you run through the model.
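To make the contrast with score-based systems concrete, here is a minimal Python sketch of how a raw probability could be mapped to the three answers described above. The thresholds are purely illustrative assumptions, not Besedo’s actual calibration:

```python
def to_decision(probability: float, reject_at: float = 0.9, accept_at: float = 0.1) -> str:
    """Map a model's 'unwanted content' probability to a clear moderation answer.

    The 0.9 / 0.1 thresholds are hypothetical; in a real setup they would be
    calibrated against business objectives and measured error rates.
    """
    if probability >= reject_at:
        return "NOK"  # confidently unwanted: reject
    if probability <= accept_at:
        return "OK"  # confidently fine: accept
    return "Uncertain"  # grey zone: route to manual review

print(to_decision(0.97))  # NOK
print(to_decision(0.02))  # OK
print(to_decision(0.55))  # Uncertain
```

The point of exposing only the three answers is that the calibration work happens on the vendor’s side, so the marketplace never has to pick thresholds itself.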
A holistic AI approach
We believe that the value of AI is often mistakenly judged on the accuracy of the models alone. The reality is that it’s more complex than that. To explain why we need to get a bit technical and quickly outline a bit of AI terminology. (If you are interested you can read more about the basic concepts of AI moderation here)
When evaluating an AI there are multiple KPIs you can look at; accuracy is just one of them. To determine if our AI is performing to our standards, we look at a wide array of metrics. We can’t cover them all in this article, but here are some of the most important ones.
Precision is a number that describes how often the model’s predictions were actually correct. If there are 100 content pieces and the machine determines 10 of them to be unwanted content, but only 8 of them actually were, then the model has a precision of 80%.
Recall shows how many of the actual unwanted content pieces the algorithm correctly identifies. If we go back to our example with 100 content pieces: the AI correctly identified 8 unwanted content pieces, but there were 16 in total. In this case, the recall of the model is 50%, as it only found half of the unwanted content present.
Accuracy describes the share of decisions the model gets correct. If we have 100 content pieces and 16 of them are unwanted content, the accuracy of the model will be negatively impacted both by the unwanted content it fails to identify and by any good content it wrongly identifies as bad.
This means that if a model out of 100 content pieces correctly identified 8 unwanted content when there were 16 present and it wrongly identified 2 good content pieces as unwanted content the model would have an accuracy of 90%.
Automation rate is a way to measure exactly how much of the total content volume is being handled by AI. If you have 100,000 content pieces per day and 80,000 of them are dealt with by the models, then you have an automation rate of 80%.
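The metrics above can be written down in a few lines of Python, using the article’s running example (100 items, 16 truly unwanted, of which the model flags 10 and gets 8 right):

```python
def moderation_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Compute precision, recall, and accuracy from a confusion matrix.

    tp: unwanted items correctly flagged    fp: good items wrongly flagged
    fn: unwanted items missed               tn: good items correctly passed
    """
    total = tp + fp + fn + tn
    return {
        "precision": tp / (tp + fp),
        "recall": tp / (tp + fn),
        "accuracy": (tp + tn) / total,
    }

def automation_rate(automated: int, total: int) -> float:
    """Share of the total content volume handled by the models."""
    return automated / total

# Running example: 8 true positives, 2 false positives, 8 misses, 82 true negatives.
print(moderation_metrics(tp=8, fp=2, fn=8, tn=82))
# {'precision': 0.8, 'recall': 0.5, 'accuracy': 0.9}
print(automation_rate(80_000, 100_000))  # 0.8
```

Note how the same predictions yield very different numbers per metric – which is why no single figure tells the whole story.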
When judging how well an AI works, we believe it needs to be based on how it performs across all four of these metrics, as that will give you a truer picture of how well the AI is dealing with your content challenges.
You can never have perfect accuracy, precision, recall, and automation at the same time. Our AI is unique in that it is calibrated to meet your business objectives and to find the right balance between all of these indicators.
Supervised and continuous learning
Machine learning models can be taught in different ways and the way they are taught has a huge impact on how well they perform.
Our AI is trained on structured, labeled data of high quality. This means that the data sets we train our models on have been reviewed manually by expert content moderators, who have made a yes-or-no decision on every single piece of content.
We also retrain the models regularly, ensuring that they stay current and adhere to new rules and to global changes or events that could impact moderation decisions.
A calibrated solution
One of the benefits of designing our AI with an eye on multiple metrics is that we can tailor-make a solution to ensure the perfect fit for your business.
We have multiple levers we can pull to adjust the output allowing us to tweak accuracy and automation ensuring that everything is calibrated as your business requires.
With our solution the accuracy and degree of automation are elastic and that makes our AI setup much more flexible than other available options.
Adaptive AI Solution
One of the few drawbacks of Machine Learning is that it’s rigid and static. To change the model, you need to retrain it with a quality dataset. This makes it hard for most AI setups to deal with sudden changes in policies.
We’ve solved this problem by deeply integrating our AI into our content moderation tool, Implio. Implio has a powerful filter feature which adds flexibility to the solution, so you can quickly adapt to change.
For example, when a new iPhone comes out, the AI models will not pick up the new scams until they have been trained on a new dataset that includes them – but you can add filters in Implio until there’s time to update the machine learning models. The same is true for temporary events like the Olympic Games or global disasters, except that these are over so quickly that retraining is likely not feasible. Instead, you can add Implio filters that ensure high accuracy even during times with special moderation demands.
In addition, we have a team dedicated to studying moderation trends and best practices and all our AI customers benefit from their knowledge and our 16 years of experience to support and guide them.
ML Tailored to content moderation
Most of the AI solutions on the market were created to solve a general problem that occurs in multiple industries. This means that the AI works okay for most companies, but it’s never a perfect fit.
We took the other route and dedicated our efforts to create an AI that’s perfect for content moderation.
When we develop our AI we do it based on the 16 years of experience we have helping companies of all sizes keep their users safe and the quality of their user-generated content high. That has made our stack uniquely tailored to content moderation ensuring unparalleled results in our field.
We also have a team of experts supporting our AI developers with insights, internal learnings from moderating global sites of all sizes and research into industry trends and the challenges faced by online marketplaces and classifieds in particular.
Our research team feeds their insights to Besedo as a whole ensuring a high level of expertise at every level of our organization. From moderation agents to managers and developers. This ensures that our experience and expertise is infused into all our services and products.
Get an AI solution that fits your needs
There is no question about it, AI will play a huge role in marketplace growth over the next couple of years. However, to truly benefit from machine learning, make sure you get models that will work well for you.
We often talk to marketplace owners who have become slightly disillusioned after testing AI solutions that weren’t properly calibrated for their business. They have wasted time implementing a solution that didn’t solve their issue in a proper way and now they are wary of AI as a whole.
That’s a shame; when applied correctly, AI is a great money saver and provides other benefits like fast time-to-site and user privacy protection.
To avoid spending money on the wrong AI, have a chat with our solution designers and they will give you a good idea of which setup would work for you and the results you can expect. Together you can tailor a solution that fits your exact needs.
Working with user-generated content moderation is not an easy task. Moderators need to be able to spot the slightest details to find fraud, scams, counterfeits, and more. It’s therefore important to give your manual workforce the best possible conditions to work efficiently. To help simplify work for both you and your moderators, we’re introducing multiple queues in our all-in-one content moderation tool, Implio. This new feature will help you streamline your manual moderation setup and your moderators’ day-to-day work.
How do multiple queues help my site?
We’ve implemented multiple queues to be as flexible as possible. This means that you can decide how many queues to create, edit their functions and names, or delete them whenever you want. Make sure to customize your queues so that daily operations run as smoothly as possible.
There are numerous ways multiple queues can be of use to your online marketplace. One way is to create a queue per language supported by your site. Utilize geolocation, available in Implio, to ensure that content is automatically placed in the correct queue, making it easier for your moderation teams to specialize and work with one language only.
This use of multiple queues is very valuable to multi-language marketplaces, but our new Implio feature can also help marketplaces who only support one language. Multiple queues can, for instance, be set up to automatically sort content based on price, category or risk and funnel it into different queues allowing you to direct it to specific expert teams or agents.
You can also create queues for time-sensitive items that need a shorter SLA – for example, funnel flagged content or new users into individual queues.
In Implio, we always have two predetermined queues, one default queue and one escalated queue. Your moderators can easily select which queue to work in from the manual interface. When working in a specific queue, your moderators can escalate an item to a supervisor at any time or send the item to another queue.
Multiple queues help you enable specialized moderation teams, which will simplify your moderators’ day-to-day work and make your overall moderation setup more effective.
How does it work?
Begin by creating a new queue in Implio. Then navigate to automation and create a new rule. Set the rule to send matching content to the queue you just created. Here’s what a language queue setup looks like:
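In spirit, such a rule routes each incoming item to a queue based on an attribute like detected language. The sketch below is generic Python pseudologic, not Implio’s actual rule syntax, and the queue names are hypothetical:

```python
# Illustrative sketch only. Implio rules are configured in its interface,
# not written as Python; the queue names here are made up.
QUEUE_BY_LANGUAGE = {"en": "English queue", "fr": "French queue", "es": "Spanish queue"}
DEFAULT_QUEUE = "Default queue"

def route_to_queue(item: dict) -> str:
    """Pick a moderation queue based on the item's detected language."""
    return QUEUE_BY_LANGUAGE.get(item.get("language"), DEFAULT_QUEUE)

print(route_to_queue({"title": "Vélo à vendre", "language": "fr"}))  # French queue
print(route_to_queue({"title": "Bike for sale", "language": "da"}))  # Default queue
```

Anything without a matching rule falls through to the default queue, which mirrors how unmatched content behaves in Implio.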
Try it out yourself.
Create your very own account in Implio – it’s free to use for up to 10,000 items per month. Follow the steps above to set up your own queues. Make sure to use the CSV importer so you can test multiple queues, and all the other features available in Implio, with your very own data.
If you want to learn more about multiple queues and Implio, get in touch with one of our content moderation experts.
Every feature we include in Implio has been carefully chosen based on feedback from stakeholders (internal and external) and after careful analysis of current and future needs within the industry (read more about how we plan our roadmap). As such it is always exciting when we launch something new since we know it is anticipated by our users and will increase their efficiency and quality of life when working in our tool.
Our developers work hard to ensure regular updates and feature additions to Implio. Here are the biggest improvements we released in 2017.
Debuting almost an entire year ago, this particular feature helps manual moderators create a customized UI template in Implio. This allows users to display the necessary moderation information whichever way suits them best. For example, they could configure the layout to prioritize the image shown, user details, customer data, and moderation feedback – among other information.
Our second big feature of last year was the new Implio search tool. Never underestimate the power, speed, and usefulness of a good search function! The always-visible search bar is found at the top of each page within Implio. Users can search by keywords, exact quotes, and specific contact information – including email addresses and phone numbers.
The results can be ordered by relevance, newest first, or oldest first, and displayed as a list or with images. We think this feature is going to be particularly useful for moderators as they review posts or monitor accounts and items coming into Implio.
In May we launched Implio’s updated manual interface. It was the culmination of months of hard work from our developers, especially our front-end team.
We spent a lot of time performing usability tests and getting client feedback, fine-tuning the new interface to make sure it benefits everyone.
Key improvements added to this version include:
- Data is organized to follow the API’s structure, to make things more consistent.
- Revisions of a single item are grouped together so that the moderator only reviews the latest version and can disregard previous ones.
- Content can be edited directly within the page. Plus type and category can be changed using a simple drop-down menu.
- A status bar helps you track your progress on the page.
- It’s also much easier (for a developer) to configure a number of settings for each client. These include the number of items displayed per page, and the ability to enable or disable pre-approved items in the queue.
Our fourth biggest Implio feature involved the rollout of different user role permissions. Each user role now comes with a specific list of permissions, allowing admins, automation specialists, moderators, and spectators full or restricted access to certain functionalities. As you’d expect, admins have the greatest level of authority, but being able to manage rules and search items will undoubtedly make moderators’ jobs a lot easier.
Our final feature for 2017 launched just before Christmas: the geolocation filter – which we've covered in a dedicated blog post.
Essentially it’s used to detect inconsistencies between where users say they’re based, and where their IP address actually shows them to be; ideal for helping online marketplace owners protect their users from scammers.
Geolocation is fully integrated into Implio and is visible in the manual moderation interface. However, users can also create their own rules, helping them quickly compare information, making it easier for moderators to detect discrepancies.
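To make the idea concrete, here is a minimal sketch of a geolocation-mismatch rule in the spirit of the feature described above. The field names and the lookup table are illustrative assumptions, not Implio's actual API; a real system would resolve IPs through a GeoIP database.

```python
def ip_country(ip_address, geo_db):
    """Resolve an IP address to a country code via a prebuilt lookup table."""
    # A production system would query a GeoIP database; a dict stands in here.
    return geo_db.get(ip_address, "UNKNOWN")

def location_mismatch(listing, geo_db):
    """Flag listings whose stated country differs from the IP's country."""
    stated = listing["stated_country"]
    resolved = ip_country(listing["ip_address"], geo_db)
    return resolved != "UNKNOWN" and stated != resolved

# Hypothetical data: the user claims to be in France, but the IP resolves elsewhere.
geo_db = {"203.0.113.7": "NG", "198.51.100.2": "FR"}
listing = {"stated_country": "FR", "ip_address": "203.0.113.7"}
print(location_mismatch(listing, geo_db))  # True: stated country and IP disagree
```

A matching rule like this surfaces the discrepancy for a moderator rather than auto-rejecting, since VPNs and travel can produce legitimate mismatches.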
So… what does 2018 hold? Don't worry, there's a whole lot more where these came from! We already have a number of features, functions, and updates planned for the next 12 months.
Watch. This. Space.
Sue Scheff is an author, parent advocate and cyber advocate who is promoting awareness of cyberbullying and other online issues. She is the author of three books, Wit’s End, Shame Nation and Google Bomb.
We had the opportunity to conduct an interview with her, in which she talked about victims' experiences of online sexual harassment and online shaming, and shared her opinion on what sites can do to help fight the problem.
Interviewer: Hi Sue, thanks a lot for taking the time to share your knowledge, I know you are extremely busy! You’re the author of Shame Nation and Google Bomb, what were you hoping to achieve by releasing them?
Sue Scheff: Awareness. Most importantly, giving a voice to the voiceless.
After I wrote Google Bomb I was stunned by the outpouring of people from all walks of life – from all over the world – who contacted me with their stories of Internet defamation, shaming, and harassment. People like myself were silently suffering from cyber-bullets; some were on the verge of financial ruin, and all were emotionally struggling.
Google Bomb was the roadmap to helping people know there are legal ramifications and consequences of online behavior.
By 2012, I was taken aback by the constant headlines of bullycide. Names like Tyler Clementi, Amanda Todd, Rebecca Sedwick, Audrie Pott – I knew how they felt – like there was no escaping this dark hole of cyber-humiliation. At 40 years old, when this happened to me, I had the maturity to know it would eventually get better. These young people didn't.
Google Bomb was the book to help people understand their legal rights, but with the rise of incivility online, Shame Nation needed to be written to help people know they can survive digital-embarrassment, revenge porn, sextortion and other forms of online hate. I packed this book with over 25 contributors and experts from around the world – to share their first-hand stories to help readers know they can overcome digital disaster. I also include digital wisdom for online safety and survival.
Interviewer: You’re a victim of online harassment and won a landmark case of internet defamation and invasion of privacy. Can you please try to explain your experience?
Sue Scheff: In 2003, I was attacked online by what I refer to as a disgruntled client – a woman that clearly didn't like me. Once she started her attack, a gang-like mentality of trolls joined in. Together they created a smear campaign that took an evil twist: calling me a child abuser, saying I kidnapped kids and exploited families, labeling me a crook, and more. Things took a sexual turn when they claimed to be auctioning my panties (of course, they had never met me – or had anything of mine). But to anyone reading this, how do you explain that these are malicious trolls out to destroy you?
As an educational consultant, I help families with at-risk teens find residential treatment centers. These online insults nearly destroyed me. I ended up having to close my office, hire an attorney and fight.
By 2006 I was both emotionally and financially crippled. In September 2006 I won the landmark case in Florida for Internet defamation and invasion of privacy, with an $11.3M jury verdict. Lady Justice cleared my name, but the Internet never forgets. Fortunately for me, the first online reputation management company opened its doors that summer. I was one of their first clients. To this day I say my lawyer vindicated me – but it's ORM that gave me my life back.
Interviewer: You’ve also met many other victims of online harassment, online shaming, revenge porn etc. How are victims affected, both in short and long-term?
Sue Scheff: Trust and resilience.
I’ve spoken to many victims of online hate. The most common theme I hear is the lack of trust they initially have in others, both online and offline. For me, I became very isolated and reserved. My circle of trusted friends became extremely small – the fact is, no one understands this pain unless they have walked in your shoes. When researching Shame Nation, others expressed feeling the same way.
The good news is, with time we learn to rebuild our trust in humanity through our own resilience. This doesn’t happen overnight. It’s about acceptance – understanding that the shame doesn’t define you and it’s your opportunity to redefine yourself.
The survivors you will read about in Shame Nation have inspiring stories of hope. They all learned to redefine themselves – out of negative experiences. It’s what I did – and realized that many others have done the same.
Interviewer: Where do you see the biggest risk of being exposed to online sexual harassment?
Sue Scheff: Online reputation and emotional distress.
Today, the majority of businesses and universities will use the Internet to search your name prior to “interviewing” you. How your name survives a Google rinse cycle can dictate your financial future, career- and job-wise.
Just because you have a job – doesn’t mean you’re out of hot water. More than 80% of companies have social media policies in place. If your name is involved in sexual misconduct (scandal) online – you could risk losing your job. Colleges are also implementing these social media policies.
Pew Research says the most common way for adults to meet is online. If you’re a victim of cyber-shame, online sexual harassment, revenge porn, or sextortion, this content could hinder your chances of meeting your soul mate.
The emotional distress is overwhelming. You feel powerless and hopeless. Thankfully today there are resources you can turn to for help.
Interviewer: Do you think this issue is growing or are we any closer to solving it?
Sue Scheff: Yes… and no.
In a 2017 Pew survey, over 80% of researchers predicted that online harassment will get worse over the next decade – this includes revenge porn and sexual harassment. This is a man-made disaster, and can only be remedied by each of us taking responsibility for our actions online and educating others. Education is the key to prevention. I believe the #MeToo and Time’s Up movements have brought more awareness to this topic, but I fear not enough is being done about it in the online world. It’s too easy to use a keypad as a lethal weapon.
The good news is that we are seeing stronger revenge porn laws being put in place, and more social platforms are responding by removing content when it’s flagged as abusive. Years ago, we didn’t have this – so though it may be slow, things are moving in the right direction.
Interviewer: What would be your advice to internet users today on how to avoid, prevent and fight harassment?
Sue Scheff: Digital wisdom.
I’m frequently asked, “how can I safely sext my partner?” I give the same answer every time: the Internet and social media were not, and are not, intended for privacy. We only have to think of the Sony email hack or the Ashley Madison leak to know that no one is immune to having their private habits exposed to the world wide web. You should have zero expectation of privacy when sending any sexual message, via text or otherwise. Several studies concur: a majority of adults will share personal and private messages and images of their partner without their partner’s consent.
Your friend today could quickly turn into a foe tomorrow. Divorce rates are climbing – what used to be offline revenge, like charging up your ex’s credit cards, now has longer-term consequences when your nudes or other compromising images or content can go viral. E-venge (such as revenge porn) is how exes will take out their anger. Don’t give them that power.
If you find you are a victim of online harassment or online hate – report it and flag it to the social platform. Be sure to fill out a form – outlining how it’s violating their code of conduct – email them professionally (never use profanity or a harsh tone).
I encourage victims not to engage with the harasser. Be sure to screenshot the content – then block them. If you feel this is a case that will get worse and it needs to be monitored, you can ask a friend to monitor it for you so you don’t have to be emotionally drained from it. I also tell the friend not to engage – and to let you know if it gets to a point that it may need legal attention – that your life is in danger or your business is suffering.
Interviewer: What is your opinion on what sites can do to help fight this problem?
Sue Scheff: In a perfect world – we would say stricter consequences offline for the perpetrators – which would hinder them from doing this online in the first place.
Strengthen the gatekeepers: user-friendlier reporting and speedier response times.
Although sites such as Facebook, Twitter and Instagram are stepping up and want to alleviate online harassment, many people still struggle to figure out the reporting methods. Where are the forms? And after that, the response time can be troubling – from what victims have shared with me. When you’re a victim of sexual harassment, these posts are extremely distressing – every minute feels like a year.
I personally had a good experience on Facebook – when I wrote about a cyber-stalker on my public page. It was addressed and handled within 48 hours.
Systems should be in place that if a comment/image is flagged as abusive (harassment) by more than 3-5 unique visitors, it should be taken down until it can be investigated by the social platform’s team. I think we can relate to the fact that online abuse reported daily is likely overwhelming social media platforms – however, I believe they should give us the benefit of the doubt until they can investigate our complaint.
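The threshold system Sue proposes could be sketched roughly as follows. This is a hypothetical illustration of the rule "hide after 3-5 unique flaggers, pending investigation", not any platform's actual logic; the data shapes and threshold are assumptions.

```python
FLAG_THRESHOLD = 3  # lower bound of the 3-5 range suggested in the interview

def should_hide(flags, threshold=FLAG_THRESHOLD):
    """Return True once the number of *unique* reporters reaches the threshold.

    Counting unique reporters (not raw flag events) stops a single user
    from triggering a takedown by flagging repeatedly.
    """
    unique_reporters = {f["reporter_id"] for f in flags}
    return len(unique_reporters) >= threshold

flags = [
    {"reporter_id": "u1", "reason": "harassment"},
    {"reporter_id": "u2", "reason": "harassment"},
    {"reporter_id": "u1", "reason": "harassment"},  # duplicate reporter, not double-counted
    {"reporter_id": "u3", "reason": "nudity"},
]
print(should_hide(flags))  # True: three unique reporters
```

Hidden content would then enter a human review queue, matching the "taken down until it can be investigated" step described above.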
Interviewer: What do you think about the idea of using computer vision (AI) to spot and block nude pictures before they are submitted on a dating site?
Sue Scheff: If dating sites were able to implement AI for suspicious content, it would be a great start to cutting back on sexual harassment and keeping users safer.
Interviewer: Where can victims turn for support?
Are you a victim of online sexual harassment or cyberbullying?
Please heed Sue’s advice and reach out for support.
Are you a site looking to help in the fight?
Contact us to see how AI and content moderation can help keep your users safe.
Sue Scheff is a Nationally Recognized Author, Parent Advocate and Internet Safety Advocate. She founded Parents Universal Resources Experts, Inc. in 2001.
She has 3 published books, Wit’s End, Google Bomb and her latest, Shame Nation: The Global Epidemic of Online Hate with a foreword by Monica Lewinsky.
Sue Scheff is a contributor for Psychology Today, HuffPost, Dr. Greene, Stop Medicine Abuse, EducationNation, and others. She has been featured on ABC 20/20, CNN, Fox News, Anderson Cooper, Nightly News with Katie Couric, the Rachael Ray Show, Dr. Phil, and more. Scheff has also appeared in USA Today, the LA Times, the New York Times, the Washington Post, the Wall Street Journal, and AARP, just to name a few.
As product owner for our Implio service, it’s Olivier Vencencius’ job to make sure that our all-in-one content moderation tool evolves in the right way – for clients, moderators, and stakeholders. Just how does he manage to juggle all the different needs and wants while keeping the product vision on track?
Interviewer: Hi Olivier, thanks a lot for taking the time to share your knowledge, I know you are extremely busy! Could you start us off by telling us a little more about you and your time at Besedo.
Olivier: Sure. Well, I’m originally from Belgium (the French-speaking part!), but I’m now based in Besedo’s Malta office where I’ve been working as product owner for Implio for the last two years. I’ve actually been working for the company for the past six years though. I studied IT originally, but started life here as a content moderator for one of our clients. In my free time I developed content moderation tools because I was fascinated with how even simple tools could help optimize the process.
This led to a role in IT support for our in-house teams, supporting the tools I had created, before joining the newly set-up development team, where I worked on the very first version of what has now become Implio. From there I joined our internal Centre of Excellence, specializing in process improvement. There, I began to oversee and manage the development of different tools, and to share knowledge and best practice about using them, before taking on my current role as product owner.
You could say that all the different hats I’ve worn at Besedo so far have perfectly prepared me for my current position!
Interviewer: What do you do day-to-day as Implio’s product owner?
Olivier: It’s quite a broad remit, but there are some key things I’m involved with. Essentially I’m in charge of how the product develops, so I work closely with the development team, helping them plan and implement features within Implio, in order to consistently evolve the product. We use Agile methodology, which means we work in an incremental and iterative way – updating and changing feature elements as required.
Work is organized into sprints, so we’ll focus on a particular feature within the product for a two-week period. We’ll have brief daily meetings to discuss progress and issues before getting on with assigned tasks and resolving any concerns.
I’m also responsible for defining the product vision – the why, what, where, and how. It sets the scope for the product and gives us a base to validate our next objectives and ensure that we always deliver value to our users.
Interviewer: Can you talk us through the process of building a product roadmap and how this helps define what steps need to be taken?
Olivier: Certainly. The roadmap is a list of all the short and long term requirements we’ve gathered about Implio from all of the relevant stakeholders. This includes internal stakeholders from across the company – our content moderators, team leaders and managers, as well as feedback from external sources: our clients and prospective clients, so that we thoroughly understand the features they value and what their pain points are.
We begin the process of reviewing all the feedback with the R&D team; looking closely at the most frequent and important pain points and brainstorming ways to tackle them. Once we’ve established this list of possible improvements we prioritize them based on the value they give and their complexity. All of this goes into our roadmap which always remains tied to our product goals and objectives.
Interviewer: Could you give an example of a particular feature(s) you’ve implemented recently?
Olivier: We are currently focusing on creating a smoother onboarding process for our clients. As part of this we have been working on a new set of slides that gives new users a tour of the product on sign-up. We have also provided users with new customizable settings related to manual moderation.
Another focus point for us is expanding our existing automation capabilities. We recently did that by releasing a new geolocation tool, and we are close to releasing a new set of AI solutions to tackle common moderation problems, such as a language detection tool.
These latter two are specifically related to fraud and scam prevention, allowing us to detect suspicious terms in different languages and home in on activity taking place in locations that don’t match a user’s IP address. Our goal with Implio is to ensure that our clients have all the best solutions to catch and prevent scams within one tool.
Interviewer: How do you future-proof Implio? Is that even possible?
Olivier: It all comes from knowing what the current challenges are and taking time to anticipate what’s coming. From my time as a moderator, and from our internal, ongoing knowledge sharing, I know the challenges of dealing with user profiles, behavior, and content for online marketplaces – challenges that also apply to dating and sharing-economy sites. I add to this knowledge regularly through user research and interviews.
Thanks to our engineering team and my background in software development, I can easily identify what’s involved and what steps are needed to develop the best solution for tackling these challenges. Combined, that knowledge and experience gives me a pretty good understanding of what we need to do in order to build the right tool for both current and future needs. Within our R&D division we are also all encouraged to be continuously on the lookout for new solutions and to experiment with things we believe could make a difference in the product and for our customers, particularly where automated AI and computer vision are concerned.
For specific content moderation needs and trends within trust and safety we have a full team dedicated to research and internal knowledge sharing so when a new moderation need surfaces I am informed immediately.
I also work closely with our sales and customer success team to identify the needs of our users. We spend time analyzing what they are trying to achieve and design our solutions so new features don’t just solve a specific problem for one client, but benefit our entire userbase and help them solve issues in a smart and innovative way.
Knowledge sharing and ensuring that all teams work closely together across the company is crucial for understanding what our challenges might be in six, 12, or 18 months’ time – or even further down the line. The timeline for implementation can take a similar amount of time, so understanding trends early is an important aspect of our work and crucial to ensuring that our tool is able to solve the challenges of tomorrow.
Interviewer: Speaking of challenges, what’s the biggest challenge in your job?
Olivier: There’s always a lot to do, which is exciting, but it also means that we need to stay focused and prioritize. The customer’s needs come first so we need to action things that are most valuable to them. However, we also need to make sure that what we do balances with the company’s objectives; which involves mapping each feature to the overall product vision so that everything fits together. It can often be a tough decision to make.
Interviewer: And what’s the best or most interesting part of being the product owner for Implio?
Olivier: Having a partnership with customers, where we share ideas and discuss feedback. Seeing them being successful and happy with the product is one of the most exciting things about being a product owner!
Olivier has worked with Besedo since 2011. He has held a number of roles within the company and has played an integral part in the development and success of Implio.
Apart from his talent for organization and project leading he is known within the company for the incredible number of cat t-shirts he owns.
Whether you own an online marketplace or dating site, or manage a sharing-economy platform, falsified information and fraud are an unfortunate part of the package – but they don’t have to interfere with the way your users interact. Do you know what it takes to create a safe and trustworthy experience for your users? Take a look at Besedo’s latest infographic to unpack the importance of it all, and what next steps to take to implement your ultimate content moderation strategy.
Take a look at how Besedo can help your content moderation strategy, for free! Try our all-in-one filter automation tool to get started.
We talk weapons, water heaters, challenges of weeding out false positives and how to create accurate filters with Besedo filter manager, Kevin Martinez.
Interviewer: Great to meet you, could you tell us a bit about yourself?
Kevin: I’m Kevin Martinez; originally from Spain, but raised in France, now working out of Besedo’s Malta office. I’ve been with the company for five years. In 2016 I had the honor of setting up Besedo’s first client filter. And we still have the client – so I must have done something right!
Interviewer: Excellent! So, tell us more about what you do.
Kevin: The short answer is ‘I’m a filter manager’. I make sure that our clients’ filters are working as well as they should be – monitoring filter quality across all Besedo assignments.
I manage three filter specialists – two in Colombia and another in Malta. Being from different cultures, speaking different languages, and having a presence in different time zones means we can work with clients across the world.
The longer answer is that I assess decisions that our automated moderation tool Implio has made. Quality checks like these are done at random. I take a sample of processed content – items that have been filter-rejected and filter-approved – and identify whether any mistakes were made. I then learn from these mistakes and make appropriate adjustments to the filter. This way we maintain and improve the accuracy rate of our filters over time.
Quality checks take time, as we’re really thorough. A single one can take half a day! But tracking the quality day-by-day is vital to keeping the filters accurate and it allows us to provide a report with a quality rate for our clients at the end of each month.
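The sampled quality check Kevin describes could be sketched as follows: draw a random sample of automated decisions, compare each against a human re-review, and report an accuracy rate. The record fields, sample size, and numbers are illustrative assumptions, not Besedo's actual reporting format.

```python
import random

def quality_check(decisions, sample_size, seed=42):
    """decisions: list of {'auto': 'approve'/'reject', 'human': ...} records.

    Returns the fraction of sampled automated decisions that match the
    human reviewer's verdict. A fixed seed keeps the report reproducible.
    """
    rng = random.Random(seed)
    sample = rng.sample(decisions, min(sample_size, len(decisions)))
    correct = sum(1 for d in sample if d["auto"] == d["human"])
    return correct / len(sample)

# Hypothetical month of decisions: 90 correct approvals, 10 false rejections.
decisions = (
    [{"auto": "approve", "human": "approve"}] * 90
    + [{"auto": "reject", "human": "approve"}] * 10
)
print(f"accuracy: {quality_check(decisions, 50):.0%}")
```

The disagreements found in the sample are exactly the cases a filter manager would then investigate and use to adjust the filter.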
Interviewer: That sounds like a complex task… What kind of things are you looking for?
Kevin: Typically, we’re looking for false positives in filters: terms that are correctly filtered according to the criteria set, but aren’t actually prohibited.
Take Italian firearms brand, Beretta, for example. Weapons are prohibited for sale online in some nations, but not in others. So, for many sites a filter rejecting firearms would make sense.
However, there’s another Italian brand called Beretta – but this company manufactures water heaters (!). There’s also a Chevrolet Beretta car, and an American wrestler who goes by Beretta too. The filter can’t distinguish between these completely different things until we know they need to be distinguished. So, lots of research is needed to ensure that, say, an ad for Beretta water heater parts isn’t mistakenly rejected from an online marketplace.
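A toy version of this disambiguation problem shows why context matters: a bare brand-name pattern misfires on the water heater and the car, while a pattern that also requires weapon-related context does not. The patterns below are illustrative sketches, not Besedo's production rules.

```python
import re

# Naive rule: reject any ad mentioning "Beretta" at all.
naive = re.compile(r"\bberetta\b", re.IGNORECASE)

# Context-aware rule: only match when a weapon-related term appears later in the text.
weapon_context = re.compile(
    r"\bberetta\b(?=.*\b(pistol|handgun|firearm|9mm|ammo)\b)",
    re.IGNORECASE | re.DOTALL,
)

ads = [
    "Beretta 92FS 9mm pistol, lightly used",   # genuinely prohibited listing
    "Beretta water heater spare parts, genuine",  # legitimate ad, naive rule misfires
    "1990 Chevrolet Beretta, runs great",         # legitimate ad, naive rule misfires
]
for ad in ads:
    print(ad, "->", "reject" if weapon_context.search(ad) else "ok")
```

In practice the context vocabulary would be built up through exactly the kind of research the interview describes, and refined whenever a new false positive surfaces.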
A good filter will reduce the time the moderators spend on the content queue and will also reduce the length of time it takes to get a piece of content live on the site. It’s an ongoing process, one that gets better over time: gradually improving automation levels and making the manual moderator’s job a lot easier.
Interviewer: What’s the overall effect of a ‘bad’ filter, then?
Kevin: It depends. If the filter is set up to auto-reject matched words and phrases, it leads to a bad user experience, as genuine ads might get rejected (as the water heater case illustrates). If the filter is set up to send matched content for manual moderation, the automation level decreases. We agree to a certain automation level when we sign a contract with a client, so if there are more items for the manual moderation team to approve, it puts pressure on us to meet our service level agreement.
Interviewer: Which rules are hardest to program into a filter?
Kevin: Scam filters are the most complex to implement; mostly because scams evolve and because scammers are always trying to mimic genuine user behavior. To solve this, we monitor a number of things in order to detect ‘suspicious’ behavior, including email addresses, price discrepancies, specific keywords, IP addresses, payment methods (like PayPal and Western Union) – among other things.
One of the biggest challenges is that on their own, elements like these aren’t suspicious enough to warrant further investigation; so we have to ensure the filter recognizes a combination of them for it to be effective. We perform a lot of research and collaborate closely with clients, to ensure each filter is as accurate as possible.
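The "combination of signals" idea can be sketched as a simple weighted score: no single signal crosses the review threshold on its own, but several weak signals together do. The signal names, weights, and threshold here are illustrative assumptions, not Besedo's actual scam rules.

```python
# Weight per suspicious signal; individually weak, jointly telling.
SIGNALS = {
    "freemail_address":       1.0,  # throwaway email provider
    "price_far_below_market": 2.0,
    "wire_transfer_only":     2.0,  # e.g. Western Union requested
    "ip_location_mismatch":   1.5,
    "known_scam_phrase":      2.5,
}
THRESHOLD = 4.0  # chosen so that no single signal triggers a review alone

def scam_score(item_signals):
    """Sum the weights of the recognized signals present on an item."""
    return sum(SIGNALS[s] for s in item_signals if s in SIGNALS)

def needs_review(item_signals):
    return scam_score(item_signals) >= THRESHOLD

print(needs_review(["freemail_address"]))                              # False: one weak signal
print(needs_review(["price_far_below_market", "wire_transfer_only"]))  # True: combination crosses the threshold
```

Real filters express this with rule languages and regular expressions rather than hand-set weights, but the principle of requiring a combination of signals is the same.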
Interviewer: Sounds like you need a lot of expertise! What does it take to be a good filter manager?
Kevin: You need to understand how moderation works, and most filter specialists have a good grasp of computer programming (particularly the concept of regular expressions) too. But equally you need to have a curious, analytical, and creative mind.
Admittedly, filter quality checks can be a bit repetitive, but they are very important. Being able to investigate, test, and find ways to set up and improve filters is crucial. This means understanding how the filter will interact with words in practice, not just in theory. The most important thing is to have the drive to keep pushing; to find the perfect solution for the client’s needs.
Interviewer: What do you enjoy the most about your work?
Kevin: I love the beginning of every new project. I help onboard each new client from the very start, setting up the filters and creating a report for them. Each one is different, so lots of investigation is involved as there are different rules to consider: depending on who the client is, what they do, and where they’re based.
As mentioned, rules can differ between countries. For instance, in South America, you don’t need to apply a gender discrimination filter for something like jobs or housing – unthinkable in Europe, which has stringent equality laws.
Each day I look at the quality of the client data by opening a random filter, reviewing the ads going through it, and checking that everything’s working correctly. There are many parameters involved, and it means going over the finer details, but this is the stuff I’m passionate about. I can be quite obsessive about it!
Nothing is impossible. I aim to get the client what they want, and I’ll try again and again to find a creative way to deliver it!
Kevin is a filter manager at Besedo. He combines creativity, perseverance, and in-depth research to create highly accurate filters in our all-in-one moderation tool, Implio.
His daily job is to ensure that filters are maintained, tweaked and continuously kept accurate so Besedo’s clients can enjoy a high automation rate without sacrificing user experience.
So how do you calculate the price of custom content moderation with AI? At Besedo we look at it from a number of angles: volumes, the complexity of moderation actions needed, and languages. We build something bespoke for each client that we do not share with anyone else.
If you work in content moderation for a classifieds site or an online marketplace, you’ll have probably heard lots of talk about machine learning and tailored AI. No doubt you’ll have wondered about its features, cost, and value.
As a content moderation service provider we’d gladly shout out positive things about tailored AI all day long (!), but we also wanted to give some background into why we believe it works, to shed some light on alternatives, and give some insight into costs.
Cost and ROI comparison between tailored AI and generic ML models
In a nutshell, tailored AI is a machine learning algorithm that’s created using a client’s structured and labeled data. By inputting this data, you can teach your AI to learn very specific moderation patterns. It can handle complexity, is self-learning, and will give you much higher accuracy and automation levels. It’s much more meaningful and offers better results than generic alternatives, which are less reliable and more error-prone.
At Besedo, for instance, tailored models have achieved automation rates of up to 90% with accuracy levels of up to 99%. That would not be possible using generic, one-size-fits-all models.
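To illustrate the core idea of learning moderation patterns from a client's labeled examples (rather than applying fixed generic rules), here is a deliberately tiny toy classifier. Real tailored models are vastly larger and built by data scientists; every example ad and label below is invented for the sketch.

```python
from collections import defaultdict

def tokens(text):
    return text.lower().split()

def train(examples, epochs=10):
    """Learn per-word weights from (text, label) pairs; label 1 = reject.

    A simple perceptron: whenever the current weights misclassify an
    example, nudge the weights of its words toward the correct label.
    """
    w = defaultdict(float)
    for _ in range(epochs):
        for text, label in examples:
            pred = 1 if sum(w[t] for t in tokens(text)) > 0 else 0
            if pred != label:
                for t in tokens(text):
                    w[t] += 1.0 if label == 1 else -1.0
    return w

def predict(w, text):
    return "reject" if sum(w[t] for t in tokens(text)) > 0 else "approve"

# Hypothetical labeled data from a client's past moderation decisions.
labeled = [
    ("replica designer handbag cheap", 1),
    ("genuine leather handbag barely used", 0),
    ("replica watch free shipping", 1),
    ("vintage watch good condition", 0),
]
w = train(labeled)
print(predict(w, "replica sunglasses cheap"))  # "reject": learned from the client's own patterns
```

The point of the sketch is that the word "replica" gained a positive weight purely from this client's labels; a different client's data would yield different patterns, which is exactly what makes the model tailored.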
Generic AI, while useful when moderating something fixed – like language – can’t handle specific challenges, as it doesn’t learn in the same way as tailored AI. Say you want to set moderation criteria for profile pictures on a dating site. There are lots of things you need to do: ensure users are over 18, censor nudity, make sure there’s a face visible, that no weapons are shown, and that each picture is good quality. These are the requirements of a specific platform. Using several different generic AI models to try and moderate these criteria won’t work as well as a single tailored AI can. But you could always build your own model, right?
While it might seem simpler and less expensive to build your own tailored AI, it often ends up as a costly distraction. Companies can spend years pouring resources into a setup and still never get it exactly right. Creating powerful machine learning moderation models isn’t just a matter of putting a couple of developers on the task; it requires data scientists and semantic experts to make sure the AI keeps learning and performing better. Considering the ongoing cost and complexity, why create your own content moderation algorithm when there are expert companies offering tailor-made solutions? Unless you’re a huge company with very specific needs, you wouldn’t develop your own helpdesk or customer service tool. So why go that route with content moderation?
The price of AI moderation
So how do you calculate the price of a tailored AI? At Besedo we look at it from a number of angles: volumes, the complexity of moderation actions needed, and languages. We build something bespoke for each client that we do not share with anyone else. The pricing breaks down into four parts:
- A setup fee to create the client’s AI model, which involves learning from available client data to build a specific model.
- A monthly moderation fee based on projected volumes (starting at a minimum of 200,000 moderated items per month), covering hosting, software licenses, and maintenance.
- A monthly professional fee, which includes updates, performance improvements, and rule refinements to ensure that your automation rate and performance are always improving.
- A fixed support fee that gives you 24/7 technical support.
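As a hedged illustration of how the recurring fee components described above could combine into a monthly bill: every figure below is invented for the example (actual Besedo pricing is bespoke per client); only the 200,000-item minimum volume comes from the text, and the one-time setup fee is left out of the monthly total.

```python
def monthly_cost(items, per_item_fee, professional_fee, support_fee):
    """Moderation fee scales with volume; professional and support fees are flat."""
    MIN_ITEMS = 200_000  # contractual minimum monthly volume from the text
    billed_items = max(items, MIN_ITEMS)
    return billed_items * per_item_fee + professional_fee + support_fee

# Hypothetical figures: 0.5 cents per item, flat fees of 1,000 and 500.
print(monthly_cost(350_000, 0.005, 1000, 500))  # 3250.0
```

Note how the volume floor works: a client moderating only 100,000 items would still be billed for 200,000, so the per-item component never drops below the minimum.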
A lot goes into creating a tailored AI, but it is still far more cost-effective than manual moderation, especially over time – and far less expensive than developing your own moderation model. You can’t really compare a tailored approach to a generic one at all, since that would be like comparing a chisel to a sledgehammer. You won’t get the accuracy you need and will end up wasting money – as well as time and effort.
All factors considered, by our calculations companies that choose tailored AI can save anywhere between 50%-90% on manual moderation pricing alone. Surely that’s a worthwhile investment of time and money?
Still not convinced? Get in touch!