The restrictions put in place to combat the global Covid-19 pandemic have had a devastating effect on many businesses. Social distancing, restrictions on physical services, and a downturn in spending have also hurt most marketplaces and sharing economy sites, despite their digital nature.
After months of lockdowns and harsh restrictions, nations are slowly and carefully opening up again, but the world is forever changed. Businesses that understand and adapt quickly to the new reality will be successful. To do so, they'll need to understand the challenges and opportunities arising in the post-corona business landscape.
We’ve asked 8 online marketplace experts to share their thoughts and predictions to help you prepare and adapt to the new reality.
User safety is key for all online platforms, particularly when you’re dealing with vulnerable youngsters. Moderating can be challenging and getting the balance between censorship and safety right can be hard.
We sat down with industry veteran and founder of Friendbase, Deborah Lygonis, to discuss the experience she's gained from developing and running a virtual world for teens.
Interviewer: Hi Deborah. Could you please give us a short introduction to yourself?
Deborah: My name is Deborah Lygonis and I am a serial entrepreneur. I have started and run several businesses over the years, mainly within the software and gaming sector, but also e-health and other tech. I love tech and I’m passionate about startups and entrepreneurship. I also work as a coach and mentor for entrepreneurs within the European Space Agency Business Incubator (ESA BIC), and for a foundation called Entrepreneurs Without Borders.
Interviewer: Wow! That’s an impressive background. One of the things you’ve started as an entrepreneur is Friendbase, right? Could you tell us a bit more about that?
Deborah: Yes. Friendbase is a company that I founded with my brother and a third guy called Andreas. We’ve known each other for many years. Well, obviously, I’ve known my brother for many years, but Andreas as well, has been part of our group of friends and acquaintances for many, many years. We decided to found Friendbase in 2013. We saw that the whole idea of virtual worlds hadn’t really migrated over to smartphones and we wanted to see if it was possible to create a complete cross-platform version.
So, we put together a mockup of an Android, iOS and web version and put it out there to see if it was something that today’s young people would like.
Friendbase is a virtual world for teens where they can chat, play games and also design their looks and spaces. Now we’re also moving towards edtech, in the way that we’ll be introducing quizzes that are both for fun but also have learning elements in them.
Interviewer: That sounds awesome. What would you say is the main challenge when it comes to running a cross-platform online community, and specifically one that caters to teens?
Deborah: There are a lot of challenges with startups in general, but also, of course, with running an online community. One challenge is when you have people that meet each other in the form of avatars and written chat, and they have different personalities and different backgrounds that can cause them to clash. The thing is that when you write in a chat, the nuances in the language don’t come through as opposed to when you have a conversation face to face. It’s really very hard to judge the small subtleties in language, and that can lead to misunderstandings.
Add to that as well that there are lots of different nationalities online. That in itself can lead to misunderstandings because they don’t speak the same language.
What starts off as a friendly conversation can actually rapidly deteriorate and end up in a conflict just because of these misunderstandings. That is a challenge, but that’s a general challenge, I think, with written social interactions.
Interviewer: Just so we understand how Friendbase works. Do you have one-to-one chat, one-to-many chats or group chats? How does it work?
Deborah: The setup is that we can have up to 20 avatars in one space. No more, because then it will get too cluttered on the small phone screens. So, you can have group chats. I mean, you see the avatars and then they have a text bubble as they write, so there can be several people in one conversation.
Interviewer: Do you have the opportunity for groups of friends to form and join the same kind of space together?
Deborah: Yes. Each member has their own space. They can also invite and open up their space for other friends.
Interviewer: And in that regard. What you often see in the real world with group dynamics is that there is a group of friends, and there are the popular people in that group. And then one person who maybe is a bit of an outsider, who will at times be bullied by the rest of the group. Do you see people ganging up on each other sometimes?
Deborah: I haven’t seen groups of people ganging up on one individual. It’s more the other way around. There are individuals that are out to cause havoc and who are just online to be toxic.
Interviewer: That means that, in general, you have a really nice and good user base. But then there are the rotten fruits that come in from time to time.
Deborah: That is what it is like today. We are still fairly early stage, though, when it comes to the number of users. So I would expect this to change over time. And this is something that we’re prepared for. We added safety tools at a really early stage to be able to learn how to handle issues like this and also how to moderate the platform when incidents occur. So, I think that even though we don’t have that type of ganging up on each other at the moment, I would expect that to happen in the future.
Interviewer: But it sounds like you’re prepared for it. Now you’ve made a really nice segue into my next question: What are the main moderation challenges you’ve experienced running Friendbase? What are the main challenges right now, and what do you expect you will have to handle later on?
Deborah: I think that a challenge in itself for all social platforms is to set the bar on what is acceptable and not.
Our target group is mid-teens and up. So we don’t expect young children to be on Friendbase. We feel that if we made a social world for young children, then we’d need to have a completely different, more controlled, set of regulations than when it is teenagers and upwards.
However, that demographic is also very vulnerable. So, of course, there have to be some sort of measures in place. The challenge is to determine at what level you want to put the safety bar, and also, how can you tell the difference between what is banter between friends and when it flips over to actually be toxic or bullying? That’s something that is really, really hard to differentiate between. And I think that if you work with chat filters, then you have to have some sort of additional reporting system for when the filters don’t manage this challenge. A filter is only a filter and can’t always tell the two apart. So that’s one challenge. It’s also complex to enforce the rules that are in place to protect the users without being perceived as controlling or patronizing.
At the moment, we also have a challenge in that we have users who come back solely to cause havoc and create a toxic environment. We track them down and ban their accounts, but it’s a continuous process.
That is something that, should it escalate over time, will become increasingly time consuming. That’s why it’s really, really important for us to have tools in place so that it doesn’t have to be moderated manually. That would just take too much resource and time.
Of course, you have the even darker side of the internet; sexual predators that are out to groom vulnerable youngsters and to get them to maybe move over to a different platform where they can be used in a way that is extremely negative.
That’s something that is difficult to handle. But today, thanks to artificial intelligence, there are amazing toolsets out there that attempt to look at speech patterns and identify that sort of behavior. And it’s also really great to have your own toolset where the user can actually report someone if they feel threatened or if they feel that someone’s really creepy.
Interviewer: When you have returning users who have made it their goal to attack the platform, in a malicious way, do you see that it’s the same people returning based on their IP or the way that they talk?
Deborah: It’s not always possible to see it based on their IP because they use different ways of logging in. However, given their behavior, we can quickly identify them. And we have a group of ambassadors as well online on Friendbase that help us. On top of that we have a chat filter which can red flag certain behavior. So that helps as well.
There is a group that comes back over and over again, and for some mysterious reason they always use the same username. So they’re not that hard to identify. That group is actually easier to control than a group which has a different motive for why they are online and why they are trying to target youngsters. The toxic ones that are just there because they think it’s fun to behave badly. It’s easy to find them and close down their accounts.
Interviewer: We already touched upon this, but what would you say is the hardest moderation challenge to solve for you right now?
Deborah: The hardest moderation challenge to solve is, of course, finding the people who are deliberately out to target lonely youngsters that hunger for social contact. The whole grooming issue online is a problem. We are constantly trying to find new toolsets and encourage our users to contact us if there’s something that doesn’t feel right. So grooming is something that we’re very, very much aware of. If we happen to shut down someone’s account by mistake for a couple of hours, they’re most welcome to come to us and ask why. But we’d rather be safe than sorry when it comes to this kind of behavior. However, it is hard to track because it can be so very, very subtle in the beginning.
Interviewer: Friendbase has been around for a while now. Are there any challenges that have changed or increased in occurrence over the years? And if so, how?
Deborah: Actually, not really. I think the difference is in our own behavior as we are so much more aware of how we can solve different problems.
Bullying has been around for years, since long before the Internet. Sexual harassment of youngsters, and between adults, of course, has also been around for years. It’s nothing new. I mean, the Internet is a fantastic place to be. It democratizes learning. You have access to the world and knowledge and entertainment.
But there is a dark side to it. From a bullying perspective you have the fact that previously, if you were bullied at school, you could go home or you could go to your social group somewhere else and you would have somewhere where you would feel safe.
When it’s online, it’s 24/7.
And it is relentless when it comes to the whole child abuse part. Of course, it existed before as well. But now, with the Internet, perpetrators can find groups that have the same desires as themselves, and somehow, together, they can convince themselves as a group that it’s more acceptable. Which is awful. So that is the bad part of the net.
So, when you ask: Have the challenges changed or increased since we started Friendbase? No, not really. But what has changed is the attitude towards how important it is to actually address these issues. When we started the company in 2013, we didn’t really talk that much about safety tools. I mean, we talked about whether we should have a whitelist or a blacklist of words. It was more on that level. But today most social platforms have moderation, they have toolsets, they have guidelines and policies and so forth.
So, I think that we who work with online communities as a whole have evolved a lot over the past years.
Interviewer: Yeah, I would say today in 2020, you probably wouldn’t be able to launch a social community or platform without some sort of moderation tools and well-defined guidelines.
Deborah: I think you’re right. Several years ago, I did a pitch where we were talking about online safety and moderation tools, and we were completely slaughtered. What we were told was that being good online, this whole ‘be cool to be kind’, was going to stop our growth. It’s much better to let it all run rampant and then it will grow much faster. I don’t think anyone would say something like that today. So that’s a huge shift in mindset. Which is great. We welcome it.
Interviewer: That’s a fantastic story. You’ve been in this industry so long; you’ve seen this change. I find it fascinating that just seven years ago when you said I want to protect my users, people laughed at you. And now people would laugh at you if you said, I’m gonna go live without it.
Deborah: I know. Can you imagine going on stage today saying that I don’t care about safety? I mean, people would be so shocked.
Interviewer: You said before, when we talked about the main challenges, that if you experienced growth you’d need to change your approach to moderation and automate more in order to keep up?
Deborah: Yes, definitely. We try and stay on top of what toolsets are out there.
We build in our own functionality, such as muting users. So, if someone is harassing you, you can mute them so that you don’t see what they’re writing. Small changes like that we can do ourselves, which is helpful.
Something I’d like to see more and that we’ve actually designed a research project around is to not only detect and ban bad behavior, but to encourage good behavior.
Because that in itself will also create a more positive environment.
That’s something that we’re really excited about: to work with people that are experts within gamification and natural language processing, to see how we can create toolsets that encourage good behavior. Maybe we can start deflecting a conversation that is obviously on its way to going seriously wrong. It could be as simple as a small time delay when somebody writes something really toxic, with a pop-up saying: “Do you really want to say this?”. Just to make someone think once more.
This is something that we’re looking into. It’s super interesting. And I hear there’s a couple of companies just the last few months that are also talking about creating tool sets for something like this. So, I think it’s going to be a really, really interesting development over the coming years.
Interviewer: It sounds like safety is very important to Friendbase. Why is that?
Deborah: Why is that? Quite early on, we who work in the company discussed what our core values should be. And one of the core values we decided upon is inclusion. Everybody is welcome. And for everyone to feel welcome, you have to have a welcoming atmosphere.
When you continue along that line of thought, then obviously you come to the point where, OK, if everyone’s going to be welcome and you want it to be a friendly space, then somewhere you’re going to have to stop toxic behavior. So, for us safety, it’s just part of our core values.
And also, I have a teenage daughter who loves gaming. I’ve seen how platforms behave. She’s part of groups that interact with each other online. I just feel that there must be a way of doing things better. It’s as simple as that. We can do better than this, letting it be super toxic. And there are some amazing people out there working with fantastic toolsets. There are some fantastic platforms and social games out there that also work in the same sort of direction as we do. It’s really great.
And you know what? To be quite honest, I think there have been several case studies proving, from a business perspective as well, that you get longer retention and higher profitability when you can keep your users online for a longer time. So, in itself, it also makes perfect business sense to work in a way where you keep your users as long as possible.
Interviewer: You have tons and tons of experience obviously with startups and social platforms. If you were to give a piece of advice to someone who is running a similar service to Friendbase, or even someone who is thinking about starting one, what would that be?
Deborah: It would be, first of all, to determine what level of safety you want to have, depending on your user group. Obviously, the younger your demographic, the more safety tools you must ensure you have in place. Also, don’t build everything yourself. Especially if you’re working in an international market with many languages. Just being able to filter many languages in a decent way is a huge undertaking. If you think that you’re going to be able to hack together something yourself: it’s not that easy. It’s better to work with a tool or a company that has this as their core business, because they will constantly be working with state-of-the-art solutions.
So it’s better to liaise with switched-on companies that already work with this as their main reason for being. I think that’s important. And then, of course, add your own easy-to-use reporting system and an easy way for users to communicate with you, so that you have a sort of double layer.
I mean, I’ve seen several different companies that now work with different moderation tools and chat filters and so forth. Many of them do stellar work. And it’s important, because at the end of the day, if anything really, really bad were to happen, then you’re just finished as a business. It’s as simple as that. The last thing you would want is to have someone knock on your door and shut you down because something’s happened on your platform.
Interviewer: Definitely! What’s in the future for Friendbase? Where are you in two years?
Deborah: Where are we now? We’re now raising funds, because what we’ve seen is that we have a very, very loyal member base and they are wanting to invite more of their friends. And I think that with very, very little work, we can get the platform on a really interesting growth path.
Deborah: So, yeah, our aim is to become one of the big global players. It’s exciting times ahead.
Interviewer: For sure. Any closing remarks? Any statements you want to get out there from a personal point of view or from Friendbase?
Deborah: The Internet is a great place to be because there’s so much you can learn. You can meet so many interesting people. But there is a dark side as well, and you have to be aware of it. Just by being a little bit street smart online, people can keep themselves safe. And we’re getting there. People are learning. Schools have it in their curriculum, and social platforms try to teach users how to behave. So slowly but surely, we’re getting there.
Reviews can make or break a business. The same applies to online marketplaces, classifieds, and even dating sites. And they don’t just impact these platforms – they affect how people see the brands that advertise on them, as well as individual vendors, and of course, those looking for love and companionship.
However, in a world where User-Generated Content (UGC) is so prevalent, the fact is anyone from anywhere can leave a good or bad review and have it seen in a very public way.
While it’s clear why bad reviews can hurt businesses and brands, fake positive ones can damage reputations too.
Confused? It’s a tricky area to navigate.
Let’s consider the ways in which reviews can build trust and how online marketplaces can address this moderation challenge.
Reviews Build Consumer Trust
As we’ve discussed in previous articles, trust is at the epicentre of the digital economy. As consumers we take ‘trust leaps’ when deciding if a particular online product or service is suitable for us. This is why reviews matter so much – they help us form an opinion.
In a practical sense, many of these sentiments (which can largely be attributed to economist and TED speaker Rachel Botsman) are grounded in our search for social proof, one of the key cornerstones of the ‘Trust Stack’, which encompasses trust in the idea, trust in the platform, and (as is the case here) trust in the user.
Because the three have an interdependent relationship, they reinforce each other – meaning that user trust leads to trust in the platform and idea; and vice versa.
If it sounds improbable that consumers are willing to trust complete strangers, consider the numbers. Stats show that 88% of consumers trust online reviews as much as personal recommendations, with 76% stating that they trust online reviews as much as recommendations from family and friends.
Needless to say, they factor in a great deal. Customer reviews are therefore essential indicators of trust – which is why bad reviews can negatively impact businesses so heavily.
While on some marketplaces, a 3.5 out of 5 for ‘average’ service might be deemed acceptable – for many businesses, a slip in the way they’re reviewed is perceived to have disastrous consequences.
Some companies have fought back at negative reviews; but instead of challenging customers over their comments, or trying to figure out where they could do better, they’ve actively tried to sue their critics.
One particular hotel in New York State, US, even stated in its ‘small print’ that visitors would be charged $500 for negative Yelp reviews, while some service providers have slated – and even looked to sue – online review giant Yelp for the way in which it has ‘prioritised’ reviews with the most favourable first.
But why are overly positive reviews so detrimental? Surely a positive review is what all companies are striving for? The issue is inauthenticity. A true reflection of any experience rarely commands 5 stars across the board; and businesses, marketplaces, and consumers are wise to it.
Authenticity Means ‘No Astroturfing’
It’s clear that many companies want to present themselves in the best possible light. There’s absolutely nothing wrong with that. However, when it comes to reviews of their products and services, if every single rating is overwhelmingly positive, consumers would be forgiven for being suspicious.
In many cases, it seems, they probably are. Creating fake reviews – a practice known as ‘astroturfing’ – has been relatively widespread since the dawn of online marketplaces and search engines. But many are now wise to it and actively doing more to prevent the practice.
For example, Google has massively cracked down on companies buying fake Google reviews designed to positively influence online listings – removing businesses that do so from local search results. Similarly, Amazon has pledged to put a stop to the practice of testers being paid for reviews and being reimbursed for their purchases.
Astroturfing isn’t just frowned upon, it’s also illegal. Both the UK’s Competition and Markets Authority (CMA) and the US Federal Trade Commission have strict rules in place over misleading customers.
In Britain, the CMA has taken action against social media agency Social Chain for failing to disclose that a series of posts were in fact part of a paid-for campaign, and has taken issue with an online knitwear retailer posting fake reviews.
While some may consider astroturfing a victimless crime, when you consider the faith that shoppers have in online reviews and the fact that their favourite sites may be deliberately trying to mislead them, then it’s clear that there’s a major trust issue at stake.
For classified sites, dating apps, and online marketplace owners who have spent so long building credibility, gaining visibility, and getting users and vendors onboard, a culture where fake reviews persist can be disastrous.
But when so many sites rely on User-Generated Content the task of monitoring and moderating real reviews, bad reviews, and fake reviews is an enormous undertaking – and often a costly one.
Manual Vs. Automated Moderation
While many fake reviews are easy to spot (awkwardly put together, with bad spelling and grammar), when they appear at scale, manually moderating them becomes unsustainable – even for a small team of experts.
That’s why new ways to detect and prevent them are starting to gain traction. For example, many sites and marketplaces are starting to limit review posting to those who’ve bought something from a specific vendor. However, as per the Amazon example above, this is a practice that is easy to circumvent.
A more reliable method is automated moderation – using machine learning algorithms that can be trained to detect fake reviews, as well as other forms of unwanted or illegal content, on a particular classifieds site or marketplace. By using filters, the algorithm is continually fed examples of good and bad content to the point that it can automatically distinguish between the two.
It’s a process that also works well in tandem with manual moderation efforts. When a review is flagged, a notification can be sent to the moderation team, allowing them to make the final judgement call on the review’s authenticity.
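To make the training idea above concrete, here is a minimal sketch of a word-frequency (Naive Bayes) review classifier in Python. The training examples, labels, and scoring details are illustrative assumptions for the sketch, not a description of any particular moderation product.

```python
import math
from collections import Counter

class ReviewClassifier:
    """Toy Naive Bayes filter, fed labelled examples of good and bad reviews."""

    def __init__(self):
        self.word_counts = {"genuine": Counter(), "fake": Counter()}
        self.label_counts = Counter()

    def train(self, text, label):
        # Feed the filter another labelled example of good or bad content.
        self.label_counts[label] += 1
        self.word_counts[label].update(text.lower().split())

    def classify(self, text):
        vocab = len(set().union(*(set(c) for c in self.word_counts.values())))
        total_docs = sum(self.label_counts.values())
        scores = {}
        for label, counts in self.word_counts.items():
            total = sum(counts.values())
            score = math.log(self.label_counts[label] / total_docs)  # prior
            for word in text.lower().split():
                # Add-one smoothing so unseen words don't zero out the score.
                score += math.log((counts[word] + 1) / (total + vocab))
            scores[label] = score
        return max(scores, key=scores.get)

clf = ReviewClassifier()
clf.train("great product fast delivery would recommend", "genuine")
clf.train("good value but packaging was damaged", "genuine")
clf.train("best best best amazing amazing buy now", "fake")
clf.train("amazing perfect best product ever buy buy", "fake")

print(clf.classify("amazing best buy"))            # prints "fake"
print(clf.classify("fast delivery damaged box"))   # prints "genuine"
```

In practice the same pipeline can emit a confidence score instead of a hard label, so that clear-cut cases are handled automatically and borderline reviews are routed to the human moderation team.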
Ultimately, in a world where online truths can often be in short supply, companies – whether they’re brands or marketplaces – that are open enough to let customers leave honest, reasonable reviews stand a better chance of building trust among their users.
While businesses clearly have a right to encourage positive online reviews as part of their marketing efforts, any activity that attempts to obscure the truth (no matter how scathing) or fabricate a rose-tinted fake review can have an even more negative impact than a humdrum review itself.
The bad review report
To help you better understand how to navigate the tricky world of bad reviews, we have compiled a report highlighting the top 5 most frequent complaints on online marketplaces. Our insights will help you better understand how to improve user experience, reduce churn and increase user acquisition on your platform.
The outbreak of COVID-19, or Coronavirus, has thrown people all over the world into fear and panic over their health and economic situation. Many have been flocking to stores to stock up on essentials, emptying the shelves one by one. Scammers are taking advantage of the situation by maliciously playing on people’s fear. They target items that are hard to find in stores and make the internet – and especially online marketplaces – their hunting ground to exploit desperate and vulnerable individuals and businesses. Price gouging (charging unfairly high prices), fake medicine and non-existent loans are all ways scammers try to exploit marketplace users.
In this worldwide crisis, now is the time for marketplaces to step up and show social responsibility by making sure that vulnerable individuals don’t fall victim to corona-related scams and that malicious actors can’t profit from stockpiling and selling medical equipment sorely needed by nurses and doctors fighting to save lives.
Since the start of the Covid-19 epidemic we’ve worked closely with our clients to update moderation coverage to include Coronavirus related scams and have helped them put in place new rules and policies.
We know that many marketplaces are currently struggling to get on top of the situation, so to help, we’ve decided to share some best practices for handling moderation during the epidemic.
Here are our recommendations on how to tackle the Covid-19 crisis to protect your users, your brand and retain the trust users have in your platform.
Refusal of coronavirus-related items
Ever since the outbreak started, ill-intentioned individuals have made the prices of some items spike to unusually high levels. Many brands have already taken the responsible step of refusing certain items they wouldn’t usually reject, and some have set bulk-buying restrictions (just as some supermarkets have done) on ethical and integrity grounds.
Google stopped allowing ads for masks, and many other businesses have restricted the sale or price of certain items. Amazon removed thousands of listings for hand sanitizer, wipes and face masks and has suspended hundreds of sellers for price gouging. Similarly, eBay banned all sales of hand sanitizer, disinfecting wipes and healthcare masks on its US platform and announced it would remove any listings mentioning Covid-19 or the Coronavirus except for books.
In our day to day work with moderation for clients all over the world we’ve seen a surge of Coronavirus related scams and we’ve developed guidelines based on the examples we’ve seen.
To protect your customers from being scammed or falling victim to price gouging, and to preserve user trust, we recommend you refuse ads for, or set up anti-stockpiling measures on, the following items:
- Surgical masks and face masks (types FFP1, FFP2, FFP3, etc.) have been scarce and have seen their price tags spike dramatically. Overall, advertisements for all kinds of medical equipment associated with Covid-19 should be refused.
- Hand sanitizer and disposable gloves are also very prone to being sold by scammers at incredibly high prices. We suggest either banning the ads altogether or setting regular prices on these items.
- Empty supermarket shelves have caused toilet paper, usually a cheap item, to be sold online at extortionate prices. We suggest you monitor and ban these ads accordingly.
- Any ads mentioning Coronavirus or Covid-19 in the text should be manually checked to ensure that they aren’t created with malicious intent.
- Ads for miracle medicines claiming to cure the virus should be refused.
- Depending on the country and its physical distancing measures, ads for home services such as hairdressers, nail technicians and beauticians should be refused.
- In these uncertain times, scammers have been selling loans or cash online, preying on the most vulnerable. Make sure to look for these scams on your platform.
- Similarly, scammers have been targeting students with claims that interest rates on their loans are being adjusted.
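As a rough illustration, the refusal recommendations above can be expressed as a small rule set: ban some listings outright, reject price gouging on essentials, and route Covid-19 mentions to manual review. The keyword lists and price caps below are illustrative assumptions for the sketch, not recommended values.

```python
# Illustrative rule set: keywords and price caps are made-up examples.
BANNED_KEYWORDS = {"surgical mask", "face mask", "ffp1", "ffp2", "ffp3",
                   "miracle cure", "coronavirus loan"}

# Items that may still be sold, but only at or below a "regular" price.
PRICE_CAPS = {"hand sanitizer": 5.0, "disposable gloves": 10.0,
              "toilet paper": 8.0}

# Mentions that are not refused outright but need a human look.
MANUAL_REVIEW_KEYWORDS = {"coronavirus", "covid-19", "covid"}

def moderate_listing(title, price):
    text = title.lower()
    if any(kw in text for kw in BANNED_KEYWORDS):
        return "refuse"
    for item, cap in PRICE_CAPS.items():
        if item in text and price > cap:
            return "refuse"  # price gouging on an essential item
    if any(kw in text for kw in MANUAL_REVIEW_KEYWORDS):
        return "manual_review"
    return "approve"

print(moderate_listing("FFP2 surgical masks, box of 50", 99.0))       # refuse
print(moderate_listing("Toilet paper 12-pack", 40.0))                 # refuse
print(moderate_listing("Covid-19 safe home cleaning service", 30.0))  # manual_review
print(moderate_listing("Used mountain bike", 150.0))                  # approve
```

A real deployment would match on the full listing text and image metadata, not just the title, and keep these lists under daily review as the situation evolves.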
Optimize your filters
Ever since the crisis started, scammers have become more sophisticated by the day, finding loopholes to circumvent security measures. They find alternative ways to promote their scams, using different wordings such as Sars-CoV-2 or describing masks by their reference numbers, such as 149:2001 or A1 2009. Make sure your filters are optimized and your moderators continuously briefed and educated to catch all coronavirus-related ads.
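One way to keep a keyword filter ahead of these wording changes is to normalise known variants back to a canonical term before filtering. Here is a minimal sketch in Python; the variant patterns (Sars-CoV-2 spellings, mask reference numbers) are examples drawn from the text above, not an exhaustive list.

```python
import re

# Map canonical terms to known variant spellings and reference numbers.
# These patterns are illustrative examples, not a complete list.
VARIANTS = {
    "covid": [r"covid[\s-]?19", r"corona[\s-]?virus", r"sars[\s-]?cov[\s-]?2"],
    "mask":  [r"ffp[123]", r"149\s*:\s*2001", r"a1\s*2009", r"face\s*masks?"],
}

def canonicalize(text):
    """Rewrite known variants back to their canonical term."""
    text = text.lower()
    for canonical, patterns in VARIANTS.items():
        for pattern in patterns:
            text = re.sub(pattern, canonical, text)
    return text

def flag(text, watchlist=("covid", "mask")):
    """Return the watchlist terms present after canonicalisation."""
    canonical = canonicalize(text)
    return [term for term in watchlist if term in canonical]

print(flag("Respirators EN 149:2001 A1 2009, Sars-CoV-2 protection"))
# prints ['covid', 'mask']
print(flag("Garden furniture for sale"))  # prints []
```

The variant table is the part that needs daily updating; each new obfuscation moderators spot becomes one more pattern, while the downstream filter rules stay unchanged.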
Right now, we suggest that you tweak your policies and moderation measures daily to stay ahead of the scammers. As the crisis evolves, malicious actors will no doubt continue to find new ways to exploit the situation. As such, it’s vital that you pay extra attention to your moderation efforts over the coming weeks.
If you need help tackling coronavirus-related scams on your platform, get in touch with us.
The biggest challenge facing technology today isn’t adoption, it’s regulation. Innovation is moving at such a rapid pace that the legal and regulatory implications are lagging behind what’s possible.
Artificial Intelligence (AI) is one particularly tricky area for regulators to reach consensus on; as is content moderation.
With the two becoming increasingly crucial to all kinds of businesses – especially to online marketplaces, sharing economy and dating sites – it’s clear that more needs to be done to ensure the safety of users.
But to what extent are regulations stifling progress? Are they justified in doing so? Let’s consider the current situation.
AI + Moderation: A Perfect Pairing
Wherever there’s User Generated Content (UGC), there’s a need to moderate it; whether we’re talking about upholding YouTube censorship or netting catfish on Tinder.
Given the vast amount of content that’s uploaded daily and the volume of usage on a popular platform like eBay, it’s clear that while action needs to be taken, it’s unsustainable to rely on human moderation alone.
Enter AI – but not necessarily as most people will know it (we’re still a long way from sapient androids). Mainly, where content moderation is concerned, the use of AI involves machine learning algorithms – which platform owners can configure to filter out words, images, and video content that contravenes policies, laws, and best practices.
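As a toy illustration of that machine-learning side, the sketch below hand-rolls a naive Bayes text classifier that learns to separate acceptable listings from scam-like ones. The class name and every training example are invented for illustration; real moderation systems use far richer features and train on very large labelled datasets.

```python
import math
from collections import Counter

class NaiveBayesModerator:
    """Toy naive Bayes text classifier for listing moderation."""

    def __init__(self):
        self.word_counts = {"ok": Counter(), "bad": Counter()}
        self.label_counts = Counter()

    def train(self, text, label):
        """Record word frequencies for a hand-labelled example."""
        self.label_counts[label] += 1
        self.word_counts[label].update(text.lower().split())

    def predict(self, text):
        """Pick the label with the highest log-probability score."""
        vocab = len(set(self.word_counts["ok"]) | set(self.word_counts["bad"]))
        scores = {}
        for label in ("ok", "bad"):
            total = sum(self.word_counts[label].values())
            # Log prior + log likelihoods with add-one (Laplace) smoothing
            score = math.log(self.label_counts[label] / sum(self.label_counts.values()))
            for word in text.lower().split():
                score += math.log((self.word_counts[label][word] + 1) / (total + vocab))
            scores[label] = score
        return max(scores, key=scores.get)
```

After training on a handful of labelled listings (e.g. “cheap iphone wire transfer only” as bad, “used bicycle good condition” as ok), the classifier scores new listings by which label makes their words most probable. The same principle, scaled up, is what lets AI moderation handle volumes no human team could.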
AI not only offers the scale, capacity, and speed needed to moderate huge volumes of content; it also limits the often-cited psychological effects many people suffer from viewing and moderating harmful content.
Understanding The Wider Issue
So what’s the problem? Issues arise when we consider content moderation on a global scale. Laws governing online censorship (and the extent to which they’re enforced) vary significantly between continents, nations, and regions.
What constitutes ‘harmful’, ‘illicit’ or ‘bad taste’ isn’t always as clear cut as one might think. And from a sales perspective, items that are illegal in one nation aren’t always illegal in another. A lot needs to be taken into account.
But what about the role of AI? What objections could there be for software that’s able to provide huge economies of scale, operational efficiency, and protect people from harm – both users and moderators?
The broader context of AI as a technology needs to be better understood – which itself presents several key ethical questions over its use and deployment, which vary in a similar way – country-to-country – to efforts designed to regulate content moderation.
To understand this better, we need to look at ways in which the different nations are addressing the challenges of digitalisation – and what their attitudes are towards both online moderation and AI.
The EU: Apply Pressure To Platforms
As an individual region, the EU is arguably leading the global debate on online safety. However, the European Commission continues to voice concerns over the (lack of) effort made by large technology platforms to prevent the spread of offensive and misleading content.
Following the introduction of its Code Of Practice on Disinformation in 2018, numerous high profile tech companies – including Google, Facebook, Twitter, Microsoft and Mozilla – voluntarily provided the Commission with self-assessment reports in early 2019.
These reports document the policies and processes these organisations have undertaken to prevent the spread of harmful content and fake news online.
While a thorough analysis is currently underway (with findings to be reported in 2020), initial responses show significant dissatisfaction relating to the progress being made – and with the fact that no additional tech companies have signed up to the initiative.
AI In The EU
In short, expectations continue to be very high – as evidenced by (and as covered in a previous blog) the European Parliament’s vote to give online businesses one hour to remove terrorist-related content.
Given the immediacy, frequency, and scale that these regulations require, it’s clear that AI has a critical and central role to play in meeting these moderation demands. But, as an emerging technology itself, the regulations around AI are still being formalised in Europe.
However, the proposed Digital Services Act (set to replace the now outdated eCommerce Directive) goes a long way to address issues relating to online marketplaces and classified sites – and AI is given significant consideration as part of these efforts.
Last year the EU published its guidelines on ethics in Artificial Intelligence, citing a ‘human-centric approach’ as one of its key concerns – as it deems that ‘AI poses risks to the right to personal data protection and privacy’, as well as a ‘risk of discrimination when algorithms are used for purposes such as to profile people or to resolve situations in criminal justice’.
While these developments are promising, in that they demonstrate the depth and seriousness with which the EU is tackling these issues, problems will no doubt arise when adoption and enforcement by 27 different member states are required.
Britain Online Post-Brexit
One nation that no longer needs to participate in EU-centric discussions is the UK – following its departure in January this year. However, rather than deviate from regulation, Britain’s stance on online safety continues to set a high bar.
An ‘Online Harms’ whitepaper produced last year (pre-Brexit) sets out Britain’s ambition to be ‘the safest place in the world to go online’ and proposes a revised system of accountability that moves beyond self-regulation, along with the establishment of a new independent regulator.
Included in this is a commitment to uphold GDPR and Data Protection laws – including a promise to ‘inspect’ AI and penalise those who exploit data security. The whitepaper also acknowledges the ‘complex, fast-moving and far-reaching ethical and economic issues that cannot be addressed by data-protection laws alone’.
To this end, a Centre for Data Ethics and Innovation has been established in the UK – complete with a two-year strategy setting out its aims and ambitions, which largely involves cross-industry collaboration, greater transparency, and continuous governance.
Numerous other countries – from Canada to Australia – have expressed a formal commitment to addressing the challenges facing AI, data protection, and content moderation. However, on a broader international level, the Organisation for Economic Co-operation and Development (OECD) has established some well-respected Principles on Artificial Intelligence.
Set out in May 2019 as five simple tenets designed to encourage the successful ‘stewardship’ of AI, these principles have since been adopted by the G20 in their stance on AI.
They are defined as:
- AI should benefit people and the planet by driving inclusive growth, sustainable development and well-being.
- AI systems should be designed in a way that respects the rule of law, human rights, democratic values and diversity, and they should include appropriate safeguards – for example, enabling human intervention where necessary – to ensure a fair and just society.
- There should be transparency and responsible disclosure around AI systems to ensure that people understand AI-based outcomes and can challenge them.
- AI systems must function in a robust, secure and safe way throughout their life cycles and potential risks should be continually assessed and managed.
- Organisations and individuals developing, deploying or operating AI systems should be held accountable for their proper functioning in line with the above principles.
While not legally binding, the hope is that the influence and reach these principles have on a global scale will eventually encourage wider adoption. However, given the myriad cultural and legal differences the tech sector faces, international standardisation remains a massive challenge.
The Right Approach – Hurt By Overt Complexity
All things considered, while the right strategic measures are no doubt in place for the most part – helping perpetuate discussion around the key issues – the effectiveness of many of these regulations largely remains to be seen.
Outwardly, many nations seem to share the same top-line attitudes towards AI and content moderation – and their necessity in reducing harmful content. However, applying policies from specific countries to global content is challenging and adds to the overall complexity, as content may be created in one country and viewed in another.
This is why the use of AI machine learning is so critical in moderation – algorithms can be trained to do all of the hard work at scale. But it seems the biggest stumbling block in all of this is a lack of clarity around what artificial intelligence truly is.
As one piece of Ofcom research notes, there’s a need to develop ‘explainable systems’ as so few people (except for computer scientists) can legitimately grasp the complexities of these technologies.
The problem posed in this research is that some aspects of AI – namely neural networks which are designed to replicate how the human brain learns – are so advanced that even the AI developers who create them cannot understand how or why the algorithm outputs what it does.
While machine learning moderation doesn’t delve as far into the ‘unknowable’ as neural networks, it’s clear to see why discussions around regulation persist at great length.
But, as is the case with most technologies themselves, staying ahead of the curve from a regulatory and commercial standpoint is a continuous improvement process. That’s something that won’t change anytime soon.
New laws and legislation can be hard to navigate. Besedo helps businesses like yours get everything in place quickly and efficiently to comply with new legislation. Get in touch with us!
Scammers are unrelenting. And smart. They’re active all year round, which means there’s no particular season when online marketplace and classified site owners need to be extra vigilant. The pressure’s always on them to maintain user safety.
However, scammers know when and how to tailor their activities to maximise opportunities. That’s why they’ll often latch onto different events, trends, seasons, sales, and other activities throughout the year – using a variety of techniques to lure in users, under the guise of an offer or piece of information.
With so much going on in 2020 – from the Tokyo Olympics to the US election – scammers will almost certainly be more active than usual. Here’s what consumers and marketplaces need to be aware of this year.
If you want to learn more about the specific scam spikes, visit our scam awareness calendar where we predict spikes on a month-by-month basis.
When the nights draw in and temperatures drop, many begin to dream of sunnier climes and set about searching for their next holiday.
But whether it’s a summer booking or winter getaway, price is always an issue. Cue thousands of holiday comparison sites, booking portals, and savings sites. While many of these are legitimate outfits, often the convoluted online booking experience – as a consequence of using aggregation sites – can confuse would-be travellers.
They’re right to be cautious. As with buying any other goods or services online, even the most reputable travel sites can fall victim to scammers – who advertise cheap flights and luxury lodgings at 2-star prices, or offer ‘free’ trips that lure victims into attending a pressured timeshare sales pitch.
If in doubt, customers should always book through the best-known travel sites, pay via their verified portals (rather than via a link sent by email or a direct bank transfer), and check that the company they’re paying for their holiday is accredited by an industry body (such as ATOL in the UK).
From Valentine’s Day to Easter; Halloween to Hanukkah – seasonal scams return with perennial menace year-after-year. Designed to capitalise on themed web searches and impulse purchases, fraudsters play the same old tricks – and consumers keep falling for them.
Charity scams tend to materialise around gift-focused holidays, like Thanksgiving in the US, as well as at Christmas. Anyone can fall victim to them – such as the recent case of NFL player Kyle Rudolph, who gave away his gloves after a high-scoring game for what he thought was a charity auction; only to discover they were being sold on eBay a few days later.
Another popular seasonal scam is the phishing email offering limited-time discounts from well-known retailers. Then there are romance scams (‘catfishing’), in which fraudsters cultivate entire relationships online simply to extract money from their victims.
The general rule with any of these is to be wary of anyone offering something that seems too good to be true – whether it’s a 75% off discount or unconditional love. Scammers prey on the vulnerable.
A whole summer of soccer is scheduled for June and July this year – thanks to the upcoming UEFA European Football Championship (Euro 2020) and the Copa America; both of which will run at the same time, on opposite sides of the world.
You’d expect fake tournament tickets and counterfeit merchandise to be par for the course where events like these are concerned – and easily detectable. But the reality is that many fraudulent third-party sites are so convincing that buyers keep falling for the same scams seen in previous years.
If in doubt, customers should always purchase from official websites — such as UEFA online and Copa America. While Euro 2020 tickets are sold out for now (over 19 million people applied for tickets), they’ll become available to buy again in April for those whose teams qualified during the playoffs.
While third-party sites are the biggest culprits, marketplace owners should be extra vigilant where users are offering surplus or cheap tickets to any games at all. Although given the prices the tickets sell for, you’d be forgiven for thinking that the real scammers are the official vendors themselves.
The Summer Olympic Games is no stranger to scandals – of the sporting variety. However, in the same way as the soccer tournaments referenced above, fake tickets tend to surface in the run-up to the games themselves – on ‘pop-up’ sites as well as marketplaces.
Telltale signs of a scam include vendors asking to be paid in cryptocurrencies (such as Bitcoin), official-sounding domain names (that are far from official), as well as phishing emails, malware, and ransomware – all designed by scammers looking to cash in on the surrounding media hype and immediate public interest that high-profile events bring.
In addition to scams preceding the games, advice issued prior to the 2016 Rio Olympics urged visitors to be wary of free public WiFi – at venues, hotels, cafes, and restaurants – and to take other online security precautions, such as using a Virtual Private Network (VPN) in addition to antivirus software.
Lessons learned from the 2018 Winter Olympics in Pyeongchang shouldn’t be ignored either. Remember the ‘Olympic Destroyer’ cyber attack that shut down the event’s entire IT infrastructure during the opening ceremony? There was little anyone could do to prevent it (so advanced was the attack and so slick its coordination), but it raised a lot of questions around cybersecurity generally – which have no doubt informed best practice elsewhere.
Also, visitors should avoid downloading unofficial apps or opening emails relating to Olympics information – unless they’re from an official news outlet, such as NBC, the BBC, or the Olympic Committee itself.
Probing Political Powers
While those in the public eye may seem the most at risk, ordinary citizens are vulnerable too. We have Facebook and Cambridge Analytica to thank for demonstrating that.
Despite this high-profile case – and even though political parties must abide by campaigning rules, and data protection laws such as GDPR exist to protect our data – it seems more work needs to be done by both social media companies and governments.
But what can people do? There are ways to limit the reach political parties have, such as opting out of practices like micro-targeting and being more stringent with social media privacy settings. Beyond that, good old-fashioned caution and data hygiene are encouraged.
To help spread this message, marketplaces and classified sites should continue to remind users to change their passwords routinely, exercise caution when dealing with strangers, and advocate not sharing personal data off-platform with other users – regardless of their assumed intent.
Sale Of The Century?
From Black Friday to the New Year Sales – the end of one year and the early part of the next is a time when brands of all kinds slash the prices of excess stock – clearing inventory or paving the way for the coming season’s collection. It’s also a time when scammers prey upon online shoppers’ frenzied search for a bargain or last-minute gift purchase.
As we’ve talked about in previous blogs, scammers operating on online marketplaces seem to grow ever more creative – posting multiple listings for the same item, rotating their IP addresses, or simply advertising usually expensive items at low prices to dupe those looking to save.
Prioritising Content Moderation
The worrying truth is that scammers are becoming increasingly sophisticated with the techniques they use. For online marketplace owners, not addressing these problems can directly impact their site’s credibility, user experience, safety, and the amount of trust that their users have for their service.
Most marketplaces are only too well aware of all of these issues, and many are doing a great deal to inform customers of what to look out for and how to conduct more secure transactions, online.
However, actions always speak louder than words – which is why many are now actively investing in content moderation, using dedicated expert teams and machine-learning AI; the latter being particularly valuable for larger marketplaces.
Keeping customers informed around significant events and holidays – like those set out above – ensures that marketplaces are seen as transparent and active in combating fraud online.
This also paints sites in a favourable light when it comes to attracting new users, who may stumble upon a new listing in their search for seasonal goods and services.
Ultimately, the more a site does to keep its users safe, the more trustworthy it’ll be seen as.
You’d be forgiven for thinking that ensuring users aren’t subjected to bad content on dating sites and online marketplaces means waging war on trolling, nudity, and unsavoury content.
Sure, that’s a large part of it, but the fact is bad content has a broader meaning – it’s anything designed to harm or deceive users; images that can negatively impact their user experience, break their trust, or even – worst-case scenario – put them at risk of theft or abuse.
As a result, marketplaces and dating site owners need to ensure they’re aware of the potential outcomes bad images pose. Let’s take a look at the most common types of bad images and consider the impact on both dating app and marketplace users.
The trouble with watermarks is that they often look bad. Sure, they can be positioned more subtly on an image, but they still detract from the image’s focus. Nevertheless, they’re used by many vendors – often to avoid paying the sellers’ fees charged by online marketplaces.
For example, someone selling a high-priced item (or numerous items of the same price, like a TV, computer, tablet, or phone) may try to circumvent site policies by including their email address, website URL, or phone/WhatsApp number in the watermark itself.
The likelihood is, however, that those who try to lure users away are scammers – compromising user safety (and user trust too) as they’re directed away from legitimate marketplaces.
On marketplaces, watermarks on profile pictures are used in much the same way as on product photos. On dating sites, however, their use is far more frequent and, more often than not, intended to promote escort services and prostitution. Watermarks are used similarly in 1-to-1 chats – to send contact information in a way that can’t be detected by text filters.
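For context, the kind of text filter that watermarked contact details evade might look something like this sketch, which hunts for emails, phone numbers, and URLs in listing text and chat messages (the patterns are illustrative and far from exhaustive):

```python
import re

# Illustrative patterns for contact details sellers may embed to lure
# buyers off-platform. Production filters use many more variants.
CONTACT_PATTERNS = {
    "email": re.compile(r"[\w.+-]+\s*(?:@|\(at\))\s*[\w-]+\.\w{2,}", re.I),
    "phone": re.compile(r"(?:\+?\d[\s\-.]?){7,}"),   # 7+ digits with separators
    "url":   re.compile(r"(?:https?://|www\.)\S+", re.I),
}

def find_contact_info(text: str) -> list[str]:
    """Return the kinds of contact details found in a message."""
    return [kind for kind, pat in CONTACT_PATTERNS.items() if pat.search(text)]
```

A message like “Message me at jane(at)example.com or +46 70 123 45 67” trips both the email and phone patterns – which is precisely why scammers burn the same details into an image instead, and why image moderation is needed alongside text filters.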
eBay banned watermarks a couple of years ago, initially stating it would monitor pictures – before quickly reneging and opting simply to condemn and discourage watermarks rather than police the site itself.
Presumably, this change of heart was prompted by the seller community – or more specifically, the image creators. From a photographer’s or designer’s point of view, the argument for watermarks is that they prevent the misuse of their work and preserve copyright. And while watermarks don’t stop images from being copied, creators can use services like Google Image Search and TinEye to monitor misuse.
Instead of watermarks, one alternative is to provide only low-resolution images – particularly where product photography is concerned. Another is to put copyright information in the image metadata.
Ultimately, watermarks can make images look clumsy and inauthentic – even though they’re designed to make them look more ‘official’. From a user-experience perspective, they disrupt the overall image, masking the complete picture. For the marketplaces and dating sites themselves, watermarks that lure users away from the platform (via embedded contact information) eliminate the associated fees – and if everyone did that, there’d be a major problem.
An ongoing problem for many marketplaces, duplicate listings are something of a grey area in terms of legitimacy. While many vendors mean no malice – other than doubling their chances of a decent sale – the wider impact is that duplicate ads and images degrade the browsing and search experience.
But their troublesome nature doesn’t end there, unfortunately. The use of duplicate images is a tactic long practised by scammers – as product images on online marketplaces and as profile pictures on dating sites.
Duplicate images are commonly used by those selling counterfeit items – often featuring ‘real’ product images taken from genuine sources and repurposed on public marketplaces. Similarly, dating sites are all too aware of ‘catfishing’ and of profiles using images taken from stock photo libraries or even modelling agencies. It’s not just false advertising that users need to be aware of – but romance scammers too.
While duplicate listing scanners exist, most are text-based. More sophisticated image detectors do exist, and big marketplaces offer their own detection algorithms too, but many of these remain relatively rudimentary and open to misuse (case in point: Facebook Marketplace, which is notoriously easy to ‘hack’).
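As a rough sketch of the text-based approach, a duplicate scanner can fingerprint each listing’s normalised wording so that trivially reworded or re-punctuated copies collide (the identifiers and normalisation rules here are illustrative, not any particular marketplace’s algorithm):

```python
import hashlib
import re

def listing_fingerprint(title: str, description: str) -> str:
    """Fingerprint a listing so near-identical re-posts collide.
    Normalises case, punctuation, and word order before hashing, so a
    shuffled or re-punctuated copy maps to the same digest."""
    text = (title + " " + description).lower()
    words = sorted(set(re.findall(r"[a-z0-9]+", text)))
    return hashlib.sha256(" ".join(words).encode()).hexdigest()

class DuplicateDetector:
    def __init__(self):
        self.seen = {}  # fingerprint -> id of the first listing seen

    def check(self, listing_id: str, title: str, description: str):
        """Return the id of an earlier duplicate, or None if unseen."""
        fp = listing_fingerprint(title, description)
        if fp in self.seen:
            return self.seen[fp]
        self.seen[fp] = listing_id
        return None
```

This catches exact and lightly disguised re-posts, but not paraphrased ones – which is why more sophisticated detectors (and image-based matching) are needed on top.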
As a result, many marketplaces are offering greater clarity on how they define duplicates and won’t allow vendors to list the same item in different categories. eBay sternly warns that offenders will see reduced visibility for their listings. After all, the site’s trustworthiness is at stake – and a loss of trust ultimately leads to fewer conversions.
In a similar way to watermarks, duplicate images can also ruin the user experience – confusing customers as to which is the ‘real’ or original listing for the product they’re considering. And on dating sites, well, there can’t be more than one person with the same profile picture, can there? Honesty counts for a great deal – and people can look very different from one picture to the next – so including a variety of images is essential for this reason too.
Love or hate the idea, facial recognition technology is increasing in sophistication. It’s already used in security tech – for everything from unlocking phones to crossing borders. However, while Facebook might be making leaps and strides with facial recognition, on marketplaces and dating sites, facial images remain problematic from a content perspective.
As we’ve discussed already, where images of people – especially faces – are concerned, honesty is always the best policy. On dating sites, in particular, users often use images that make them look more attractive – often using different filters to enhance their appearance.
However, when there are several people in a photograph, it’s often hard to tell who the profile owner is. This has obvious complications on dating sites, where users could easily be misled: they could begin contact with one person thinking they’re another – something that could be disastrous for both the user and the dating site, because misconceptions break the bond of trust.
Couple this with the proliferation of deepfakes and face/profile image searches, and the problem gains another, more complex layer – meaning it’s not just a user’s experience that’s threatened; their safety is at risk too.
On online marketplaces this isn’t as big a problem, except that images of people – or more specifically their faces – distract from the product itself, so vendors should use as few as possible in their photography, or none at all if they can help it.
Wherever users can upload their own content, there can be no denying that pornography, nudity, and sex-related images will appear – in both online marketplaces and dating sites.
Where affairs of the heart (or libido) are concerned, consenting adults are free to share pictures of whatever body parts they like best; but for the most part – on public forums and in private chats – such content is unwanted. And when that’s the case, it’s user harassment.
Harassment (of both the pictorial and verbal variety) has become entrenched in dating app culture, largely as a result of male behaviour toward women (check the Instagram account ‘ByeFelipe’ for some prime examples). Efforts to get rid of it have spawned a whole new wave of female-initiated dating services, such as Bumble.
However, even this doesn’t prevent lewd images from being shared, which is why additional safeguards are needed. Bumble’s Private Detector, for instance, detects nudity, blurs it, and warns users that a picture or video message may be pornographic when it lands in their chat feed.
Anything nudity related is naturally more common on dating sites than marketplaces, but that doesn’t preclude them. Profile photos can often be revealing (which may or may not be ok depending on the site) and of course, as mentioned above, ‘escort’ services may advertise using images that push the boundaries.
The effect? Failing to keep users safe from overtly sexual images is a big problem. As mentioned before, it breaks the trust established between user and site. Unsolicited nudity may now be frequent on dating sites, but that doesn’t make it acceptable. And where online marketplaces are concerned, user-generated content containing nudity damages the site’s reputation.
However, it’s also essential to maintain a balanced view and offer a specific definition of what constitutes nudity on your own site – which might vary depending on the nature of your website.
Picture Of Success
All in all, you’re never going to stop every piece of awful content from reaching your users. When users innocently browse a marketplace or look at dating profiles, there’s no guarantee that the images they see will be legitimate, tasteful, or even legal.
What you can do, though, as a site owner is to ensure your site offers the right policies, definitions, and appropriate courses of action. Moderation is crucial to avoid the proliferation of bad images on your site. But it’s no easy task when it relies on user-generated content.
That’s why online content moderation tools are critical to helping online marketplaces and dating sites detect unwanted images and remove them instantly. At Besedo, we combine AI image moderation with human moderation to efficiently tackle the propagation of inappropriate or undesirable images you don’t want on your site.
Ready to find out the online dating trends for 2020? Here’s what industry experts are predicting for the coming year.
2019 is drawing to a close, and with it begins a new decade.
It’s hard to believe that in 2010, Tinder didn’t even exist. Ten years later, dating apps have never been so popular, and making romantic connections has never been easier, thanks to developments in technology and the maturity of the industry.
Dating trends are popping up all the time, and online dating is shedding its taboo – it’s now widely accepted as a way of meeting new people. With many current and upcoming changes – from hyper-niche platforms and video to new legislation – the dating industry is in full swing.
We have turned to experts in the online dating field to figure out what we should be looking out for in the coming year. Here are their expert predictions for the online dating industry in 2020.
What online marketplace trends are we going to see in 2020? Here’s what 8 industry experts predict for the coming year.
2019 is quickly coming to an end, and what an exciting year it has been for online marketplaces!
Marketplaces have been disrupting the way we shop with the rise of the conscious consumer looking for more ways to be sustainable.
As users’ expectations grow and user experience has become highly prioritized, marketplaces have been creative in finding ways to add value and services to their platforms. Convenience has been the keyword for online marketplaces to acquire and retain their customers.
Which trends will shape the marketplace industry in 2020? What can you expect to change in the online marketplace industry in the coming year? We gathered eight predictions from marketplace experts and professionals to see what’s coming up in the industry.
The circular economy is in full swing, and for an excellent reason: with finite natural resources on the planet and the demand for resources exceeding what the earth can regenerate each year, we simply cannot carry on with our current linear production path.
The circular economy aims to reduce the consumption of natural resources by reusing, repairing, upgrading and recycling products – giving items we’ve fallen out of love with a chance of another day in a new home.
The resale of used items – an integral part of the circular economy – is booming as people grow more accustomed to buying second hand. Resale has grown 21 times faster than the retail apparel market over the past three years and is on the way to becoming larger than fast fashion by 2028, according to a thredUP and GlobalData report.
And what better place than online marketplaces for second-hand trade. Indeed, the rise of the conscious consumer has become a perk for marketplaces. With the re-selling of used items, marketplaces are playing a considerable role in the circular economy and its sustainability concerns. In 2018 in France, Le Bon Coin users potentially saved 7.7 million tons of greenhouse gas emissions and 431,992 tons of plastic according to a report by Schibsted and Adevinta.
Buying and selling second-hand items has been favored for numerous reasons. If people can find what they want for less money, they’re likely to purchase that item even if it’s been used. Additionally, the rising awareness surrounding the climate crisis and sustainability issues over the past few years has boosted second-hand selling and buying. Producing less waste and consuming resources responsibly is growing higher in people’s agendas, in turn, increasing the popularity of online marketplaces.
Increased risks for your online marketplace
However, with that rise, comes increased risks for your online marketplace, as more fraudsters will be attracted to your site seeking to deceive your users.
Fraud can jeopardize your user trust and ultimately scare off your customers. In the luxury resale market especially, the sale of counterfeit goods has increased over the years. It’s therefore essential for online marketplaces to establish strong user trust, particularly in the trade of second-hand items.
In a two-sided marketplace, buyers have to trust that sellers describe the items on sale accurately and that they will eventually receive their goods. In the same way, sellers have to trust that buyers provide payment in good and due form.
You should make sure to protect your user trust and tackle these challenges head-on by putting efficient and accurate content moderation processes in place.
Featuring poor or fraudulent content on your marketplace has a cost. Fraudulent activity on your marketplace can have disastrous effects on your reputation and the trust users have in your platform.
Second-hand is not a fad; resale is going mainstream. Marketplaces need to find new ways of dealing with fraud as demand for second-hand trade grows.
To learn more about how you can protect your users from fraud, have a look at our article about the different types of content moderation we offer to protect your brand.