The outbreak of COVID-19, the disease caused by the novel coronavirus, has thrown people all over the world into fear and panic over their health and economic situation. Many have been flocking to stores to stock up on essentials, emptying the shelves one by one. Scammers are taking advantage of the situation by maliciously playing on people’s fear. They target items that are hard to find in stores and make the internet – and especially online marketplaces – their hunting ground to exploit desperate and vulnerable individuals and businesses. Price gouging (charging unfairly high prices), fake medicine, and non-existent loans are all ways scammers try to exploit marketplace users.
In this worldwide crisis, now is the time for marketplaces to step up and show social responsibility: making sure that vulnerable individuals don’t fall victim to corona-related scams, and that malicious actors can’t profit from stockpiling and selling medical equipment sorely needed by the nurses and doctors fighting to save lives.
Since the start of the Covid-19 epidemic, we’ve worked closely with our clients to extend moderation coverage to coronavirus-related scams and have helped them put new rules and policies in place.
We know that all marketplaces are currently struggling to get on top of the situation, so to help, we’ve decided to share some best practices for handling moderation during the epidemic.
Here are our recommendations on how to tackle the Covid-19 crisis to protect your users and your brand, and to retain the trust users have in your platform.
Refusal of coronavirus-related items
Ever since the outbreak started, ill-intentioned individuals have driven the prices of some items to unusually high levels. Many brands have already taken the responsible step of refusing certain items they wouldn’t usually reject, and some have set bulk-buying restrictions (just as some supermarkets have done) on ethical and integrity grounds.
Google stopped allowing ads for masks, and many other businesses have restricted the sale or price of certain items. Amazon removed thousands of listings for hand sanitizer, wipes and face masks and has suspended hundreds of sellers for price gouging. Similarly, eBay banned all sales of hand sanitizer, disinfecting wipes and healthcare masks on its US platform and announced it would remove any listings mentioning Covid-19 or the Coronavirus except for books.
In our day-to-day moderation work with clients all over the world, we’ve seen a surge of coronavirus-related scams, and we’ve developed guidelines based on the examples we’ve encountered.
To protect your customers from being scammed or falling victim to price gouging, and to preserve user trust, we recommend you refuse ads – or set up measures against stockpiling – for the following items.
- Surgical masks and face masks (types FFP1, FFP2, FFP3, etc.) have been in scarce supply and have seen their prices spike dramatically. Overall, advertisements for all kinds of medical equipment associated with Covid-19 should be refused.
- Hand sanitizer and disposable gloves are also prone to being sold by scammers at incredibly high prices. We suggest either banning these ads altogether or enforcing regular prices on these items.
- Empty supermarket shelves have led to toilet paper, a usually cheap item, being sold online at extortionate prices; we suggest you monitor and remove these ads accordingly.
- Any ad mentioning Coronavirus or Covid-19 in its text should be manually checked to ensure it wasn’t created with malicious intent.
- “Miracle” medicines claiming to cure the virus should be banned outright.
- Depending on the country and its physical-distancing measures, ads for home services such as hairdressers, nail technicians, and beauticians should be refused.
- In these uncertain times, scammers have been selling loans or cash online, preying on the most vulnerable. Make sure to look for these scams on your platform.
- Similarly, scammers have been targeting students with false claims about interest rates being adjusted.
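One way to act on price recommendations like these is to compare listing prices against a reference level and flag outliers for review. The sketch below is purely illustrative: the item names, reference prices, and 3x threshold are assumptions, not a fixed recommendation.

```python
# Hypothetical price-gouging check: flag listings priced far above a
# reference price. All numbers here are illustrative assumptions.

REFERENCE_PRICES = {            # typical pre-crisis prices (made up)
    "hand sanitizer": 3.00,
    "face mask": 1.50,
    "toilet paper": 0.80,       # per roll
}

GOUGING_MULTIPLIER = 3.0        # flag anything at 3x the normal price or more


def flag_price_gouging(item: str, price: float) -> bool:
    """Return True if the listing price looks like price gouging."""
    reference = REFERENCE_PRICES.get(item.lower())
    if reference is None:
        return False            # unknown item: leave it to other checks
    return price >= reference * GOUGING_MULTIPLIER


listings = [
    ("Hand sanitizer", 2.99),   # normal price: allowed
    ("Face mask", 25.00),       # heavily inflated: flagged
]
for item, price in listings:
    print(item, "->", "FLAG" if flag_price_gouging(item, price) else "ok")
```

In practice the reference prices would come from historical listing data rather than a hard-coded table, and flagged ads would go to a human moderator rather than being rejected automatically.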
Optimize your filters
Since the crisis started, scammers have grown more sophisticated by the day, finding loopholes to circumvent security measures. They promote their scams through alternative wordings, using terms such as Sars-CoV-2 or describing masks by their standard reference numbers (149:2001, A1:2009, etc.). Make sure your filters are optimized and your moderators are continuously briefed and educated to catch all coronavirus-related ads.
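A keyword filter that catches these wording variants can be as simple as a set of regular expressions. The sketch below is a minimal illustration – the pattern list is an assumption and would need continuous updating as scammers change their wording:

```python
import re

# Minimal sketch of a variant-aware keyword filter. The patterns cover
# common spellings seen in coronavirus-related ads; a real filter needs
# far broader coverage and continuous updates.

CORONA_PATTERNS = [
    r"covid[\s-]?19",
    r"corona[\s-]?virus",
    r"sars[\s-]?cov[\s-]?2",
    r"ffp[123]",
    r"149[:\s]?2001",        # mask standard reference number
]

CORONA_RE = re.compile("|".join(CORONA_PATTERNS), re.IGNORECASE)


def needs_review(ad_text: str) -> bool:
    """Return True if the ad mentions coronavirus-related terms."""
    return CORONA_RE.search(ad_text) is not None


print(needs_review("Masks in stock, standard 149:2001"))  # True
print(needs_review("Selling my old bicycle"))             # False
```

Ads matching the filter wouldn’t necessarily be rejected outright; per the recommendations above, they’d be routed to manual review.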
Right now, we suggest you tweak your policies and moderation measures daily to stay ahead of the scammers. As the crisis evolves, malicious actors will no doubt continue to find new ways to exploit the situation, so it’s vital that you pay extra attention to your moderation efforts over the coming weeks.
If you need help tackling coronavirus-related scams on your platform, get in touch with us.
The biggest challenge facing technology today isn’t adoption, it’s regulation. Innovation is moving at such a rapid pace that the legal and regulatory implications are lagging behind what’s possible.
Artificial Intelligence (AI) is one particularly tricky area for regulators to reach consensus on; as is content moderation.
With the two becoming increasingly crucial to all kinds of businesses – especially to online marketplaces, sharing economy and dating sites – it’s clear that more needs to be done to ensure the safety of users.
But to what extent are regulations stifling progress? Are they justified in doing so? Let’s consider the current situation.
AI + Moderation: A Perfect Pairing
Wherever there’s User Generated Content (UGC), there’s a need to moderate it; whether we’re talking about upholding YouTube censorship or netting catfish on Tinder.
Given the vast amount of content that’s uploaded daily and the volume of usage – on a popular platform like eBay – it’s clear that while action needs to be taken, it’s unsustainable to rely on human moderation alone.
Enter AI – but not necessarily as most people will know it (we’re still a long way from sapient androids). Mainly, where content moderation is concerned, the use of AI involves machine learning algorithms – which platform owners can configure to filter out words, images, and video content that contravenes policies, laws, and best practices.
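To make the machine-learning side of this concrete, here is a deliberately tiny text classifier in the Naive Bayes family, trained on a handful of made-up example listings. This is a toy sketch only – it is not any platform’s actual moderation system, and real deployments use far larger datasets and richer features:

```python
import math
from collections import Counter

# Toy Naive Bayes text classifier: learns word counts per label from a
# few made-up examples, then labels new text by log-likelihood.

TRAINING = [
    ("cheap replica watches free shipping", "reject"),
    ("miracle cure buy now limited offer", "reject"),
    ("selling my used mountain bike", "accept"),
    ("vintage wooden chair good condition", "accept"),
]


def train(examples):
    counts = {"accept": Counter(), "reject": Counter()}
    totals = Counter()
    for text, label in examples:
        for word in text.split():
            counts[label][word] += 1
            totals[label] += 1
    return counts, totals


def classify(text, counts, totals):
    """Pick the label with the higher log-likelihood (add-one smoothing)."""
    vocab = {w for c in counts.values() for w in c}
    best_label, best_score = None, float("-inf")
    for label in counts:
        score = 0.0
        for word in text.split():
            score += math.log(
                (counts[label][word] + 1) / (totals[label] + len(vocab))
            )
        if score > best_score:
            best_label, best_score = label, score
    return best_label


counts, totals = train(TRAINING)
print(classify("miracle watches free", counts, totals))  # "reject"
```

The “configuration” platform owners do amounts to choosing the training examples and thresholds; the algorithm then generalises to wordings it has never seen exactly.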
AI not only offers the scale, capacity, and speed needed to moderate huge volumes of content; it also limits the often-cited psychological effects many people suffer from viewing and moderating harmful content.
Understanding The Wider Issue
So what’s the problem? Issues arise when we consider content moderation on a global scale. Laws governing online censorship (and the extent to which they’re enforced) vary significantly between continents, nations, and regions.
What constitutes ‘harmful’, ‘illicit’ or ‘bad taste’ isn’t always as clear cut as one might think. And from a sales perspective, items that are illegal in one nation aren’t always illegal in another. A lot needs to be taken into account.
But what about the role of AI? What objections could there be to software that provides huge economies of scale and operational efficiency, and protects people – both users and moderators – from harm?
The broader context of AI as a technology needs to be better understood. The technology itself raises several key ethical questions over its use and deployment – questions which vary country-to-country, much like efforts to regulate content moderation.
To understand this better, we need to look at ways in which the different nations are addressing the challenges of digitalisation – and what their attitudes are towards both online moderation and AI.
The EU: Apply Pressure To Platforms
As an individual region, the EU arguably is leading the global debate on online safety. However, the European Commission continues to voice concerns over (a lack of) efforts made by large technology platforms to prevent the spread of offensive and misleading content.
Following the introduction of its Code Of Practice on Disinformation in 2018, numerous high profile tech companies – including Google, Facebook, Twitter, Microsoft and Mozilla – voluntarily provided the Commission with self-assessment reports in early 2019.
These reports document the policies and processes these organisations have undertaken to prevent the spread of harmful content and fake news online.
While a thorough analysis is currently underway (with findings to be reported in 2020), initial responses show significant dissatisfaction relating to the progress being made – and with the fact that no additional tech companies have signed up to the initiative.
AI In The EU
In short, expectations continue to be very high – as evidenced by (and as covered in a previous blog) the European Parliament’s vote to give online businesses one hour to remove terrorist-related content.
Given the immediacy, frequency, and scale that these regulations require, it’s clear that AI has a critical and central role to play in meeting these moderation demands. But, as an emerging technology itself, the regulations around AI are still being formalised in Europe.
However, the proposed Digital Services Act (set to replace the now outdated eCommerce Directive) goes a long way to address issues relating to online marketplaces and classified sites – and AI is given significant consideration as part of these efforts.
Last year the EU published its guidelines on ethics in Artificial Intelligence, citing a ‘human-centric approach’ as one of its key concerns – as it deems that ‘AI poses risks to the right to personal data protection and privacy’ – as well as a ‘risk of discrimination when algorithms are used for purposes such as to profile people or to resolve situations in […]’.
While these developments are promising, in that they demonstrate the depth and seriousness with which the EU is tackling these issues, problems will no doubt arise when adoption and enforcement by 27 different member states are required.
Britain Online Post-Brexit
One nation that no longer needs to participate in EU-centric discussions is the UK – following its departure in January this year. However, rather than deviate from regulation, Britain’s stance on online safety continues to set a high bar.
An ‘Online Harms’ whitepaper produced last year (pre-Brexit) sets out Britain’s ambition to be ‘the safest place in the world to go online’ and proposes a revised system of accountability that moves beyond self-regulation, including the establishment of a new independent regulator.
Included in this is a commitment to uphold GDPR and Data Protection laws – including a promise to ‘inspect’ AI and penalise those who exploit data security. The whitepaper also acknowledges the ‘complex, fast-moving and far-reaching ethical and economic issues that cannot be addressed by data-protection laws alone’.
To this end, a Centre for Data Ethics and Innovation has been established in the UK – complete with a two-year strategy setting out its aims and ambitions, which largely involves cross-industry collaboration, greater transparency, and continuous governance.
Numerous other countries – from Canada to Australia – have expressed a formal commitment to addressing the challenges facing AI, data protection, and content moderation. However, on a broader international level, the Organisation for Economic Co-operation and Development (OECD) has established some well-respected Principles on Artificial Intelligence.
Set out in May 2019 as five simple tenets designed to encourage successful ‘stewardship’ of AI, these principles have since been co-opted by the G20 in its stance on AI.
They are defined as:
- AI should benefit people and the planet by driving inclusive growth, sustainable development and well-being.
- AI systems should be designed in a way that respects the rule of law, human rights, democratic values and diversity, and they should include appropriate safeguards – for example, enabling human intervention where necessary – to ensure a fair and just society.
- There should be transparency and responsible disclosure around AI systems to ensure that people understand AI-based outcomes and can challenge them.
- AI systems must function in a robust, secure and safe way throughout their life cycles and potential risks should be continually assessed and managed.
- Organisations and individuals developing, deploying or operating AI systems should be held accountable for their proper functioning in line with the above principles.
While not legally binding, the hope is that the level of influence and reach these principles have on a global scale will eventually encourage wider adoption. However, given the myriad cultural and legal differences the tech sector faces, international standardisation remains a massive challenge.
The Right Approach – Hurt By Overt Complexity
All things considered, while the right strategic measures are no doubt in place for the most part – helping perpetuate discussion around the key issues – the effectiveness of many of these regulations largely remains to be seen.
Outwardly, many nations seem to share the same top-line attitudes towards AI and content moderation – and their necessity in reducing harmful content. However, applying policies from specific countries to global content is challenging and adds to the overall complexity, as content may be created in one country and viewed in another.
This is why the use of AI machine learning is so critical in moderation – algorithms can be trained to do all of the hard work at scale. But it seems the biggest stumbling block in all of this is a lack of clarity around what artificial intelligence truly is.
As one piece of Ofcom research notes, there’s a need to develop ‘explainable systems’ as so few people (except for computer scientists) can legitimately grasp the complexities of these technologies.
The problem posed in this research is that some aspects of AI – namely neural networks which are designed to replicate how the human brain learns – are so advanced that even the AI developers who create them cannot understand how or why the algorithm outputs what it does.
While machine learning moderation doesn’t delve as far into the ‘unknowable’ as neural networks, it’s clear to see why discussions around regulation persist at great length.
But, as is the case with most technologies themselves, staying ahead of the curve from a regulatory and commercial standpoint is a continuous improvement process. That’s something that won’t change anytime soon.
New laws and legislations can be hard to navigate. Besedo helps businesses like yours get everything in place quickly and efficiently to adhere to these new legislations. Get in touch with us!
Scammers are unrelenting. And smart. They’re active right throughout the year. This means there’s no particular season when online marketplace and classified site owners need to be extra vigilant. The pressure’s always on them to maintain user safety.
However, scammers know when and how to tailor their activities to maximise opportunities. That’s why they’ll often latch onto different events, trends, seasons, sales, and other activities throughout the year – using a variety of techniques to lure in users, under the guise of an offer or piece of information.
With so much going on in 2020 – from the Tokyo Olympics to the US election – scammers will almost certainly be more active than usual. Here’s what consumers and marketplaces need to be aware of this year.
If you want to learn more about the specific scam spikes, visit our scam awareness calendar where we predict spikes on a month-by-month basis.
When the nights draw in and temperatures drop, many begin to dream of sunnier climes and set about searching for their next holiday.
But whether it’s a summer booking or winter getaway, price is always an issue. Cue thousands of holiday comparison sites, booking portals, and savings sites. While many of these are legitimate outfits, often the convoluted online booking experience – as a consequence of using aggregation sites – can confuse would-be travellers.
They’re right to be cautious. As with buying any other goods or services online, even the most reputable travel sites can fall victim to scammers – who advertise cheap flights, luxury lodgings at 2-star prices, and ‘free’ trips that lure victims into attending high-pressure timeshare sales pitches.
If in doubt, customers should always book through the best-known travel sites, pay using the site’s verified portal (rather than a link sent via email or a direct bank transfer), and check that the company they’re actually paying for their holiday is accredited by an industry body (such as ATOL in the UK).
From Valentine’s Day to Easter; Halloween to Hanukkah – seasonal scams return with perennial menace year-after-year. Designed to capitalise on themed web searches and impulse purchases, fraudsters play the same old tricks – and consumers keep falling for them.
Charity scams tend to materialise around gift-focused holidays, like Thanksgiving in the US, as well as at Christmas. Anyone can fall victim to them – such as the recent case of NFL player Kyle Rudolph, who gave away his gloves after a high-scoring game for what he thought was a charity auction, only to discover they were being sold on eBay a few days later.
Another popular seasonal scam is phishing emails offering limited-time discounts from well-known retailers, as well as romance scams (catfishers) in which some are prepared to cultivate entire relationships online with others simply to extract money from them.
The general rule with any of these is to be wary of anyone offering something that seems too good to be true – whether it’s a 75% off discount or unconditional love. Scammers prey on the vulnerable.
A whole summer of soccer is scheduled for June and July this year – thanks to the upcoming UEFA European Football Championship (Euro 2020) and the Copa America; both of which will run at the same time, on opposite sides of the world.
You’d expect fake tournament tickets and counterfeit merchandise to be par for the course where events like these are concerned – and easily detectable. But the reality is that many fraudulent third-party sites are so convincing, buyers keep falling for the same scams seen in previous years.
If in doubt, customers should always purchase from official websites — such as UEFA online and Copa America. While Euro 2020 tickets are sold out for now (over 19 million people applied for tickets), they’ll become available to buy again in April for those whose teams qualified during the playoffs.
While third-party sites are the biggest culprits, marketplace owners should be extra vigilant where users are offering surplus or cheap tickets to any games at all. Although, given the prices the tickets sell for, you’d be forgiven for thinking that the real scammers are the official vendors themselves.
The Summer Olympic Games is no stranger to scandals – of the sporting variety. In the same way as the soccer tournaments referenced above, fake tickets tend to surface in the run-up to the games themselves – on ‘pop-up’ sites as well as marketplaces.
Telltale signs of a scam include vendors asking to be paid in cryptocurrencies (such as Bitcoin), official-sounding domain names (that are far from official), as well as phishing emails, malware, and ransomware – all designed by scammers looking to cash in on the surrounding media hype and immediate public interest that high-profile events bring.
In addition to scams preceding the games, advice issued just prior to the 2016 Rio Olympics urged visitors to be wary of free public WiFi – at venues, hotels, cafes, and restaurants – and to take other online security precautions, such as using a Virtual Private Network (VPN) alongside antivirus software.
Lessons learned from the 2018 Winter Olympics in Pyeongchang shouldn’t be ignored either. Remember the ‘Olympic Destroyer’ cyber attack that shut down the event’s entire IT infrastructure during the opening ceremony? There was little anyone could do to prevent it (so advanced was the attack and so slick its coordination), but it raised many questions around cybersecurity generally – which have no doubt informed best practice elsewhere.
Also, visitors should avoid downloading unofficial apps or opening emails relating to Olympics information – unless they’re from an official news outlet, such as NBC, the BBC, or the Olympic Committee itself.
Probing Political Powers
While those in the public eye may seem to be the most at risk, ordinary citizens are too. We have Facebook and Cambridge Analytica to thank for that.
Despite this high-profile case, and even though political parties must abide by campaigning rules and data protection laws – such as GDPR – exist to protect our data, it seems more work needs to be done, by social media companies and governments alike.
But what can people do? There are ways to limit the reach political parties have, such as opting out of practices like micro-targeting and being more stringent with social media privacy settings; beyond that, good old-fashioned caution and data hygiene are encouraged.
To help spread this message, marketplaces and classified sites should continue to remind users to change their passwords routinely, exercise caution when dealing with strangers, and advocate not sharing personal data off-platform with other users – regardless of their assumed intent.
Sale Of The Century?
From Black Friday to the New Year Sales – the end of one year and the early part of the next is a time when brands of all kinds slash the prices of excess stock – clearing inventory or paving the way for the coming season’s collection. It’s also a time when scammers prey upon online shoppers’ frenzied search for a bargain or last-minute gift purchase.
As we’ve talked about in previous blogs, scammers operating on online marketplaces grow increasingly creative – posting multiple listings for the same items, changing their IP addresses, or simply advertising usually expensive items at low prices to dupe those looking to save.
Prioritising Content Moderation
The worrying truth is that scammers are becoming increasingly sophisticated in the techniques they use. For online marketplace owners, failing to address these problems can directly impact their site’s credibility, user experience, safety, and the trust users have in their service.
Most marketplaces are only too well aware of these issues, and many are doing a great deal to inform customers of what to look out for and how to conduct more secure transactions online.
However, actions speak louder than words – which is why many are now actively investing in content moderation, using dedicated expert teams and machine-learning AI, with the latter adding particular value for larger marketplaces.
Keeping customers informed around significant events and holidays – like those set out above – ensures that marketplaces are seen as transparent and active in combating fraud online.
This also paints sites in a favourable light when it comes to attracting new users, who may stumble upon a new listing in their search for seasonal goods and services.
Ultimately, the more a site does to keep its users safe, the more trustworthy it’ll be seen as.
You’d be forgiven for thinking that ensuring users aren’t subjected to bad content on dating sites and online marketplaces means waging war on trolling, nudity, and unsavoury content.
Sure, that’s a large part of it, but the fact is bad content has a broader meaning – it’s anything designed to harm or deceive users; images that can negatively impact their user experience, break their trust, or even – worst-case scenario – put them at risk of theft or abuse.
As a result, marketplaces and dating site owners need to ensure they’re aware of the potential outcomes bad images pose. Let’s take a look at the most common types of bad images and consider the impact on both dating app and marketplace users.
The trouble with watermarks is that they often look bad. Sure, they can be positioned subtly on an image, but they still detract from the image’s focus. Nevertheless, they’re used by many vendors – often to avoid paying the seller’s fee for using an online marketplace.
For example, someone selling a high-priced item (or numerous items of the same price, like a TV, computer, tablet, or phone) may try to circumvent site policies by including their email address, website URL, or phone/WhatsApp number in the watermark itself.
The likelihood is, however, that those who try to lure users away are scammers – compromising user safety (and user trust too) as they’re directed away from legitimate marketplaces.
On marketplaces, watermarks on profile pictures are used mainly in the same way as on product photos. On dating sites, however, their use is far more frequent and, more often than not, promotes escort services and prostitution. Watermarks are used similarly in 1-to-1 chats – to send contact information in a way that text filters can’t detect.
eBay banned watermarks a couple of years ago, initially stating they would monitor pictures – before quickly reneging on this to simply condemn and discourage watermarks rather than police the site itself.
Presumably, this change of heart was prompted by the seller community – or more specifically, the image creators. From a photographer’s or designer’s point of view, watermarks prevent the misuse of their work and preserve their copyright. And while watermarks don’t stop images from being copied, creators can use services like Google Image Search and TinEye to monitor misuse.
Instead of watermarks, one alternative is to provide only low-resolution images – particularly where product photography is concerned. Another is to put copyright information in the image metadata.
Ultimately, watermarks can make images look clumsy and inauthentic – even though they’re designed to make them look more ‘official’. From a user-experience perspective, they disrupt the overall image, masking the complete picture. And for the marketplaces and dating sites themselves, watermarks that lure users away from the platform (via embedded contact information) mean lost fees – and if everyone did that, there’d be a major problem.
An ongoing problem for many marketplaces, duplicate listings are something of a grey area in terms of their legitimacy. While many vendors mean no malice – other than doubling their chances of a sale – the wider impact is that duplicate ads and images degrade the browsing and search experience.
But their troublesome nature doesn’t end there, unfortunately. The use of duplicate images is a tactic long practised by scammers – as product images on online marketplaces and as profile pictures on dating sites.
Duplicate images are commonly used by those selling counterfeit items – often featuring ‘real’ product images taken from genuine sources and repurposed on public marketplaces. Similarly, dating sites are all too aware of ‘catfishing’ and of profiles using images taken from stock photo libraries or even modelling agencies. It’s not just false advertising that users need to be aware of – but romance scammers too.
While duplicate listing scanners exist, most of these are text-based. More sophisticated image detectors do exist, and big marketplaces offer their own detection algorithms too. But many of these are still relatively rudimentary and open to misuse (case in point: Facebook Marketplace which is notoriously easy to ‘hack’).
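One common building block for image-based duplicate detection is a perceptual hash such as dHash, which compares the brightness of adjacent pixels so that near-identical images produce near-identical hashes. The sketch below is illustrative: to stay dependency-free it assumes the image has already been resized to a 9x8 grid of grayscale values, a step real pipelines do with an image library.

```python
# Sketch of near-duplicate image detection with a difference hash (dHash).
# The "images" here are 2D lists of grayscale values already resized to
# 9x8 pixels; a real pipeline would resize with an image library first.

def dhash(pixels):
    """Build a 64-bit hash: one bit per left-vs-right brightness comparison."""
    bits = 0
    for row in pixels:                 # 8 rows
        for x in range(8):             # 9 columns -> 8 comparisons per row
            bits = (bits << 1) | (1 if row[x] > row[x + 1] else 0)
    return bits


def hamming(a, b):
    """Count differing bits; a small distance means a likely duplicate."""
    return bin(a ^ b).count("1")


# Two nearly identical gradient "images" and one very different one.
img_a = [[x * 10 for x in range(9)] for _ in range(8)]
img_b = [[x * 10 + 1 for x in range(9)] for _ in range(8)]    # slightly brighter
img_c = [[(9 - x) * 10 for x in range(9)] for _ in range(8)]  # reversed gradient

print(hamming(dhash(img_a), dhash(img_b)))  # 0: flagged as a duplicate
print(hamming(dhash(img_a), dhash(img_c)))  # 64: clearly different
```

Because the hash reflects brightness structure rather than exact bytes, it survives recompression and small edits – exactly the manipulations scammers use to dodge exact-match duplicate scanners.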
As a result, many marketplaces are offering greater clarity in terms of how they define duplicates and won’t allow vendors to list the same item in different categories. eBay sternly warns that offenders will see a loss of visibility for their listings. After all, their site’s trust is at stake – which ultimately leads to fewer conversions.
Also, much as with watermarks, duplicate images can ruin the user experience – confusing customers as to which is the ‘real’ or original listing of the product they’re considering buying. And on dating sites, well, there can’t be more than one person with the same profile picture, can there? Honesty counts for a great deal – and people can look very different from one picture to the next – so including a variety of different images is essential for this reason too.
Love or hate the idea, facial recognition technology is growing in sophistication. It’s already being used in security tech – for everything from unlocking phones to crossing borders. But while Facebook might be making leaps and strides in facial recognition, on marketplaces and dating sites images of faces remain problematic from a content perspective.
As we’ve discussed already, where images of people – especially faces – are concerned, honesty is always the best policy. On dating sites, in particular, users often use images that make them look more attractive – often using different filters to enhance their appearance.
However, when there are a lot of people in a photograph, it’s often hard to tell who the profile owner is. This has obvious complications on dating sites – where users could easily be misled. They could begin contact with one person thinking they’re another – something that could be disastrous for both the user and the dating site, because, again, misconceptions can break the bond of trust.
Couple this with the proliferation of deepfakes and face/profile image searches, and the problem gains another, more complex layer – meaning there’s not just a threat to a user’s experience; their safety is at risk too.
In online marketplaces, this isn’t as big a problem – except that images of people, or more specifically their faces, distract from the product itself. Vendors should therefore include as few people as possible in product photography, or none at all if they can help it.
Wherever users can upload their own content, there can be no denying that pornography, nudity, and sex-related images will appear – in both online marketplaces and dating sites.
Where affairs of the heart (or libido) are concerned, consenting adults are free to share pictures of whatever body parts they like best; but for the most part – on public forums and in private chats alike – such content is unwanted. And when that’s the case, it’s user harassment.
Harassment (of the pictorial and verbal variety) has become entrenched in dating app culture, largely as a result of male behaviour toward women (check the Instagram account ‘ByeFelipe’ for some prime examples). Efforts to get rid of it have spawned a whole new wave of female-initiated dating services, such as Bumble.
However, even this doesn’t prevent lewd images from being shared, which is why additional services are needed. Bumble’s Private Detector, for instance, detects nudity, blurs it, and warns users that a picture or video message may be pornographic when it lands in their chat feed.
Anything nudity-related is naturally more common on dating sites than marketplaces, but marketplaces aren’t immune. Profile photos can often be revealing (which may or may not be ok, depending on the site), and of course, as mentioned above, ‘escort’ services may advertise using images that push the boundaries.
The effect? Failing to keep users safe from overtly sexual images is a big problem. As mentioned before, it breaks the trust established between user and site. Unsolicited nudity may now be frequent on dating sites, but that doesn’t make it acceptable. And where online marketplaces are concerned, user-generated content that contains nudity damages the site’s reputation.
However, it’s also essential to maintain a balanced view and offer a specific definition of what constitutes nudity on your own site – which might vary depending on the nature of your website.
Picture Of Success
All in all, you’re not going to be able to stop your users from seeing awful content. When users innocently browse a marketplace or look at dating profiles, there’s no guarantee that the images they’ll see will be legitimate, tasteful, or even legal.
What you can do, though, as a site owner is to ensure your site offers the right policies, definitions, and appropriate courses of action. Moderation is crucial to avoid the proliferation of bad images on your site. But it’s no easy task when it relies on user-generated content.
That’s why online content moderation tools are critical to helping online marketplaces and dating sites detect unwanted images and remove them instantly. At Besedo, we combine AI image moderation with human moderation to efficiently tackle the propagation of inappropriate or undesirable images you don’t want on your site.
Ready to find out the online dating trends for 2020? Here’s what industry experts are predicting for the coming year.
2019 is drawing to a close, and with it begins a new decade.
It’s hard to remember that in 2010, Tinder didn’t even exist. Ten years later, dating apps have never been more popular, and making romantic connections has never been easier thanks to developments in technology and the maturity of the industry.
Dating trends are popping up all the time, and online dating is shedding its taboo, now universally accepted as a way of meeting new people. With many current and upcoming changes – from hyper-niche platforms to video and new legislation – the dating industry is in full swing.
We have turned to experts in the online dating field to figure out what we should be looking out for in the coming year. Here are their expert predictions for the online dating industry in 2020.
What online marketplace trends are we going to see in 2020? Here’s what 8 industry experts predict for the coming year.
2019 is quickly coming to an end, and what an exciting year it has been for online marketplaces!
Marketplaces have been disrupting the way we shop with the rise of the conscious consumer looking for more ways to be sustainable.
As users’ expectations grow and user experience has become highly prioritized, marketplaces have been creative in finding ways to add value and services to their platforms. Convenience has been the keyword for online marketplaces to acquire and retain their customers.
Which trends will shape the marketplace industry in 2020? What can you expect to change in the online marketplace industry in the coming year? We gathered eight predictions from marketplace experts and professionals to see what’s coming up in the industry.
The circular economy is in full swing, and for an excellent reason: with finite natural resources on the planet and the demand for resources exceeding what the earth can regenerate each year, we simply cannot carry on with our current linear production path.
The circular economy aims to reduce the consumption of natural resources by reusing, repairing, upgrading and recycling products, giving items we’ve fallen out of love with a chance for another day in a new home.
The resale of used items – an integral part of the circular economy – is booming as people grow more accustomed to buying second hand. Resale has grown 21 times faster than the retail apparel market over the past three years and is on its way to becoming larger than fast fashion by 2028, according to a thredUP and GlobalData report.
And what better place than online marketplaces for second-hand trade? Indeed, the rise of the conscious consumer has become a boon for marketplaces. Through the reselling of used items, marketplaces are playing a considerable role in the circular economy and its sustainability goals. In France in 2018, Le Bon Coin users potentially saved 7.7 million tons of greenhouse gas emissions and 431,992 tons of plastic, according to a report by Schibsted and Adevinta.
Buying and selling second-hand items is favored for numerous reasons. If people can find what they want for less money, they’re likely to purchase an item even if it’s been used. Additionally, rising awareness of the climate crisis and sustainability issues over the past few years has boosted second-hand selling and buying. Producing less waste and consuming resources responsibly is climbing higher on people’s agendas, in turn increasing the popularity of online marketplaces.
Increased risks for your online marketplace
However, with that rise, comes increased risks for your online marketplace, as more fraudsters will be attracted to your site seeking to deceive your users.
Fraud can jeopardize your user trust and ultimately scare off your customers. In the luxury resale market especially, the sale of counterfeit goods has increased over the years. It’s then essential for online marketplaces to establish strong user trust, particularly in the trade of secondhand items.
In a two-sided marketplace, buyers have to trust that sellers describe the items on sale accurately and that they will eventually receive their goods. In the same way, sellers have to trust that buyers provide payment in good and due form.
You should make sure to protect your user trust and tackle these challenges head-on by putting efficient and accurate content moderation processes in place.
Featuring poor or fraudulent content on your marketplace has a cost. Fraudulent activity on your marketplace can have disastrous effects on your reputation and the trust users have in your platform.
Second-hand is not a fad, and resale is going mainstream. Marketplaces need to find new ways of dealing with fraud as demand for second-hand trade grows.
To learn more about how you can protect your users from fraud, have a look at our article about the different types of content moderation we offer to protect your brand.
When Facebook CEO, Mark Zuckerberg recently came under fire for the company’s admittedly tepid approach to political fact-checking (as well as some revelations about just what constitutes ‘impartial press’), it became clear that where content moderation is concerned, there’s still a big learning curve – for large and small companies.
So given that a company like Facebook with all of the necessary scale, money, resources, and influence, struggles to keep on top of moderation activities – what chance do smaller online marketplaces and classified sites have?
When the stakes are so high, marketplaces need to do everything they can to detect and remove negative, biased, fraudulent, or just plain nasty content. Not doing so will seriously damage their credibility, popularity, and ultimately, their trustworthiness – which, as we’ve discussed previously, is a surefire recipe for disaster.
However, we can learn a lot from the mistakes of others and by putting the right moderation measures in place. Let’s take a closer look at the cost of bad content and at ways to prevent it from your online marketplace.
The cost of fraudulent ads
Even though we live in a world in which very sophisticated hackers can deploy some of the most daring and devastating viruses and malware out there – from spearphishing to zero-day attacks – there can be little doubt that the most common scams still come from online purchases.
While there are stacks of advice out there for consumers on what to be aware of, marketplace owners can’t solely rely on their customers to take action. Being able to identify the different types of fraudulent ads – as per our previous article – is a great start, but for marketplace owners, awareness goes beyond mere common sense. They too need to take responsibility for their presence – otherwise, it’ll come with a cost.
Having content moderation guidelines or community rules that give your employees clear advice on how to raise the alarm on everything from catfishers to Trojan ads is crucial too. However, beyond any overt deception or threatening user behavior, the very existence of fraudulent content negatively impacts online marketplaces: it gradually erodes the sense of trust they have worked so hard to build, resulting in lowered conversion rates and, ultimately, reduced revenue.
One brand that seems to be at the center of this trust quandary is Facebook. It famously published a public version of its own handbook last year, following a leak of its internal handbook. While these take a clear stance on issues like hate speech, sexual, and violent content; there’s little in the way of guidance on user behavior on its Marketplace feature.
The fact is, classified sites present a unique set of moderation challenges – that must be addressed in a way that’s sympathetic to the content forms being used. A one-size-fits-all approach doesn’t work. It’s too easy to assume that common sense and decency prevail where user-generated content is concerned. The only people qualified to determine what’s acceptable – and what isn’t – on a given platform are the owners themselves: whether that relates to ad formats, content types, and the products being sold.
Challenging counterfeit goods
With the holiday season fast approaching, and two of the busiest shopping days of the year – Black Friday and Cyber Monday – just a few weeks away, one of the biggest concerns online marketplaces face is the sale of counterfeit goods.
It’s a massive problem: one that’s projected to cost $1.8 trillion by 2020. And it’s not just dodgy goods sites should be wary of; there’s a very real threat of being sued by an actual brand for millions of dollars if sites enable vendors to use their name on counterfeit products, as was the case when Gucci sued Alibaba in 2015.
However, the financial cost is compounded by an even more serious one – particularly where fake electrical items are concerned.
According to a Guardian report, research by the UK charity, Electrical Safety First shows that 18 million people have mistakenly purchased a counterfeit electrical item online. As a result, there are hundreds of thousands of faulty products in circulation. Some faults may be minor; glitches in Kodi boxes and game consoles, for example. Others, however, are a potential safety hazard – such as the unbranded mobile phone charger which caused a fire at an apartment in London last year.
The main issue is the presence of fraudulent third-party providers setting up shop on online marketplaces, advertising counterfeit products as the genuine article.
Staying vigilant on issues affecting consumers
It’s not just counterfeit products that marketplaces need to counter; fake service providers can be just as tough to crack down on.
Wherever there’s misery, there’s opportunity. And you can be sure someone will try to capitalize on it. Consider the collapse of package holiday giant, Thomas Cook, a couple of months ago – which saw thousands of holidaymakers stranded and thousands more have their vacations canceled.
Knowing consumer compensation would be sought, a fake service calling itself thomascookrefunds.com quickly set to work gathering bank details, promising to reimburse those who’d booked holidays.
While not an online marketplace-related example per se, cases like this demonstrate the power of fake flags planted by those intent on using others’ misfortune to their own advantage.
Similarly, given the dominance of major online marketplaces, as trusted brands in their own right, criminals may even pose as company officials to dupe consumers. Case in point: the Amazon Prime phone scam, in which consumers received a phone call telling them their bank account had been hacked and they were now paying for Amazon Prime – before giving away their bank details to claim a non-existent refund.
While this was an offline incident, Amazon was swift to respond with advice on what consumers should be aware of. In this situation, there was no way that moderating site content alone could have indicated any wrongdoing.
However, it stands to reason that marketplaces should have a broader awareness of the impact of their brand, and a handle on how the issues affecting consumers should be aligned with their moderation efforts.
Curbing illegal activity & extremism
One of the most effective ways of ensuring the wrong kind of content doesn’t end up on an online marketplace or classifieds site is to use a combination of AI moderation and human expertise to accurately detect criminal activity, abuse, or extremism.
However, in some cases, it’s clear that those truly intent on making their point can still find ways around these restrictions. In the worst cases, site owners themselves will unofficially enable and advise users on ways to circumvent their site’s policies for financial gain.
This was precisely what happened at the classifieds site Backpage. It transpired that top executives at the company – including the CEO, Carl Ferrer – didn’t just turn a blind eye to the advertisement of escort and prostitution services, but actively encouraged the rewording and editing of such ads to give Backpage ‘a veneer of plausible deniability’.
As a result – along with money laundering charges and the hosting of child sex trafficking ads – not only was the site taken down for good, but officials were jailed, following Ferrer’s admission of guilt for all of these crimes.
While this was all conducted knowingly, sites that are totally against these kinds of actions, but don’t police their content effectively enough, are putting themselves at risk too.
Getting the balance right
Given the relative ease with which online marketplaces can be infiltrated, can’t site owners just tackle the problem before it happens? Unfortunately, that’s not the way they were set up. User-generated content has long been regarded as a bastion of free speech, consumer-first commerce, and individual expression. Trying to quell that would completely negate their reason for being. A balance is needed.
The real problem may be that ‘a few users are ruining things for everyone else’, but ultimately marketplaces can only distinguish between intent and context after content has been posted. Creating a moderation backlog when there’s such a huge amount of content isn’t a viable option either.
Combining man & machine in moderation
While solid moderation processes are crucial for marketplace success, relying on human moderation alone is unsustainable. For many sites, it’s simply not possible to review every single piece of user-generated content in real time.
That’s why online content moderation tools and technology are critical to helping marketplace owners identify anything suspicious. By combining AI moderation with human moderation, you’re able to efficiently find the balance between time-to-site and user safety – which is what we offer here at Besedo.
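A common way to structure that combination is threshold-based routing: automate the clear-cut cases and reserve human attention for the ambiguous middle. The sketch below is a minimal illustration of the idea, with the risk scores and both thresholds invented for the example:

```python
from enum import Enum

class Decision(Enum):
    APPROVE = "approve"
    REJECT = "reject"
    HUMAN_REVIEW = "human_review"

# Assumed thresholds; real platforms tune these per content category.
AUTO_APPROVE_BELOW = 0.10   # risk under which content goes live immediately
AUTO_REJECT_ABOVE = 0.90    # risk over which content is blocked outright

def route_listing(risk_score: float) -> Decision:
    """Route a listing based on an AI model's risk score (0 = safe, 1 = risky).

    Clear-cut cases are automated to keep time-to-site low; ambiguous
    ones go to a human moderator to keep users safe.
    """
    if risk_score < AUTO_APPROVE_BELOW:
        return Decision.APPROVE
    if risk_score > AUTO_REJECT_ABOVE:
        return Decision.REJECT
    return Decision.HUMAN_REVIEW
```

Widening the automated bands speeds up time-to-site at the cost of more model mistakes; narrowing them shifts the load back onto human reviewers.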
Ultimately, the cost of bad content – or more specifically, not moderating it – isn’t just a loss of trust, customers, and revenue. Nor is it just a product quality or safety issue. It’s also the risk of enabling illegal activity, distributing abusive content, and giving extremists a voice. Playing a part in perpetuating this comes at a much heavier price.
This past September, Global Dating Insights (GDI) – the leading source of news and information for the online dating industry – gathered the international dating industry during an engaging conference in London.
With renowned speakers and insightful presentations of more than twenty leading industry-disruptors, the conference tackled some of the industry’s most important questions and exciting foreseeable trends of the future.
The dating industry might be facing challenges with retention and engagement, but online dating sites are increasingly creative in retaining customers and continually improving their user experience.
Here is our recap of the GDI Conference in London 2019, and the trends emerging in the near future.
Bringing the online dating world into the real one
Bringing online and offline together with real-life experiences such as members events and marketing campaigns has become an important strategy for many dating sites.
Online dating sites need to keep up with the latest consumer and demographic trends – that is, what their users are seeking – to improve retention and experience. Thus, many sites and apps have gone the route of making their dating apps feel more like a community.
Millennials and Gen Z are authentic experience seekers. Building on that insight, UK dating app Clikd has found creative ways of engaging its demographic. The company started organizing events to bring its users together: for one night at a venue, attendees meet 20 fellow users in search of their ideal partner, and a winning pair is picked to jet off on a lavish holiday together.
Clikd has also launched its popular marketing campaign, ‘the Clikd Summer Internship’ – the world’s best internship to find love. The winning applicant is paid to go on 10 dates over 10 weeks and produce content to engage fellow users, with plenty of other perks along the way.
For selective dating site the Inner Circle, offline events help build stronger brand loyalty through deeper connections with users, according to an interview with co-founder Michael Krayenhoff.
Venturing into real-life experiences is a trend that has proven to work for businesses to strengthen their brands and match their users’ needs.
Engaging users through video
Standing out in the crowded dating industry is no small feat. Users often drift between two or three different dating apps daily, so it is essential for companies looking to overcome fierce competition to improve their user retention.
Video is dubbed the next big thing in the dating industry. Didier Rappaport, CEO of location-based app happn, has emphasized that video is the most significant future development in the industry, and many businesses are going down that route.
He said in an interview: “We need to allow people to hear the voice, to see the mannerisms and understand the person better than just looking at their picture,” adding that happn is working on developments to bring more real life into online dating with video interaction.
Video features are launching as we speak. The Meet Group’s app MeetMe has recently implemented a one-on-one video chat feature to facilitate confident connections and user safety before meeting. Members can start video-chatting with users they have already exchanged messages with to get a better sense of the person on the other side of the screen.
Expanding platforms’ value propositions, such as including video features, is an upcoming trend that will most likely improve user engagement and usage.
Making apps more human and creating better interactions
According to a study by eHarmony, 70% of American singles are looking for a serious relationship. It is no surprise, then, that singles are looking for value in dating – not just mindless swiping anymore.
Many value-driven, and niche, online dating apps have popped up in the last few years and grown exponentially, privileging quality over quantity. It’s not just about the profile picture anymore.
For instance, Neargroup puts personalities before pictures by matching users before they can see each other’s pictures – ending the profile-picture swiping craze.
Another example of a value-driven app is Say Allo, also present at the conference: a self-proclaimed ‘relationship app’ for mature singles focusing on compatible connections without wasting time swiping away.
Focusing on quality over quantity, dating app Once follows the slow-dating trend by providing its users with only one match per day.
Dating fatigue and burnout are now so common that they have become a new retention challenge for apps to tackle. Many companies have taken action against certain online behaviors.
Ghosting – ignoring matches and leaving messages unanswered after chatting or going on a date – has long been an issue for singles on dating apps. It is also an issue for the apps themselves, driving disillusioned users to delete their accounts. To tackle the problem, some companies have launched anti-ghosting features. Dating app Hinge has rolled out a feature dubbed “Your Turn” pushing users to answer their abandoned matches.
Similar strategies are used by apps Bumble and Badoo to avoid the ghosting scourge threatening their retention and usage.
Online dating safety comes first
Another challenge for dating companies is increasing the number of women on dating sites, which still have a majority of male users. Among the reasons are fear of bad encounters and inappropriate experiences, such as indecent pictures or sexual harassment. According to one study, 18% of participants across dating services reported having had an issue with another user in the past.
Bumble CEO Whitney Wolfe Herd asked in an interview: “Why is it allowed digitally when it’s not allowed in the streets? People are operating on their phone. We need to keep the internet safe.” To attract women, new ideas are being tested to give them more control over their dating experiences, including video interactions.
In contrast to Chatroulette, whose community is notorious for its lack of safety and the unfortunate experiences reviewed online, dating sites are betting on restricted video features to strengthen their communities and make their users – especially women – feel safer before meeting a potential love interest.
The Online Dating Association (ODA) is an international nonprofit organization dedicated to safety and standardizing best practices in the online connection space. At the GDI conference, the ODA emphasized the importance of setting ethical frameworks and standards for the industry as authorities and governments will demand more control and more regulations may come.
This is something online dating sites will need to pay close attention to. Stay ahead of the game and learn more about liability and moderation regulations in our interview with law professor Eric Goldman.
Dating apps thrive not merely by using technology to enhance user experience and retention, but also by creating safety features and guidelines to protect their users and brand reputation.
As any app (or online service) provider knows – in their quest to hit an all-important network effect – it’s not just downloads and user numbers that indicate success. Revenue generation is ultimately needed to ensure longevity.
Dating apps have established some of the most forward-thinking and innovative monetization methods in technology today. But finding a perfectly matching monetization strategy for your app or dating site means adopting a method that reflects its content, style, and user experience.
Luckily, there are lots of different tried and true monetization strategies out there already. Although they broadly fall into two major categories – in which the user pays or a third party pays – there are many different variations.
Here are some ways dating site owners can monetize their operations or improve their current strategy.
Advertising: Great When There’s Scale
Allowing other brands to advertise on your site has been part of the online world since the first sites went up. A natural extension of the broadcast media commercial model, passing the cost onto third party advertisers allows dating sites and apps to offer services for free: albeit for the price of the user’s attention.
Ad formats themselves come in all shapes and sizes – from simple PPC text ads to full-page interstitials, as well as native ads (more consistent with a site’s overall inventory), in-line carousel ads, in-feed content stream ads; among many others. Revenue is either generated via clicks, views, or transactions.
Notably, dating apps offer higher click-through rates and eCPMs (effective cost per thousand impressions) than games or other types of apps. Despite this, brands still need to work harder to make an impact, as consumers have grown weary of – and resistant to – digital advertising.
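The eCPM arithmetic itself is simple: revenue earned per thousand ad impressions served. A small sketch, with the figures purely illustrative:

```python
def ecpm(revenue: float, impressions: int) -> float:
    """Effective cost per mille: ad revenue earned per 1,000 impressions."""
    if impressions == 0:
        return 0.0
    return revenue / impressions * 1000

# Illustrative figures: $50 earned across 20,000 impressions
# gives an eCPM of $2.50.
```

Comparing eCPM across placements or apps is how publishers judge which inventory is actually worth the screen space.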
Where online dating apps and sites are concerned, third party commercial affiliations range from the sublime to the ridiculous. Some pairings – like Tinder’s Dominos and Bud Light beer partnerships – might appear odd at first but, considering the importance of food and drink in the dating/socializing scene, actually make perfect sense.
From a business perspective, campaigns like these are a testament to a dating app’s ability to engage certain demographics (usually millennials) at scale; demonstrating the pulling power of a specific dating platform.
However, it’s not necessarily a technique that can be relied on to monetize a digital dating service from its very inception. Other methods are much more effective at doing that – often by selling their features and benefits. But this involves the cost being pushed back onto the user.
Subscriptions: Luring Users Behind The Paywall
Subscriptions ain’t what they used to be. Consumers are a lot more reluctant to part with their cash if they can’t see a genuine benefit from the service from the very outset.
For some, a better user experience is enough to sway them to part with a little cash each month. However, for others – given that so many ‘free’ dating apps exist (admittedly of varying quality) – unless they can clearly see what they’ll be getting for their money, they’ll take their chances elsewhere.
To overcome this, dating sites and apps offer varying degrees of ‘membership’ which can seem a little muddled to the uninitiated. So let’s consider the main contenders.
Firstly there’s the ‘free and paid app versions’ model: in which the free version has limited functionality, meaning the user must upgrade to fully benefit. Stalwarts like OkCupid and Plenty of Fish were among the first pioneers here – but many others champion this model too, including EliteSingles, Jaumo, Zoosk, Grindr and HowAboutDating – offering monthly and annual subscriptions.
The ‘Freemium’ model offers a similar experience – providing basic functionality for free – such as finding and communicating with others. However, other perks are available for an additional cost.
Badoo’s ‘Superpowers’ feature is probably the best known: it lets users see who ‘liked’ them and added them to their favorites, gives access to invisible mode, highlights their messages – and removes ads. In fact, the popularity of Tinder’s ‘Rewind’ feature (taking back your swipe) led the company to start charging for it via its Tinder Plus and Tinder Gold packages. Bumble Boost, Hinge’s Preferred Membership, and Happn’s Premium are other scope-widening freemium services worth mentioning too.
A slight variation is the ‘free app with in-app purchases’ model. In addition to greater functionality – like a broader search radius and more daily matches – users can buy virtual and actual gifts and services. For example, Plenty Of Fish lets users buy in-app ‘Goldfish’ credits to send virtual gifts to their potential dates – essentially an icebreaker.
However, those who don’t want to pay, but are keen to test a few additional dating app perks, can often complete in-app tasks for limited-time access to premium accounts. Users are usually presented with an ‘offerwall’ detailing tasks to complete and the rewards to be reaped. MeetMe’s rewarded videos are a great example, as are rewarded surveys, which seem to be becoming increasingly common – and were trialed by dating app Matcher (now Dangle) a while back.
Activities like these highlight dating sites’ key asset: their audience data. Given that 15% of Americans use dating services and that the average user dedicates around 8 minutes to every session, the opportunity is real for those that achieve a certain scale.
But you can’t just sell data – can you?
Data Monetization: Insights For Sale
The sale of user data is a big no-no when specific information is involved (remember Cambridge Analytica?). But when the user grants consent and the data remains anonymous, well, that’s a different story.
Companies operating in EU countries need to abide by GDPR regulations or risk severe penalties, and other international data security initiatives, such as the EU-US Privacy Shield Framework are held in high esteem. So how can dating sites use their rich data sources as a revenue generation tool?
The only kind of data that can be sold is non-personal data – with a user’s consent. Even then, the type of data source is restricted to basic parameters: device types, mobile operator, country, screen size – among others.
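In practice, that restriction means stripping every record down to the permitted fields and selling only aggregates. A minimal sketch of the idea, with the field whitelist taken from the parameters listed above and the record layout invented for the example:

```python
from collections import Counter

# Only non-personal fields may leave the platform; everything else
# (names, emails, message content) is discarded before aggregation.
SELLABLE_FIELDS = {"device_type", "mobile_operator", "country", "screen_size"}

def anonymized_report(records: list[dict]) -> dict[str, Counter]:
    """Aggregate per-field value counts across user records,
    ignoring any field not on the sellable whitelist."""
    report: dict[str, Counter] = {}
    for field in SELLABLE_FIELDS:
        report[field] = Counter(r[field] for r in records if field in r)
    return report
```

Because only counts survive, no individual user can be picked out of the output – which is what makes this kind of data sellable with consent in the first place.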
The good news is that there’s significant demand for all of this data – from market researchers across many different sectors for a range of purposes; including optimizing user experiences and understanding buying choices.
On another positive note, according to one research survey, 95% of respondents are content to use apps that collect anonymous usage statistics.
However, unless your dating app has more than 50,000 daily active users, it won’t offer a large enough pool to draw from, and finding a buyer will prove difficult.
Which Monetization Strategy Works Best?
All things considered, as with many types of online businesses, the greater the combination of monetization methods, the more profit there is to be had. Perhaps that can explain Tinder’s phenomenal global success.
But in isolation, each method has its drawbacks. Advertising only reaps a reward when a service offers scale; otherwise, where’s the value for brands? Conversely, charging users for a new service can be tricky to justify – unless the cost unlocks some additional never-seen-before feature. And without scale, charging marketers for data insights is pretty much impossible.
What is crucial, however, from the very outset, is that dating platforms establish a strong, dedicated user base. This means doubling down on trust and user safety, and finding ways to keep users engaged.
Despite the many positive things about dating sites, for some, the negative connotations persist. While sites and apps are a lot more conscious of preventing these, as with any platform that relies on user-generated content, the risk of users being catfished, shown inappropriate content, or defrauded is ever-present.
However, there are lots that digital dating platforms can do to build trust in their platforms and boost conversions. Content moderation is just one area – but it’s one that any dating service looking to expand its user base can’t ignore.
Ultimately there’s no substitute for getting the service right, knowing your users’ wants and needs (and there are many different dating services!), and developing a safe, secure and engaging environment for them to interact in. With these established, and when active usage hits a critical mass, monetization becomes a natural next step.