The Future Of Dating Is Video: How Do You Keep Singles Safe Online?

When it comes to affairs of the heart – at a time when physical contact is off-limits – it’s time to get creative. And that’s exactly what online dating platforms are doing: using video interaction as the ‘date’ itself.

While it’s clear that dating is innovative, exciting, and evolving at a rapid pace, how can dating site owners ensure they keep users safe?

 

Necessity Breeds Dating Invention

Video dating is nothing new. Well, in its current, interactive form it’s brand new, but use of video as a way of introducing yourself to potential dating partners took off in the 1980s and 90s. But back then, agencies were involved. And like recruiters, it was their job to vet, interview, and engineer love matches based on compatibility and common likes and dislikes.

However, fast forward 35 years, and the ways in which we interact have shifted significantly. And they just keep innovating. Services like eHarmony, Tinder, and Bumble each offer their own unique approach to self-service matchmaking. And while social media platforms (Facebook Dating, anyone?) have been dipping their toes into the dating pool for a little while now, nothing groundbreaking has taken the sector by storm.

Most industry insiders saw the use of video as an ‘add-on’ to dating platforms but no-one was entirely sure how this would play out. And then, in March 2020, the COVID-19 pandemic hit. Lockdowns ensued internationally. Suddenly video took on a whole new role.

Communication evolved in one major direction – online video calls and meetings. Replacing face-to-face with face-to-screen encounters in a time of social distancing represents a huge cultural shift, unimaginable back in 2019.

Whether we’re learning at home or working remotely, how we stay connected has changed significantly. Substituting in-person conversations with video meetings is now par for the course.

Despite the ensuing Zoom fatigue, being advised to stay at home has undoubtedly led to a spike in online dating. And with traditional dating venues no longer a COVID-safe option, video dating has organically risen to the forefront.

 

Why Video Dating?

While not every dating site or user is engaging with video dating yet, many are trying it out.  But what are the benefits of video dating? If your online dating platform is not already providing that service, are your users missing out?

Compared with traditional online dating, video dating has some great benefits. The most obvious reason to choose video dating is that it enables participants to experience the social presence that’s lacking in written communication. As a result, it can feel much more real and authentic than just exchanging messages or swiping photos.

With a video date, users have that experience of getting to know someone more slowly, finding out if they’re a good match in terms of personality, sense of humour, and other qualities. This means if you don’t click with someone, you’re more likely to find out sooner. Particularly at a time when in-person meetings are restricted, this is a huge advantage in terms of making the leap to meeting in person.

But swapping a bar or restaurant for a video meeting carries a different set of risks for participants. And for online dating platforms, video dating poses tough new challenges for content moderation. Especially when it comes to livestream dating with an interactive audience.

 

Dating Live & In Public

Dating in front of a live audience is nothing new. In the 1980s, television dating shows like ‘Blind Date’ in the UK experienced huge popularity. Contestants performed in front of a live studio audience and put themselves at the mercy of the general public – and the tabloid press(!).

In the 2010s, the television dating game show-style format was revived – though it followed a wider trend for ‘reality TV’, with dating shows such as ‘Love Island’ emerging and growing in popularity. However, the legacies of these shows have been tainted by poor contestant vetting – a small number of participants even had previous convictions for sex offences – and by contestants suffering serious mental-health problems as a result of their appearance on the show.

Despite these warning signs, it seemed inevitable that dating-related entertainment would be adopted by interactive online technologies – and it has, in the form of livestream dating. Often described as ‘speed dating in a public forum’, the trend for watching and participating in live video dating seems a logical extension of platforms like Twitch and TikTok.

But sites like MeetMe, Skout, and Tagged aren’t just a way of making connections – they’re also an opportunity for daters to generate revenue. Some platforms even provide users with the functionality to purchase virtual gifts which have real monetary value.

Needless to say, these kinds of activities continue to raise questions about users’ authenticity – are they really dating in pursuit of love? This is why, over the last decade, many industries have made a conscious move towards authenticity in order to build better brand trust. The dating industry is no different, especially since – despite exponential growth – there are still major retention and engagement issues.

Video offers that sense of authenticity, particularly as we’re now so accustomed to communicating with trusted friends and family via live video.

Dating also has universal appeal, even to people already in committed relationships. There is an undeniable voyeuristic aspect to watching a dating show or watching live streamed daters. And of course there are inherent safety risks in that.

Like other interactive social technologies, the livestream dating trend carries its own intrinsic dangers in terms of mental health and user experience. And just like any other interactive social media, there are always going to be users who are there to make inappropriate comments and harass people.

That’s where content moderation comes into play.

 

So How Can Content Moderation Support Safer Dating?

One-to-one video dating and livestream dating are happening right now. Who knows how they will evolve?

Setting your brand apart in an already crowded dating industry is becoming more complicated in a time when social media technologies are rapidly evolving. How will you stay ahead of the curve and keep your users safe?

Of course, video moderation is not the only challenge you’re going to face. Running an online dating platform also means dealing with other unwanted user-generated content, including the types listed below (a simple triage sketch follows the list):

  • Romance scams
  • Prostitution
  • Online harassment
  • Catfishing
  • Profanity
  • Nudity
  • Image quality
  • Underage users
  • Escort promotion
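For illustration only, here is a minimal sketch of how some of these text-based categories might be triaged automatically before human review. The category names and patterns are hypothetical examples, not Besedo’s actual rules, and image-based issues such as nudity or underage users need different (typically machine-learning-driven) approaches.

```python
import re

# Hypothetical patterns for a first-pass text triage.
# Real moderation combines rules like these with ML models and human review.
CATEGORY_PATTERNS = {
    "romance_scam": re.compile(r"\b(western union|wire transfer|send money)\b", re.I),
    "off_platform_contact": re.compile(r"\b(whatsapp|telegram|kik)\b", re.I),
    "escort_promotion": re.compile(r"\b(escort|incall|outcall)\b", re.I),
}


def flag_profile_text(text: str) -> list[str]:
    """Return the names of the categories whose patterns match the text."""
    return [name for name, pattern in CATEGORY_PATTERNS.items() if pattern.search(text)]


# Example: a single message carrying two risk signals.
print(flag_profile_text("Message me on WhatsApp and send money by wire transfer."))
# -> ['romance_scam', 'off_platform_contact']
```

In practice a match would queue the profile or message for a trained moderator rather than trigger an automatic removal.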

After all, brand trust means a better user experience. And a better user experience increases user lifetime value – and revenue.

On average, 1 in 10 dating profiles created is fake. Scammers and inappropriate content hurt your platform’s reliability. Left unaddressed, undesirable content undermines user trust and can take a heavy toll on your acquisition and retention.

Getting moderation right doesn’t have to be daunting – but it does mean taking a leap in terms of your overall digital transformation strategy, and adding AI and machine learning to your service.

With an all-in-one package from Besedo, you can get your content moderation in order across multiple areas. It’s built on over 20 years’ experience and now has manual video moderation capabilities.

This means you can now review videos with play, pause, timestamp, and volume controls. More importantly, you can delete videos which don’t meet your site’s standards for user-generated content. Take a look at our short video guide to discover more.

 

Make dating online safer. Find out more about working with us and request a demo today.

 


By Martin Wåhlstrand

Regional Sales Director – Americas

Why creating sustainable growth means looking beyond the digital present

Over the past decade, it has become common to suggest that every company is now a tech company.

The exponential growth in digital usage quickly outgrew what we traditionally think of as the technology sector and, for users, the agility of the internet didn’t stay confined to the online world. Technology has shifted expectations about how everything can or should work. Soon, companies selling everything from furniture to financial services started to look and act more like innovative tech companies, finding new ways to solve old problems through digital channels.

In other words, business leaders seeking to guarantee growth turned to digital technology – to the point that, now, the Chief Technology Officer is a key part of the C-suite.

After a year when we’ve all relied on the internet more than ever, in every aspect of our lives, growth through digital has never been more apparent. For business, digital communication has at times been the only possible way of staying in touch with customers, and there’s no sign that the CEO’s focus on agility and technology is fading. In recent surveys, IBM found that 56% of CEOs are ‘aggressively pursuing operational agility and flexibility’, PwC found that they see cyber threats as the second biggest risk to business, and Deloitte found that 85% think the pandemic accelerated digital transformation.

If the exponential growth of digital has made every company a technology company, though, it has also made terms like ‘technology’ and ‘agility’ less useful. If every CEO is pursuing a digital strategy, that term must encompass a vast range of different ideas. As we look towards the next decade of growth – focused on managing the challenge of achieving more responsible and sustainable business along the way – we will need to think carefully about what comes next once digitalisation is universal.

Supercharged tech growth has skyrocketed user-generated content

Of course, the importance of agile technology has never been the tech itself, but what people do with it. For customers we’ve seen tech innovation create new ways of talking, direct access to brands, and large changes in how we consume media and make purchases.

As digital channels take on a greater share of activity than ever, one of the effects of an exponential growth in digital is an exponential growth in user-generated content (UGC).

This user-led interaction, from product reviews to marketplace listings to social interactions, fully embodies the agility that companies have spent the last decade trying to bring to their processes; because it is made by people, UGC is rapid, diverse, and flexible by default. While it may be too soon to say that every business will become a content business, it’s clear that this will become an increasingly important part of how businesses operate. Certainly, it’s already a major driving force for sectors as diverse as marketplaces, gaming, and dating.

A UGC business must be protected to maximise opportunity

In the move towards UGC, a business’s user interaction and user experience will have consequences across the organisation – from profit margin, to brand positioning, to reputational risk, to technological infrastructure. Across all of these, there will be a need to uphold users’ trust that content is being employed responsibly, that they are being protected from malign actors, and that their input is being used for their benefit. Turning content into sustainable growth, then, is a task that needs to be addressed across the company, not confined to any one business function.

Marketers, for instance, have benefited from digitalisation’s capacity to make the customer experience richer and more useful – but it has also introduced an element of unpredictability in user interactions. When communities are managed and shaped, marketers need to ensure that those efforts produce a public face in line with the company’s ethos and objectives.

While tech teams need to enable richer user interaction, their rapid ascent to become a core business function has left them under pressure to do everything, everywhere. Their innovation in how content is managed, therefore, needs a middle path between the unsustainable workload of in-house development and the unsustainable compromises of off-the-shelf tooling.

With the ultimate outcomes of building user trust being measured in terms of things like brand loyalty and lifetime user value, finance departments will also need to adapt to this form of customer relationship. The creation of long-term financial health needs investments and partnerships which truly understand how the relationship between businesses and customers is changing.

UGC as a vital asset for sustainable business growth

Bringing this all together will be the task needed to create sustainable growth – growth which is fit for and competitive in the emerging context of UGC, sensitive to the increasing caution that users will have around trusting businesses, and transparent about the organisation’s ethos, purpose, and direction. It will require not just investing in technology, but understanding how tech is leading us to a more interactive economy at every scale.

As digitalisation continues to widen and deepen, we may find UGC, and the trust it requires, becoming just as vital an asset for businesses as product stock or intellectual property. To prepare for that future and maximise their business growth from their UGC, businesses need to start thinking and planning today.


By Petter Nylander

CEO Besedo Global Services

If you keep your eye on content moderation as we do, you’ll be aware that the EU’s Digital Services Act (DSA) is on the road to being passed, after the European Commission submitted its proposals for legislation last December.

You’ll also know, of course, that the last year has been a tumultuous time for online content. Between governments trying to communicate accurately about the pandemic, a turbulent US election cycle, and a number of protest movements moving from social media to the streets, it’s felt like a week hasn’t passed without online content – and how to moderate it – hitting the headlines.

All of which makes the DSA (though at least partly by accident) extremely well-timed. With expectations that it will overhaul the rules and responsibilities for online businesses around user-generated content, EU member states will be keen to ensure that it offers an effective response to what many are coming to see as the dangers of unmanaged online discourse, without hindering the benefits of digitalized society that we’ve all come to rely on.

There’s a lot we still don’t know about the DSA. As it is reviewed and debated by the European Council and the European Parliament, changes might be made to everything from its definition of illegal content to the breadth of companies that are affected by each of its various new obligations. It’s absolutely clear, though, that businesses will be affected by the DSA – and not only the ‘Very Large Platforms’ like Google and Facebook which are expected to be most heavily targeted.

Many people looking at the DSA will instinctively think back to the last time the EU made significant new law around online business with the GDPR. The impact of that regulation is still growing, with larger fines being levied year-on-year, but it’s perhaps more important that internet users’ sense of what companies can or should do with data has been shifted by the GDPR. Likewise, the DSA will alter the terrain for all online businesses, and many industries will have to do some big thinking over the coming years as the act moves towards being agreed upon.

Content moderation, of course, is our expertise here at Besedo, and making improvements to how content is managed will be a big part of how businesses adapt to the DSA. That’s why we decided to help get this conversation started by finding out how businesses are currently thinking about it. Surveying UK-based businesses with operations in the EU across the retail, IT, and media sectors, we wanted to take the temperature of firms that will be at the forefront of the upcoming changes.

We found that, while the act is clearly on everyone’s radar, there is a lot of progress to be made if businesses are to get fully prepared. Nearly two-thirds of our respondents, for example, knew that the DSA is a wide-ranging set of rules which applies beyond social media or big tech. However, a similar proportion stated that they understand what will be defined as ‘illegal content’ under the act – despite the fact that that definition is yet to be finalized.

Encouragingly, we also found that 88% of respondents are confident that they will be ready for the DSA when it comes into force. For most, that will mean changing their approach to moderation: 92% told us that achieving compliance will involve upgrading their moderation systems, their processes, or both.

As the DSA is discussed, debated, and decided, we’ll continue to look at numbers like these and invite companies together to talk about how we can all make the internet a safer, fairer place for all its users. If you’d like to get involved or want insight on what’s coming down the road, our new research report, ‘Are you ready for the Digital Services Act?’, is the perfect place to start.

March marks the one-year anniversary of the WHO declaring COVID-19 a global pandemic. While vaccines are now being rolled out and a return to normality is inching closer, online trade is still heavily influenced and characterized by a year in and out of lockdown. And so are the content moderation challenges we meet in our day-to-day work with platforms across the globe.

Shortage in graphics cards increases electronics fraud.

Whether for work or entertainment, being homebound has caused people to shop for desktop computers at a level we haven’t seen for a decade. For the past 10 years, mobile-first has been preached by any business advisor worth listening to, but lockdowns have given desktop computers a surprising comeback and increased demands for PC parts.

The increased interest in PCs combined with the late 2020 release of the new console generation and the reduced production caused by pandemic mandated lockdowns has created an unexpected niche for scammers.

Google trend for buying graphics cards

We’re currently seeing a worldwide shortage of graphics cards, needed for both consoles and desktop computers, and scammers haven’t wasted a second to jump on the opportunity.

In March we’ve seen a significant increase in fraud cases related to graphics cards with gaming capabilities. In some cases, more than 50% of fraud cases we deal with have been related to graphics cards.

Puppy scams are still sky-high.

In March we post-reviewed puppy scams on 6 popular online marketplaces in the UK. We found that almost 50% of live listings showed signs of being fraudulent.

The pet trade has exploded since the beginning of the pandemic, and scammers are still trying to take advantage of those looking for new furry family members.

Sleeper accounts awaken.

Our moderators warn that this month they’ve seen an increase in sleeper accounts engaging in Trojan scams. The accounts post a low-risk item, then lie dormant for a while before they start posting high-value items. The method is used to circumvent moderation setups that only moderate the first items posted by new accounts.

High-risk items posted by these accounts are often expensive electronics in high demand, such as cameras or the Nintendo Switch.
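As a rough sketch of how a moderation queue could account for this pattern – rather than only reviewing an account’s first few listings – consider the rule below. The thresholds and field names are illustrative assumptions, not a description of any particular platform’s setup.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Illustrative thresholds - in practice these would be tuned per marketplace.
DORMANCY_THRESHOLD = timedelta(days=30)
HIGH_VALUE_THRESHOLD = 500.0   # in the marketplace's currency
FIRST_N_LISTINGS = 3


@dataclass
class Listing:
    account_id: str
    price: float
    posted_at: datetime


def should_moderate(listing: Listing, account_history: list[Listing]) -> bool:
    """Decide whether a new listing goes to the moderation queue.

    Keeps the usual 'review the first few listings from a new account' rule,
    but also flags the sleeper pattern: a long-dormant account suddenly
    posting a high-value item.
    """
    if len(account_history) < FIRST_N_LISTINGS:
        return True

    last_post = max(item.posted_at for item in account_history)
    went_dormant = listing.posted_at - last_post >= DORMANCY_THRESHOLD
    return went_dormant and listing.price >= HIGH_VALUE_THRESHOLD
```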

April is looking to be an interesting month in terms of content moderation challenges. With many countries tentatively opening up and others concerned about a third wave, we recommend that all marketplace owners keep a close eye on corona-related scams. From masks to fake vaccines to a potential incoming surge of forged corona passports, staying alert, keeping up to date, and keeping your moderators educated will be as important as ever.

If you need help reviewing your content moderation setup or are looking for an experienced team to take it off your hands, we’re here to help.

It’s been a long time since social media was simply a recreational diversion for its users. While the early days of social networks were dominated by the excitement of reconnecting with old school friends and staying in touch with distant relatives, they have continued to grow rapidly, and in the process have become embedded in every aspect of society.

Today, it’s unremarkable to hear a tweet being read out on the news – fifteen years ago when Twitter was founded, having social media form part of current affairs reporting would have been unimaginable. This growth has been so fast that it’s easy to believe that we have hit a ceiling and that these platforms couldn’t take center stage any more strongly than they already have.

Even though we’re just a couple of months in, 2021 is shaping up to be a year which, once again, proves that belief wrong. The fact that the gravitational pull of social media on the rest of the world is continuing to grow has enormous consequences for businesses: not just the platforms themselves, but every business that deals with user-generated content.

Content moderation: a high priority with high stakes

Late last year, Gartner predicted that “30% of large organizations will identify content moderation services for user-generated content as a C-suite priority” by 2024. It’s not hard to guess why it was on their radar. All of the biggest global stories of 2020 were marked, in one way or another, by the influence of social media.

Facing the pandemic, governments across the world needed to communicate vital health information with their citizens and turned to social media as a fast, effective channel – as did conspiracy theorists and fraudsters. Over the summer, Black Lives Matter protests swept America and spread globally, sparked by a viral video and driven by online organizing. Later in the year, the drama of the US Presidential election happened as much on Facebook and Twitter as it did on American doorsteps and the nightly news.

Across these events, and more, businesses have been at pains to communicate the right things in the right ways, always aware that missteps (even the mishandling of interactions with members of the public whose communication they cannot influence) will be publicized and indelible. As Gartner summarises, social media is “besieged by polarizing content, [and] brand advertisers are increasingly concerned about brand safety and reputational risk on these platforms”.

This year, social is driving the agenda

Content moderation is therefore becoming an essential tool for operating (as almost all companies now do) online. However, while the suggestion that it will rise to be a priority for 30% of C-suites over the next three years certainly isn’t modest, it already feels like Gartner was perhaps thinking too small.

We have since seen an attack on the US Capitol which was, in large part, organized by users on Parler; a mini-crisis on Wall Street spontaneously emerging from conversations on Reddit; and, most recently, an argument between Facebook and the Australian government which resulted in a number of official COVID-19 communications pages on the platform being temporarily blocked.

These are not just social media reactions to ongoing external stories – they are events driven by social media, with user-generated content at their heart. The power of social platforms to affect people, businesses, and society at large has not peaked yet.

That’s the context that the UK’s Online Safety Bill and the EU’s Digital Services Act are emerging into, promising to apply new rules and give governments greater influence. As we wait for such legislation to come into force, however, there are immediate questions to consider: how should social platforms move forward, and how should businesses mitigate their own risks?

The path forward for content moderation

These are fraught questions. One reason for the reticence of social media giants to speak openly about content moderation may be that, simply, outlining new processes for ensuring user safety could be taken as an admission of past failure. Another is that content moderation is too often seen as being just one small, careless step away from censorship – which is an outcome nobody wants to see. For businesses that rely on social, meanwhile, handling a flood of content across multiple platforms and their own sites can quickly become overwhelming and unmanageable.

For all of these challenges, the best way forward starts with having a more open conversation. Social media companies and other businesses founded on user-generated content, such as dating and marketplaces, have so far tended to be fairly quiet about innovating new content moderation approaches. We can say from experience, however, that in private many such businesses are actively seeking new technology and smarter approaches. As with any common goal, collaboration and shared learning would benefit all partners here.

It’s encouraging to see partnerships like the Global Alliance for Responsible Media sowing the seeds of these conversations, but more is needed. For our part, Besedo believes that the right technology and processes can make censorship-free moderation a reality. This is not just about the technical definition of censorship: it’s about online spaces that feel fair, allowing free speech but not hate speech within clear rules.

We also believe that good moderation will spread the benefits of social media and user-generated content to everyone. Ultimately, this is now a key part of how we buy, learn, work, and live, and everyone from multinationals to small businesses to end-users needs it to be safe. Finding new ways to answer the challenges of harmful content is in everyone’s best interests.

Amongst all of this, of course, one thing is certain: in 2021, content moderation will not be missing from anyone’s radar.

Every month we collect insights from the clients we work with, through external audits, and from mystery shopping on popular marketplaces across the world. The goal is to understand current global trends in online marketplace scams, fraud, and other content challenges, and to track how they evolve and change over time.

The information is shared with clients and used internally by our operations teams. Recurring trends are also used in the training of new content moderation specialists, to build new generic filters for Implio, and to support the training of AI models.

Here’s an overview of some of February’s moderation trends:

 

Courier frauds increased by 107%

In February we saw a concerning increase in “courier frauds” – 107% above normal levels. Courier fraud involves a scammer pretending to be interested in buying an item, then asking the seller to register at a fake courier site. Once the victim has registered, they’re asked to share their credit card information. To circumvent moderation, scammers often redirect the conversation off the marketplace, and the scam is carried out through off-platform communication channels like WhatsApp. However, with good moderation processes and awareness of how the fraudsters operate, users can be protected.
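For illustration, a simple keyword-and-link heuristic like the sketch below could surface this pattern for human review. The patterns, the marketplace domain, and the threshold are all hypothetical, and real setups pair such rules with ML models and trained moderators.

```python
import re

# Illustrative signals for surfacing possible courier fraud in buyer messages.
# Real moderation setups pair rules like these with ML models and human review.
COURIER_KEYWORDS = re.compile(r"\b(courier|shipping agent|delivery service)\b", re.I)
CARD_REQUEST = re.compile(r"\b(card number|cvv|credit card)\b", re.I)
OFF_PLATFORM = re.compile(r"\b(whatsapp|telegram|text me on)\b", re.I)
# 'example-marketplace.com' stands in for the platform's own domain.
EXTERNAL_LINK = re.compile(r"https?://(?!(?:www\.)?example-marketplace\.com)\S+", re.I)

SIGNALS = [COURIER_KEYWORDS, CARD_REQUEST, OFF_PLATFORM, EXTERNAL_LINK]


def needs_review(message: str, threshold: int = 2) -> bool:
    """Flag a conversation for human review when enough fraud signals co-occur."""
    score = sum(1 for pattern in SIGNALS if pattern.search(message))
    return score >= threshold


msg = ("I'd love to buy your sofa. Please register at "
       "http://fast-courier-pay.example and message me on WhatsApp.")
print(needs_review(msg))  # True: courier keyword + external link + off-platform ask
```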

 

New console releases are still a major scam driver.

Together with cell phones – which remain the top targeted category for scammers, at 39% of all scams – consoles continue to lead the challenge, constituting 24.66% of fraudulent cases. Most scams in these two categories are tied to the release of the new iPhone and the launch of the PlayStation 5. After rumors started floating around of a new Nintendo Switch release in 2021, we’ve also begun seeing scams related to the popular handheld console.

 

Marketplaces now a hub for exam cheats.

As lockdowns make physical tests an impossibility, we’ve seen a surge of offers to take tests and exams on behalf of others.

While the offers themselves may be genuine, the practice is unethical and if discovered could lead to students being expelled and a devaluation of the educational system. As such we generally recommend removing listings advertising these sorts of services.

 

Valentine’s Day scammers tried to be extra cuddly.

During the lead-up to Valentine’s Day, we did an audit of 6 popular, non-client marketplaces and saw a worrying number of scams. In particular, puppy scams were abundant. In one instance 90% of all puppy listings were fraudulent. The issue isn’t only limited to Valentine’s Day either.

Search trend for ‘buy puppy’

Due to pandemic enforced social distancing and recurring lockdowns, there’s been a rise in pet purchases over the past year and scammers are taking advantage. As such it pays to stay vigilant and keep an extra focus on pet-related listings and categories.

With this quick overview of current trends, we hope to provide you with the tools needed to focus your content moderation efforts where they’re most needed. If you would like input specifically for your site, feel free to reach out.

From dating sites and online marketplaces to social media and video games – content moderation has a huge remit of responsibility.

It’s the job of both AI and human content moderators to ensure the material being shared is not illegal or inappropriate: always acting in the best interest of the end-users.

And if you’re getting the content right for your end-users, they’re going to want to return and hopefully bring others with them. But is content moderation actually a form of censorship?

If every piece of content added to a platform is checked and scrutinized – isn’t ‘moderation’ essentially just ‘policing’? Surely, it’s the enemy of free speech?

Well actually, no. Let’s consider the evidence.

 

Moderating content vs censoring citizens

Content moderation is not a synonym for censorship. In fact, they’re two different concepts.

Back in 2016, we looked at this in-depth in our Is Moderation Censorship? article – which explains the relationship between content moderation and censorship. It also gives some great advice on empowering end-users so that they don’t feel censored.

But is it really that important in the wider scheme of things?

Well, content moderation continues to make headline news due to the actions taken by high-profile social media platforms, like Twitter and Facebook, against specific users – including, but not limited to, the former US President.

There’s a common misconception that the actions taken by these privately-owned platforms constitute censorship. In the US, this can be read as a violation of First Amendment rights in relation to free speech. However, the key thing to remember here is that the First Amendment protects citizens against government censorship.

That’s not to say privately-owned platforms have an inalienable right to censorship, but it does mean that they’re not obliged to host material deemed unsuitable for their community and end-users.

The content moderation being enacted by these companies is based on their established community standards and typically involves:

  • Blocking harmful or hate-related content
  • Fact-checking
  • Labeling content correctly
  • Removing potentially damaging disinformation
  • Demonetizing pages by removing paid ads and content

These actions have invariably impacted individual users because that’s the intent – to mitigate content which breaks the platform’s community standards. In fact, when you think about it, making a community a safe place to communicate actually increases the opportunity for free speech.

“Another way to think about content moderation is to imagine an online platform as a real world community – like a school or church. The question to ask is always: would this way of behaving be acceptable within my community?”

It’s the same with online platforms. Each one has its own community standards. And that’s okay.

 

Content curators – Still culpable?

Putting it another way, social media platforms are in fact curators of content – as are online marketplaces and classified sites. When you consider the volume of content being created, uploaded, and shared, monitoring it is no easy feat. Take, for example, YouTube. As of May 2019, Statista reported that in excess of 500 hours of video were uploaded to YouTube every minute. That’s just over three weeks of content per minute!

These content sharing platforms actually have a lot in common with art galleries and museums. The items and artworks in these public spaces are not created by the museum owners themselves – they’re curated for the viewing public and given contextual information.

That means the museums and galleries share the content but they’re not liable for it.

However, an important point to consider is, if you’re sharing someone else’s content there’s an element of responsibility. As a gallery owner, you’ll want to ensure it doesn’t violate your values as an organization and community. And like online platforms, art curators should have the right to take down material deemed to be objectionable. They’re not saying you can’t see this painting; they’re saying, if you want to see this painting you’ll need to go to a different gallery.

 

What’s the benefit of content moderation to my business?

To understand the benefits of content moderation, let’s look at the wider context and some of the reasons why online platforms use content moderation to help maintain and generate growth.

Firstly, we need to consider the main reason for employing content moderation. Content moderation exists to protect users from harm. Each website or platform will have its own community of users and its own priorities in terms of community guidelines.

“Where there is an opportunity for the sharing of user-generated content, there is the potential for misuse. To keep returning to a platform or website, users need to feel a sense of trust. They need to feel safe.”

Content moderation can help to build that trust and safety by checking posts and flagging inappropriate content. Our survey of UK and US users showed that even on a good classified listing site, one-third of users still felt some degree of mistrust.

Secondly, ensuring users see the right content at the right time is essential for keeping them on a site. Again, in relation to the content of classified ads, our survey revealed that almost 80% of users would not return to the site where an ad lacking relevant content was posted – nor would they recommend it to others. In effect, this lack of relevant information was the biggest reason for users clicking away from a website. Content moderation can help with this too.

Say you run an online marketplace for second-hand cars – you don’t want it to suddenly be flooded with pictures of cats. In a recent example from the social media site Reddit, the subreddit r/worldpolitics started getting flooded with inappropriate pictures because the community was tired of it being dominated by posts about American politics and felt that moderators were frequently ignoring posts deliberately intended to gain upvotes. Moderating and removing the inappropriate pictures isn’t censorship; it’s directing the conversation back to what the community was originally about.

Thirdly, content moderation can help to mitigate against scams and other illegal content. Our survey also found that 72% of users who saw inappropriate behavior on a site did not return.

A prime example of inappropriate behavior is hate speech. Catching it can be a tricky business due to coded language and imagery. However, our blog about identifying hate speech on dating sites gives three great tips for dealing with it.

 

Three ways to regulate content

A good way to imagine content moderation is to view it as one of three forms of regulation. This is a model that’s gained a lot of currency recently and it really helps to explain the role of content moderation.

Firstly, let’s start with discretion. In face-to-face interactions, most people will tend to pick up on social cues and social contexts, which cause them to self-regulate – for example, not swearing in front of young children. This is personal discretion.

When a user posts or shares content, they’re making a personal choice to do so. Hopefully, for many users discretion will also come into play: will what I’m about to post cause offense or harm to others? Do I want others to feel offended?

Discretion tells you not to do or say certain things in certain contexts. We all get it wrong sometimes, but self-regulation is the first step in content moderation.

Secondly, at the other end of the scale, we have censorship. By definition, censorship is the suppression or prohibition of speech or materials deemed obscene, politically unacceptable, or a threat to security.

Censorship has government-imposed law behind it and carries the message that the censored material is unacceptable in any context because the government and law deem it to be so.

Thirdly, in the middle of both of these, we have content moderation.

“Unlike censorship, content moderation empowers private organizations to establish community guidelines for their sites and demand that users seeking to express their viewpoints are consistent with that particular community’s expectations.”

This might include things like flagging harmful misinformation, eliminating obscenity, removing hate speech, and protecting public safety. Content moderation is discretion at an organizational level – not a personal one.

Content moderation is about saying what you can and can’t do in a particular online social context.

 

So what can Besedo do to help moderate your content?

  • Keep your community on track
  • Facilitate the discussion you’ve built your community for (your house, your rules)
  • Allow free speech, but not hate speech
  • Protect monetization
  • Keep the platform within legal frameworks
  • Keep a positive, safe, and engaging community

All things considered, content moderation is a safeguard. It upholds the ‘trust contract’ users and site owners enter into. It’s about protecting users, businesses, and maintaining relevance.

The internet’s a big place and there’s room for everyone.

To find out more about what we can do for your online business, contact our team today.

If you want to learn more about content moderation, take a look at our handy guide. In the time it takes to read, another 4,000 YouTube videos will have been uploaded!

Self-regulation is never easy. Most of us have, at some point, set ourselves New Year’s resolutions, and we all know how hard it can be to put effective rules on our own behavior and stick to them consistently. Online communities and platforms founded in the ever-evolving digital landscape may also find themselves in a similar predicament: permitted to self-regulate, yet struggling to consistently provide protection for users. Governments have noticed. Different standards and approaches to online user safety during the last two decades have left them scratching their heads, wondering how to protect users without compromising ease of use and innovation.

Yet, with the pandemic giving rise to more consumers using these platforms to shop, date, and connect in a socially distanced world, the opportunity for fraudulent, harmful, and upsetting content has also risen. As a result, the era of self-regulation – and specifically the ability to use degrees of content moderation – is coming to an end. In fact, during the first lockdown in 2020, the UK fraud rate alone rose by 33%, according to research from Experian.

In response, legislation such as the Online Safety Bill and the Digital Services Act is set to change the way platforms are allowed to approach content moderation. These actions have been prompted by a rapid growth in online communities which has come with a rise in online harassment, misinformation, and fraud. This often affects the most vulnerable users: statistics from the British government published last year, for example, suggest that one in five children aged 10-15 now experience cyberbullying.

Some platforms have argued that they are already doing everything they can to prevent harmful content and that the scope for action is limited. Yet there are innovative new solutions, expertise, and technology – such as AI – which can help platforms ensure such content does not slip through the net of their moderation efforts. There is an opportunity to get on the front foot when tackling these issues and safeguarding their reputations.

And getting ahead in the content moderation game is important. For example, YouTube only sat up and took notice of the issue when advertisers such as Verizon and Walmart pulled adverts because they were appearing next to videos promoting extremist views. Faced with reputational and revenue damage, YouTube was forced to get serious about preventing harm by disabling some comments sections and protecting kids with a separate, more limited app. This is a cautionary tale: when platforms are focused on other priorities such as improving search, monetization, and user numbers, it can be easy to leave content moderation as an afterthought until it’s too late.

The Online Safety Bill: new rules to manage social media chaos

In the UK, the Online Safety Bill will hold big tech responsible on the same scale at which it operates. The legislation will be social media-focused, applying to companies which host user-generated content that can be accessed by British users, or which facilitate interactions between British users. The duties that these companies will have under the Online Safety Bill will likely include:

  • Taking action to eliminate illegal content and activity
  • Assessing the likelihood of children accessing their services
  • Ensuring that mechanisms to report harmful content are available
  • Addressing disinformation and misinformation that poses a risk of harm

Companies failing to meet these duties will face hefty fines of up to £18m or 10% of global revenue.

The Digital Services Act: taking aim at illegal content

While the Online Safety Bill targets harmful social content in the UK, the Digital Services Act will introduce a new set of rules to create a safer digital space across the EU. These will apply more broadly, forcing not just social media networks, but also e-commerce, dating platforms, and, in fact, all providers of online intermediary services to remove illegal content.
The definition of illegal content, however, has yet to be finalized: many propose that this will relate not only to harmful content but also to content that is fraudulent, that offers counterfeit goods, or even content that seeks to mislead consumers, such as fake reviews. This means that marketplaces may become directly liable if they do not correct the wrongdoings of third-party traders.

How to get ahead of the legislation

Online communities might be worried about how to comply with these regulations, but ultimately it should be seen as an opportunity for them to protect their customers, while also building brand loyalty, trust, and revenue. Finding the right content moderation best practice, processes, and technology, in addition to the right expertise and people, will be the cornerstone to remaining compliant.

Businesses often rely on either turnkey AI solutions or entirely human teams of moderators, but as the rules of operation are strengthened, bespoke solutions that use both AI and human intervention will be needed to achieve the scalability and accuracy that the new legislation demands. In the long term, the development of more rigorous oversight for online business – in the EU, the UK, and elsewhere across the world – will benefit companies as well as users.
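As a simple illustration of what combining AI and human intervention can look like in practice, one common pattern is to let a model handle only the clear-cut cases automatically and route everything in between to human moderators. The thresholds below are hypothetical, and this is a sketch of the routing idea rather than a description of any specific product.

```python
from enum import Enum


class Decision(Enum):
    APPROVE = "approve"
    REJECT = "reject"
    HUMAN_REVIEW = "human_review"


# Hypothetical confidence bands; real systems tune these per content type
# and per the accuracy the relevant legislation demands.
APPROVE_BELOW = 0.10   # model is confident the content is safe
REJECT_ABOVE = 0.95    # model is confident the content violates policy


def route(violation_probability: float) -> Decision:
    """Route content based on a model's estimated probability of a policy violation.

    Clear-cut cases are handled automatically; everything in between
    goes to a human moderator.
    """
    if violation_probability < APPROVE_BELOW:
        return Decision.APPROVE
    if violation_probability > REJECT_ABOVE:
        return Decision.REJECT
    return Decision.HUMAN_REVIEW
```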

In the end, most, if not all, platforms want to enable consumers to use services safely, all the time. Browsing at a toy store in Düsseldorf, purchasing something from Amazon, making a match on a dating app, or connecting on a social network should all come with the same level of protection from harm. When everyone works together, a little bit harder, to make that happen, it turns from a complex challenge into a mutual benefit.

From controversial presidential social media posts to an increased number of scams relating to the COVID-19 pandemic, 2020 has been a challenging year for everyone – not least content moderation professionals and online marketplace owners.

Let’s take a look back at some of the major industry stories of 2020 (so far – and who knows what December may yet bring…).

January

After a number of ill-fated decisions regarding content moderation, social media giant Facebook recorded its first fall in profits for five years.

In previous years, the company faced mounting criticism for its data sharing with Cambridge Analytica, as well as its failure to moderate political adverts for false content, and its handling of fake news during the 2016 elections.

However, in 2018, Facebook announced that efforts to toughen its privacy protections and increase content moderation would negatively impact profits – which was in fact the case. In January 2020, Facebook reported that the company had seen a 16% drop in profits across 2019 – despite significant increases in advertising revenue.

By the end of January, Facebook had named British human rights expert Thomas Hughes as the administrative leader of its new oversight board, set up to review user-generated and other content removed from its site. Hughes was quoted as saying: “The job aligns with what I’ve been doing over the last couple of decades – which is promoting the rights of users and freedom of expression”.

February

The following month, San Francisco’s Ninth Circuit Appeals Court ruled that YouTube had not breached the US constitution’s First Amendment when it decided to censor a right-wing channel.

The court ruled that YouTube, the world’s biggest video-sharing platform, is a private company and not a “public forum”, and therefore not subject to the First Amendment. The US Bill of Rights (1791) declares that the government will not abridge the freedom of speech in law.

However, this guarantee is between the US Government and its people – not private companies (except when they perform a public function) – meaning the ruling could have huge ramifications for future cases of freedom of speech online.

March

Back in March, we ran an article about protecting users from emerging coronavirus scams. As the pandemic took hold globally and lockdowns were put in place around the world, online scammers were deliberately exploiting vulnerable individuals. Scammers were charging exorbitant prices – for everything from hand sanitizer to fake medicine – and even offering non-existent loans via online marketplaces and advertising.

European regulators rallied by calling upon digital platforms, social media platforms, and search engines to unite against coronavirus-related fraud.

In an effort to try to coordinate efforts, the European Commissioner For Justice and Consumers, Didier Reynders, sent a letter to Facebook, Google, Amazon, and other digital platforms.

April

Despite these efforts, some third-party merchants managed to find a loophole on Amazon which enabled them to claim that products prevented coronavirus. The scammers managed to evade automated detection by inserting claims into product images. After being contacted by the Washington Post, Amazon subsequently removed the product listings.

Another casualty of the global pandemic was the short-term/holiday rentals sector. In April, bookings made through global property rental giant, Airbnb, were down by 85%, with cancellation rates at almost 90% – an estimated cost to the company of $1 billion.

In retail, there were inevitable winners and losers as a result of lockdown. Comscore reported that whilst an increase in remote working prompted rises for the home furnishings and grocery categories, the tickets and events sector understandably plummeted.

May

Following calls to step up its content moderation, as part of efforts to combat hate speech, Facebook partnered with 60 fact-checking organizations.

According to a company blog: “AI now proactively detects 88.8 percent of the hate speech content we remove, up from 80.2 percent the previous quarter. In the first quarter of 2020, we took action on 9.6 million pieces of content for violating our hate speech policies – an increase of 3.9 million.”

Facebook even took it one step further by targeting hate-speech memes, creating a database of multi-modal examples with which to train future AI moderation software.

Despite the negative impacts of coronavirus and lockdown on multiple sectors, Besedo reported on two e-marketplace success stories this month.

Germany’s number one classifieds site, eBayK revealed the strategies they employed to reach a record 40 million live ads in the middle of the pandemic. Over in Norway, FINN.no told us about how they managed to grow traffic during COVID – simply by supporting their users.

June

Halfway through the year, content moderation issues were at the forefront of the news again with Facebook forced to remove more than 80 posts by President Trump’s campaign team – which contained imagery linked to Nazism. The inverted red triangle, which was used to identify political prisoners in Nazi death camps, appeared in the posts without context.

In France, proposals for tough reforms on hate speech were reduced to a few moderate reforms after a group of 60 senators from ‘Les Républicains’ mounted a challenge through the French Constitutional Council.

The tougher reforms would have mandated platforms to remove certain types of illegal content within 24 hours of a user flagging it.

July

Facebook found itself in the spotlight again after a study was published by The Institute For Strategic Dialogue (ISD). The study revealed that Facebook accounts linked to the Islamic State group (ISIS) were exploiting loopholes in content moderation.

Using a variety of tactics, the terrorist group was able to exploit gaps in manual and automated moderation systems – and consequently gain thousands of views. It hacked Facebook accounts and posted tutorial videos, as well as blending content from news outlets – including real TV news and theme music.

Planned raids on other high-profile Facebook pages were also revealed. Facebook removed all of the accounts identified.

August

Targeting two of China’s biggest apps, President Trump signed special executive orders to stop US businesses from working with TikTok and WeChat – amid fears that the social networking services posed a threat to national security.

The President claimed that parent company, ByteDance, would give the Chinese government access to user data. He gave ByteDance 90 days to sell up (to American stakeholders) or face shutdown.

In August, Microsoft were mooted as front-runners for the buyout, but eventually dropped out of the race. Although set up as a fun video-sharing platform, TikTok has unwittingly become embroiled in conspiracy theories and hate-content. As other platforms have discovered, trying to moderate this content can be exceptionally complicated.

September

After a summer of lockdown, the world witnessed widespread calls for reforms to regulate online speech. Brookings noticed a discernible shift in emphasis from the protection of innovation to the safeguarding of citizens. Countries such as France, Germany, Brazil, and the US explored options for legislating content moderation.

Also this month, YouTube revealed it was bringing back teams of human moderators after AI systems were found to be over-censoring and doubling the number of incorrect takedowns. The company also acknowledged that AI alone failed to match the accuracy of human moderators.

Meanwhile, there was a seismic shift in the online marketplace sector, with Adevinta announcing their acquisition of eBay Classifieds Group to create the world’s largest online classified group.

October

Over a year after a US data scientist raised concerns about the way in which Instagram handles children’s data, Ireland’s Data Protection Commission (DPC) opened two further investigations, following fears that the contact details of minors were being leaked in exchange for free analytics. It also emerged that users who changed their account settings to ‘business’ had their contact details exposed.

October also saw eBay launch a sneaker authentication scheme in a bid to tackle counterfeits. Limited edition sneakers can be produced in batches of just a few thousand and sometimes only a dozen, driving up the resale value on online marketplaces.

Unfortunately, this has created a market for counterfeits, with one US Customs and Border Protection operation last year alone yielding over 14,000 fake Nikes. Products will have to pass through an authentication facility before being passed on to the buyer.

November

In November, Zoom became the latest online platform to come under fire for its content moderation practices.

This happened after the platform blocked public and politically sensitive events planned for its service – where it felt that users had broken local laws or its rules, which require users “to not break the law, promote violence, display nudity or commit other infractions”.

Zoom was accused of censorship in the debate surrounding Section 230 which gives online companies immunity from legal liability for user-generated content.

While December’s content moderation events are still underway, what’s become clear in recent months is that, given how much we all rely on online platforms – for everything from shopping to study, work, rest, and play – companies of all kinds continue to struggle with moderation.

Continued uncertainty doesn’t help – in fact it highlights vulnerabilities and loopholes. But at the very least, knowing where potential pitfalls lie enables them to better protect their users, which ultimately is at the heart of all good content moderation efforts.

This starts with having the right systems and processes in place.

The Christmas season is here, and while the festivities kick off, online retailers hold their breath and wait to see whether all of the preparations they have diligently made will pay off in revenue and sales during this ‘Golden Quarter’. Will the website be able to handle the extra demand? Will all orders be shipped before Christmas?

Yet the National Cyber Security Centre (NCSC) has highlighted another pressing concern which can have a lasting impact on revenue. Last week it launched a major awareness campaign called Cyber Aware, advising potential customers to be aware of an increase in fraud on online platforms this year. The campaign comes because millions of pounds are stolen from customers through fraud every year – including a loss of £13.5m from November 2019 to the end of January 2020, according to the National Fraud Intelligence Bureau.

Fraud is a major concern for marketplaces, which are aware of the trust and reputational damage that such nefarious characters on their platform can create. While consumer awareness and education can help, marketplaces know that keeping only one eye on the ball when it comes to fraud, especially within user-generated content (UGC), is not enough. Fraudulent activity deserves full attention and careful monitoring. Tackling fraud is not a one-off activity but a dedication to constant, consistent, rigorous, and quality moderation where learnings are continuously applied, for the ongoing safety of the community.

With that in mind, our certified moderators investigated nearly three thousand listings of popular items on six popular UK online marketplaces, in order to understand whether marketplaces have content moderation pinned down or whether fraudulent activity is still slipping through the net. After conducting the analysis during the month of November, including the busy Black Friday and Cyber Monday shopping weekend, we found that:

  • 15% of items reviewed showed signs of being fraudulent or dangerous; this rose to 19% on Black Friday and Cyber Monday
  • Pets and popular consumer electronics are particular areas of concern, with 22% of PlayStation 5 listings likely to be scams, rising to more than a third of PS5 listings flagged over the Black Friday weekend
  • 19% of iPhone 12 listings were also found to show signs of being scams
  • Counterfeit fashion items are also rife on popular UK marketplaces, with 15% of listings found to be counterfeits

The research demonstrates that, even after any filtering and user protection measures marketplaces have in place, a significant number of the products for sale leave customers open to having their personal details stolen or receiving counterfeit goods. We know that many large marketplaces have a solution in place already but are still allowing scams to pass through the net, while smaller marketplaces may not have thought about putting robust content moderation practices and processes in place.

Both situations are potentially dangerous if not tackled. While it is certainly a challenging process to quickly identify and remove problematic listings, it is deeply concerning that we are seeing such high rates of scams and counterfeiting in this data. Powerful technological approaches, using AI in conjunction with human analysts, can very effectively mitigate the threat these criminals pose. Ultimately, the safety of the user should be placed at the heart of every marketplace’s priorities. It’s a false economy to treat failsafe content moderation as too expensive a problem to deal with – in the longer term, addressing even the small amounts of fraud that slip through the net can have a large and positive long-term impact on the financial health of the marketplace through increased customer trust, acquisition, and retention.

2020 was a year we would not want to repeat from a fraud perspective – we have not yet won the battle against the criminals. As we move into 2021, we’ll be hoping to help the industry work towards a zero-scam future: one where we take the lessons of 2020 and apply them together to provide a better, safer community for users and customers, both for their safety and for the long-term, sustainable financial health of marketplaces.