    Self-regulation is never easy. Most of us have, at some point, set ourselves New Year’s resolutions, and we all know how hard it can be to set effective rules for our behavior and stick to them consistently. Online communities and platforms founded in the ever-evolving digital landscape may also find themselves in a similar predicament: permitted to self-regulate yet struggling to consistently provide protection for users.

    Governments have noticed.

    Two decades of differing standards and approaches to online user safety have left them scratching their heads, wondering how to protect users without compromising ease of use and innovation.

    Yet, with the pandemic driving more consumers to these platforms to shop, date, and connect in a socially distanced world, the opportunity for fraudulent, harmful, and upsetting content to spread has also grown. As a result, the era of self-regulation, and specifically the freedom to decide how much content to moderate, is coming to an end. In fact, during the first lockdown in 2020, the UK fraud rate alone rose by 33%, according to research from Experian.

    In response, legislation such as the Online Safety Bill and the Digital Services Act is set to change the way platforms are allowed to approach content moderation. These actions have been prompted by a rapid growth in online communities, resulting in a rise in online harassment, misinformation, and fraud. This often affects the most vulnerable users: statistics from the British government published last year, for example, suggest that one in five children aged 10-15 now experience cyberbullying.

    Some platforms have argued that they are already doing everything they can to prevent harmful content and that the scope for further action is limited. Yet innovative new solutions, expertise, and technology, such as AI, can help platforms ensure such content does not slip through the net of their moderation efforts. Platforms that act now have an opportunity to get on the front foot in tackling these issues and safeguarding their reputations.

    It’s important to get ahead in the content moderation game. YouTube, for example, only noticed the problem when advertisers such as Verizon and Walmart pulled ads that had appeared next to videos promoting extremist views.

    Faced with reputational and revenue damage, YouTube was forced to get serious about preventing harm, disabling some comment sections and protecting kids with a separate, more limited app. It’s a cautionary tale: when platforms are focused on other priorities, such as improving search, monetization, and user numbers, content moderation can easily become an afterthought until it’s too late.

    The Online Safety Bill: new rules to manage social media chaos

    In the UK, the Online Safety Bill will hold big tech responsible on the same scale at which it operates. The legislation will be social media-focused, applying to companies which host user-generated content that can be accessed by British users, or which facilitate interactions between British users. The duties that these companies will have under the Online Safety Bill will likely include:

    • Taking action to eliminate illegal content and activity
    • Assessing the likelihood of children accessing their services
    • Ensuring that mechanisms to report harmful content are available
    • Addressing disinformation and misinformation that poses a risk of harm

    Companies failing to meet these duties will face hefty fines of up to £18m or 10% of global revenue.

    The Digital Services Act: taking aim at illegal content

    While the Online Safety Bill targets harmful social content in the UK, the Digital Services Act will introduce new rules to create a safer digital space across the EU. These will apply more broadly, forcing not just social media networks, but also e-commerce sites, dating platforms, and, in fact, all providers of online intermediary services to remove illegal content.

    However, illegal content has yet to be precisely defined: many expect it to cover harmful content as well as fraudulent content, such as listings for counterfeit goods or content that seeks to mislead consumers, like fake reviews. This means that marketplaces may become directly liable if they do not correct the wrongdoings of third-party traders.

    How to get ahead of the legislation

    Online communities might be worried about how to comply with these regulations. Ultimately, though, compliance should be seen as an opportunity to protect their customers while also building brand loyalty, trust, and revenue. Finding the right content moderation best practices, processes, and technology, in addition to the right expertise and people, will be the cornerstone of remaining compliant.

    Businesses often rely on either turnkey AI solutions or entirely human teams of moderators, but as the rules of operation are strengthened, bespoke solutions that use both AI and human intervention will be needed to achieve the scalability and accuracy that the new legislation demands. In the long term, the development of more rigorous oversight for online business – in the EU, the UK, and elsewhere across the world – will benefit companies as well as users.
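    To make that concrete, here is a deliberately simplified sketch of how a hybrid setup often works in practice: an automated classifier handles the clear-cut cases, and anything it is unsure about is escalated to a human moderator. Everything in it, from the classifier stand-in to the thresholds and example listings, is a hypothetical illustration, not a description of any particular product or API.

```python
# Hypothetical hybrid moderation sketch: automate only the confident calls,
# escalate the rest to humans. The model, thresholds, and example listings
# are placeholders for illustration only.
from dataclasses import dataclass

@dataclass
class Decision:
    action: str        # "approve", "reject", or "escalate"
    confidence: float

# Assumed thresholds: automate a decision only when the model is very sure either way.
AUTO_REJECT_AT = 0.95
AUTO_APPROVE_AT = 0.95

def score_content(text: str) -> float:
    """Stand-in for an ML model returning P(content violates policy)."""
    risky_terms = ("counterfeit", "wire transfer", "free gift card")
    return 0.99 if any(term in text.lower() for term in risky_terms) else 0.05

def moderate(text: str) -> Decision:
    risk = score_content(text)
    if risk >= AUTO_REJECT_AT:
        return Decision("reject", risk)                   # clear violation: remove automatically
    if (1.0 - risk) >= AUTO_APPROVE_AT:
        return Decision("approve", 1.0 - risk)            # clearly safe: publish automatically
    return Decision("escalate", max(risk, 1.0 - risk))    # uncertain: queue for a human moderator

if __name__ == "__main__":
    for listing in ("Hand-made wooden toys, ships from Düsseldorf",
                    "Counterfeit watches, payment by wire transfer only"):
        print(moderate(listing), "<-", listing)
```

    In a real deployment, the interesting work sits in where those thresholds go and how quickly human decisions are fed back to improve the model; that tuning is what determines whether moderation scales without sacrificing the accuracy regulators will expect.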

    In the end, most, if not all, platforms want to enable consumers to use services safely, all the time. Browsing at a toy store in Düsseldorf, purchasing something from Amazon, making a match on a dating app, or connecting on a social network should all come with the same level of protection from harm.

    When everyone works together a little bit harder to make that happen, it turns from a complex challenge into a mutual benefit.

    Ahem… tap, tap… is this thing on? 🎙️

    We’re Besedo and we provide content moderation tools and services to companies all over the world. Often behind the scenes.

    Want to learn more? Check out our homepage and use cases.

    And above all, don’t hesitate to contact us if you have questions or want a demo.
