
The Era of Self-Regulation Is Coming to an End


    Self-regulation is never easy. Most of us have, at some point, set ourselves New Year’s resolutions, and we all know how hard it can be to impose effective rules on our own behavior and stick to them consistently. Online communities and platforms operating in the ever-evolving digital landscape can find themselves in a similar predicament: permitted to self-regulate, yet struggling to consistently protect their users. Governments have noticed. Two decades of differing standards and approaches to online user safety have left them scratching their heads, wondering how to protect users without compromising ease of use and innovation.

    Yet, with the pandemic driving more consumers to these platforms to shop, date, and connect in a socially distanced world, the opportunity for fraudulent, harmful, and upsetting content has also grown. As a result, the era of self-regulation – and specifically the freedom to choose one’s own degree of content moderation – is coming to an end. During the first lockdown in 2020, the UK fraud rate alone rose by 33%, according to research from Experian.

    In response, legislation such as the Online Safety Bill and the Digital Services Act is set to change the way platforms are allowed to approach content moderation. These actions have been prompted by a rapid growth in online communities, resulting in a rise in online harassment, misinformation, and fraud. This often affects the most vulnerable users: statistics from the British government published last year, for example, suggest that one in five children aged 10-15 now experience cyberbullying.

    Some platforms have argued that they are already doing everything they can to prevent harmful content and that the scope for further action is limited. Yet innovative new solutions, expertise, and technology such as AI can help platforms ensure such content does not slip through the net of their moderation efforts. Platforms have an opportunity to get on the front foot in tackling these issues and safeguarding their reputations.

    And getting ahead in the content moderation game is important. YouTube, for example, only sat up and took notice of the issue when advertisers such as Verizon and Walmart pulled adverts because they were appearing next to videos promoting extremist views. Faced with reputational and revenue damage, YouTube was forced to get serious about preventing harm by disabling some comments sections and protecting kids with a separate, more limited app. This is a cautionary tale: when platforms are focused on other priorities such as improving search, monetization, and user numbers, it is easy to treat content moderation as an afterthought until it’s too late.

    The Online Safety Bill: new rules to manage social media chaos

    In the UK, the Online Safety Bill will hold big tech responsible on the same scale at which it operates. The legislation will be social media-focused, applying to companies which host user-generated content that can be accessed by British users, or which facilitate interactions between British users. The duties that these companies will have under the Online Safety Bill will likely include:

    • Taking action to eliminate illegal content and activity
    • Assessing the likelihood of children accessing their services
    • Ensuring that mechanisms to report harmful content are available
    • Addressing disinformation and misinformation that poses a risk of harm

    Companies failing to meet these duties will face hefty fines of up to £18m or 10% of global revenue, whichever is greater.

    The Digital Services Act: taking aim at illegal content

    While the Online Safety Bill targets harmful social content in the UK, the Digital Services Act will introduce a new set of rules to create a safer digital space across the EU. These will apply more broadly, forcing not just social media networks but also e-commerce sites, dating platforms, and, in fact, all providers of online intermediary services to remove illegal content.
    However, the scope of illegal content has yet to be settled: many expect it to cover not only harmful content but also fraudulent content, such as listings for counterfeit goods, or even content that seeks to mislead consumers, like fake reviews. This means that marketplaces may become directly liable if they do not correct the wrongdoings of third-party traders.

    How to get ahead of the legislation

    Online communities might be worried about how to comply with these regulations, but ultimately compliance should be seen as an opportunity to protect their customers while also building brand loyalty, trust, and revenue. Finding the right content moderation best practices, processes, and technology, in addition to the right expertise and people, will be the cornerstone of remaining compliant.

    Businesses often rely on either turnkey AI solutions or entirely human teams of moderators, but as the rules of operation are strengthened, bespoke solutions that use both AI and human intervention will be needed to achieve the scalability and accuracy that the new legislation demands. In the long term, the development of more rigorous oversight for online business – in the EU, the UK, and elsewhere across the world – will benefit companies as well as users.
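    As a rough illustration only: the sketch below (in Python, using a hypothetical score_listing function and made-up thresholds, not any particular vendor's API) shows one common way such hybrid setups are wired up, with the AI model handling clear-cut cases and routing ambiguous content to human moderators.

        # Minimal sketch of a hybrid moderation flow: an AI model scores each item,
        # and only uncertain cases are routed to a human review queue.
        # score_listing and the thresholds below are illustrative placeholders.
        AUTO_APPROVE_BELOW = 0.2   # model is confident the content is safe
        AUTO_REJECT_ABOVE = 0.9    # model is confident the content violates policy

        def route_content(item, score_listing):
            """Return 'approve', 'reject', or 'human_review' for a submitted item."""
            risk = score_listing(item)  # estimated probability the item is harmful or illegal
            if risk < AUTO_APPROVE_BELOW:
                return "approve"
            if risk > AUTO_REJECT_ABOVE:
                return "reject"
            # Ambiguous cases go to trained moderators, keeping accuracy high
            # while automation absorbs most of the volume.
            return "human_review"

        if __name__ == "__main__":
            fake_scorer = lambda item: 0.55  # dummy stand-in for a real classifier
            print(route_content({"text": "Brand-new phone, unbelievable price!"}, fake_scorer))
            # -> human_review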

    In the end, most, if not all, platforms want to enable consumers to use their services safely, all the time. Browsing at a toy store in Düsseldorf, purchasing something from Amazon, making a match on a dating app, or connecting on a social network should all come with the same level of protection from harm. When everyone works together, a little harder, to make that happen, a complex challenge turns into a mutual benefit.

    This is Besedo

    Global, full-service leader in content moderation

    We provide automated and manual moderation for online marketplaces, online dating, sharing economy, gaming, communities, and social media.
