
    The biggest challenge facing technology today isn’t adoption. It’s regulation. Innovation is moving so rapidly that the legal and regulatory implications are lagging behind what’s possible.

    Artificial Intelligence (AI) is one particularly tricky area for regulators to reach a consensus on, as is content moderation.

    With the two becoming increasingly crucial to all kinds of businesses – especially online marketplaces, sharing economy platforms, and dating sites – it’s clear that more needs to be done to ensure the safety of users.

    But to what extent are regulations stifling progress? Are they justified in doing so? Let’s consider the current situation.

    AI + Moderation: A perfect pairing

    Wherever there’s User Generated Content (UGC), there’s a need to moderate it – whether that means enforcing YouTube’s content rules or netting catfish on Tinder.

    Given the vast amount of content uploaded daily and the sheer volume of usage on a popular platform like eBay, it’s clear that while action must be taken, relying on human moderation alone is unsustainable.

    Enter AI – but not necessarily as most people will know it (we’re still far from sapient androids). Where content moderation is concerned, AI mainly involves machine learning algorithms, which platform owners can configure to filter out words, images, and video content that contravene policies, laws, and best practices.
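
    To make this concrete, below is a minimal, hypothetical sketch of the kind of configurable filter described here – not any particular platform’s or vendor’s implementation. The policy settings and the model object (assumed to be any text classifier exposing a scikit-learn-style predict_proba method) are illustrative assumptions only.

```python
# Hypothetical sketch: a platform-configurable moderation filter that combines
# owner-defined rules with a machine learning text classifier.
from dataclasses import dataclass, field


@dataclass
class ModerationPolicy:
    blocked_terms: set = field(default_factory=set)  # terms the platform owner filters outright
    remove_threshold: float = 0.8                    # ML confidence above which content is removed
    review_threshold: float = 0.5                    # borderline scores escalate to human review


def moderate(text: str, policy: ModerationPolicy, model) -> str:
    """Return 'remove', 'review' or 'approve' for a piece of user-generated content."""
    lowered = text.lower()

    # 1. Deterministic rules configured by the platform owner.
    if any(term in lowered for term in policy.blocked_terms):
        return "remove"

    # 2. Machine learning estimate of how likely the content is to be harmful
    #    (assumes a scikit-learn-style classifier exposing predict_proba).
    harmful_probability = model.predict_proba([text])[0][1]
    if harmful_probability >= policy.remove_threshold:
        return "remove"
    if harmful_probability >= policy.review_threshold:
        return "review"
    return "approve"
```

    In a setup like this, the rule list handles clear-cut violations, while the classifier thresholds decide when borderline content is removed automatically or escalated to a human moderator.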

    AI offers the scale, capacity, and speed needed to moderate huge volumes of content, and it limits the often-cited psychological harm that viewing and moderating harmful content inflicts on human moderators.

    Understanding the wider issue

    So what’s the problem? Issues arise when we consider content moderation on a global scale. Laws governing online censorship (and the extent to which they’re enforced) vary significantly between continents, nations, and regions.

    What constitutes ‘harmful’, ‘illicit’ or ‘bad taste’ isn’t always as clear-cut as one might think. And from a sales perspective, items that are illegal in one nation aren’t always illegal in another. A lot needs to be taken into account.

    But what about the role of AI? What objections could there be to software that provides huge economies of scale and operational efficiency, and protects people – both users and moderators – from harm?

    The broader context of AI as a technology also needs to be better understood. The technology raises several key ethical questions over its use and deployment – questions that, much like efforts to regulate content moderation, vary from country to country.

    To understand this better, we need to look at how different nations are addressing the challenges of digitalization – and what their attitudes are towards online moderation and AI.

    The EU: Apply pressure to platforms

    As an individual region, the EU arguably leads the global debate on online safety. However, the European Commission continues to voice concerns over (a lack of) efforts made by large technology platforms to prevent the spread of offensive and misleading content.

    Following the introduction of its Code of Practice on Disinformation in 2018, numerous high-profile tech companies – including Google, Facebook, Twitter, Microsoft, and Mozilla – voluntarily provided the Commission with self-assessment reports in early 2019.

    These reports document the policies and processes these organizations have undertaken to prevent the spread of harmful content and fake news online.

    While a thorough analysis is currently underway (with findings to be reported in 2020), initial responses show significant dissatisfaction relating to the progress being made – and with the fact that no additional tech companies have signed up for the initiative.

    AI in the EU

    In short, expectations continue to be very high – as evidenced by the European Parliament’s vote (covered in a previous blog) to give online businesses one hour to remove terrorist-related content.

    Given the immediacy, frequency, and scale that these regulations require, it’s clear that AI has a critical and central role to play in meeting these moderation demands. But, as an emerging technology itself, the regulations around AI are still being formalized in Europe.

    However, the proposed Digital Services Act (set to replace the now outdated eCommerce Directive) goes a long way to address issues relating to online marketplaces and classified sites – and AI is given significant consideration as part of these efforts.

    Last year the EU published its guidelines on ethics in Artificial Intelligence, citing a ‘human-centric’ approach as one of its key concerns. It deems that AI poses risks ‘to the right to personal data protection and privacy’, as well as a ‘risk of discrimination when algorithms are used for purposes such as to profile people or to resolve situations in criminal justice’.

    While these developments are promising – demonstrating the depth of the EU’s commitment to tackling these issues – problems will no doubt arise when adoption and enforcement by 27 member states are required.

    Britain online post-Brexit

    One nation that no longer needs to participate in EU-centric discussions is the UK – following its departure in January this year. However, rather than deviate from regulation, Britain’s stance on online safety continues to set a high bar.

    An ‘Online Harms’ whitepaper produced last year (pre-Brexit) sets out Britain’s ambition to be ‘the safest place in the world to go online’. It proposes a revised system of accountability that moves beyond self-regulation, including the establishment of a new independent regulator.

    The whitepaper commits to upholding GDPR and data protection laws – including a promise to ‘inspect’ AI and penalise those who compromise data security. It also acknowledges the ‘complex, fast-moving and far-reaching ethical and economic issues that cannot be addressed by data-protection laws alone’.

    To this end, a Centre for Data Ethics and Innovation has been established in the UK – complete with a two-year strategy setting out its aims and ambitions, which largely involves cross-industry collaboration, greater transparency, and continuous governance.

    Expansion elsewhere

    Numerous other countries – from Canada to Australia – have formally committed to addressing the challenges facing AI, data protection, and content moderation. However, on a broader international level, the Organisation for Economic Co-operation and Development (OECD) has established some well-respected Principles on Artificial Intelligence.

    Set out in May 2019 as five simple tenets designed to encourage successful ‘stewardship’ of AI, these principles have since been adopted by the G20 in its own stance on AI.

    They are defined as:

    • AI should benefit people and the planet by driving inclusive growth, sustainable development and well-being.
    • AI systems should be designed in a way that respects the rule of law, human rights, democratic values and diversity, and they should include appropriate safeguards – for example, enabling human intervention where necessary – to ensure a fair and just society.
    • There should be transparency and responsible disclosure around AI systems to ensure that people understand AI-based outcomes and can challenge them.
    • AI systems must function in a robust, secure and safe way throughout their life cycles and potential risks should be continually assessed and managed.
    • Organisations and individuals developing, deploying or operating AI systems should be held accountable for their proper functioning in line with the above principles.

    While not legally binding, the hope is that the influence and reach these principles have on a global scale will eventually encourage wider adoption. However, given the myriad cultural and legal differences the tech sector faces, international standardization remains a massive challenge.

    The right approach – overt complexity

    All things considered, while the right strategic measures are – for the most part – in place, helping perpetuate discussion around the key issues, the effectiveness of many of these regulations remains to be seen.

    Outwardly, many nations seem to share the same top-line attitudes towards AI and content moderation – and their necessity in reducing harmful content. However, applying policies from specific countries to global content is challenging and adds to the overall complexity, as the content may be created in one country and viewed in another.

    This is why machine learning is so critical to moderation – algorithms can be trained to do the hard work at scale. But it seems the biggest stumbling block in all of this is a lack of clarity around what artificial intelligence truly is.

    As one piece of Ofcom research notes, there’s a need to develop ‘explainable systems’ as so few people (except for computer scientists) can legitimately grasp the complexities of these technologies.

    The problem posed in this research is that some aspects of AI – namely neural networks, which are designed to replicate how the human brain learns – are so advanced that even the developers who create them cannot fully explain how or why an algorithm produces the output it does.

    While machine learning moderation doesn’t delve as far into the ‘unknowable’ as neural networks do, it’s clear why discussions around regulation persist.

    But, as with most technologies, staying ahead of the curve from a regulatory and commercial standpoint is a continuous improvement process. That’s something that won’t change anytime soon.

    New laws and legislation can be hard to navigate. Besedo helps businesses like yours get everything in place quickly and efficiently to comply with new legislation.
