
How the tech titans tackle content moderation challenges


    Think the big tech players don’t tackle content moderation in the same way as your classifieds business? Think again! At a recent European Parliament conference, leading lights from some of the world’s best-known technology companies gathered to share their ideas and challenges. But what exactly are they up against, and how do they resolve issues?

    No doubt about it: content moderation is a big issue – for classifieds sites as well as content and social platforms. In fact, wherever users generate content online, steps must be taken to ensure compliance.

    This applies to small businesses as well as to the likes of Facebook, Medium, Wikipedia, Vimeo, Snapchat, and Google – which became quite clear when these tech giants (and a host of others) attended the Digital Agenda Intergroup’s ‘Content Moderation & Removal At Scale’ conference, held at the European Parliament in Brussels on 5 February 2019.

    What came out of the meeting was a frank and insightful discussion of free speech, the need to prevent discrimination and abuse, and the need to balance copyright enforcement with business sensibilities – discussions that any online platform can easily relate to.

    Balancing free speech with best practice

    The conference, chaired by Dutch MEP Marietje Schaake of the Digital Agenda Intergroup, was an opportunity to explore how internet companies develop and implement internal content moderation rules and policies.

    Key issues included the challenges of moderating and removing illegal and controversial user-generated content – including hate speech, terrorist content, disinformation, and copyright-infringing material – whilst ensuring that people’s rights and freedoms are protected and respected.

    As well as giving participants an opportunity to share insights on managing content moderation effectively, Ms. Schaake expressed her desire to better understand what the big companies are doing – what capacity they have, what legal basis they act on, what their terms of use say, and what their own criteria for taking down content are.

    Or, as Eric Goldman, Professor of Law at the High-Tech Law Institute, Santa Clara University, put it: ‘addressing the culture of silence on the operational consequences of content moderation’.

    Addressing the status quo

    Given the diverse array of speakers invited, and the sheer difference in the types of platforms they represented, it’s fair to say that their challenges, while inherently similar, manifest in different ways.

    For example, Snapchat offers two main modes on its platform. The first is a person-to-person messaging service, while the other – Discover mode – allows content to be broadcast more widely. Each type of content needs to be moderated in a very different way. And even though Snapchat content is ephemeral and the vast majority of it disappears within 24 hours, the team aims to remove anything that contravenes its policies within two hours.

    By contrast, Medium – an exclusively editorial platform – relies on professional, commissioned, and user-generated content. And though only the latter needs to be moderated, that doesn’t necessarily make the task any easier. Medium relies on community participation as well as its own intelligence to moderate.

    A massive resource like Wikipedia, which relies on community efforts to contribute information, also relies on those same communities to create the policies by which they abide. And given that this vast wealth of information is available in 300 different language versions, there’s also some local flexibility in how these policies are upheld.

    Given the 2 billion users it serves, Facebook takes a well-organized approach to content moderation, tasking several teams with different trust and safety responsibilities. Firstly, there’s the Content Policy team, which develops the global policies – the community standards – that outline what is and is not allowed on Facebook. Secondly, the Community Operations team is charged with enforcing those community standards. Thirdly, the Engineering & Product team builds the tools needed to identify and remove content quickly.

    Google’s moderation efforts are just as wide-reaching as Facebook’s. As you’d expect, Google has a diverse and multilingual team of product and policy specialists – over 10,000 people who work around the clock, tackling everything from malware, financial fraud, and spam to violent extremism, child safety, harassment, and hate speech.

    What was interesting here were the very different approaches taken by companies facing the same problems. Just as with smaller sites, the way each large platform assumes responsibility for user-generated content differs, and that shapes the stances and actions each one takes.

    Agenda item 1: Illegal content – including terrorist content and hate speech

    One of the key topics the event addressed was the role content moderation plays in deterring and removing illegal and terrorist content, as well as hate speech – issues that are starting to affect classifieds businesses too. However, as discussions unfolded, it became clear that what should be removed is often not as clear-cut as many might imagine.

    All of the representatives spoke of wanting to protect freedom of speech and expression – taking into account the fact that things like irony and satire can mimic something harmful in a subversive way.

    Snapchat’s Global Head of Trust, Agatha Baldwin, reinforced this idea by stating that ‘context matters’ where illegal content and hate speech are concerned. “Taking into account the context of a situation, when it’s reported and how it’s reported, help you determine what the right action is.”

    Interestingly, she also admitted that Snapchat doesn’t tend to be affected greatly by terrorist content – unlike Google which, in one quarter of 2017 alone, removed 160,000 pieces of violent extremist content.

    In discussing the many ways in which the internet giant curbs extremist activity, Google’s EMEA Head of Trust & Safety, Jim Gray, referred to Google’s Redirect program – which uses AdWords targeting tools and curated YouTube videos to confront online radicalization by redirecting those searching for this type of content.

    Facebook’s stance on hate speech is, again, to exercise caution and interpret context. Indeed, one of the reasons they’ve gone to such efforts to engage a range of individual country and language experts in their content moderation efforts – by recruiting them to their Content Policy and Community Operations teams – is to ensure they uphold the rule of law in each nation where they operate.

    However, as Thomas Myrup Kristensen – Managing Director at Facebook’s Brussels office – explained, the proactive removal of content is another key priority: in 99% of cases, given the size and expertise of Facebook’s moderation teams, they’re now able to remove content uploaded by groups such as Al-Qaeda and ISIS before it’s even published.

    Agenda item 2: Copyright

    The second topic of discussion was copyright, and again it was particularly interesting to see how large tech businesses curating very different types of content tackle the inherent challenges in similar ways – both to each other and to smaller sites.

    Although GitHub is a leading software developer community and code repository, the vast majority of copyrighted content on the platform poses no infringement issues, according to Tal Niv, GitHub’s Vice President of Law and Policy. This is largely down to the work developers do to make sure they have the appropriate permissions to build software together.

    However, when copyright infringement is identified, a ‘notice and takedown system’ comes into play – meaning the source needs to be verified, which is often a back-and-forth process involving several individuals, mostly developers, who review content. But, as a lot of projects are multilayered, the main difficulty lies in unraveling and understanding each contribution’s individual legal status.

    Dimitar Dimitrov, EU Representative at Wikimedia (the organization behind Wikipedia), outlined a similar way in which his organization relies on its volunteer community to moderate copyright infringement. Giving the example of Wikimedia’s media archive, he explained how the service provides public domain and freely licensed images to Wikipedia and other services.

    About a million images are uploaded every six weeks, and they’re moderated by volunteers – patrollers – who can nominate files for deletion if they believe there’s any copyright violation. They can put a file forward for ‘Speedy Deletion’ in cases of very obvious copyright infringement, or for ‘Regular Deletion’, which opens a seven-day discussion period (which anyone can participate in), after which a decision is made to delete or keep the file.
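    As a rough illustration of how such a nomination workflow could be modelled in software, here is a minimal, hypothetical sketch in Python. The class, function names, and decision rule are assumptions for illustration only – they are not Wikimedia’s actual tooling or policy logic.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta

DISCUSSION_PERIOD = timedelta(days=7)  # open discussion window for regular deletions

@dataclass
class Nomination:
    """A file nominated for deletion by a volunteer patroller (illustrative model)."""
    file_name: str
    reason: str
    speedy: bool                                  # True for very obvious copyright infringement
    opened_at: datetime = field(default_factory=datetime.utcnow)
    votes: list = field(default_factory=list)     # "delete" / "keep" opinions from the discussion

def decide(nomination: Nomination, now: datetime) -> str:
    """Return the outcome of a nomination: 'delete', 'keep', or 'pending'."""
    if nomination.speedy:
        return "delete"                           # obvious infringement skips the discussion period
    if now - nomination.opened_at < DISCUSSION_PERIOD:
        return "pending"                          # the seven-day discussion is still open
    deletes = nomination.votes.count("delete")
    keeps = nomination.votes.count("keep")
    return "delete" if deletes > keeps else "keep"   # simplified stand-in for a community decision

# Example: a regular nomination decided after the discussion window closes
nom = Nomination("old_drawing.png", "possible copyright violation", speedy=False)
nom.votes += ["delete", "delete", "keep"]
print(decide(nom, nom.opened_at + timedelta(days=8)))  # -> delete
```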

    Citing further examples, Mr. Dimitrov recalled a drawing used on the site that was taken from a public domain book, published in 1926. While the book’s author had died some time ago, it turned out the drawing was made by someone else, who’d died in 1980 – meaning that the specific asset was still under copyright and had to be removed from the site.

    Vimeo’s Sean McGilvray – the video platform’s Director of Legal Affairs in its Trust & Safety team – addressed trademark infringement complaints, noting that these often take a long time to resolve because there’s no real structured notice-and-takedown regime for them, so a lot of analysis is often needed to determine whether a claim is valid.

    On the subject of copyright specifically, Mr. McGilvray referenced Vimeo’s professional user base – musicians, video editors, film directors, choreographers, and more.

    As an ad-free platform, Vimeo relies on premium subscriptions, and one of the major issues is that users often upload work they’ve produced for brands and artists as part of their showreel or portfolio – without obtaining the necessary licenses to do so.

    He noted that, to help resolve these issues, Vimeo supports users when their content is taken down – explaining how copyright works, walking them through Vimeo’s responsibilities as a user-generated content platform, and giving them all the information they need to keep their content visible and compliant.

    Looking ahead to sustainable moderation solutions

    There can be no doubt that moderation challenges manifest differently and are tackled in numerous ways by the tech giants. But the common factor these massively influential businesses share is that they take moderation very seriously and dedicate a lot of time and resources to getting it right for their users.

    Ultimately, there continues to be a lack of clarity about the line between what is illegal – according to the law of the land – and what is merely controversial. That’s why trying to balance free speech and controversial content against the removal of anything hateful, radical, or indecent is an ongoing battle.

    However, as these discussions demonstrate, no single solution wins in isolation. More and more companies are looking to a combination of machine and human moderation to address their content moderation challenges, and this combined effort is crucial: machines work quickly and at scale, while people can make decisions based on context and culture.
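    As a rough sketch of that division of labour, here is a minimal, hypothetical example of a hybrid pipeline in Python: an automated classifier decides the clear-cut cases at scale, and anything it is unsure about is queued for a human moderator who can weigh context and culture. The thresholds, function names, and toy scorer are assumptions for illustration, not any particular company’s implementation.

```python
from typing import Callable

# Illustrative thresholds - real systems tune these per policy area, market, and language
AUTO_REMOVE_THRESHOLD = 0.95    # classifier is almost certain the content violates policy
AUTO_APPROVE_THRESHOLD = 0.05   # classifier is almost certain the content is fine

def moderate(item: str, score_fn: Callable[[str], float]) -> str:
    """Route one piece of user-generated content.

    score_fn is any automated model returning the probability (0..1) that the
    item violates policy. Clear cases are decided instantly by the machine;
    ambiguous ones go to a human reviewer.
    """
    score = score_fn(item)
    if score >= AUTO_REMOVE_THRESHOLD:
        return "removed (automated)"
    if score <= AUTO_APPROVE_THRESHOLD:
        return "approved (automated)"
    return "queued for human review"

# Example with a stand-in scorer: a crude keyword heuristic instead of a trained model
def toy_scorer(text: str) -> float:
    return 0.99 if "counterfeit" in text.lower() else 0.50

print(moderate("Lovely vintage bike for sale", toy_scorer))       # -> queued for human review
print(moderate("Counterfeit designer bags, cheap!", toy_scorer))  # -> removed (automated)
```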

    Whatever the size of your business – from a niche classifieds site covering a local market to a multinational content platform – no one knows your users better than you. That’s why it’s so critical that companies of all shapes and sizes continue to work towards best-practice goals.

    As Kristie Canegallo, Vice President of Trust and Safety at Google, said: “We’ll never claim to have all the answers to these issues. But we are committed to doing our part.”

    Want to learn more about liability and the main takeaways from the ‘Content Moderation & Removal At Scale’ conference? Check out our interview with Eric Goldman.

