Why 2021 Might Be a Tipping Point for Social Media Content Moderation

    It’s been a long time since social media was simply a recreational diversion for its users. While the early days of social networks were dominated by the excitement of reconnecting with old school friends and staying in touch with distant relatives, the platforms have since grown rapidly and, in the process, become embedded in every aspect of society.

    Today, it’s unremarkable to hear a tweet being read out on the news – fifteen years ago, when Twitter was founded, having social media form part of current affairs reporting would have been unimaginable. This growth has been so fast that it’s easy to believe we have hit the ceiling and that these platforms couldn’t take center stage any more prominently than they already do.

    Even though we’re just a couple of months in, 2021 is shaping up to be a year which, once again, proves that belief wrong. The fact that the gravitational pull of social media on the rest of the world is continuing to grow has enormous consequences for businesses: not just the platforms themselves, but every business that deals with user-generated content.

    Content moderation: a high priority with high stakes

    Late last year, Gartner predicted that “30% of large organizations will identify content moderation services for user-generated content as a C-suite priority” by 2024. It’s not hard to guess why it was on their radar. All of the biggest global stories of 2020 were marked, in one way or another, by the influence of social media.

    Facing the pandemic, governments across the world needed to communicate vital health information to their citizens and turned to social media as a fast, effective channel – as did conspiracy theorists and fraudsters. Over the summer, Black Lives Matter protests swept America and spread globally, sparked by a viral video and driven by online organizing. Later in the year, the drama of the US Presidential election played out as much on Facebook and Twitter as it did on American doorsteps and the nightly news.

    Across these events, and more, businesses have been at pains to communicate the right things in the right ways, always aware that missteps (even the mishandling of interactions with members of the public whose communication they cannot influence) will be publicized and indelible. As Gartner summarizes, social media is “besieged by polarizing content, [and] brand advertisers are increasingly concerned about brand safety and reputational risk on these platforms”.

    This year, social is driving the agenda

    Content moderation is therefore becoming an essential tool for operating online, as almost all companies now do. However, while the suggestion that it will rise to be a priority for 30% of C-suites over the next three years certainly isn’t modest, it already feels like Gartner was perhaps thinking too small.

    We have since seen an attack on the US Capitol which was, in large part, organized by users on Parler; a mini-crisis on Wall Street spontaneously emerging from conversations on Reddit; and, most recently, an argument between Facebook and the Australian government which resulted in a number of official COVID-19 communications pages on the platform being temporarily blocked.

    These are not just social media reactions to ongoing external stories – they are events driven by social media, with user-generated content at their heart. The power of social platforms to affect people, businesses and society at large has not yet peaked.

    That’s the context that the UK’s Online Safety Bill and the EU’s Digital Services Act are emerging into, promising to apply new rules and give governments greater influence. As we wait for such legislation to come into force, however, there are immediate questions to consider: how should social platforms move forward, and how should businesses mitigate their own risks?

    The path forward for content moderation

    These are fraught questions. One reason social media giants are reluctant to speak openly about content moderation may simply be that outlining new processes for ensuring user safety could be taken as an admission of past failure. Another is that content moderation is too often seen as being just one small, careless step away from censorship – an outcome nobody wants to see. For businesses that rely on social, meanwhile, handling a flood of content across multiple platforms and their own sites can quickly become overwhelming and unmanageable.

    For all of these challenges, the best way forward starts with a more open conversation. Social media companies and other businesses founded on user-generated content, such as dating sites and marketplaces, have so far tended to stay fairly quiet about new approaches to content moderation. We can say from experience, however, that in private many such businesses are actively seeking new technology and smarter approaches. As with any common goal, collaboration and shared learning would benefit all parties here.

    It’s encouraging to see partnerships like the Global Alliance for Responsible Media sowing the seeds of these conversations, but more is needed. For our part, Besedo believes that the right technology and processes can make censorship-free moderation a reality. This is not just about the technical definition of censorship: it’s about online spaces that feel fair, allowing free speech but not hate speech within clear rules.

    We also believe that good moderation will spread the benefits of social media and user-generated content to everyone. Ultimately, this is now a key part of how we buy, learn, work, and live, and everyone from multinationals to small businesses to end-users needs it to be safe. Finding new ways to answer the challenges of harmful content is in everyone’s best interests.

    Among all of this, of course, one thing is certain: in 2021, content moderation will be on everyone’s radar.
