From controversial presidential social media posts to an increased number of scams relating to the COVID-19 pandemic, 2020 has been a challenging year for everyone – not least content moderation professionals and online marketplace owners.
Let’s take a look back at some of the major industry stories of 2020 (so far – and who knows what December may yet bring…).
After a number of ill-fated decisions regarding content moderation, social media giant Facebook recorded its first fall in profits for five years.
In previous years, the company faced mounting criticism for its data sharing with Cambridge Analytica, as well as its failure to moderate political adverts for false content, and its handling of fake news during the 2016 elections.
However, in 2018, Facebook announced that efforts to toughen its privacy protections and increase content moderation would negatively impact profits – which was in fact the case. In January 2020, Facebook reported that the company had seen a 16% drop in profits across 2019 – despite significant increases in advertising revenue.
By the end of January, Facebook had named British human rights expert Thomas Hughes as the administrative leader of its new oversight board, set up to review user-generated and other content removed from its site. Hughes was quoted as saying: “The job aligns with what I’ve been doing over the last couple of decades – which is promoting the rights of users and freedom of expression”.
The following month, the US Court of Appeals for the Ninth Circuit, based in San Francisco, ruled that YouTube had not breached the First Amendment of the US constitution when it decided to censor a right-wing channel.
The court ruled that YouTube, the world’s biggest video-sharing platform, is a private company and not a “public forum”, and is therefore not subject to the First Amendment. The First Amendment, part of the US Bill of Rights (1791), declares that the government will not abridge the freedom of speech in law.
However, this guarantee is between the US Government and its people – not private companies (except when they perform a public function) – meaning the ruling could have huge ramifications for future cases of freedom of speech online.
Back in March, we ran an article about protecting users from emerging coronavirus scams. As the pandemic took hold globally and lockdowns were put in place around the world, online scammers were deliberately exploiting vulnerable individuals. Scammers were charging exorbitant prices – for everything from hand sanitizer to fake medicine – and even offering non-existent loans via online marketplaces and advertising.
European regulators rallied by calling upon digital platforms, social media platforms, and search engines to unite against coronavirus-related fraud.
In an effort to coordinate a response, the European Commissioner for Justice and Consumers, Didier Reynders, sent a letter to Facebook, Google, Amazon, and other digital platforms.
Despite these efforts, some third-party merchants found a loophole on Amazon which enabled them to claim that their products prevented coronavirus. The scammers managed to evade automated detection by inserting the claims into product images. After being contacted by The Washington Post, Amazon subsequently removed the product listings.
Another casualty of the global pandemic was the short-term/holiday rentals sector. In April, bookings made through global property rental giant, Airbnb, were down by 85%, with cancellation rates at almost 90% – an estimated cost to the company of $1 billion.
In retail, there were inevitable winners and losers as a result of lockdown. Comscore reported that whilst an increase in remote working prompted rises for the home furnishings and grocery categories, the tickets and events sector understandably plummeted.
Following calls to step up its content moderation, as part of efforts to combat hate speech, Facebook partnered with 60 fact-checking organizations.
According to a company blog: “AI now proactively detects 88.8 percent of the hate speech content we remove, up from 80.2 percent the previous quarter. In the first quarter of 2020, we took action on 9.6 million pieces of content for violating our hate speech policies – an increase of 3.9 million.”
Facebook even took it one step further by targeting hate-speech memes, creating a database of multi-modal examples with which to train future AI moderation software.
Despite the negative impacts of coronavirus and lockdown on multiple sectors, Besedo reported on two e-marketplace success stories this month.
Germany’s number one classifieds site, eBayK, revealed the strategies it employed to reach a record 40 million live ads in the middle of the pandemic. Over in Norway, FINN.no told us how they managed to grow traffic during COVID – simply by supporting their users.
Halfway through the year, content moderation issues were at the forefront of the news again with Facebook forced to remove more than 80 posts by President Trump’s campaign team – which contained imagery linked to Nazism. The inverted red triangle, which was used to identify political prisoners in Nazi death camps, appeared in the posts without context.
In France, proposals for tough reforms on hate speech were reduced to a few moderate measures after a group of 60 senators from Les Républicains mounted a challenge through the French Constitutional Council.
The tougher reforms would have mandated platforms to remove certain types of illegal content within 24 hours of a user flagging it.
Facebook found itself in the spotlight again after a study was published by the Institute for Strategic Dialogue (ISD). The study revealed that Facebook accounts linked to the Islamic State group (ISIS) were exploiting loopholes in content moderation.
Using a variety of tactics, the terrorist group was able to exploit gaps in manual and automated moderation systems – and consequently gain thousands of views. It hacked Facebook accounts to post tutorial videos, and blended its material with content from news outlets – including real TV news footage and theme music.
Planned raids on other high-profile Facebook pages were also revealed. Facebook removed all of the accounts identified.
Targeting two of China’s biggest apps, President Trump signed special executive orders to stop US businesses from working with TikTok and WeChat – amid fears that the social networking services posed a threat to national security.
The President claimed that parent company, ByteDance, would give the Chinese government access to user data. He gave ByteDance 90 days to sell up (to American stakeholders) or face shutdown.
In August, Microsoft was mooted as the front-runner for the buyout, but eventually dropped out of the race. Although set up as a fun video-sharing platform, TikTok has unwittingly become embroiled in conspiracy theories and hate content. As other platforms have discovered, trying to moderate this content can be exceptionally complicated.
After a summer of lockdown, the world witnessed widespread calls for reforms to regulate online speech. Brookings noted a discernible shift in emphasis from the protection of innovation to the safeguarding of citizens. Countries such as France, Germany, Brazil, and the US explored options for legislating content moderation.
Also this month, YouTube revealed it was bringing back teams of human moderators after its AI systems were found to be over-censoring, roughly doubling the number of incorrect takedowns. The company also conceded that AI alone failed to match the accuracy of human moderators.
Meanwhile, there was a seismic shift in the online marketplace sector, with Adevinta announcing its acquisition of eBay Classifieds Group to create the world’s largest online classifieds group.
Over a year after a US data scientist raised concerns about the way in which Instagram handles children’s data, Ireland’s Data Protection Commission (DPC) opened two further investigations, following fears that minors’ contact details were being exposed in exchange for free analytics. It also emerged that users who switched their account settings to ‘business’ had their contact details made public.
October also saw eBay launch a sneaker authentication scheme in a bid to tackle counterfeits. Limited edition sneakers can be produced in batches of just a few thousand and sometimes only a dozen, driving up the resale value on online marketplaces.
Unfortunately, this has created a market for counterfeits, with one US Customs and Border Protection operation last year alone yielding over 14,000 pairs of fake Nikes. Under the scheme, products must pass through an authentication facility before being sent on to the buyer.
In November, Zoom became the latest online platform to come under fire for its content moderation practices.
This happened after the platform blocked public and politically sensitive events planned for its service, where it felt that users had broken local laws or its rules, which require users “to not break the law, promote violence, display nudity or commit other infractions”.
Zoom was accused of censorship amid the debate surrounding Section 230, the US law which gives online companies immunity from legal liability for user-generated content.
While December’s content moderation events are still unfolding, what has become clear in recent months is that, given how much we all rely on online platforms – for everything from shopping to study, work, rest, and play – companies of all kinds continue to struggle with moderation.
Continued uncertainty doesn’t help – in fact it highlights vulnerabilities and loopholes. But at the very least, knowing where potential pitfalls lie enables them to better protect their users, which ultimately is at the heart of all good content moderation efforts.
This starts with having the right systems and processes in place.