No online marketplace founder or entrepreneur sets out to fail. The world loves a romantic success story, where a disruptive idea changes how we look at an entire industry. Two examples that immediately come to mind are Airbnb and Uber.
Yet 90% of startups fail, and that is something we don’t talk about enough.
Failure in itself may not be glorious, but it is an important ingredient for success. From failure come lessons, and hearing about the mistakes of other marketplaces can be very useful to founders looking to avoid the same pitfalls.
Learnings from a failed online marketplace
Anton Koval is the founder of Brainjobs.pl – a failed online marketplace. Today he’s moved on and is helping founders and companies build and grow their own online marketplaces, through his agency Braincode.
We caught up with him to hear the story of his failed online marketplace, what went wrong, and the lessons he learned from the experience.
In the first part of the interview, Anton shares the business idea, USP, and operational setup.
In part two, Anton shares what went wrong, the actions they took to turn it around, and the main lessons learned he took with him from the experience.
Anton Koval is the founder of Braincode, an agency that works with founders and companies to help them build their own online marketplaces. Previously, Anton bootstrapped his own marketplace in the HR-tech space. He is a big advocate of the platform economy and remote work.
Trust is a key component of a successful marketplace and there are many small parts that help achieve it. One element that plays a major role in trust-building is of course how you present your platform to your users and the experience they have while using it. But how can you use UX design to build trust in your marketplace?
UX design is often described as the process of enhancing user satisfaction by improving the usability, efficiency, and accessibility of a website.
This definition holds true when designing for online marketplaces, too. A marketplace’s UX design should be viewed, and function, as the spine of the platform. Its task is to efficiently guide users through the site to the desired end destination (oftentimes transaction completion).
What’s different for online marketplaces is that most of them rely heavily on user-generated content. This dependency limits the level of control you have over a vast majority of the user experience. Since you are not the one choosing the images and writing the text, it’s harder to ensure the content aligns with your brand, tone of voice, and messaging. A marketplace’s role is to help strangers find and transact with each other. Without the physical cues we’d normally use to establish trust, and with the added challenge of limited content control, it can be a struggle to achieve high enough trust levels for strangers to engage.
That’s why it’s vital for online marketplaces to include trust-building elements in their UX design. It’s also imperative that this is combined with a highly selective content curation and review strategy, since low-quality and irrelevant content can quickly destroy any trust gained from trust-inducing UX design.
Keep in mind that trust building isn’t a one-off effort. To achieve a truly trustworthy marketplace, your trust-building elements need to become an integral part of your marketplace’s UX design, from pre-acquisition and throughout the entire user journey. On top of that, you need to continuously deliver on the trust promise your UX design makes. This means following through and actually keeping your users safe, for instance by offering great and timely customer support, curating and reviewing content diligently, and providing secure payment channels.
How do I build trust through UX design?
Make sure to design and develop the user journey for trust. Whether it’s featuring your top listings on the home page, ensuring quality suppliers, presenting honest reviews, or offering easy support, UX elements like these will help build trust in both your platform and your users.
Want more detailed info on how you can build trust into your UX design? We invited Bec Faye, Marketplace Optimization & Growth Specialist, for a webinar to share her knowledge and expertise. Watch the full webinar recording here.
We are seeing an increase in legislation aimed at the digital world across the globe. What does that mean for online marketplaces, are there any trends we can see already now and what can we expect from the future? We’ve taken a deep dive into the legal pool to see if we can make sense of it all.
The line between the digital and offline society gradually gets blurrier as human interaction increasingly happens on, and jumps between, online platforms and digital spaces.
Unsurprisingly, this merging of the tech-driven and the traditional doesn’t always happen smoothly. Governments have been particularly slow to catch up with the new world order, leaving the digital society to its own devices when it comes to upholding law and order.
Recent events, like election meddling, rising suicide rates attributed to cyberbullying, and clashes between online and offline workforces have, however, kickstarted government involvement across the globe, and we are starting to see increased interest in, and legislation aimed at, taming the digital world.
For those of us who operate in the space and must navigate the legislative jungle, it can be a challenge as politicians scramble to catch up and implement regulations.
With so much going on, it can also be hard for site owners to keep track of these different developments. But doing so is critical to stay compliant and it’s especially important for digital businesses looking to scale or expand into new markets.
It’s increasingly clear that there’s going to be a conflict of interest as user privacy, business goals, and government interests clash. Because of the complexity of the digital landscape, and because many politicians don’t really understand the inner workings of the Internet and the businesses that operate through it, many laws come out vague, impossible to fulfill, or drawn up without a true understanding of their full impact. This means that many of the recent legislative initiatives are hard to interpret and often highly controversial. Operating in this environment while making sure your business adheres to all relevant laws can be a legal minefield.
It also raises the question of just how effective the different regulations are. Do they really tackle the problems they mean to solve? Are they too fixated on holding online marketplaces and other digital players accountable for harmful user-generated content (UGC)? To what extent do they curb users’ rights rather than empower them? Can there ever be a ‘one size fits all’ solution that works both at a global and a local level?
Let’s take a closer look at some regulatory developments from around the world, consider the most prominent global trends in online safety legislation, and speculate what’s coming next.
Online regulations around the world
The following are synopses of some of the most interesting safety-related stories from the last year or so. They all impact online marketplaces and classifieds sites in different ways, evidencing the complexities associated with featuring and curating UGC.
India: Banning sales of exotic animals
Online marketplaces in India have cracked down on attempts by users to disguise the illegal sale of rare and exotic animals (and their parts).
This comes after sites such as Amazon India, eBay, OLX, and Snapdeal were revealed to be among over 100 marketplaces where such items can be bought (an issue we covered in a blog post a while back).
Many items are listed under code names – such as ‘Australian Teddy Bear’ for koalas and the Hindi term for ‘Striped Sheet’ in place of tiger skin – but bigger sites are now actively working with government and wildlife protection officials to weed out offending posts.
EU: Take down terror content sooner or face fines
In April this year, the European Parliament voted in favor of a law that would give online businesses one hour (from being contacted by law enforcement authorities) to remove terrorist-related content, which becomes more dangerous the longer it stays live online.
Failure to comply with the proposed ruling could see businesses fined up to 4% of their global revenue. For smaller sites, however, a 12-hour grace period could be put in place.
US: Safeguarding Children’s Data From Commercial Availability
In America, online shopping giant, Amazon, recently attracted scrutiny over the launch of its brightly-colored kids’ Echo Dot Alexa device – and the use and storage of children’s data.
Despite the company’s assertion that its services comply with child protection legislation, privacy advocates and children’s rights groups are now urging the US Federal Trade Commission to investigate.
Canada: Illegal Online Sales Of Legal Marijuana Sparks Cybersecurity Worries
America’s northern neighbor made medical and recreational cannabis completely legal last year. Since then, the Canadian government has taken significant steps to regulate the sale and distribution of marijuana – restricting it to licensed on- and offline dispensaries.
However, unlicensed black market Mail Order Marijuana services (MOMs) still dominate online sales – given their ability to undercut regulated sales on price, as well as their broader product variety and availability.
While many lawmakers are content to dismiss this gray area as ‘teething issues’, law enforcement agencies are taking it more seriously, citing cybersecurity concerns: in many cases, buyers are essentially financing, and handing their data to, organized crime syndicates.
Britain: An online safety paradise?
In the UK, there have been several interesting developments in the online safety space. Firstly, in a bid to prevent youngsters from accessing sexual content online, Britain is banning access to online pornography for those who can’t legitimately verify that they’re of adult age.
In addition, a government whitepaper issued in April aims to make Britain the safest place to be online and calls for an independent regulator to ‘set clear safety standards, backed up by reporting requirements and effective enforcement powers’.
The paper, titled ‘Online Harms’, sets out plans to take tech companies beyond self-regulation and develop ‘a new system of accountability’. This would see a number of key developments take shape, including social media transparency reports, greater scrutiny to prevent fake news from spreading, and a new framework to help companies incorporate online safety features into apps and other online platforms.
Which trends will impact online marketplaces & classifieds sites the most?
It’s clear that there’s a lot of hype around online safety. But reading between the lines, it’s crucial to keep in mind the issues that are most likely to have a bearing on UGC-focused companies operating online.
Safety first, but liability still a grey area
Safeguarding users seems to be a prominent issue in all of this, as is the overwhelming need to protect the innocent victims featured in malicious and harmful user-generated content, as in the cases of sex trafficking, revenge porn, and even exotic animals being sold.
However, there’s a strong argument that unless there’s clear evidence of a crime, the true perpetrator cannot be punished. A piece of UGC provides proof that could hold criminals accountable.
But should facilitation and curation of harmful content be punishable? As we discussed in our recent video interview with Eric Goldman, law professor at Santa Clara University School of Law and co-founder of four ‘Content Moderation at Scale’ conferences, there’s a marked difference between how moderation, liability, and activity are treated, which has a number of bearings on how companies operating online should behave.
For example, in the US, Section 230 of the Communications Decency Act shields site owners from liability for content posted by their users. At the same time, sites remain free to remove ‘obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable’ content in good faith.
In other countries, such as the UK and across the EU, where governments are setting their own frameworks, online marketplaces face prosecution in the event of a breach. The danger here is that companies focus on compliance rather than the needs of their customers and communities.
Limits on personal freedoms drive dangerous workarounds
Although the safety and liability message is being heard loud and clear, the need to balance personal freedoms with the eradication of harmful content is a key concern. While the intent is protection, ‘enforcement’ remains at odds with individual freedom.
For example, Britain’s online porn ban could arguably push youngsters into more nefarious ways of circumventing the restrictions; the easiest way for users to bypass online blocks is to use Tor browsers and virtual private networks (VPNs).
As demonstrated in Canada, forcing users to explore darker, more unregulated areas of the web potentially makes users more vulnerable to attack by cybercriminals.
International enforcement remains problematic
Perhaps one of the biggest trends, and of particular concern to online marketplaces, is the challenge of monitoring, regulating, and abiding by laws across different jurisdictions.
Laws pertaining to the sale of weapons, drugs, and other restricted items differ between countries, regions, and states. In addition, age restrictions can vary too.
For example, in Canada, edible marijuana products aren’t yet legal – and therefore cannot be sold online. However, in the US states where recreational cannabis is legal, so too are ‘edibles’.
While it’s not hard to imagine that an online age/ISP/location verification (or a simple ‘Where We Deliver’ policy) would solve such issues, the fact remains that these factors have major ramifications for sites that operate internationally.
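As a rough illustration of how a ‘Where We Deliver’ policy could be enforced in code, a marketplace can gate listings by mapping restricted categories to the regions where sale is permitted. The categories and region codes below are invented for illustration, not real legal data:

```python
# A minimal sketch of a "Where We Deliver" eligibility check.
# The categories, region codes, and rules below are illustrative
# assumptions only, not actual regulations.

RESTRICTED = {
    # category -> set of regions where sale is permitted (example data)
    "cannabis_edibles": {"US-CO", "US-CA"},
    "cannabis_flower": {"CA", "US-CO", "US-CA"},
}

def can_list(category: str, buyer_region: str) -> bool:
    """Return True if an item category may be sold to a buyer's region."""
    allowed = RESTRICTED.get(category)
    if allowed is None:
        return True  # category carries no restrictions
    return buyer_region in allowed
```

In practice the rule table would be maintained by legal/compliance teams and keyed by verified buyer location, but even a simple lookup like this makes the cross-border differences (e.g. edibles restricted in one jurisdiction, legal in another) explicit in the product.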
And given that there’s rife speculation that Amazon could soon sell cannabis, it’s only a matter of time before these issues take center stage – which can ultimately only be positive for governments and marketplaces alike.
One size doesn’t fit all
Scale is also an important factor to keep in mind. Laws and regulations that are designed to curb the huge amount of data that larger marketplaces curate can’t be deployed in the same way by smaller outfits. And vice versa.
Governments are taking online businesses to task for failing to police their sites appropriately. And while they may be right to do so, it can be tough for marketplaces of any size to employ enough resources to properly cover their content moderation needs.
Ultimately, we’re still in the ‘Wild West era’ of online regulation. What’s acceptable is very much culture-led; which is why we continue to see such diverse developments at a global and local level.
For example, in Thailand, where the King is held in utmost regard, any content pertaining to him must be strictly moderated and often removed – unthinkable even in another ‘royal’ nation like the UK. General common sense can’t prevail in such a disparate regulatory environment where user attitudes are so polarized.
In addition, the involvement of governments in setting a best practice framework all too often means that those championing issues like censorship, privacy, and accessibility online aren’t the experts in these matters.
What we’d hope is that, moving forward, governments continue to work proactively alongside industry players large and small to understand the true nature of the challenges they face, and to foster better relationships with them, in order to create an effective, lasting best-practice solution that benefits users but is also realistically achievable for online businesses.
We saw this recently at a European Parliament-run content moderation conference, where leading lights from some of the world’s best-known technology companies gathered to share their ideas and challenges with politicians.
However, variety (as they say) is the spice of life. Standardizing the international regulatory environment wouldn’t be effective given the rich diversity of content moderation practices and culturally driven needs. What could work, though, is an adaptable set of guidelines that nations could adopt and customize to suit their user base – a framework informed by both users and online marketplace owners themselves to map out the limits of acceptability. The only problem could be that the nature of UGC constantly changes in line with the way technology impacts our lives.
All things considered, going forward online marketplaces and classifieds sites will need to pay even closer attention to the trends, safety regulations, and legislation being set locally and globally.
Otherwise, they may quickly be shut down for being non-compliant.
The new laws can be hard to navigate, and it can be even harder to implement the actions, manpower, and tech needed to be compliant. Companies like Besedo are set up to help businesses like yours get everything in place in a fraction of the time, and at a lower cost, than going it alone.
If you feel you could use a hand with your content moderation strategy let us know and we’ll be happy to review your current setup and suggest beneficial alterations.
We recently had the pleasure of interviewing the former CEO of dubizzle and founder of Working in Digital, Arto Joensuu. Throughout the interview, Arto shares his knowledge, experiences, and advice on how to establish a successful online marketplace, as we explore key areas vital to marketplace success, including lead generation, monetization, expansion, trust building, and much more.
Q: Hi Arto, thank you for taking the time from your busy schedule to talk with us. Can you begin by sharing some insights about your career so far?
Arto Joensuu: Hi Emil, and thanks for getting in touch. Sure. I’ve been working on the digital side of marketing for the past 20 years. It all started in the late ’90s with a startup in the mobile services space (we’re talking ringtones, SMS groups, mobile wallpapers, and WAP-based games back then). Iobox was doing some pioneering work in this space and I was happy to be a part of it until it got sold off to Terra Mobile. I then went on to do a “quick” visit at Nokia, which spanned 12 years and included pretty much the “A-Z” of the digital customer journey. After Nokia, I spent 6 years in Dubai, UAE, where I had the opportunity to be involved in one of the early-stage startup success stories: dubizzle.com. After the company was sold to Naspers, I’ve been involved as an investor and advisor with multiple startups, ranging from job-finding solutions for emerging markets to cryptocurrencies for classifieds and once-in-a-lifetime golf trips across the world. Time flies…
Q: Your first steps into the world of marketplaces were as Head of Marketing at dubizzle in the UAE. How did you strategically align the business and expand its growth?
Arto Joensuu: Dubizzle was at an interesting stage of development when I joined. It had become a success in Dubai (one of the emirates within the UAE) and was looking to expand further within the country, as well as across the MENA region. The company was led by 2 smart and driven entrepreneurs who wanted to spread the concept across the region and simultaneously increase the revenue streams within the UAE.
In order to prepare ourselves for growth, we needed to ensure we had a common understanding of WHY we exist as a company, as well as how we could go about driving this vision forward across markets we had little personal understanding of. This sparked an internal workstream to define our overall purpose/vision (WHY), our common values (WHO), our operational strategy (HOW), as well as the expected outcomes (WHAT). Some would label this a brand exercise; I would call it the formation of our manifesto and the overall red thread for the years to come. The end result of this exercise is best summarized on Mark’s website. (http://emmarkjames.com/#/)
Q: What were the most important components in your marketing strategy at the time?
Arto Joensuu: On a broad level, we used to talk a lot about 2 types of marketing functions/skillsets: makers and spreaders. Makers were fundamentally responsible for bringing our brand purpose to life through content. Spreaders mastered distribution and optimization. Our content at large was divided into stock and flow content: larger “stock” pieces centered around wider themes, whilst the “flow” content was more reactive or trigger-based.
Another important dimension to the strategy was a holistic understanding of the customer journey and user segments. The customer journey had to be looked at holistically, spanning overall awareness generation, engagement, conversion, retention, and monetization. The user segments themselves focused on finding the equilibrium between sellers and buyers, as well as between b2c and b2b segments. This, in turn, ensured a proper equilibrium between both sides, resulting in a vibrant marketplace where demand and supply met “eye to eye”.
This is where your analytics can play an important role in helping you navigate the waters across both, the acquisition and retention funnels. In addition to our own online metrics tools, I always found it extremely useful to map out our overall communication efforts across all channels. If we were doing PR or other on-ground activations, we could suddenly go back and start seeing patterns between overall awareness generation as well as actual engagement through organic visits and non-paid media.
It’s by no means rocket science, as long as you’ve established clear end action goals that you are monitoring. Is the registration process simple enough? What is our 30-day retention rate? Are the listers able to list their items at ease but with high quality? Do the items get sold? How often are they coming back?
At dubizzle, one of the key metrics we used to look at was what we called “ruffians” (another way of pronouncing RFNS, which stood for returning, free, non-SEO traffic). This metric was particularly important to us, as it signified a “quality returning visit” and was a good indication of true organic, non-prompted retention. For individual transactions, it was pretty clear that you had to ensure (and reward) the quality of the listing (to get more views and leads) and, simultaneously, ensure that the listing-to-sale cycle was as rapid as possible. If an item had been listed for several days and wasn’t getting leads, we could trigger automated emails to our users prompting an edit/enhancement to the listing (add a better description/copy text, insert additional images, etc.). If the quality score of the listing was good, then perhaps the price was wrong, and we educated the seller with average sales prices for similar items he or she was trying to sell.
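As a hedged sketch of how a metric like RFNS could be computed, one can classify analytics sessions by the three conditions the name spells out. The session fields and channel labels below are assumptions for illustration, not dubizzle’s actual definition:

```python
# Illustrative RFNS ("returning, free, non-SEO") session classifier.
# The session dictionary fields and channel names are assumed for
# this sketch; a real analytics pipeline would define them precisely.

def is_rfns(session: dict) -> bool:
    """A session counts toward RFNS if the visitor is returning,
    arrived via a non-paid channel, and did not come from SEO."""
    return (
        session["returning"]                   # visitor seen before
        and not session["paid"]                # free: no ad spend involved
        and session["channel"] not in {"seo"}  # not organic search traffic
    )

def rfns_share(sessions: list[dict]) -> float:
    """Fraction of all sessions that qualify as RFNS."""
    if not sessions:
        return 0.0
    return sum(is_rfns(s) for s in sessions) / len(sessions)
```

Tracked over time, a share like this gives the “quality returning visit” signal described above without depending on paid or search-driven spikes.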
Q: Do you think your strategy can be replicated today?
Arto Joensuu: I believe these fundamentals remain relevant today. Organizations that are purpose-driven and have their minds set across the entire value chain tend to find their way. I guess the important thing is to stay true to your “why” and not let that get diluted along the way.
When it comes to horizontal marketplaces, I think the same rules still apply in terms of getting critical mass onto your platform and later monetizing and expanding into b2b verticals. Of course, today we are seeing the emergence of more and more niche-driven marketplaces, where user volumes are not necessarily large but engagement and the volume of transactions/retention are extremely high. A big enabler for this has been companies like Sharetribe that, in a similar manner to Automattic (the creators of WordPress), enable marketplaces around niche interests to become mainstream. We can see how social media constantly evolves, with niche players emerging, getting acquired by larger players, and becoming mainstream media. The same early-adopter audiences move on to new niche communities while the masses flock to the services orchestrated by the big internet players. The evolution is constant, and the classifieds industry is not immune to the disruption around the corner. Big players need to find new ways to evolve the classifieds marketplace and its core loop.
Q: One of your key responsibilities was to expand dubizzle geographically, can you share how you succeeded with the geographical expansions?
Arto Joensuu: Our regional expansion was a combination of sleepless nights, insane turnaround times, 2 political revolutions, a lot of Red Bull and an end result which sparked a nationwide movement. In other words, welcome to Egypt, basha!
In retrospect, (it’s always easy to be the Monday morning quarterback) there were a lot of elements that made our Egypt expansion a success. Here are a few things I personally felt that made a true difference for us.
1. Decide to win.
We knew that another large classifieds player was also entering the Egyptian market and we had very little time to turn things around. This meant that we needed to put our full weight behind this initiative and our previously crafted brand work really served us well in this context. A highly aligned team can make all the difference in the world when things get tough.
2. Acquire the sellers.
I guess we all know that classifieds marketplaces thrive on large volumes of high-quality content. Content attracts buyers, and buyers mean the successful redistribution of items that people have fallen out of love with. We focused quite strongly on the general items-for-sale segment, meaning everyday household items that people no longer needed. A great way to do this is to introduce an element of lifestyle-driven marketing into the mix, where the seller represents an aspirational target group that in turn attracts buyers into the marketplace. For example, a young family selling a baby carriage that their child outgrew (or that was originally given as a “double gift”) brings people in similar life stages together and can even result in new friendships being formed.
3. Become the talk of the town.
As dubizzle entered the Egyptian market, we wanted to create and engage in a society-wide conversation about the second-hand economy. In Egypt alone, the value of unused items people had in their homes was equivalent to the entire GDP of Sweden. If people took action and sold the items they had fallen out of love with, more money would be re-fueled into the economy, improving the country’s economy overall. This meant a paradigm shift in the definitions of ownership, as well as in the new-vs-second-hand thought process. Because we tapped into a universal topic with an impact on the whole society, our dubizzle GM became a frequent visitor to talk shows where larger Egypt-wide topics were discussed. Becoming the talk of the town isn’t about creating a clever marketing campaign; it’s really about creating a movement.
Q: How do you think marketplaces need to approach geographical expansions today?
Arto Joensuu: There’s always been active dialogue around the need to localize vs. going to market with a more globally led brand identity. This topic goes beyond brand identity, however. At dubizzle, we soon realized that our mainly desktop-driven English site for the UAE would not cut it as we planned to enter mainly Arabic-speaking markets. We needed to build our MENA sites from scratch, taking a mobile-first and Arabic-first approach to the whole process. At the time, we realized that, for example, Arabic font libraries optimized for mobile (or even desktop) were scarce and in many instances illegible on mobile devices. Before thinking about a localized marketing campaign, we needed to fix the basics and develop a user experience that didn’t get in the way of our core loop. We also noted things like email vs. mobile number penetration across emerging markets. It was basically useless to have email sign-up, so we went directly to mobile-number-based registration, as this was the common standard across the region. We were fortunate that these changes were made before we entered the Egyptian market with a bang. By having the fundamentals in place, we could shift our focus towards overall activation and awareness building.
Q: In marketing, it’s important to have a consistent tone and imagery. With a marketplace, you heavily rely on user-generated content. How do you ensure that the content submitted by users adheres to, or at least doesn’t break, your tone of voice?
Arto Joensuu: Content quality is a common theme/struggle for any classifieds business. The overall listing process is an obvious area where good content can be encouraged (and incentivized, for example by giving the listing higher visibility within the marketplace). The move to mobile/app-based solutions allows for easier image uploading, but also the potential addition of other metadata that can make the discoverability and look & feel of the listing more attractive. I think the tone of voice across the overall category structure and content fields can also have a big impact on the end quality of the listing itself. In recent years, we’ve seen market entrants into the classifieds space (such as Soma) who have taken the individual listing into a more shareable/interactive product card format. What this does is give the product a life of its own: it can be embedded, promoted, liked, and shared across multiple venues. This forces the content to be good if it wants to have legs to spread (and live beyond one-off transactions).
Q: How do you make sure that your front page, or first search page, is in line with the brand you want to portray?
Arto Joensuu: This is quite a large topic in itself, but one dimension that differentiates a classifieds marketplace from a more traditional e-commerce marketplace is the overall transaction category structure. For example, when you enter an e-commerce site, you’re almost certainly looking for a specific item (or a category of items in that segment). The overall search process is more structured, and the items displayed usually start from that user-generated search pattern. With classifieds, the process can be similar to an e-commerce play (you go in and search specifically), but there’s also a profound layer of random discovery. For example, you didn’t necessarily know that a 1979 Darth Vader helmet was for sale, but you discover it by chance. The home page can serve this endless treasure hunt of discoveries by bringing high-quality content to the home page instead of immediately driving users down the traditional search path. A lot of the mobile/app-driven classifieds spin-offs are leveraging this in quite smart ways, and discoverability, along with smart geo and metadata, can make the overall user experience a unique one. The brand is the experience, and this touches every aspect of the service.
Q: You’ve also helped marketplaces improve their lead generation. Is SEO still important? and do you have any tips on how marketplaces can improve their lead generation?
Arto Joensuu: I think SEO and SMO both continue to play an important role in overall lead generation. If you think of giants like YouTube, a big part of their content gets consumed outside of their own site/app via embedded links on other social channels and websites. This speaks to the fact that lead generation needs to evolve beyond optimizing what’s on your site and thinking about ways in which your user-generated content gets extra mileage through social recognition and distribution.
Q: Do you know of any new creative ways to improve SEO and lead generation?
Arto Joensuu: With reference to Soma’s interactive product card, if an item has a wider lifespan than an individual interaction, it starts accumulating equity throughout its entire lifespan. Here’s where I think the next big thing in classifieds and e-commerce could potentially reside in. Picture this scenario:
When an e-commerce player sells a new mobile phone, they have the transactional data of the one-off sale, after which the item pretty much disappears off the digital grid until the owner decides to sell it on a classifieds site. When the item is posted online, the classifieds player gets a small piece of the item lifetime, as they know when the item was sold and for what price. Then again, it goes off the grid until it’s maybe sold for the 3rd time or disposed of for recycling.
What if these items had a digital identity (aka an interactive product card) from the get-go? This would fundamentally bridge the gap between e-commerce and classifieds and could even extend into the whole sustainability piece at the end-of-life stages of the manufactured device. Along the way, item sale and resale value would be tracked, and the item would form “link bait” of its own, as the IIC could be liked, shared or promoted by man and machine alike. Manufacturers would get valuable information on their product resale value, quality, “life expectancy” and distribution. Classifieds players would basically have multiple touchpoints to the value chain, as technically the item is never deleted once sold. This is something I believe has immense potential in the future.
Q: When is it important to optimize monetization? And what are the ‘must-have components’ of a successful monetization strategy?
Arto Joensuu: Monetization basically contains two dimensions: the b2b and the b2c sides. I guess that with either of the two, it really comes down to a healthy equilibrium of buyers vs. sellers. Traditionally it was about getting the needed b2c sellers and buyers onto the platform, which in return would bring the b2b players onboard, and this would be the first segment that you monetize. Once you’ve become the clear market leader, the b2c monetization kicks in towards the later stage of monetization. The industry has obviously evolved from this and you start seeing rapid verticalization of certain segments (e.g. property, cars, jobs) instead of pursuing a unified horizontal classifieds approach only. You can also start seeing early-stage monetization happening with more niche classifieds players, where highly specialized b2c groups start forming around specific interest areas like fashion, watches, collectibles, etc.
Perhaps one of the toughest transitions in the abovementioned monetization streams is going from b2b monetization to b2c-side monetization. There’s always an element of fear that by putting up a paywall to a b2c category, you will lose traffic and users to a competitor. When dubizzle decided to monetize its cars section on the b2c side, the team spent a lot of time evaluating the overall transition and ultimately, the overall used car ecosystem/landscape within the UAE. What we discovered quickly is that b2c users listing their cars on the marketplace received substantially more leads than on other platforms, and that the end user was (on average) able to sell their car at a higher price than by going through a 3rd party. We also ran a series of A/B tests to identify the right price point for the listing fee and mapped out the various payment solution providers that would fit our user needs. In the end, the launch was successful and paved the way towards monetizing across other categories as well. Ultimately, I think it’s really about perceived value for your offering, and if the marketplace works, people are ready to pay a small fee to the marketplace enabler.
Q: Trust is key for a successful marketplace. What’s your view on trust, and how do you think marketplaces can build a safe platform?
Arto Joensuu: Trust is important. Of course, the definition of trust is probably universal to a degree, but the ways in which you address this can vary greatly from country to country. There’s always been a debate about whether buyer profiles should also be registered/verified profiles to avoid fraud. Should the facilitator act as an escrow that holds onto the money until the transaction is completed and validated by both parties? Can we increase trust by having seller reviews and ratings, etc.? Customer support and overall communication obviously play an important role here, and educating the user about potential pitfalls is very important. Companies such as the one you represent play an important role in preventing fraudulent or bad listings from reaching the marketplace. I don’t personally have a silver bullet answer to the whole equation, to be honest. Maybe you should answer this question instead 😉
Q: How can you differentiate yourself as a marketplace in 2019, when there are a ton of new marketplaces popping up?
Arto Joensuu: I think this comes back to the “WHY” your company exists and what’s the deeper substance behind what you are trying to achieve. People don’t buy what you do, they buy why you do it and having this clear vision filter across everything you do creates differentiation. This might also mean that you need to be willing to sacrifice your current cash cows (and create new ones in the long run) by continuously innovating and finding ways to disrupt existing business models. Perhaps a point to make here is that it’s not about disruption “for the sake of disruption” but instead, finding new ways of bringing your “WHY” to life. Think about Kodak. If their true purpose was to enable people to capture their most precious moments in life and re-live them through pictures, they should have been all over the digital camera (which they actually invented). Instead of embracing this new way of bringing their purpose to life, they never capitalized on this new innovation because (at least in the short run) it would cannibalize their film business.
Arto Joensuu is a digital change agent with over 20 years of professional experience across startups as well as large multinational corporations. His professional expertise lies within a profound understanding of the digital landscape and its impact on companies both small and large. Whether it’s about leading a large corporation into the digital era, or helping startups cross the tipping point, Joensuu has been there in the trenches and has the battle scars to prove it. Throughout his career, he has held several leadership positions, ranging from pioneering in digital/mobile marketing at iobox/Terra Mobile during the late 90’s to spearheading the company-wide digital strategy and execution at Nokia. At dubizzle.com, Joensuu spearheaded a transformational re-branding initiative, which had profoundly positive implications across the entire company. This initiative led to a streamlined vision, common set of values and a cultural transformation that could serve as a platform for growth across the organization. He later continued as dubizzle’s CEO, leading the company’s monetization efforts, regional expansion, operational alignment with majority owner Naspers, as well as facilitating the early stage growth of classifieds spin-off Shedd. Today, Arto Joensuu is the founder and CEO of Working in Digital, a network of digital change agents that invest in early stage startups and actively support these organizations as non-executive directors.
Think the big tech players don’t tackle content moderation in the same way as your classifieds business? Think again! At a recent European Parliament conference, leading lights from some of the world’s best-known technology companies gathered to share their ideas, and challenges. But exactly what are they up against and how do they resolve issues?
No doubt about it: content moderation is a big issue – for classifieds sites, as well as content and social platforms. In fact, anywhere that users generate content online, actions must be taken to ensure compliance.
This applies to small businesses as well as to the likes of Facebook, Medium, Wikipedia, Vimeo, Snapchat, and Google – which became quite clear when these tech giants (and a host of others) attended the Digital Agenda Intergroup’s ‘Content Moderation & Removal At Scale’ conference, held at the European Parliament in Brussels on 5 February 2019.
What came out of the meeting was a frank and insightful discussion of free speech, the need to prevent discrimination and abuse, and the need to balance copyright enforcement with business sensibilities – discussions that any online platform can easily relate to.
Balancing free speech with best practice
The conference, chaired by Dutch MEP Marietje Schaake of the Digital Agenda Intergroup, was an opportunity to explore how internet companies develop and implement internal content moderation rules and policies.
Key issues included the challenges of moderating and removing illegal and controversial user-generated content – including hate speech, terrorist content, disinformation, and copyright infringing material – whilst ensuring that people’s rights and freedoms are protected and respected.
Or, as Eric Goldman, Professor of Law at the High-Tech Law Institute, Santa Clara University, put it ‘addressing the culture of silence on the operational consequences of content moderation’.
Addressing the status quo
Given the diverse array of speakers invited, and the sheer difference in the types of platforms they represented, it’s fair to say that their challenges, while inherently similar, manifest in different ways.
For example, Snapchat offers two main modes on its platform. The first is a person-to-person message service, and the other – Discover mode – allows content to be broadcast more widely. Both types of content need to be moderated in very different ways. And even though Snapchat content is ephemeral and the vast majority of it disappears within a 24-hour period, the team aims to remove anything that contravenes its policies within two hours.
By contrast, Medium – an exclusively editorial platform – relies on professional, commissioned, and user-generated content. But though only the latter needs to be moderated, that doesn’t necessarily make the task any easier. Medium relies on community participation as well as its own intelligence to moderate.
A massive resource like Wikipedia, which relies on community efforts to contribute information, relies on those same communities to create the policies by which they abide. And given that its vast wealth of information is available in 300 different language versions, there’s also some local flexibility in how these policies are upheld.
Given the 2 billion users it serves, Facebook offers a well-organized approach to content moderation; tasking several teams with different trust and safety responsibilities. Firstly, there’s the Content Policy team, who develop global policies – the community standards, which outline what is and is not allowed on Facebook. Secondly, the Community Operations team is charged with enforcing community standards. Thirdly, the Engineering & Product Team build the tools needed to identify and remove content quickly.
In a similar way, Google’s moderation efforts are just as wide-reaching as Facebook’s. As you’d expect, Google has a diverse and multilingual team of product and policy specialists – over 10,000 people who work around the clock, tackling everything from malware, financial fraud and spam, to violent extremism, child safety, harassment, and hate speech.
What was interesting here were the very different approaches taken by companies experiencing the same problems. Just as smaller sites differ in how they address user-generated content, the way in which each larger platform assumes responsibility for UGC differs too, and that shapes the stances and actions each one takes.
Agenda item 1: Illegal content – incl. terrorist content & hate speech
One of the key topics the event addressed was the role content moderation plays in deterring and removing illegal and terrorist content, as well as hate speech – issues that are starting to impact classifieds businesses too. However, as discussions unfolded it seemed that often what should be removed is not as clear cut as many might imagine.
All of the representatives spoke of wanting to offer freedom of speech and expression – taking into account the fact that things like irony and satire can mimic something harmful in a subversive way.
Snapchat’s Global Head of Trust, Agatha Baldwin, reinforced this idea by stating that ‘context matters’ where illegal content and hate speech are concerned. “Taking into account the context of a situation, when it’s reported and how it’s reported, helps you determine what the right action is.”
Interestingly, she also admitted that Snapchat doesn’t tend to be affected greatly by terrorist content – unlike Google which, in one quarter of 2017 alone, removed 160,000 pieces of violent extremist content.
In discussing the many ways in which the internet giant curbs extremist activity, Google’s EMEA Head of Trust & Safety, Jim Gray, referred to Google’s Redirect program – which uses Adwords targeting tools and curated YouTube videos to confront online radicalization by redirecting those looking for this type of content.
Facebook’s stance on hate speech is, again, to exercise caution and interpret context. However, one of the other reasons they’ve gone to such efforts to engage a range of individual country and language experts in their content moderation efforts – by recruiting them to their Content Policy and Community Operations teams – is to ensure they uphold the rule of law within each nation they operate in.
However, as Thomas Myrup Kristensen – Managing Director at Facebook’s Brussels office – explained, the proactive removal of content is another key priority: in 99% of cases, given the size and expertise of Facebook’s moderation teams, they’re now able to remove content uploaded by groups such as Al-Qaeda and ISIS before it’s even published.
Agenda item 2: Copyright & trademark infringement
The second topic of discussion was the issue of copyright, and again it was particularly interesting to understand how large tech businesses curating very different types of content tackle the inherent challenges in ways similar both to each other and to smaller sites.
Despite GitHub being a leading software developer community and code repository, the vast majority of copyrighted content on the platform poses no infringement issues, according to Tal Niv, GitHub’s Vice President, Law and Policy. This is largely down to the work developers do to make sure that they have the appropriate permissions to build software together.
However, when copyright infringement is identified, a ‘notice and takedown system’ comes into play – meaning the source needs to be verified, which is often a back-and-forth process involving several individuals, mostly developers, who review content. But, as a lot of projects are multilayered, the main difficulty lies in unraveling and understanding each contribution’s individual legal status.
Dimitar Dimitrov, EU Representative, at Wikimedia (Wikipedia’s parent company) outlined a similar way in which his organization relies on its volunteer community to moderate copyright infringement. Giving the example of Wikimedia’s media archive, he explained how the service provides public domain and freely licensed images to Wikipedia and other services.
About a million images are uploaded every six weeks, and they’re moderated by volunteers – patrollers – who can nominate files for deletion if they believe there’s any copyright violation. They can then put a file forward for ‘Speedy Deletion’ in cases of very obvious copyright infringement, or ‘Regular Deletion’, which begins a seven-day open discussion period (which anyone can participate in), after which a decision to delete or keep the file is made.
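The two deletion tracks described above can be sketched as a tiny decision model. This is an illustrative sketch only – the function names and structure mirror the description in this article, not Wikimedia’s actual tooling:

```python
from datetime import datetime, timedelta

# Regular Deletion opens a seven-day community discussion before any decision.
DISCUSSION_PERIOD = timedelta(days=7)

def nominate(obvious_infringement: bool, nominated_at: datetime) -> dict:
    """A patroller nominates a file: obvious violations go to Speedy
    Deletion (decidable immediately), everything else to Regular Deletion."""
    if obvious_infringement:
        return {"track": "speedy", "decision_due": nominated_at}
    return {"track": "regular", "decision_due": nominated_at + DISCUSSION_PERIOD}

def can_decide(nomination: dict, now: datetime) -> bool:
    # A keep/delete decision is only made once the discussion window closes.
    return now >= nomination["decision_due"]

# A borderline file nominated on 5 Feb can't be decided until 12 Feb:
n = nominate(obvious_infringement=False, nominated_at=datetime(2019, 2, 5))
```

The point of the two tracks is throughput: clear-cut cases skip the discussion entirely, while ambiguous ones buy seven days of community review.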
Citing further examples, Mr. Dimitrov recalled a drawing used on the site that was taken from a public domain book, published in 1926. While the book’s author had died some time ago, it turned out the drawing was made by someone else, who’d died in 1980 – meaning that the specific asset was still under copyright and had to be removed from the site.
Vimeo’s Sean McGilvray – the video platform’s Director of Legal Affairs in its Trust & Safety team – addressed trademark infringement complaints, noting that these often took a lot of time to resolve because there’s no real structured notice and takedown regime for these complaints, and so a lot of analysis is often needed to determine if a claim is valid.
On the subject of copyright specifically, Mr. McGilvray referenced Vimeo’s professional user base – musicians, video editors, film directors, choreographers, and more.
As an ad-free platform, Vimeo’s reliant on premium subscriptions, and one of the major issues is that users often upload their work for brands and artists as part of their showreel or portfolio; without obtaining the necessary licenses allowing them to do so.
He noted how, to help resolve these issues, Vimeo supports users when their content is taken down – explaining to them how the copyright issues work and walking them through Vimeo’s responsibilities as a user-generated content platform, whilst giving them all the information they need to ensure the content remains visible and compliant.
Looking ahead to sustainable moderation solutions
There can be no doubt that moderation challenges manifest in different ways and are tackled in numerous ways by tech giants. But the common factor these massively influential businesses share is that they take moderation very seriously and dedicate a lot of time and resources to getting it right for their users.
Ultimately, there continues to be a lack of clarity between what is illegal – according to the law of the land – and what constitutes controversial content. That’s why trying to maintain a balance between free speech, controversial content, and removing anything that’s hateful, radical, or indecent is an ongoing battle.
However, as these discussions demonstrate, no single solution can win in isolation. More and more companies are looking to a combination of machine and human moderation to address their content moderation challenges. And this combined effort is crucial. Machines work quickly and at scale, and people can make decisions based on context and culture.
Whatever size of business you are – from a niche classified site covering a local market to a multinational content platform – no-one knows your users better than you. That’s why it’s so critical that companies of all shapes and sizes continue to work towards best practice goals.
As Kristie Canegallo, Vice President, Trust and Safety, Google said “We’ll never claim to have all the answers to these issues. But we are committed to doing our part.”
Want to learn more about liability and the main takeaways from the content moderation at scale conference? Check out our interview with Eric Goldman.
A few years ago, Eric Goldman, law professor at Santa Clara University School of Law and co-founder of four ‘Content Moderation at Scale’ conferences, went on a quest to narrow the disconnect around content moderation between the world’s largest Internet companies, including Facebook and Google, and policymakers in the US and Europe.
Since then, Eric has managed to improve transparency between the two sides, significantly improving understanding of the complexity of content moderation.
We sat down with Eric Goldman to pick his brain on the topic of content moderation. We covered topics as diverse as why content moderation is important, how each site needs a unique content moderation approach, and which moderation regulations and legislation we’re likely to see in the future.
Watch the interview
Want to read it instead?
I’m here with Eric Goldman, and I’ll let you introduce yourself. I know you have a lot of experience in law and content moderation, but why don’t you give us just a brief background of who you are and what you do?
Yeah. Terrific. And thanks for organizing this. I’m Eric Goldman. I’m a professor of law at Santa Clara University School of Law. It’s located in the heart of Silicon Valley, so I work with the Silicon Valley community generally. I started practicing internet law in 1994, working at a private law firm in the 1990s during the dot-com boom. I worked as general counsel of an Internet company from 2000 to 2002, and then I became a full-time professor, where much of my research focuses on Internet law. I also blog on the topic at my blog, blog.Ericgoldman.org, which I’ve been doing since 2005.
So I’ve touched on all aspects of Internet law and have had experience with them over the last 25 years. But I’ve had a particular interest in content moderation as one subset of those issues ever since the 1990s. It’s an old topic for me but one that gets a lot of attention now.
Yeah. So how did you get into that? I mean, you’ve been with us from almost the beginning, right? But you seem very passionate about the discussion around content moderation. So how did you get involved, and what sparked your interest?
I’m sorry – in content moderation or in internet law?
In content moderation specifically.
Yeah. So back in the 1990s, we were dealing with content moderation issues, but we were doing it in a very crude and unsophisticated way, in part because the volume of work just wasn’t that great. It was possible for a single person to handle the needs of most Internet companies, and there was a lot of back and forth with people who were affected by content moderation decisions.
So when I worked in-house, content moderation was one of the topics that were part of my responsibilities as general counsel. But we just didn’t have that many issues. So when the issues came up they were interesting, they raised interesting policy issues and raised interesting technical issues and, of course, interesting legal issues. But it just wasn’t a big part of my overall portfolio of responsibilities.
But I’ve been thinking about those technology issues and those policy issues ever since then. So I think a lot of the things I’ve done as a full-time professor have indirectly related to this topic. But I decided to really invest much more heavily starting in 2017, and it was really a response to a bill that was introduced in Congress called FOSTA, which was designed to modify an existing law that protected Internet services in the US to create extra potential liability for things related to sex trafficking.
And when I saw that bill it was clear that the drafters had no idea how that would be operationalized and their assumptions were just wrong. And because of that, the policy was wrong. So that’s what inspired me to say that we need to do more to get the policymakers educated about this topic and to raise the profile of the topic. Meanwhile, in the backdrop, the content moderation topic has become a substantial issue for other reasons. So it just so happens that this has been a very hot topic in addition to the FOSTA related problems identified.
That’s super interesting that you mentioned this part about the law that comes in for sex trafficking because I think it was in 2018 we saw the whole ruling around Backpage and how they were charged with pimping on the website. Is that the case you have followed as well?
Oh yeah, I have been following Backpage’s legal saga quite closely. I have at times weighed in as an amicus in some of their cases, and I’ve blogged about most of the key rulings in Backpage’s saga. And Backpage is an interesting story, in part because they had content moderation operations – it wasn’t like they were a free-for-all. And a lot of the legal issues relate to how well, or poorly, they did their content moderation work.
So do you think that whole thing was a lack of thorough thought on their part as to how to see the whole process through? Or is it because the law that was implemented was wrong or wrongly handled? How do you see that whole case? I’m just interested here. I think it’s a huge case for content moderation, right?
It’s an important case because it shows the strengths and limits of content moderation. On the one hand, Backpage clearly wasn’t doing enough to police its content. There was definitely more they could have done. On the other hand, they were also among law enforcement’s best allies in identifying and rescuing victims of sex trafficking. And so it’s kind of like we weren’t sure what we wanted from Backpage: we didn’t like that they existed, but the fact they existed actually contributed to helping curb the sex trafficking problem. And we’ve seen, since Backpage has gone under, that there are actually fewer rescues of sex trafficking victims and fewer ways to try and identify the problem. And so Backpage’s content moderation function was actually a resource, as well as a possible hole, in our efforts against sex trafficking.
And this actually is something that we talk a lot to our clients about: that they have to work very closely with law enforcement to make sure that they help with all of this – not just sex trafficking but drugs and illegal weapons, etc. Actually, that leads me to the next question I have for you. We saw that they got charged for the pimping offense on Backpage, but from a legal perspective, who is actually responsible for user-generated content, both on social media sites and on online marketplaces like Backpage? Is it the person posting it, or is it the platform, or is that kind of up in the air?
Well, there are different answers in different countries, so let’s talk about the US for a moment. In the US in 1996, Congress enacted a law sometimes called the Communications Decency Act, or the CDA; I just refer to it as Section 230. And Section 230 basically says that websites aren’t liable for publishing third-party content. It could be user-generated content or other types of content; it could be advertising, in the case of Backpage. If the content comes from a third party, the website publishing it isn’t liable for it. And that legal foundation really shapes the way we think about content moderation in the United States, because there is no one right answer for how to moderate content from a liability standpoint: if you moderate content aggressively, you’re not liable for what you missed.
If you don’t moderate content, you’re not liable for the fact that you publish offensive third-party content. So Internet companies in the United States have the ability to dial the level of content moderation up or down to suit their particular community. Either way, whatever they choose, the legal liability answer is the same. That’s not the case in other countries, where there are different legal standards for different types of content that in many cases impose substantial legal liability for not doing more to filter or manage content, and definitely impose liability for whatever the sites might miss. So in other countries, content moderation is dictated much more by the law. In the United States, it’s all dictated by what the company thinks is in the best interests of its users.
And I think maybe that’s one of the reasons why it’s so confusing, especially for global platforms. We have clients in Thailand, for instance, who have to moderate everything that mentions the King – it’s illegal to have it on their platforms, so they have to be really, really careful with that. We have other examples in Europe, where in some countries you really have to take things down because otherwise you’re liable, and in some you’re not. So from your perspective, if you’re a global player, how do you manage that legally to make sure that you’re actually complying with the laws?
Super challenging problem. It’s not only a legal question but an operations question: how do you build your back-end operations in order to be able to differentially handle the demands, or the review that needs to be done, and then how do you implement that from a remedy standpoint?
Do you take something down locally, do you take it down globally, do you not take it down at all, or do you do some other, fourth thing? And so these are the kinds of things that we’re still seeing companies develop their own answers to. There’s no single global standard answer to this. I think the dominant philosophy is that where there’s an idiosyncratic country law, like the laws of Thailand about disparaging the King, the best answer is to try to remove that content for the Thailand users, but not for the rest of the globe.
And so I think that’s the dominant norm. If you’re familiar with the Manila Principles, they definitely encourage online services to do that country-by-country removal, rather than global removals based on the most restrictive local laws. But you have to have the operations, from a content moderation filtering or human standpoint, as well as the technological capability, to be able to remove content from only one user set and not from the entire globe. So the operations, technology, and legal teams all have to work together to make that happen.
Yeah, it’s definitely tricky. And when we’re talking about legal liability, I just want to bring up another case currently running in the UK: this dad who’s suing Instagram over his daughter’s suicide, which is obviously super sad. They are currently looking at whether or not it should have legal ramifications that she, after watching some things on Instagram, felt so bad about herself that she committed suicide. Do you think that companies that allow user-generated content need to be more aware of the liability that they may or may not have? Or how should they go about handling something like that?
I think most of the major internet companies are keenly attuned to the legal liability, as well as the Government Affairs and Public Affairs aspects, of the decisions that they make. If anything, they’re overly attuned to it, and they might prioritize those over what’s in the best interests of the community. They’re trying to enable people to talk to each other. And so if we prioritize the law, the Government Affairs piece, and maybe the public relations piece over the needs of the community, I think in the end we get homogenized communities – we get one-size-fits-all communities, rather than the more diverse proliferation of communities that we saw starting from the 1990s. Back in the old days, the idea was that a thousand flowers could bloom; maybe actually only 10 flowers are going to be able to bloom, because legal liability, government relations, and the public relations pieces are going to force everyone to look about the same. Now, when it comes to things like people committing suicide based on the content that they consume – a terrible tragedy, but we have to be really thoughtful about what an Internet company could do to prevent that. How does an Internet company make the kind of content removal or moderation decisions that would actually prevent someone from doing something so drastic? I don’t know if that’s even achievable. And if that becomes a legal standard, I don’t know what that does to the outcome of decision making.
No, I think you’re completely right. And the thing is, obviously, we work with content moderation, and we have a lot of tools that can help companies with things such as cyberbullying. But there’s always context, and at some point it becomes so hard to actually judge whether something is bullying or not that it’s almost impossible for humans or machines to figure it out. But I do think – and I don’t know if you agree – that we will see a lot more legislation around cyberbullying specifically in the future, because it’s such a hot topic. What do you think we will see there?
I don't really understand the term cyberbullying, so I tend to ask for more precision when we're talking on that topic. Unquestionably, the Internet allows people to engage in harmful content directed towards a particular individual, with a design to cause that person distress. Here in the United States, we have a bunch of laws that govern that already. We might argue they're incomplete, and I could understand those arguments, but it's not like we've ignored that problem from a legal standpoint.
And also, I think we're doing a better job teaching our children how to use the Internet as an extension of society. In my generation, nobody taught me how to use the Internet, and nobody taught anyone else on the Internet how to use it with me. And so there were a lot of bad things that people did to each other in the old days, because they just didn't know better. They had never been taught about that. My children going through school today are being taught how important it is to be respectful online, and how easy it is for misunderstandings or unintended psychological harm to occur. I don't know if that will change the result, but I do think that as we train the next generation, I'm hoping that will be a partial or complete response to the need for new laws about cyberbullying. We won't ever be completely civil to each other, but I'd like to think that we'll become more civil than we are today, through education.
So you think it’s more a matter of education rather than putting laws into place here?
I think that's our best hope, and we have to do it either way, right? We have to educate people to use the Internet. And once we see the results of that education, I think we'll have a better understanding of what is intrinsic in human nature and cannot be educated away, that we're just going to have to punish or sanction, and what we can actually tune the community to self-correct through education. We just don't know the answer to that today, but I'm hopeful we're actually making progress on it.
And I think it's really interesting what you're saying, "I don't understand cyberbullying." It's because cyberbullying is being thrown out there for everything, right? Everything from racism, to making fun of people for their sexual orientation, to very direct personal attacks that happen between people who already know each other offline. So it's really hard. And for the whole part about racism, sexual orientation and all of that, we have solutions, and it can be removed (not easily, but it can be removed). But when it comes to personal relationships that people bring online, then it becomes almost impossible to enforce any laws there, I think.
Well, the reason I don't understand the term is that our own United States president has proclaimed that he's the victim of presidential bullying. And I never thought such a thing was possible.
And I don't want to comment on U.S. politics. But yes, I agree, it's a tough one to crack. So one of the things that I'm really interested in, when we talk about liability, laws and content moderation across the globe, is: do you think that we will ever see a kind of unified approach, where the world gets together and says, this is OK, this is not okay, and has some kind of global laws around it?
I think that was the hope of the 1990s. That because the Internet cuts across geographic borders, it becomes this new non-geography-specific communication space, and that was going to force the governments to customize the rules for a borderless network. I think it's worked in the exact opposite way, unfortunately. I think we've in fact eliminated the notion of a single Internet, and we now have country-specific internets, where each country is now imposing, or reimposing, its local rules and making sure they're enforced. So that's why Thailand can have a rule about disparaging the king and make that the law for Thailand. And they're not likely to then say, let's get rid of that law for the sake of a global standard. They're more likely to say, let's force the creation of a Thailand-specific internet where that is the law. There can be other internets out there, but we're going to create an internet that's specific to Thailand. And every country is replicating that. So that utopian-ish vision, that the Internet would cause us all to come together to create a new global law: I think we're seeing the direct opposite. And it's unfortunate. As someone who grew up in the 1990s on the Internet, we lost something that we almost had. There was the possibility of achieving this really remarkable thing, that we could bring the globe together. And I think there is no foreseeable circumstance today where that's likely to occur.
I agree. Companies across the globe who want to be global will, unfortunately I guess, still have to go into each country and understand the liability laws there. At least for the foreseeable future, and probably forever, right?
For the foreseeable future. I’d like to think that we will outstrip our wildest dreams, that we’ll have a new opportunity to bring a global community together. But I don’t know when that’s going to occur.
Fingers crossed, both for people with legal headaches and for us as humanity as a whole. And just going back to the US, because I know that's where you have the most knowledge about the legal landscape. There was a recent case in 2018, where Facebook's Mark Zuckerberg had to face the U.S. Congress, because he was being accused of somehow facilitating Russian meddling in your election in the US. And after that, there was a backlash where people were saying that he was enabling foreign countries to interfere with your democracy. Is that something you think we will see more of going forward, with new elections, and not just in the U.S. but globally? And what can sites do to prevent it? Because that's really hard.
A super complicated topic. Let's break it apart into a few different issues. First of all, Facebook enabled self-service advertising for political ads, without imposing appropriate controls to make sure that those ads were actually run by people who had the authority to run them.
That was not well considered; that's called a mistake. And Facebook and the other internet companies got the message: that doesn't work. So I don't think we're likely to see that kind of abuse in the future, because I think the internet companies got the message that if you're going to turn on political ads, you have to assume they're going to be misused, and you're going to have to do more than just allow self-service. Facebook and other services also allow the creation of anonymous or pseudonymous accounts, which can be used to spread false information, including false political information. That's a harder problem to solve, because either they have to turn off the ability to create these anonymous or pseudonymous accounts, or they're going to have to do a lot more to police them. I think the internet companies are aware of this problem, and I think they're taking better steps to address it. So I'm actually optimistic that we won't see the kind of abuse that we saw in 2016, but I think it would be unrealistic to think there won't be some abuse. I'm hoping it's just smaller, just noise, as opposed to potentially changing the outcome.
To me, the thing that the internet companies have to solve, and that they don't know how to solve, is that the politicians who are getting this unfiltered access to the public are engaging in lies and propaganda. And the internet companies want the government officials to be speaking on their platform; that's a net win for them in their mind. But it has also meant that without the filtering process of traditional media, they're just going to lie, flat out lie, to the public and to their constituents, without any remorse or any fear of consequences. And I don't know how to fix that problem without treating government accounts as per se suspicious, since they are going to be used for propaganda and lying. You've got to do something to treat them as among the biggest threats, which I don't think the internet companies are likely to do. So even if we get rid of the Russian malefactors coming and trying to hack the elections, we'll have the officially elected politicians who will engage in the same kinds of lies, and there's not a lot the internet companies can do about that.
Now, I know that Facebook at some point had fact-checking on the news feed as well. So maybe that's what we need: a fact-check for all political figures and their pages, and then we can see a change.
Yeah. We have to stop assuming that they're telling us the truth. If they're given unfiltered access to the public, they will lie without remorse, and because of that it's actually not easy to address.
You can fact-check them all you want, but the government officials have such a wide platform, and there's no filtration to correct them. Even if you try to rein that back in, it's not going to be enough.
But maybe, to go back to what you said about cyberbullying, this is also something that we just need to make sure the next generation is educated in: doing their own fact-checking. Because I think in general that's also a huge issue with the Internet, that we are really bad at fact-checking for ourselves. At least my generation, and maybe your generation too; I think we're kind of the same. We're so used to traditional media, where at least someone else has fact-checked before us, that we have a tendency to just consume. Maybe the next generation is going to be better at sifting through that.
Yeah, it's a weird time, because we don't really know what it takes to get a broad segment of the population to actually care about the facts and the accuracy, and to punish people who are continuously misinformed or outright misrepresenting the facts. There will always be a segment of the population that is willing to accept that people lie to them, and in fact might prefer to be lied to; they might view that as the way things are and maybe should be. The question is how big that percentage is. Is it just a tiny percentage, or is it the majority of people? That's going to make a difference, and I don't know how to fix it. Unfortunately, here in the US, as you may know, part of the modern political economy is that we're actually reducing our investments in education. We are not making concerted efforts to teach people about the importance of getting the facts right and double-checking what people tell you, online and elsewhere. And if we aren't educated on that front, we're not going to actually get the experiment that you just described. So unlike cyberbullying, where I think we are investing in the issue, we're not investing in the importance of getting your facts right at the level that we used to. And so I'm nervous for the future that way. I don't see how we're going to turn that around by educating the population, given the way that we're actually investing.
That's a dire forecast for the future, I feel. But in that case, maybe it becomes even more important that companies come up with a good solution for this, if at all possible. And just jumping to another topic: as you know, for many online sites a lot of interaction happens offline as well. So, for instance, with Airbnb you have the initial contact online, but then you obviously have some contact offline as well, as the person goes to the apartment and lives there for a little while. And there have been issues for Airbnb, for instance, with apartments getting trashed; they solved that by putting insurance in place. And there's also Uber, which has had issues with rape accusations, robberies and a lot of other examples of this. So just from a legal standpoint, who is liable for the interactions that happen offline, if the first contact was online?
So in the United States, in general, Section 230 covers this situation. Let me restate that: to the extent that the only way in which the Internet company is mediating a conversation between buyer and seller is by publishing content between them, and then something bad happens because of the communication, the Internet company's only way of being liable is for how it shared the information between the parties, and that would hold it responsible for third-party content. So in general, Section 230 actually plays a key role here. And we have seen Internet companies invoke Section 230 for offline harms, anywhere from eBay not being liable for publishing listings for items that cause damage or personal injury to people, to a case I teach in my intro class, the Doe v. MySpace case from 2008, where the victim of an offline sexual assault wasn't able to hold MySpace liable for the fact that the parties had met online and exchanged information. And we also have several online dating cases with the same kind of result: when people meet online but then engage in offline contact that leads to physical harm, the sites aren't liable. In some cases, Section 230 may not be the basis for that, but there are other legal doctrines that would still protect the Internet services. So the starting premise is that the Internet companies aren't liable for offline injuries that are caused because they allow people to talk to each other. But plaintiffs keep trying, and that's an area where I think we have not definitively resolved the legal issue. So I don't know that that's going to be the answer over the long run, but that's where I think the answer is today.
And I'm just thinking that, as more and more of our society moves online and into cyberspace, maybe the boundaries are going to blur a little bit, and maybe the liability may change. I can only foresee that in the future a lot more of our interactions are going to happen online. So it's going to be a more blurred line in the future.
Yeah, maybe. Although the flip side is that one of the things the Internet is best at is making markets more efficient, allowing buyers and sellers to find each other at lower transaction costs than they could in the offline world. And by reducing transaction costs, new transactions become possible that wouldn't be possible in any other way. So if liability for offline injuries is added to that equation, we raise the transaction costs back up again, and we might foreclose some of those markets. So one of the questions is whether it's better to have those markets, knowing that there might be some risk, or whether we should foreclose the market because of that risk. Insurance does play a role here. If I'm running an Airbnb or an Uber, or if I'm running an eBay, I'm going to have insurance programs, where I say: I want to cover some of those risks of loss, because I want to keep the transaction costs as low as possible, but I still want to provide compensation for the small percentage of consumers who are affected by some harm.
Yes, and that's actually what we see with a lot of our clients. When that step of trusting the other party becomes too big, they can put things like that in place, like Airbnb did. So maybe that's something that gets solved in other ways than through laws.
Well, the question is whether or not the marketplace has enough incentive to provide adequate trust and safety for its members without the imposition of draconian legal consequences. We could say: you are the insurer of every bad action that occurs between buyer and seller on your service, in which case the business probably can't succeed. Or we could say: if you care enough about making an environment where people feel confident transacting with each other, you have to build trust and safety mechanisms. That's not just insurance; that could include ratings and reviews, it could include doing more vetting of buyers or sellers before they enter the marketplace, or a bunch of other techniques to try to improve the trust and safety of the community. The legal regulation only really makes sense if we believe that the market mechanisms aren't strong enough, that those marketplaces don't care enough about their own reputation and the comfort they provide to their members about trust and safety. I look back at something eBay did almost 20 years ago, when they said: we will insure buyers against a certain risk of loss, up to certain price points. We're going to go ahead and basically write a check to the buyer if they feel like they didn't get what they bargained for; they shouldn't be out the money, we will bear that risk of loss. Buyers can now feel more confident buying a higher-end good, because we've got your back. eBay doesn't need a law for that; eBay is going to do that because it becomes the key that unlocks a category of transactions that was being suppressed by fear.
Yeah, and it's interesting because, as you said, eBay was doing this 20 years ago, and back then eBay was doing it to convince people that the whole idea of trading online was safe, because back then it was scary putting your credit card online, right? I remember being scared the first time I bought something from Amazon, thinking "oh no, am I ever going to see these books?" But the difference is that now we all believe in that idea; we are not scared of shopping online to the same extent anymore. Now there's so much competition that it becomes a competitive advantage to go out and say, we will cover you if it happens. So I'm actually quite positive about this going forward, with no need for any form of liability laws in this area.
I mean, think about what Airbnb has done. It's made us feel comfortable enough that we will stay in somebody else's house that we have no knowledge about, except for what we read online. Or think about Uber: we're willing to get in the car with somebody we've never met before, who is not licensed by the government. And yet I feel just as safe in an Uber as I do in a taxi, and I feel almost as safe in an Airbnb as I do in a hotel. And for me, that's because the companies have done such a good job of building trust and safety that they've unlocked that marketplace. They've created that marketplace, but they had to invest in trust and safety to make it work.
So maybe that's the advice: if you want to be the next Uber or Airbnb, invest in trust and safety?
It's not negotiable, really. It's essential to every online marketplace, because you're asking people to do something they're not used to doing. That's the whole point: you've unlocked a new market that didn't exist before.
So I'm just going to take another jump here, because before we end this interview I really want to talk about the events that you do. I think they are super interesting, and I was very sad that I couldn't join. Recently you had the fourth edition, I think, of the Content Moderation at Scale event, and that was in Brussels. We join a lot of events, but mostly they are organized by people within the industry, and then a lot of online marketplaces, social media platforms, vendors and other interested people join. But what you do, which I think is really interesting, is that you also get the governments involved. That adds a whole other layer to it and facilitates this discussion, which I think is super important. And as you say, it's super important to educate both sides. I know that Facebook, Google, Medium and Snapchat joined in Brussels, alongside people from the European Parliament. What's the goal, and what made you start these events?
So let's go back to 2017, when a representative from Missouri introduced a bill called FOSTA that was designed to scale back Section 230 over certain concerns about sex trafficking. I'm horrified by sex trafficking; I want us to have solid policy solutions for it. But the law was never going to succeed, because on its face it assumes that all sex trafficking ads or promotions online come with a big flashing neon warning sign saying "this is illegal content", and that it is impossible for the Internet companies to miss it. So all they have to do is look for the neon signs, pull those out, and everyone is going to be better off. But that's not the way the world works, and you of all people know that; the reason people pay your company is because it doesn't work that way.
Figuring out which is the good content and which is the bad content is a really hard problem. It requires humans and machines, and it requires a policy that can be executed by each of them. And even then it's never going to be perfect. So the law was misarchitected at its core, because it didn't understand how Internet companies were running their operations under the hood. On the other hand, one of the reasons why the government regulators don't understand what's going on under the hood is because Internet companies don't want to talk about it. Prior to 2018, Internet companies were extremely reticent to discuss what they were doing under the hood. They didn't want that to be the topic of public conversation. Occasionally information would leak out; investigative journalists might be able to find some source document and talk about it. But the Internet companies weren't being forthcoming or engaging on that topic. They were hoping that they could just sweep it under the rug and that everything would work out.
So when I saw this disconnect in the FOSTA bill, and the fact that the Internet companies weren't talking, and therefore we couldn't blame the government regulators for not knowing, I said: "let me see what I can do to help produce more information about what's going on under the hood", so the regulators have a better chance of understanding why their laws are not going to work. The initial architecture of our event, which we held in February 2018 here at Santa Clara University, was just to get companies to talk on the record, in an open-door event that anyone could attend and that was going to be recorded. It was going to be covered by the media; that was essential to the success of the event, so that the information would be available to decision-makers in governments. So they understand why there are no neon signs on bad content, and they can develop laws that are more appropriately tuned to how the operations are actually structured.
So I went to most of the major internet companies in the valley, and a bunch of other companies that were less well known but equally important to our discussion, and I said: "will you talk about what you're doing under the hood?" Several companies said no. Other companies said "sure, as long as it's a closed-door event". And other companies said "you know what, I think you're right, it's time. Let's go ahead and rip that band-aid off, let's put the information in the public sphere so that the policymakers understand what's going on".
It took me about a year, from when I started to when we actually had our event, to reel these companies in. Each of the participants who were at that February event required at least four phone calls, not emails, phone calls. I had to talk to them and assure them that things were going to be OK, and remind them that it was going to be on the record and that they were going to be accountable for the words that they said. The February 2018 event, I think, was quite successful, not because it said things that were mind-blowing, as much as it got information into the public discourse that hadn't been available. We revealed new information that we didn't know before. Then we did it three other times: in Washington D.C. in May of 2018, in New York City in September or October 2018, I can't remember now, and then in Brussels in February of 2019. And each time we've gotten companies to say some new things that hadn't been said at the prior events. So we're getting more information. And of course since then, given the red-hot attention on content moderation, there's just lots of new information in addition to what we got from the conferences.
I’d like to think the companies were part of the zeitgeist of the change, from keeping content moderation operations discussions under the hood to a situation now where they’re being readily shared and discussed in public. And I think we’re all better for that.
I agree. I think you called it "the culture of silence" in your Brussels introduction. So it is getting better; that's the feeling I'm getting from you now. You are positive that we will see more transparency when it comes to content moderation. Is that the message?
Absolutely. I don't know if it'll be 100 percent transparent, so there will always be a reason to criticize Internet companies for holding back. But look at what Facebook has done, for example. Facebook has literally published the written instructions it provides to its content moderation operators. In the past, some of that information leaked out, and Facebook was always uncomfortable about the fact that it was leaking out. Now they're voluntarily sharing the entire instruction set and letting everyone have at it, and they're going to take all the heat for all the weird idiosyncrasies. But you know what happened when most people looked at that entire set of content moderation instructions? They said: "Wow, that's hard. That's really an overwhelming problem. I don't know if Facebook is doing it well, but I don't know that I could do it better." So I think it's been a real plus for Facebook to be that transparent, and I think other companies have recognized that. When you tell people how hard the problem is, they actually start to get it, and that's a healthy dynamic. So I do think we've made some progress getting Internet companies to be more forthcoming, and I do think that's the future. I don't think they'll ever be perfect. But I do think that now we're having a much more informed discussion about what's actually taking place under the hood.
It's interesting that you're saying this about Facebook's guidelines, because I think it was in 2017 that they had the whole controversy around the Napalm Girl photo, where they took down the picture and then put it back up. And when we sat internally and discussed "what would we have done", of course it has to do with context, but we also agreed, and we've worked with content moderation since 2002, that it is super hard. On one side you have to have policies that protect children online; you can't have naked children online. On the other hand, we also understand why people want pictures up that have historical significance, that tell a story and that remind us of horrors that happened in the past, so we can learn from them. It's such a hard issue, and especially the more we go into AI, the harder it's going to be to solve these grey areas if you don't apply some human level to it as well.
One thing I think we've learned about content moderation is that every content moderation decision creates winners and losers. Someone is going to want that content up, and someone else is going to want the content down, and they're not going to get everything they want. Somebody is going to feel like they didn't get what they asked for. And knowing that now is actually, I think, empowering. Of course you're going to have winners and losers, and the losers are going to be unhappy. You're going to try to optimize for a bunch of things, to minimize the harm that losers experience or the number of losers who feel like they didn't get what they want. But you're never going to win that battle. And I think that's something the regulators are really struggling with, because they think there's a magic wand that allows all the content moderation decisions to be made so that only winners are created. Once we realize that's not possible, I think we recognize that, with the Napalm Girl photo for example, which is an excellent case, we have to fight to keep that content up. That photo changed American political decision-making about the Vietnam War. That photo changed the world. And if photos like that aren't produced in the future, because we're worried about child pornography, we're going to suffer for it. People are going to get away with making bad decisions and not facing consequences. We have to find a way to keep that up, even though there are going to be people who object to it, who say "that photo should have come down". So I hope we can help evangelize the message: there is no perfect content moderation. There will always be people who are unhappy with any decision. That's not a bug, that's a feature.
Yeah, and this is one of the things that we say when we advise our clients on how to build their policies. Instead of ruling on individual cases, look at what it is you want to achieve, what kind of community you are trying to build, who the people participating in it are, and what message you want to send to them. Then work backward from that on each grey-area issue that comes up: how is this going to affect the kind of community you want, the message you want to send, and so on? Of course you need to have guidelines, but for those grey areas it has to be a judgment call. Then you have to make sure that the people making those judgment calls understand what it is you're trying to achieve, rather than flicking through the rule book trying to find a perfect match, because there isn't going to be a perfect match for everything.
I love the way you're saying that, and I'm 100 percent on board. Another way I describe it is that content moderation policies shouldn't be the same from company to company. There might be certain things that we all agree as a society are impermissible across the network. But there are so many other places where it's actually beneficial to have a bit more rough-and-tumble environment, where content moderation is lighter, and also more locked-down communities where content moderation is heavier. And the needs of the community should be reflected in those decisions. So I love the way you're saying that we actually have the ability to dial the content moderation approaches up or down to reflect the needs of a local audience, to help them achieve their goal. That should always come first. I'm glad you advise your clients about that.
Yeah, I mean, we do have some generic models, but most of the AI that we produce for content moderation is actually based on the client's data set, so they get something that's completely tailored to their community. It's almost impossible to have a generic ruleset that applies across the board. And what a boring world it would be if all communities adhered to the exact same rules.
Well, we kind of know what that looks like. That looks like, at least here in the US, traditional media like newspapers. There was a lot of content that would never be fit to print in a newspaper, between the combination of the high editorial costs and the liability for publishing it. What happened is that the Internet created the opportunity to have all this user-generated content that never would have made it into a newspaper, but that still had tons of value for society. We want to preserve that extra layer of communication that humans are capable of. And if we have a one-size-fits-all policy, we miss something very valuable in the process.
So just going back to the conference: you mentioned that the discussion around how to do content moderation has significant consequences for our society. I think we touched a little bit upon it now in our discussion, but could you elaborate a little more on that?
So there’s a couple of ways in which it matters. One is the extent to which content can change society. It can have an impact on a political discussion. It can help voters determine who to elect or who not to elect. It can hold a company accountable for selling bad products in the marketplace. A single item of content can have that kind of consequence, and we have to fight to make sure that it’s available, because if we have a regime that screens that information out, then society doesn’t get to grow. Take the discussion about the “me too” movement. I don’t know how it proliferated across the entire world, but here in the United States, we had a real reckoning about sexual harassment and abuse in the workplace, from women who had been used to being sexually harassed or abused, because that was the norm in their community and the price of the economic or social advancement that they wanted to achieve. And then one tweeter put up the hashtag #metoo and said: “this is my story too”. And that started a viral movement in the United States to fight back against sexual abuse and harassment. There was not an equivalent reaction in the UK. And the reason why is that UK defamation laws would create liability for anyone who chose to distribute that, in addition to the intermediary like Twitter, saying “we can’t have that kind of #metoo discussion here, because we face liability from all these people who are being accused of sexual harassment or abuse”. So having a legal regime that allows people, women, to share their stories and to share collectively really changed our society. If we had a different set of policies, we wouldn’t have had that potential. The other way is the diversity of the community, and I can’t stress that enough. I don’t know all the millions of different diverse communities that are on the Internet, but they don’t need the same kind of content moderation approach as each other.
So if Facebook tries to be a mass-market global platform, they’re going to have one set of approaches. The enthusiast message board for sheltie dogs is going to have completely different needs, and if they have to be treated the same, we’re not going to get both. We’re probably going to get one or the other, but we’re not gonna get both. So making sure that the sheltie dog lovers can talk to each other, and making sure that we have a global mass-market social network where people can talk to each other across the globe: they’re both valuable. We need a content moderation approach that allows that.
Yeah, agreed. It’s actually funny, because one of the things we talk a lot about is how important it is to make sure that the rules you have fit your exact community. One of the examples we use: if you have a forum or community for gynecologists, there are going to be some words said in there that probably are not appropriate for a platform that caters to children. But in that kind of community, it’s absolutely necessary that you’re allowed to say certain words without a filter coming in to say “no, no, no, you can’t do that”. So yes, it’s super important to adapt your policies to your community, I agree.
You could go through a thousand examples, right. You know, a community of people who are recovering drug addicts, they’re gonna need to talk to each other differently than a community of people who are discussing criminal drug enforcement.
I agree. So I know from Brussels, and also from the one in San Francisco, that there are recordings available, and I think you have that for all of the events actually. First of all, it would be great if you could tell us where to find them. I can put in the URLs for all of the recordings; I know the one from Brussels, but not the rest of them. But if we just go back to the event: from your point of view, what was the most important takeaway from the last one in Brussels?
Right. So we did four different events, and each one had a little different emphasis. The so-called Valley event was more for the Silicon Valley community. The D.C. event was very much about talking to inside-the-Beltway politicians. The New York event was a little bit more academic in nature. And then the Brussels one was very much for European policymakers. We wanted the European policymakers to hear from the Internet companies what they’re actually doing and how they might differ from each other. And so for me the number one takeaway: we had representatives from online services in Poland and the Czech Republic who talked about how they cater to their local communities and how they built their operations to reflect that. And they were different from anything we had heard from the companies at the other three events, just by nature of their market positioning, their local community needs, and the market niche that they were trying to occupy. They were different, and that led to different solutions. So I thought the most interesting thing was the diversity of under-the-hood content moderation practices. We got some new exposure to diverse practices from the event.
That’s really cool. And just to round up, is there going to be a fifth installment of Content Moderation at Scale and if so, when and where?
We don’t have any plans for a fifth installment, in part because we’ve done a lot of the work that we wanted to do. We wanted to get the information into the discourse, and now it’s coming into the discourse without the conferences even prodding it. I’m open to doing a fifth one, but I don’t have any plans to do so. There are other organizations that might be branching off and doing events like this. For example, the International Association of Privacy Professionals, or the IAPP, is going to have a content moderation program to supplement their normal discussion about privacy. That’s going to be May 1, 2019, in Washington D.C. Happy to give you more information about that. But that’s their thing; they’re catering to their audience, and it’s not a direct extension of what we were doing. The thing I’ve been doing since the first event is trying to help organize a professional organization for content moderation professionals. We need an industry association that pulls together all the participants, including the workers in the field, the companies that employ them, the vendors like you, and the body shops that are outsourcing some of the work. We need a venue for all those people to interact with each other, and that’s a high-priority issue. It’s a hard project and it’s not one that I can deliver myself. My hope is that we’re going to achieve big progress on that front; I’m talking about it in part because we are making progress, I just haven’t been able to go public about it. And that organization would take over doing the programming for our community. So the fifth one, I hope, will be the kickoff event for the new professional organization that we’re hoping to create. That’s going to be much more detailed, much more granular and customized for the needs of the industry than anything we’ve done to date. But we need the infrastructure to do that, and we have to build it.
Sounds super interesting. I think we would be happy to lend any help we can in building that, because even if, as you say, it’s out in the open now, I still think there’s a lot of education that needs to happen for companies, end users and politicians, and we’re always happy to work with authorities, end users and companies alike. As long as we don’t have to share any specific details about clients, because obviously that’s not ours to share. But yes, if you need any help, let us know.
Yeah, thank you. I’m hoping that I will have news on that in the foreseeable future. And if we can get that project off the ground, it’s going to unlock the doors to a whole bunch of new programming that’s never existed before.
That’s really cool, and I’d say that’s a cool note to end on as well. Thank you so much, Eric, it’s been fantastic talking to you, and I really feel your passion for this topic. I hope you get to work with the new organization, and that you don’t have to completely let go of content moderation now.
It’s great to have a chance to talk with you today. I do plan to be a part of this community going forward. So that means our paths are going to cross multiple times, and I look forward to that.
All right, thank you.
We all know that each individual marketplace is unique and comes with its own set of requirements and challenges. However, from our 17 years of experience, we’ve also found that some challenges are universal for most online marketplaces. The vast majority of marketplaces struggle with or want to improve monetization, acquisition, conversion, and retention, and these challenges are often closely tied to the user experience.
This means that if you are having issues in one of the areas mentioned, UX is often a good place to start when you are looking for a culprit and a solution.
If you struggle with monetization, acquisition, conversion or retention, it is highly likely that you are not meeting user expectations.
If you were, they’d be staying on your site and happy to pay for the service, right?
The reason you are not meeting their expectations could be one of several issues. Low-quality inventory, poor customer support, and an unsafe platform, for instance, are all very common issues that severely damage the user experience.
Analyzing and pinpointing where the issue lies is the first step; the next is finding a good solution to fix it.
How to turn around a declining user experience
We get it: user experience is a broad area, and improving it can be approached in many ways. As such, it can sometimes be hard to know where to even start. Seeing how others go about it can be inspiring and help you find the solution that fits your specific site.
Let’s look at a real-life example of diminishing UX. Kaidee, Thailand’s largest C2C marketplace, saw their UX drop significantly when they couldn’t live up to their users’ expectations of a short time-to-site. Their users turned to public forums, including Facebook and the App Store, to actively complain about it. This was obviously not good for Kaidee, as it hurt their retention and acquisition, nor was it good for their users, who were clearly unhappy with the product at the time.
Kaidee had to act. They looked internally, at their own platform, to find ways to shorten the time-to-site for their users and improve their UX.
We invited the CEO of Kaidee, Tiwa York, for a webinar, where he openly shared how they managed to overcome the negative trend of diminishing UX, how they approached an aging core platform and how they managed to save 6 months of development time while achieving 85% automation of their total moderation processes.
Here’s the full transcript of our webinar “Kaidee’s journey to improved UX using automation”.
OK, first of all, I must apologize that there is a slight delay as my slides come through, so we may have to wait for the visuals to come up on your side.
It has to do with the fact that we’re operating out of Thailand, so quite a long way away from Sweden at the moment. So just a little bit about Thailand. We’ve got a population of sixty-nine million, GDP per capita of about $17,000, and a quite healthy Internet penetration of 57 million users. It’s key to note that the growth in Internet users is all smartphone-based.
So Thailand’s a market that jumped from desktop straight into smartphones, and desktop is a minor part of the traffic today. Desktop represents about 20% of our traffic, mobile web is about 40%, and mobile apps are another 40%. In terms of our social media landscape, we’re ranked number eight globally in the number of Facebook users. I think still today Bangkok is the number one Facebook city in the world.
So there’s a little bit of background on the Internet landscape of Thailand; moving on, it’s time for Kaidee itself. As I mentioned, we reach about 30 million Thais, and that represents about 329 million visits. We do roughly 27 to 30 million visits per month, 7 million uniques, and about 600,000 to 650,000 uniques per day, generating about 15 to 20 million page views. And we get about 30,000 items listed per day.
Last year we had 8.7 million items listed for sale. And our top category is cars; it’s RodKaidee. So we have a sub-brand, not a subsite, just a sub-brand called RodKaidee, which means cars. And by the way, Kaidee translates into English as “sells well”.
Number two is motorcycles, number three is mobile and tablet, and number four is property. So BaanKaidee, which means ‘house sells well’, and number five is auto parts. Another key thing, in terms of our usage, how some people use Kaidee, is that we’ve got a large portion of our traffic that searches for items on the platform, and we do about two to three million keyword searches per day on the platform.
The top five things searched for were PCX, Honda PCX, MSX, which is another Honda motorcycle, and the Yamaha R15, so you see motorcycles are a huge category for us. And then speakers, and finally refrigerator. So that gives you a good idea of the business itself. Now, let’s take a look at the moderation landscape. We don’t have too much freedom of the press in Thailand, to put it bluntly.
There are several different laws that we have to abide by. One of the strictest is the lèse-majesté law, which means you can’t criticize or be critical of the royal family. So if one thing goes public on my site that goes against the royal family, basically I’m in trouble with the law, and it’s a very serious topic. In addition to that, there are a lot of Consumer Protection Board laws in Thailand that we have to abide by. In fact, the Consumer Protection Board hired a full-time person to just watch Kaidee.
That’s it, that’s their job.
And then we’ve got, for example, no selling cigarettes. E-cigarettes are illegal in Thailand. Alcohol: you can sell a bottle, let’s say a Johnnie Walker, as long as there’s no alcohol in the bottle. If there’s alcohol in the bottle, it’s illegal to place it online and advertise it. We also have FDA laws that are very strict. For example, take breast pumps in the babies category. In Thailand, breast pumps are considered a medical device, and medical devices are not allowed to be advertised unless you have a distributor license. So technically by law, if it’s a secondhand item I can trade it, I’m just not allowed to advertise it. Down to the fact that it’s illegal to print out an A4 sheet and stick it in front of the grocery store. That’s illegal in Thailand for medical devices. So that kind of gives you an idea of the landscape that we deal with. We also abide by all World Wildlife Fund trading rules for pets.
So we try to make sure that illegally traded animals, ivory, things like that, are blocked from the site. And we’ve had people try to sell elephants, monkeys, birds that are illegally traded. Yes, they have tried, and we have to block all that content. So that kind of gives you a picture of why we’ve been very serious about moderation, and the landscape that we deal with. Now let me tell you a little bit about our story and our journey with moderation.
When we launched the site we were 100% post-moderated, so we would check whatever got posted but let it go live first. But with the landscape that we have, we realized we needed to be pre-moderated, and so we went to 100% pre-moderation. We had to build a tool of our own. We started building that tool in early 2012 and developed it over time, and our ad moderators were human. So it was manual, and they were moderating about 350 to 400 ads an hour, each.
That tool was developed in-house and built on the back end that we had built from 2011. And we were quite serious, but it was also slow. We had always aspired to get 95% of ads moderated within five minutes. With our manual moderation, while trying to block fraud and spam, we never got there: less than 20% of ads would go live in less than five minutes.
And people were complaining. They would complain on all the message boards, on Facebook, on the App Store, asking why it takes so long for an ad to get approved. They wanted instant gratification. Another thing along this journey of moderation, for those of you who aren’t doing moderation at the moment: it’s also a key differentiator in our strategy. We believe that moderation and our ad quality team are the heart of our business. And the reason they’re important is that they keep the quality of my marketplace going.
And the idea is that if I don’t have a quality marketplace, nobody wants to walk around in my marketplace, nobody wants to visit me. So we try to keep tight control over the quality of listings that go on the site. If you look at the site today, for those of you who are checking it out right now, you’re going to see that we still have struggles with this. Even with image quality checks and all the policies that we have in place, we still can’t capture everything. But we do our best to try to keep a quality marketplace. No spam, quality of images, quality of products, no fake goods, things like that. So that gives you an idea of where we started from.
Now the critical piece in the storyline happens at the end of 2016. I was sitting around having coffee with my CTO, and we were looking at the core platform that we had built in 2011. We realized we needed to update it. Around the core platform, we had updated everything else to a services architecture.
We built everything we could as services, but we had this monolithic core, which was our ad posting and members system. We looked at it and said: look, if we’re going to drive this business for the next 5 to 10 years, we need to fix this. So we decided to take on the hard yards of fixing that. Part of fixing that meant we would have to rebuild our moderation tool, and rebuilding that moderation tool was a massive undertaking. We looked at it saying, OK, we can rebuild our core system, and what we went to was a services architecture with Kafka as the messaging platform behind it. We brought in things like Cassandra for real-time tracking of analytics. We switched from a MySQL database to a Postgres database and really tried to up our technology game. But that also meant I had to rebuild our tool, and we looked at it and said, oh my gosh, if we have to rebuild the tool, that’s going to be 6 months’ worth of development, and it’s going to take up basically my full team of developers.
So today we have a product and dev team of about 30 people, and we’re currently growing that. So if anybody listening is interested in working in Thailand, let me know. But we said rebuilding was going to take most of our team. At the same time, Besedo came knocking on our door. Besedo would come to talk to us about their solutions all the time, but we were looking for just the moderation tool. And it happened that at that point they had just acquired, or just joined forces with, an AI company. Is that correct, Emil?
Yes, we acquired IoSquare back in 2016. That’s correct. Yeah. And they had this new moderation tool that they were building with machine learning behind it.
So when we talked to William at Besedo, we said, actually now is the right time, and we’re keen on just your tool, Implio. We’re not looking to outsource the moderation, because that’s very close to our heart. But the tool that you have: can we integrate it and use that as our moderation tool?
So that’s kind of where the conversation started. And correct me if I’m wrong, but I think we were the first pure tech-platform integration for Besedo at that time. Is that correct?
You were most likely, at the least, one of the first. I’m not entirely sure that you were the first, but definitely one of them for sure.
Now with that, we faced some challenges. One of the challenges was understanding the lifecycle of an ad: how to pass it from our core system into Implio, then receive it back and manage that ad. It’s not like you just turn on Implio and you’re done. You actually have to think about the whole UX for your customer service team and your internal moderation team, and how an ad passes between our core platform and Implio.
The other challenge was that we had to feed them a lot of training data. So we worked, I think a total of four months continuously, with Implio’s data science team to try to increase the accuracy of the system.
We also discovered a lot of garbage in our own dataset, and we had to do a lot of data cleaning. We had to figure out, you know, where the garbage is and how to get it cleaned up, so that we’re passing good-quality data that Implio can learn from and understand, in order to bring up the accuracy of the system. I think those are most of the challenges covered. What else do we have? There was another one.
Another challenge, in terms of implementation, was the policies that we have. Setting the policies within Implio to match the policies we have to abide by, or the law, took our team quite a bit of time and effort: understanding how policies map to moderation rules, what gets rejected, what gets accepted, and what has to go to manual moderation.
Another challenge was the rejection cases (refusal reasons) that we had. We had a lot more rejection use cases than Implio did on their side. So we had to figure out how to map ‘this ad is rejected because of X, Y and Z’ when that didn’t map directly to a status in Implio. So these are the kinds of challenges we had to overcome.
I think the total time, from the day we decided to go forward with Implio to the day we launched, was right around 6 months. That was because we had to work on our side on how to integrate Implio, and because Implio was doing all the model training and data cleanup for us. Yeah. Emil, did I cover pretty much all the topics that we discussed? I’m not looking at my notes at the moment.
Absolutely. And a lot of the reason why this was time-consuming is that Implio was incredibly young at the time. And like you mentioned, it comes down to quality data: in order to build a quality AI model that is able to catch a high amount of bad content at high accuracy, you need quality data to train on. If you train an AI model with poor data, that’s the result you will get as well.
So the next slide is the results. How did this turn out for us? By leveraging this tool, today we have greater than 95% accuracy in moderation. We now have upwards of 94% of ads going live within five minutes of moderation, and 85% of all ads are auto-moderated, as we call it, which means it’s all machine learning that does it for us today.
When we launched, we were at about 65% automation, and as we gained confidence and understood how to work with the policies and rules within Implio, we grew that to 70%. Then the Implio team worked really closely with us to push that up to 75, then 80 percent, and now 85% today. So that means about 15% of all our ads come back to the moderation team, and we have to manually look through them, trying to identify everything else: is it spam, is it fraud, is it real, is it fake? All the things Implio is not sure of when it sends an ad back to our team.
Yeah. So from our perspective, with Implio as a partner, our team has found it great. We work with a lot of different vendors and partners, and Besedo has been really proactive and responsive to our team in making adjustments and improvements.
Some of the other challenges. Here’s another key detail. Back when we launched with Implio, the UX needed a lot of improvement in order for the workflow to run at speed. Implio worked really closely with us to understand our team’s needs. We showed them our old tool, and they really worked to improve the front-end experience and the UX for the moderators. In terms of choosing Implio, I honestly don’t know of other platforms that can answer our marketplace needs like they do, and we’re happy to have them as a partner.
I don’t mean that just as a sales pitch, I actually mean it: it’s been a key part of, and a key partnership for, moving our business forward. The other result we got from this is that my tech team doesn’t have to build this whole tool and maintain it. So we get a lot of cost savings in terms of our tech time, because that’s offloaded to Implio. Now, I don’t know how to measure that in terms of ROI on the budget side, but I can tell you about the other side: our moderation team.
So pre-Implio, we had 40 moderators working. Today we’re down to 18. If you look at the total cost, it’s about break-even, maybe a little bit more expensive because of the technology. But once you factor in the cost savings of my team, of our dev time behind it, it’s definitely much more efficient and effective for us. So that kind of brings me to the end of my story, of my journey with Kaidee.
On a yearly basis, we deliver a scam awareness calendar to help online marketplaces prepare for scam spikes in the year to come. We base the scam calendar on trend observations from previous years and analysis of major happenings in the coming year. Our trust and safety team works day by day analyzing data to find fraudulent behavior, and proactively supports our clients with information to help them stay ahead of scammers.
Fraudulent behaviors on marketplaces are constantly fluctuating, as we witness periods of increased and decreased scams. Scam spikes are typically triggered by holiday seasons, festivals, events and other large happenings in a year.
For you and your moderation team to stay on top of the scam spikes, you need to be aware of when and where scammers might appear. In this article, we will share some of the most common types of scam for 2019 and when you are likely to see them spike. If you want to learn more about the specific scam spikes, visit our scam awareness calendar where we predict spikes on a month-by-month basis.
Tech release scams
We are spoiled as consumers with new tech releases every year. In so many ways it’s neat that technology keeps advancing, and we often witness competing companies triggering each other to step up their game and drive development. One of the most recurring battles is between the two phone giants Apple and Samsung. When Samsung releases their phone of the year, Apple can’t wait to release theirs.
These two annual releases are considered some of the most important product launches of the year, by tech enthusiasts and consumers. Unfortunately, this also attracts scammers looking to deceive eager buyers.
As in previous years, we’re expecting a scam spike in the weeks leading up to the launch of a new iPhone or Samsung phone. To protect your users, be on the lookout for pre-order listings, prices that are cheap compared to the market price, phrases such as ‘item is new’, ‘as good as new’ or ‘brand new in box’, as well as other deceptive phrases used in the description.
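To make these signals concrete, here is a minimal sketch of a first-pass keyword-and-price filter for phone listings. Everything in it is hypothetical and for illustration only: the phrase list, the reference prices, and the listing fields are our assumptions, not an actual rule from any moderation tool.

```python
import re

# Hypothetical reference prices; a real system would use live market data.
MARKET_PRICE = {"iphone": 999, "galaxy s10": 719}

# Phrases commonly seen in fraudulent phone listings (from the signals above).
SUSPICIOUS_PHRASES = [r"brand new in box", r"as good as new", r"pre-?order"]

def flag_phone_listing(title: str, description: str, price: float) -> list:
    """Return a list of reasons this listing should go to manual review."""
    reasons = []
    text = f"{title} {description}".lower()
    for pattern in SUSPICIOUS_PHRASES:
        if re.search(pattern, text):
            reasons.append(f"suspicious phrase: {pattern}")
    for model, market in MARKET_PRICE.items():
        # Flag listings priced far below market value for a known model.
        if model in text and price < 0.5 * market:
            reasons.append(f"price far below market for {model}")
    return reasons
```

Note that a filter like this only routes suspicious listings to manual review; human moderators still make the final call.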
Samsung is rumored to release Samsung Galaxy S10 on March 8th, with prices starting at $719. Rumors are also floating online, that Samsung will launch the world’s first foldable smartphone in March this year.
Apple, on the other hand, usually hosts their big annual product release in early to mid-September, and if they stick to tradition, we’re expecting their new iPhone to be launched on September 10th this year. Visit this page to stay on top of the latest news surrounding the next iPhone release.
Holiday booking scams
One of the most common targets for scammers is vacation and holiday bookings. When we’re dreaming ourselves away to various destinations in front of our computers or phones, scammers strategically expose us to exclusive vacation deals that look stunning but in reality don’t exist. At Besedo we witness these types of scams on a daily basis, but April and August are considered peak season for holiday scams, when we book our summer and winter vacations.
Make sure your users stay safe on your site. Be on the lookout for fraudulent holiday rental ads and offers that are ‘too good to be true’. More concretely, your moderation team needs to look out for high-quality or stock pictures, free email domains, suspicious IPs, large group rentals, prices below market, demands for full payment in advance, etc.
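As an illustration, signals like these can be collected per listing before deciding whether it needs human review. The sketch below is hypothetical: the field names, the free-email-domain list, and the below-market threshold are invented for the example and not taken from any real platform.

```python
# Common free email providers; scammers often prefer untraceable addresses.
FREE_EMAIL_DOMAINS = {"gmail.com", "yahoo.com", "hotmail.com", "outlook.com"}

def holiday_rental_signals(listing: dict) -> list:
    """Collect risk signals for a vacation rental listing."""
    signals = []
    email = listing.get("contact_email", "")
    if "@" in email and email.rsplit("@", 1)[1].lower() in FREE_EMAIL_DOMAINS:
        signals.append("free email domain")
    if listing.get("full_payment_in_advance"):
        signals.append("full payment in advance")
    market = listing.get("market_price_per_night", 0)
    # Flag prices far below comparable rentals in the same area.
    if market and listing.get("price_per_night", market) < 0.5 * market:
        signals.append("price below market")
    return signals
```

A listing that accumulates several of these signals would then be held back for manual moderation rather than rejected outright.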
Want to learn more about holiday scams?
Check out this article: It’s that time of the year again, the peak season for vacation rental scams.
Shopping, shopping, shopping. We all do it, and most of us love it. Phenomena like Black Friday, Cyber Monday, after-Christmas sales, Singles’ Day, etc. are periods when consumers rush to get exclusive deals and discounts.
While offline consumers risk being trampled in packed stores, online shoppers need to be wary of scammers trying to capitalize on the shopping frenzy by deceiving consumers with ‘super deals’. Be ready for a period of increased scams during and after the shopping peaks. Your team needs to be on the lookout for things like too-good-to-be-true prices, stock photos and phishing emails.
Big events scams
Every year there are multiple events taking place, everything from sports events to concerts and festivals. Unfortunately, most large events also attract a wave of scammers. In 2019 there are two major sports events, the Asian Cup and Copa América. For these kinds of events, your moderation team should pay extra attention to ads with many available tickets for sale, low prices, miscategorized tickets, ultra-cheap airline tickets, addresses and phone numbers that are geographically disconnected, and requests for bank transfer payment only.
Besides the two football tournaments mentioned above, a lot of concerts and festivals are already sold out, which means tickets may be for sale on your marketplace. Stay ahead of the scammers: learn more about ticket scams and how to keep your users safe.
Back to school scams
Being a student often comes with a tight budget and a need to find new accommodation, often in very specific and possibly unfamiliar areas. This naturally makes students vulnerable to fraudulent rental deals and loan offers. Make sure your moderation team pays attention to new users posting flats and flat shares, pricing, emails, stock photos, and dodgy loan offers.
New courses usually start twice a year, in January and September, and it is during these months we typically see an increased number of scammers trying to trick students out of their money.
Stay ahead of the scammers
Most of the scams we’ve listed happen throughout the year, and your team should always be looking out for them. However, by knowing when a spike is likely, you can better prepare your team and staff accordingly.
By being aware of scam spikes and adjusting your moderation setup accordingly, you can keep your users safe and reduce both time-to-site and shrinkage. If your team size isn’t flexible, a good way to manage spikes with minimal impact on the end user is to increase your automation levels when volumes grow.
With the right setup you can automate up to 80% using filters alone, and with tailored AI you can reach even higher accuracy and automation levels.
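To illustrate how filter-based automation translates into an automation level, here is a hypothetical sketch: each ad is routed to approve, reject, or manual review, and the automation rate is the share of ads decided without a human. The specific rules and fields are invented for the example and do not reflect how any particular moderation tool is configured.

```python
def route_ad(ad: dict) -> str:
    """Route an ad to 'approve', 'reject', or 'manual' using simple rules."""
    text = (ad.get("title", "") + " " + ad.get("description", "")).lower()
    # Hard reject: clearly disallowed content.
    if any(word in text for word in ("e-cigarette", "ivory")):
        return "reject"
    # Escalate uncertain signals to human moderators.
    if ad.get("price", 0) <= 0 or "brand new in box" in text:
        return "manual"
    return "approve"

def automation_rate(ads: list) -> float:
    """Share of ads decided automatically, without human review."""
    decisions = [route_ad(ad) for ad in ads]
    automated = sum(1 for d in decisions if d != "manual")
    return automated / len(decisions)
```

The more of the decision space the rules cover confidently, the fewer ads fall into the manual queue, which is what raising the automation level means in practice.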
Want to know how the industry is predicting marketplace trends 2019?
2018 is coming to its end, and what a year it has been! We entered the year talking about classifieds becoming marketplaces by enabling complete transactions through payment solutions on their sites. And we’re leaving it with marketplaces looking to expand into more value-added services, beyond transactions, to complement their offering and make sure that users have the best possible experience on their sites.
In an industry where things can change overnight, it makes you wonder: what will happen in 2019?
To answer that question as accurately as possible, we turned to the people who know our field best: marketplace experts and professionals. Here are their predictions for marketplace trends in 2019.
The industry's verdict: here are the top marketplace trends for 2019.
Is your marketplace ready to compete and grow in 2019?
As the marketplace industry has kept on changing, the role of content moderation has changed too. With moderation previously only seen as a fraud prevention activity, it has now developed into becoming an enabler for marketplaces to achieve high content quality, user safety, and excellent user experience.
It's vital to have a solid moderation setup in place in today's landscape, but it may not be what differentiates your site in 2019. In many cases, it makes sense to outsource part or all of your moderation to a third-party solution provider who is an expert in the field. This frees up valuable resources to develop innovations that will boost your competitive advantage.
If you want to learn more about content moderation and how we can help your marketplace grow, get in touch with a moderation expert.
We wish you a happy new year and a successful 2019!
AI has arrived! While some marketplaces are already benefiting from the technology, others are researching the right solution. Regardless of which state your marketplace is in, research mode or not, it’s always beneficial to know which AI solutions are out there and to evaluate which will bring the most value if implemented.
The tremendous upside of the right AI solutions is marketplace growth, often through improved user experience, lower conversion barriers, and safe transactions. But finding the right AI solution for your online marketplace isn't easy, and spending resources on developing or purchasing the wrong one wastes money and slows momentum.
Considering the jungle of AI solutions available on the market, it isn't easy for decision-makers to prioritize. And even once you've carefully evaluated your needs and requirements for an AI solution, you must also decide whether it makes sense to develop the solution internally or to buy it from a third-party solution provider.
Clarifying AI for marketplaces
With so many question marks surrounding this topic, we decided to help online marketplaces straighten things out. Together with Lars Schön from giosg, we held a 30-minute webinar exploring how marketplaces can benefit from AI and how to decide whether to build or buy.
Watch the full recording of the webinar below: