
How are you liable for your user-generated content? – an interview with law professor Eric Goldman

    A few years ago, Eric Goldman, a law professor at Santa Clara University School of Law and co-founder of four Content Moderation at Scale conferences, went on a quest to narrow the disconnect around content moderation between the world's largest Internet companies, including Facebook and Google, and policymakers in the US and Europe.

    Since then, Eric has improved transparency between the two sides, significantly improving understanding of the complexity of content moderation.

    We sat down with Eric Goldman to pick his brain on the topic of content moderation. We covered topics as diverse as why content moderation is important, how each site needs a unique content moderation approach, and which moderation regulations and legislation we're likely to see in the future.

    Sigrid Zeuthen

    I'm here with Eric Goldman, and I'll let you introduce yourself. I know you have a lot of experience in law and content moderation, but why don't you give us just a brief background of who you are and what you do?

    Eric Goldman

    Yeah. Terrific. And thanks for organizing this. I'm Eric Goldman. I'm a professor of law at Santa Clara University School of Law. It's located in the heart of Silicon Valley, so I work with the Silicon Valley community generally. I started practicing internet law in 1994 at a private law firm during the dot-com boom, then worked as general counsel of an Internet company from 2000 to 2002, and then I became a full-time professor, where much of my research focuses on Internet law. I also blog on the topic at my blog, blog.Ericgoldman.org, which I've been doing since 2005.

    So I cover all aspects of Internet law, and I've had experience with them over the last 25 years. But I've had a particular interest in content moderation as one subset of those issues, going back to the 1990s. It's an old topic for me but one that gets a lot of attention now.

    Sigrid Zeuthen

    Yeah. So how did you get into that? I mean, you've been with it almost from the beginning, right? But you seem very passionate about the discussion around content moderation. So how did you get involved and what sparked your interest?

    Eric Goldman

    I'm sorry, in content moderation or in internet law?

    Sigrid Zeuthen

    In content moderation specifically.

    Eric Goldman

    Yeah. So back in the 1990s, we were dealing with content moderation issues, but we were doing it in a very crude and unsophisticated way, in part because the volume of work just wasn't that great. It was possible for a single person to handle the needs of most Internet companies, and there was a lot of back and forth with people who were affected by content moderation decisions.

    So when I worked in-house, content moderation was one of the topics that was part of my responsibilities as general counsel. But we just didn't have that many issues. So when the issues came up they were interesting: they raised interesting policy issues, interesting technical issues and, of course, interesting legal issues. But it just wasn't a big part of my overall portfolio of responsibilities.

    But I've been thinking about those technology issues and those policy issues ever since then. So I think a lot of the things I've done as a full-time professor have indirectly related to this topic. But I decided to really invest much more heavily starting in 2017, and it was really a response to a bill that was introduced in Congress called FOSTA, which was designed to modify an existing law that protected Internet services in the US, to create extra potential liability for things related to sex trafficking.

    And when I saw that bill, it was clear that the drafters had no idea how it would be operationalized, and their assumptions were just wrong. And because of that, the policy was wrong. So that's what inspired me to say that we need to do more to get the policymakers educated about this topic and to raise the profile of the topic. Meanwhile, in the backdrop, content moderation has become a substantial issue for other reasons. So it just so happens that this has been a very hot topic in addition to the FOSTA-related problems I identified.

    Sigrid Zeuthen

    That's super interesting that you mention this part about the law that came in for sex trafficking, because I think it was in 2018 we saw the whole ruling around Backpage and how they were charged with pimping on the website. Is that a case you have followed as well?

    Eric Goldman

    Oh yeah, I have been following Backpage's legal saga quite closely. I have at times weighed in as an amicus in some of their cases, and I've blogged about most of the key rulings in Backpage's saga. And Backpage is an interesting story, in part because they had content moderation operations. It wasn't like they were a free-for-all. And a lot of the legal issues relate to how well, or poorly, they did their content moderation work.

    Sigrid Zeuthen

    So do you think that whole thing was due to a lack of thorough thought on their part about how to see the whole process through? Or is it because the law that was implemented was wrong or wrongly handled? How do you see that whole case? I'm just interested here. I think it's a huge case for content moderation, right?

    Eric Goldman

    It's an important case because it shows the strengths and limits of content moderation. On the one hand, Backpage clearly wasn't doing enough to police its content. There was definitely more they could have done. On the other hand, they were also among law enforcement's best allies in identifying victims of sex trafficking and rescuing victims of sex trafficking. And so it's kind of like we're not sure what we wanted from Backpage: we didn't like that they existed, but the fact that they existed actually contributed to helping curb the sex trafficking problem. And since Backpage has gone under, we've actually seen fewer rescues of sex trafficking victims and fewer ways to try and identify the problem. So Backpage's content moderation function was actually a resource, as well as a possible hole, in our efforts against sex trafficking.

    Sigrid Zeuthen

    And this actually is something that we talk a lot to our clients about: that they have to work very closely with law enforcement to make sure that they help with all of this, not just sex trafficking but drugs, illegal weapons, etc. Actually, that leads me to the next question I have for you. We saw that Backpage got charged for the pimping offense, but from a legal perspective, who is actually responsible for user-generated content, both on social media sites and on online marketplaces like Backpage? Is it the person posting it, or is it the platform, or is that kind of up in the air?

    Eric Goldman

    Well, there are different answers in different countries, so let's talk about the US for a moment. In the US in 1996, Congress enacted a law sometimes called the Communications Decency Act or the CDA; I just refer to it as Section 230. And Section 230 basically says that websites aren't liable for publishing third-party content. It could be user-generated content or other types of content; it could be advertising, as in the case of Backpage. If the content comes from a third party, the website publishing it isn't liable for it. And that legal foundation really shapes the way that we think about content moderation in the United States, because there is no one right answer for how to moderate content from a liability standpoint: if you moderate content aggressively, you're not liable for what you missed.

    If you don't moderate content, you're not liable for the fact that you publish offensive third-party content. So Internet companies in the United States have the ability to dial the level of content moderation up or down to suit their particular community. Either way, whatever they choose, the legal liability answer is the same. That's not the case in other countries. In other countries, there are different legal standards for different types of content that in many cases impose substantial legal liability for not doing more to filter or manage content, and definitely impose liability for whatever the sites might miss. So in other countries, content moderation is dictated much more by the law. In the United States, it's all dictated by what the company thinks is in the best interests of its users.

    Sigrid Zeuthen

    And I think maybe that's one of the reasons why it's so confusing, especially for global platforms. We have clients in Thailand, for instance, who have to moderate everything that mentions the King. It's illegal to have it on their platforms, so they have to be really, really careful with that. We have other examples in Europe, where in some countries you really have to take things down because otherwise you're liable, and in others, you're not. So from your perspective, if you're a global player, how do you manage that legally to make sure that you're actually complying with the laws correctly?

    Eric Goldman

    Super challenging problem. It's not only a legal question, it's an operations question: how do you build your back-end operations to be able to differentially handle the demands or the review that needs to be done, and then how do you implement that from a remedy standpoint?

    Do you take something down locally, do you take something down globally, do you not take it down at all, or do you do some other, fourth thing? These are the kinds of things that we're still seeing companies develop their own answers to. There's not a single global standard answer to this. I think the dominant philosophy is that where there's an idiosyncratic country law, like the laws of Thailand about disparaging the king, the best answer is to try to remove that content for the Thailand users, but not for the rest of the globe.

    And so I think that's the dominant norm. If you're familiar with the Manila Principles, they definitely encourage online services to do that country-by-country removal, rather than global removals based on the most restrictive local laws. But you have to have the operations, from a content moderation filtering or human standpoint, as well as the technological capability, to be able to remove content from only one user set and not from the entire globe. So the operations, technology and legal teams all have to work together to make that happen.
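
    As an illustration of the mechanics Eric describes, here is a minimal sketch of country-scoped takedowns. It is not any company's actual system: the ContentItem structure and the record_takedown and is_visible functions are purely illustrative, and it simply assumes each takedown order is stored with the jurisdictions it applies to, so the serving layer can check the viewer's country before returning content.

        from dataclasses import dataclass, field

        @dataclass
        class ContentItem:
            content_id: str
            body: str
            # Jurisdictions where a local-law takedown applies; "global" hides it everywhere.
            blocked_in: set = field(default_factory=set)

        def record_takedown(item: ContentItem, jurisdiction: str) -> None:
            """Apply a removal only for the named jurisdiction (e.g. "TH"), or "global"."""
            item.blocked_in.add(jurisdiction)

        def is_visible(item: ContentItem, viewer_country: str) -> bool:
            """Serving-time check: hide the item from viewers in blocked jurisdictions."""
            if "global" in item.blocked_in:
                return False
            return viewer_country not in item.blocked_in

        # Example: a post that violates a Thailand-specific law stays up elsewhere.
        post = ContentItem("123", "some user-generated text")
        record_takedown(post, "TH")
        print(is_visible(post, "TH"))  # False
        print(is_visible(post, "SE"))  # True

    In practice, the jurisdiction list would be fed by legal review workflows and the viewer's country inferred from IP or account data, which is exactly where the operations, technology and legal pieces Eric mentions have to work together.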

    Sigrid Zeuthen

    Yeah, it's definitely tricky. And when we're talking about legal liability, I just want to bring up another case that's currently running in the UK: a dad who is suing Instagram over his daughter's suicide, which is obviously super sad. They are currently looking at whether or not it should have legal ramifications that she, after watching some things on Instagram, felt so bad about herself that she actually took her own life. Do you think that companies that allow user-generated content need to be more aware of the liability that they may or may not have? Or how should they go about handling something like that?

    Eric Goldman

    I think most of the major internet companies are keenly attuned to the legal liability, as well as the Government Affairs and Public Affairs aspects, of the decisions that they make. If anything, they're overly attuned to it, and they might prioritize those over what's in the best interest of the community. They're trying to enable people to talk to each other. And if we prioritize the law, the Government Affairs piece and maybe the public relations piece over the needs of the community, I think in the end we get homogenized communities, we get one-size-fits-all communities, rather than the more diverse proliferation of communities that we saw starting in the 1990s. Back in the old days, the idea was that a thousand flowers could bloom, and maybe now only 10 flowers are going to be able to bloom, because the legal liability, government relations and public relations pieces are going to force everyone to look about the same. Now, when it comes to things like people committing suicide based on the content that they consume: a terrible tragedy, but we have to be really thoughtful about what an Internet company could do to prevent that. How does an Internet company make the kind of content removal or moderation decisions that would actually prevent someone from doing something so drastic? I don't know if that's even achievable. And if that becomes a legal standard, I don't know what that does to the outcome of decision-making.

    Sigrid Zeuthen

    No, I think you're completely right. And the thing is, obviously, we work with content moderation, and we have a lot of tools that can help companies with things such as cyberbullying. But there's always context, and at some point it becomes so hard to actually judge whether something is bullying or not bullying that it's almost impossible for humans or machines to figure it out. But I do think, and I don't know if you agree, that we will see a lot more legislation around cyberbullying specifically in the future, because it's such a hot topic. What do you think we will see there?

    Eric Goldman

    I don't really understand the term cyberbullying, so I tend to ask for more precision when we're talking on that topic. Unquestionably, the Internet allows people to engage in harmful content directed towards a particular individual, designed to cause that person distress. Here in the United States, we have a bunch of laws that govern that already. We might argue they're incomplete, and I could understand those arguments, but it's not like we've ignored that problem from a legal standpoint.

    And I also think we're doing a better job teaching our children how to use the Internet as an extension of society. In my generation, nobody taught me how to use the Internet, and nobody taught anyone else on the internet how to use it with me. And so there were a lot of bad things that people did to each other in the old days because they just didn't know better. They had never been taught about that. My children going through school today are being taught how important it is to be respectful online and how easy it is for misunderstandings or psychological harm to occur that they didn't intend. I don't know if that will change the result, but I do think that as we train the next generation, I'm hoping that that will be a partial or complete response to the need for new laws about cyberbullying. We won't ever be completely civil to each other, but I'd like to think that we'll become more civil than we are today, through education.

    Sigrid Zeuthen

    So you think it’s more a matter of education rather than putting laws into place here?

    Eric Goldman

    I think that's our best hope, and we have to do it either way, right? We have to educate people to use the Internet. And once we see the results of that education, I think we'll have a better understanding of what is intrinsic in human nature and can't be educated away, which we're just going to have to punish or sanction, and what we can actually get the community to self-correct on through education. We just don't know the answer to that today, but I'm hopeful we're actually making progress on it.

    Sigrid Zeuthen

    And I think it's really interesting what you're saying, "I don't understand cyberbullying." It's because cyberbullying is being thrown out there for everything, right? Everything from racism and making fun of people for their sexual orientation, to very direct personal attacks that happen between people who already know each other offline. So it's really hard. For the whole part about racism, sexual orientation and all of that, we have solutions, and it can be removed, not easily, but it can be removed. But when it comes to personal relationships that people bring online, then it becomes almost impossible to enforce any laws there, I think.

    Eric Goldman

    Well, the reason I don't understand the term is that our own United States president has proclaimed that he's the victim of presidential bullying, and I never thought such a thing was possible.

    Sigrid Zeuthen

    And I don't want to comment on U.S. politics. But yes, I agree, it's a tough one to crack. So one of the things that I'm really interested in, when we talk about liability, laws and content moderation across the globe, is: do you think that we will ever see a kind of unified approach, where the world gets together and says "this is OK, this is not OK", with some kind of global laws around it?

    Eric Goldman

    I think that was the hope of the 1990s: that because the Internet cuts across geographic borders, it becomes this new, non-geography-specific communication space, and that was going to force governments to customize the rules for a borderless network. I think it's worked in the exact opposite way, unfortunately. I think we've in fact eliminated the notion of a single Internet, and we now have country-specific internets, where each country is now imposing, or reimposing, its local rules and making sure they're enforced. So that's why Thailand can have a rule about disparaging the king and make that the law for Thailand. And they're not likely to then say, let's get rid of that law for the sake of a global standard; they're more likely to say, let's force the creation of a Thailand-specific internet where that is the law. There can be other internets out there, but we're going to create an internet that's specific to Thailand. And every country is replicating that. So that utopian vision, that the Internet would cause us all to come together to create a new global law, I think we're seeing the direct opposite. And it's unfortunate. As someone who grew up in the 1990s on the Internet, we lost something that we almost had. There was the possibility of achieving this really remarkable thing, that we could bring the globe together. And I think that there is no foreseeable circumstance today where that's likely to occur.

    Sigrid Zeuthen

    I agree. Companies who want to be global will, unfortunately, I guess, still have to go into each country and understand the liability laws there. At least for the foreseeable future, and probably forever, right?

    Eric Goldman

    For the foreseeable future. I’d like to think that we will outstrip our wildest dreams, that we’ll have a new opportunity to bring a global community together. But I don’t know when that’s going to occur.

    Sigrid Zeuthen

    Fingers crossed, both for people with legal headaches and for us as humanity as a whole. So, just going back to the US, because I know that's where you have the most knowledge about the legal landscape. There was a recent case in 2018 where Facebook's Mark Zuckerberg had to face the U.S. Congress, because he was being accused of somehow facilitating Russian meddling in your election in the US. And after that, there was a backlash where people were saying that he was enabling foreign countries to interfere with your democracy. Is that something you think we will see more of going forward with new elections, not just in the U.S. but globally? And what can sites do to prevent that? Because that's really hard.

    Eric Goldman

    A super complicated topic. Let's break it apart into a few different issues. First of all, Facebook enabled self-service advertising for political ads without imposing appropriate controls to make sure that those ads were actually run by people who had the authority to run them.

    That was not well considered; that's called a mistake. And Facebook and the other internet companies got the message that that doesn't work. So I don't think we're likely to see that kind of abuse in the future, because I think the Internet companies got the message: if you're going to turn on political ads, you have to assume they're going to be misused, and you're going to have to do more than just allow self-service. Facebook and other services also allow the creation of anonymous or pseudonymous accounts that can be used to spread false information, including false political information. That's a harder problem to solve, because either they have to turn off the ability to create these anonymous or pseudonymous accounts, or they're going to have to do a lot more to police them. I think that Internet companies are aware of this problem. I think they're taking better steps to address it. So I'm actually optimistic that we won't see the kind of abuse that we saw in 2016, but I think it would be unrealistic to think there won't be some abuse. So I'm hoping it's just smaller, just noise, as opposed to potentially changing the outcome.

    To me, the thing that I think the Internet companies have to solve, and that they don't know how to solve, is that the politicians who are getting unfiltered access to the public are engaging in lies and propaganda. And the Internet companies want the government officials to be speaking on their platforms; that's a net win for them in their mind. But it also means that, without the filtering process of traditional media, they're just going to lie, flat out lie, to the public and to their constituents without any remorse, without any kind of fear of consequence. And I don't know how to fix that problem without treating government accounts as per se suspicious, since they are going to be used for propaganda and lying. You've got to do something to treat them as among the biggest threats, which I don't think the Internet companies are likely to do. So even if we get rid of the Russian malefactors coming and trying to hack the elections, we'll have officially elected politicians who will engage in the same kinds of lies, and there's not a lot that the internet companies can do about that.

    Sigrid Zeuthen

    Now, I know that Facebook at some point had fact-checking on the news feed as well. So maybe that's what we need: a fact-check for all political figures and their pages, and then we can see a change.

    Eric Goldman

    Yeah. We have to stop assuming that they're telling us the truth. If they're given unfiltered access to the public, they will lie without remorse, and because of that it actually is not easy to address.

    You can fact-check them all you want, but the government officials have such a wide platform, and there is no filtration to correct them. Even if you try to rein that back in, it's not going to be enough.

    Sigrid Zeuthen

    But maybe, to just go back to what you said about cyberbullying, maybe this is also something that we just need to make sure the next generation is educated in: doing their own fact-checking. Because I think in general that's also a huge issue with the Internet, that we are really bad at fact-checking for ourselves. At least my generation, and maybe your generation; I think we're kind of the same. We're so used to traditional media, where at least someone else has fact-checked before us, that we have a tendency to just consume. Maybe the next generation is going to be better at sifting through that.

    Eric Goldman

    Yeah, it's a weird time, because we don't really know what it takes to get a broad segment of the population to actually care about the facts and the accuracy, and to punish people who are continuously misinformed or outright misinterpreting the facts. There will always be a segment of the population who is willing to accept the fact that people lie to them, and in fact might prefer to be lied to; they might view that as the way things are and maybe should be. The question is how big that percentage is: is it just a tiny percentage, or is it the majority of people? That's going to make a difference. I don't know how to fix that. Unfortunately, here in the US, as you may know, part of the modern political economy is that we're actually reducing our investments in education. We are not making concerted efforts to teach people about the importance of getting the facts right and double-checking what people tell you online and elsewhere. And if we aren't educated on that front, we're not going to actually get the experiment that you just described. So unlike cyberbullying, where I think we are investing in the issue, we're not investing in the importance of getting your facts right at the same level that we used to. And so, actually, I'm nervous for the future that way. I don't see how we're going to turn that around by educating the population, given the way that we're actually investing.

    Sigrid Zeuthen

    That's a dire forecast for the future, I feel. But in that case, maybe it becomes even more important that companies come up with a good solution for this, if at all possible. And just jumping to another topic: as you know, for many online sites a lot of interaction happens offline as well. With Airbnb, for instance, you have the initial contact online, but then you obviously have some contact offline as well, as the person goes to the apartment and lives there for a little while. And there have been issues for Airbnb, for instance, with apartments getting trashed. They solved that by putting insurance in place. And there's also Uber, who have had issues with rape accusations, robberies and a lot of other examples of this. So just from a law standpoint, who is liable for interactions that happen offline if the first contact was online?

    Eric Goldman

    So in the United States, in general, Section 230 covers this situation. Let me restate that: to the extent that the only way in which the Internet company is mediating a conversation between buyer and seller is by publishing content between them, and then something bad happens because of the communication, the Internet company's only way of being liable is for how it shared the information between the parties, and that would hold it responsible for third-party content. So in general, Section 230 actually plays a key role here. And we have seen Internet companies invoke Section 230 for offline harms, anywhere from eBay not being liable for publishing listings for items that caused damage or personal injury to people, to a case I teach in my intro class, the Doe v. MySpace case from 2008, where the victim of an offline sexual assault wasn't able to hold MySpace liable for the fact that the parties had met online and exchanged information. And we also have several online dating cases with the same kind of result: when people meet online but then engage in offline contact that leads to physical harm, the sites aren't liable. In some cases, Section 230 may not be the basis for that, but there are other legal doctrines that would still protect the Internet services. So the starting premise is that the Internet companies aren't liable for offline injuries that are caused because of the fact that they allow people to talk to each other. But plaintiffs keep trying, and that's an area where I think we have not definitively resolved the legal issue. So I don't know that that's going to be the answer over the long run, but that's where I think the answer is today.

    Sigrid Zeuthen

    And I'm just thinking that, as more and more of our society moves online and into cyberspace, maybe the boundaries are going to blur a little bit and the liability may change. I can only foresee that, in the future, a lot more of our interactions are going to happen online. So it's going to be a more blurred line in the future.

    Eric Goldman

    Yeah, maybe, although the flip side is that one of the things the Internet is best at is making markets more efficient, allowing buyers and sellers to find each other at lower transaction costs than they could in the offline world. And by reducing transaction costs, new transactions become possible that wouldn't be possible in any other way. So if liability for offline injuries is added to that equation, we raise the transaction costs back up again, and we might foreclose some of those markets. So one of the questions is whether it's better to have those markets, knowing that there might be some risk, or whether we should foreclose the markets because of the fact that there might be some risk. Insurance does play a role in here. If I'm running an Airbnb or an Uber, or if I'm running an eBay, I'm going to have insurance programs that say: I want to cover some of those risks of loss, because I want to keep transaction costs as low as possible, but I still want to provide compensation for the small percentage of consumers who are affected by some harm.

    Sigrid Zeuthen

    Yes, and that's actually what we see with a lot of our clients: when that step, to trust the other party, becomes too big, then they can put in things like that, like Airbnb did. So maybe that's something that can be solved in other ways than through laws.

    Eric Goldman

    Well, the question is whether or not the marketplace has enough incentive to provide adequate trust and safety for its members without the imposition of draconian legal consequences. We could say: you are the insurer of every bad action that occurs between buyer and seller on your service, in which case the business probably can't succeed. Or we could say: if you care enough about making an environment where people feel confident transacting with each other, you have to build trust and safety mechanisms. That's not just insurance; it could include ratings and reviews, doing more vetting of buyers or sellers before they enter the marketplace, or a bunch of other techniques to try and improve the trust and safety of the community. Legal regulation only really makes sense if we don't believe that the market mechanism of those marketplaces is strong enough, that they care about their own reputation and the comfort they provide to their members about trust and safety. I look back at something eBay did almost 20 years ago, when they said: we will insure buyers against a certain risk of loss, up to certain price points. We're going to go ahead and basically write a check to the buyer if they feel like they didn't get what they bargained for; they shouldn't be out the money, we will bear that risk of loss. Buyers can now feel more confident buying a higher-end good because we've got their back. eBay didn't need a law to do that; eBay did that because it becomes the key that unlocks a category of transactions that were being suppressed by fear.

    Sigrid Zeuthen

    Yeah, and it's interesting because, as you said, eBay was doing this 20 years ago, and back then they were doing it to convince people that the whole idea of trading online was safe, because back then it was scary putting your credit card online, right? I remember being scared the first time I bought something from Amazon, thinking "oh no, am I ever going to see these books?" But now the difference is that we all believe in that idea. We are not scared of shopping online anymore to the same extent. But now there's so much competition that it becomes a competitive advantage to say: we will cover you if it happens. So I'm actually quite positive about this going forward, with no need for any form of liability laws in this area.

    Eric Goldman

    I mean, think about what Airbnb has done. It's made us feel comfortable enough that we will stay in somebody else's house that we have no knowledge about, except for what we read online. Or think about Uber: we're willing to get in the car with somebody we've never met before, who is not licensed by the government. And yet I feel just as safe in an Uber as I do in a taxi, and I feel almost as safe in an Airbnb as I do in a hotel. And for me, that's because the companies have done such a good job of building trust and safety that they've unlocked that marketplace. They've created that marketplace, but they have to invest in trust and safety to make it work.

    Sigrid Zeuthen

    So maybe that's the advice: if you want to be Uber or Airbnb, invest in trust and safety?

    Eric Goldman

    It's not negotiable, really. I mean, it's essential to every marketplace online, because you're asking people to do something they're not used to doing. That's the whole point: you've unlocked a new market that didn't exist before.

    Sigrid Zeuthen

    So I'm just going to take another jump here, because before we end this interview, I really want to talk about the events that you do, because I think they are super interesting, and I was very sad that I couldn't join. So recently you had the fourth edition, I think, of the Content Moderation at Scale event, and that was in Brussels. We join a lot of events, but mostly they are organized by people within the industry, and then a lot of online marketplaces join, along with social media platforms, vendors and other people who are interested. But what you do, which I think is really, really interesting, is that you also get the governments involved. That adds a whole other layer to it and facilitates this discussion, which I think is super important. And as you say, it's super important to educate both sides. I know that Facebook, Google, Medium and Snapchat joined in Brussels, alongside people from the European Parliament. What's the goal, and what made you start these events?

    Eric Goldman

    So let's go back to 2017, when a representative from Missouri introduced a bill called FOSTA that was designed to scale back Section 230 because of certain concerns about sex trafficking. I'm horrified by sex trafficking, and I want us to have solid policy solutions for it. But the law was never going to succeed on its face, because it assumes that all sex trafficking ads, or promotions online, come with a big flashing neon warning sign saying "this is illegal content" that it is impossible for Internet companies to miss. So all they have to do is look for the neon signs, pull those out, and everyone's going to be better off. But that's not the way the world works, and you of all people know that; the reason why people pay your company is because it doesn't work that way.

    Figuring out which is the good content and which is the bad content is a really hard problem. It requires humans and machines, and it requires a policy that can be executed by each of them. And even then it's never going to be perfect. So the law was misarchitected at its core, because it didn't understand how Internet companies were running their operations under the hood. On the other hand, one of the reasons why the government regulators don't understand what's going on under the hood is because Internet companies don't want to talk about it, and prior to 2018, Internet companies were extremely reticent to discuss what they were doing under the hood. They didn't want that to be the topic of public conversation. Occasionally information would leak out, and investigative journalists might be able to find some source document and talk about it, but the Internet companies weren't being forthcoming or engaging on the topic. They were hoping that they could just sweep it under the rug and that everything would work out.

    So when I saw this disconnect in the FOSTA bill, and the fact that the Internet companies weren't talking and therefore we couldn't blame the government regulators for not knowing, I said: "let me see what I can do to help produce more information about what's going on under the hood", so the regulators have a better chance of understanding why their laws are not going to work. So the initial architecture of our event, which we held in February 2018 here at Santa Clara University, was just to get companies to talk on the record at an open-door event that anyone could attend, that was going to be recorded and covered by the media. That was essential to the success of the event, so that the information would be available to decision-makers in governments, so they understand why there are no neon signs flashing on bad content and can develop laws that are more appropriately tuned to how the operations are actually structured.

    So I went to most of the major internet companies in the valley, and to a bunch of other companies that were less well known but equally important to our discussion, and I said, "will you talk about what you're doing under the hood?" Several companies said no. Other companies said, "sure, as long as it's a closed-door event". And other companies said, "you know what, I think you're right, it's time; let's go ahead and rip that band-aid off, let's go ahead and put the information in the public sphere so that the policymakers understand what's going on".

    It took me about a year, from when I started to when we actually had our event, to reel these companies in, and each of the participants who were at that February event required at least four phone calls; not emails, phone calls. I had to talk to them and assure them that things were going to be OK, and remind them that it was going to be on the record and that they weren't going to be accountable for the words that they said. And so the February 2018 event, I think, was quite successful, not because it said things that were mind-blowing as much as it just got information into the public discourse that hadn't been available. We revealed new information that we didn't know before. Then we did it three other times: we did Washington D.C. in May of 2018, we did New York City in September 2018, or October, now I can't remember, and then we did Brussels in February of 2019. And each time we've gotten companies to say some new things that hadn't been said at the prior events. So we're getting more information. And of course since then, given the red-hot attention on content moderation, there's just lots of new information coming out in addition to what we got from the conferences.

    I’d like to think the companies were part of the zeitgeist of the change, from keeping content moderation operations discussions under the hood to a situation now where they’re being readily shared and discussed in public. And I think we’re all better for that.

    Sigrid Zeuthen

    I agree. I think you called it "the culture of silence" in the Brussels introduction. So it is getting better; that's kind of the feeling I'm getting from you now. You are positive that we will see more transparency when it comes to content moderation. Is that the message?

    Eric Goldman

    Absolutely. I don't know if it'll be 100 percent transparent, so there will always be a reason to criticize Internet companies for holding back. But look at what Facebook has done, for example: Facebook has literally published the written instructions it provides to its content moderation operators. In the past, some of that information leaked out, and Facebook was always uncomfortable about the fact that it was leaking out. Now they're voluntarily sharing the entire instruction set and letting everyone have at it, and they're going to take all the love for all the weird idiosyncrasies. But you know what happened when most people looked at that entire set of content moderation instructions? "Wow, that's hard." That's really an overwhelming problem. I don't know if Facebook is doing it well, but I don't know that I could do it better. So I think it's been a real plus for Facebook to be that transparent, and I think other companies have recognized that. When you tell people how hard the problem is, they actually start to get it, and that's, I think, a healthy dynamic. So I do think we've made some progress getting Internet companies to be more forthcoming. I do think that that's the future. I don't think they'll ever be perfect. But I do think that now we're having a much more informed discussion about what's actually taking place under the hood.

    Sigrid Zeuthen

    I think it's interesting you're saying this about Facebook's guidelines, because I think it was 2017 they had this whole controversy around the Napalm Girl photo, where they took down the picture and then put it back up. We sat internally and discussed "what would we have done?" Of course, it has to do with context, but we also agree, and we've worked with content moderation since 2002, that it is super hard. Because on one side you have to have policies that protect children online; you can't have naked children online. On the other hand, we also understand why people want pictures up that are historical and have historical significance, that tell a story and remind us of horrors that have happened in the past, so we can learn from them. It's such a hard issue, and especially the more we go into AI, the harder it's going to be to solve these grey areas if you don't apply some human level to it as well.

    Eric Goldman

    One thing I think we've learned about content moderation is that every content moderation decision creates winners and losers. Someone's going to want that content up, and someone else is going to want the content down, and they're not both going to get everything they want. Somebody is going to feel like they didn't get what they asked for. And knowing that now, I think, is actually empowering. Of course you're going to have winners and losers, and the losers are going to be unhappy. You're going to try to optimize for a bunch of things to minimize the harm that losers experience, or the number of losers who feel like they didn't get what they want, but you're never going to win that battle. And I think that's something the regulators are really struggling with, because they think there is a magic wand that allows all the content moderation decisions to be made in a way where only winners are created. Once we realize that's not possible, I think we recognize that the Napalm Girl photo, for example, is an excellent one: we have to fight to keep that content up. That photo changed American political decision-making about the Vietnam War. That photo changed the world. And if we don't have photos like that produced in the future, because we're worried about child pornography, we're going to suffer for it. People are going to get away with making bad decisions and not facing consequences. We have to find a way to keep that up, even though there are going to be people who object to it, who are going to say "that photo should have come down". So, you know, I hope we can help evangelize the message: there is no perfect content moderation. There will always be people who are unhappy with any decision. That's not a bug, that's a feature.

    Sigrid Zeuthen

    Yeah, and this is one of the things that we say when we advise our clients on how to build their policies. Instead of ruling on individual cases, look at what it is you want to achieve, what kind of community you are trying to build, who the people participating in it are, and what message you want to send to them. And then work backward from that on each grey-area issue that comes up: how is this going to affect the kind of community you want, the message you want to send, and so on? Of course you need to have guidelines, but for those grey areas it has to be a judgment call. Then you have to make sure that the people sitting and making those judgment calls understand what it is you're trying to achieve, rather than trying to flick through the rule book to find a perfect match, because there isn't going to be a perfect match for everything.

    Eric Goldman

    I love the way you're saying that, and I'm 100 percent on board. Another way I describe it is that content moderation policies shouldn't be the same from company to company. There might be certain things that we all agree, as a society, are impermissible across the network. But there are so many other places where it's actually beneficial if we have some rough-and-tumble environments where content moderation is lighter, and other, more locked-down communities where content moderation is heavier. And the needs of the community should be reflected in those decisions. So I love the way you're saying that we actually have the ability to dial content moderation approaches up or down to reflect the needs of a local audience and help them achieve their goal. That always should come first. I'm glad you advise your clients that way.

    Sigrid Zeuthen

    Yeah, I mean, we do have some generic models, but most of the AI that we produce for content moderation is actually based on the client's data set, so they get something that's completely tailored to their community. Because it's almost impossible to have a generic ruleset that applies across the board. And what a boring world it would be as well, if all communities adhered to the exact same rules.
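
    To make the idea of per-community models concrete, here is a minimal, purely illustrative sketch of how a text classifier could be trained on a single client's own labeled data using scikit-learn. This is not Besedo's actual pipeline; the file name client_moderation_data.csv, the column names and the label values are all hypothetical, and it simply assumes the client supplies a labeled export of their own content.

        import pandas as pd
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import classification_report
        from sklearn.model_selection import train_test_split
        from sklearn.pipeline import make_pipeline

        # Hypothetical export of the client's own moderated content:
        # columns "text" and "label" (e.g. "approve" / "reject").
        data = pd.read_csv("client_moderation_data.csv")
        train_texts, test_texts, train_labels, test_labels = train_test_split(
            data["text"], data["label"], test_size=0.2, random_state=42
        )

        # One model per client, so the decision boundary reflects that community's
        # own policies rather than a generic, one-size-fits-all ruleset.
        model = make_pipeline(
            TfidfVectorizer(ngram_range=(1, 2), min_df=2),
            LogisticRegression(max_iter=1000),
        )
        model.fit(train_texts, train_labels)

        print(classification_report(test_labels, model.predict(test_texts)))

    Feeding a different community's data through the same pipeline yields a different decision boundary, which is the point: the same tooling can produce community-specific moderation rather than one generic ruleset.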

    Eric Goldman

    Well, we kind of know what that looks like. That looks like, at least here in the US, traditional media like newspapers. There was a lot of content that would never be fit to print in a newspaper, between the combination of high editorial costs and the liability for publishing it. What happened is the Internet created the opportunity to have all this user-generated content that never would have made it into a newspaper, but that still had tons of value for society. We want to preserve that extra layer of communication that humans are capable of. And if we have a one-size-fits-all policy, we miss something very valuable in that process.

    Sigrid Zeuthen

    So, just going back to the conference. You mentioned that the discussion around how to do content moderation has significant implications for our society. I think we touched a little bit upon it now in our discussion, but could you just elaborate a little more on that?

    Eric Goldman

    So there are a couple of ways in which it matters. One is to the extent that content can change society. It can have an impact on a political discussion. It can help voters determine who to elect or who not to elect. It can hold a company accountable for selling bad products in the marketplace. If a single item of content can have that kind of consequence, we have to fight to make sure that it's available; if we have a regime that screens that information out, then society doesn't get to grow. One example is the discussion around the #MeToo movement. I don't know how that proliferated across the entire world, but here in the United States we had a real reckoning about sexual harassment and abuse in the workplace, among women who had been used to being sexually harassed or abused, because that was the norm in their community and that was the key to unlock the economic or social advancement that they wanted to achieve. And then one tweeter put up the hashtag #MeToo and said, "this is my story too", and that started a viral movement in the United States to fight back against sexual abuse and harassment. There was not an equivalent reaction in the UK, and the reason why is that UK defamation laws would create liability for anyone who chose to distribute that, in addition to intermediaries like Twitter saying "we can't have that kind of #MeToo discussion here, because we face liability for all these people being accused of sexual harassment or abuse". So having a legal regime that allows people, women, to share their stories and to share them collectively really changed our society. If we had a different set of policies, we wouldn't have had that potential. The other way is the diversity of communities; I can't stress that enough. I don't know all the millions of different, diverse communities that are on the Internet, but they don't need the same kind of content moderation approach as each other. So if Facebook tries to be a mass-market global platform, they're going to have one set of approaches. The enthusiast message board for sheltie dogs is going to have completely different needs, and if they have to be treated the same, we're not going to get both. We're probably going to get one or the other, but we're not going to get both. So making sure that the sheltie dog lovers can talk to each other, and making sure that we have a global mass-market social network where people can talk to each other across the globe: they're both valuable. We need a content moderation approach that allows that.

    Sigrid Zeuthen

    Yeah, agreed. It's actually funny, because one of the things we talk a lot about is how important it is to make sure that the rules you have fit your exact community. One of the examples we have is that if you have a forum or community for gynecologists, there are going to be some words said in there that probably are not appropriate for a platform that caters to children. But in that kind of community, it's absolutely necessary that you're allowed to say certain words without a filter coming in to say "no, no, no, you can't do that". So yes, it's super important to adapt your policies to your community, I agree.

    Eric Goldman

    You could go through a thousand examples, right? You know, a community of recovering drug addicts is going to need to talk to each other differently than a community of people who are discussing criminal drug enforcement.

    Sigrid Zeuthen

    I agree. So I know from Brussels, and also from the one in San Francisco, that there are recordings available, and I think you have that for all of the events actually. So first of all, it would be great if you could tell us where to find them; I can put in the URLs for all of the recordings. I know the one from Brussels, but not the rest of them. But if we just go back to the event: from your point of view, what was the most important takeaway from the last one in Brussels?

    Eric Goldman

    Right. So we did the four different events, and each one had a little different emphasis. The so-called Valley event was more for the Silicon Valley community. The D.C. event was very much about talking to inside-the-Beltway politicians. The New York event was a little bit more academic in nature. And then the Brussels one was very much for European policymakers, so we wanted the European policymakers to hear from the Internet companies what they're actually doing and how they might differ from each other. And for me the number one takeaway: we had representatives from online services in Poland and the Czech Republic who talked about how they cater to their local communities and how they built their operations to reflect that. And they were different from anything we had heard from the other companies at the other three events, just by nature of their market positioning, their local community needs, and the market niche that they were trying to occupy. They were different, and that led to different solutions. So I thought the most interesting thing was the diversity of under-the-hood content moderation practices. We got some new exposure to diverse practices from the event.

    Sigrid Zeuthen

    That’s really cool. And just to round up, is there going to be a fifth installment of Content Moderation at Scale and if so, when and where?

    Eric Goldman

    We don't have any plans for a fifth installment, in part because we've done a lot of the work that we wanted to do. We wanted to get the information into the public discourse, and now it's coming into the discourse without the conferences even prodding it. I'm open to doing a fifth one, but I don't have any plans to do so. There are other organizations that might be branching off and doing events like this. For example, the International Association of Privacy Professionals, or the IAPP, is going to have a content moderation program to supplement their normal discussion about privacy. That's going to be May 1, 2019, in Washington D.C.; happy to give you more information about that. But that's their thing: they're catering to their audience, and it's not a direct extension of what we were doing. The thing I've been doing since the first event is trying to help organize a professional organization for content moderation professionals. We need an industry association that pulls together all the participants, including the workers in the field, the companies that employ them, the vendors like you, and the body shops that do some of the outsourced work. We need a venue for all those people to interact with each other, and that's a high-priority issue. It's a hard project, and it's not one that I can deliver myself. So my hope is that we're going to achieve big progress on that front. And I'm talking about it in part because we are making progress on that front; I just haven't been able to go public about it. And they're going to take over doing the programming for our community. So the fifth event, I hope, will be the kickoff event for the new professional organization that we're hoping to create. That's going to be much more detailed, much more granular and customized for the needs of the industry than anything we've done to date. But we need the infrastructure to do that, and we have to build it.

    Sigrid Zeuthen

    Sounds super interesting. I think we would be happy to lend any help we can in building that, because even though, as you say, it's now out in the open, I still think there's a lot of education that needs to happen for companies, end users and politicians, and we're always happy to work with authorities, end users and companies alike. As long as we don't have to share any specific details about clients, because obviously that's not ours to share, then yes, if you need any help, let us know.

    Eric Goldman

    Yeah, thank you. I'm hoping that I will have news on that in the foreseeable future. And if we can get that project off the ground, it's going to unlock the doors to a whole bunch of new programming that's never existed before.

    Sigrid Zeuthen

    That's really cool, and I'd say that's a cool note to end on as well. Thank you so much, Eric, it's been fantastic talking to you, and I can really feel your passion for this topic. So I hope you get to work with the new organization as well, and that you don't have to completely let go of content moderation now.

    Eric Goldman

    It’s great to have a chance to talk with you today. I do plan to be a part of this community going forward. So that means our paths are going to cross multiple times, and I look forward to that.

    Sigrid Zeuthen

    Thank you.

    Eric Goldman

    All right, thank you.

