
Julie Owono on the Importance of Establishing a Democratic Agenda for Content Governance

Tom:

Hello, and welcome back to the Technology Policy Institute’s podcast, Two Think Minimum. It’s Wednesday, June 22nd, 2022. I’m Tom Lenard, president emeritus and senior fellow at TPI. And I’m joined by my colleagues, Scott Wallsten, TPI’s president and senior fellow, and TPI senior fellow Sarah Oh Lam. We’re delighted to have as our guest today Julie Owono. Julie is executive director of Internet Without Borders, an inaugural member of Facebook’s Oversight Board, executive director of the Content Policy & Society Lab at Stanford, and an affiliate of the Berkman Klein Center for Internet & Society at Harvard. Welcome, Julie, great to have you here. First, of those four hats, those four activities, what’s your primary focus?

Julie:

I would say I’m lucky enough to have four hats that dovetail very well together. They’re all connected. As the executive director of Internet Without Borders, we defend freedom of expression. And that freedom of expression online is currently being threatened by censorship: censorship online in diverse forms, and particularly in the form of private entities, private companies, increasingly having the obligation – because it’s becoming a legal obligation for many of them – to limit our freedom of expression. So, with my other hats, specifically the academic hats and the Oversight Board hat, I’m trying to make sure that those limitations coming from companies are proportionate, necessary, and provided by a law that is itself clear and respectful of the human rights principles which governments have agreed to respect. So yeah, I would say all of these work together and they’re very complementary for me.

Tom:

So you think that censorship is more of a problem from companies rather than governments?

Julie:

Okay, not actually, no. I would say that because of increased legal obligations, companies, in my opinion, and through the work that we have been doing, companies sometimes are asked to become agents of censorship by governments. If you look at legislation in, let’s say, India, for instance, we remember very well a year ago when the authorities there adopted a new law whose purpose was to fight fake news – which is a noble aim, of course – online and on content platforms specifically. And the result that we saw was that this law was actually used to force private companies – when I say private companies, I actually mean private entities, private actors – to take down content that was not necessarily fake, but that was just critical of the action, or lack of action I should say, of the government during the early days of the pandemic in India. That’s one example, but there are so many examples around the world where, yes, companies are being asked or forced to take down content without necessarily having in place the measures to make sure that the said takedown or limitation of speech is respectful of human rights principles, and freedom of expression particularly; and most importantly, making sure that such a takedown, such a limitation, is actually proportionate to whatever you want to achieve with it. So that’s why I’m talking about being forced to become agents of censorship, right? It doesn’t mean those companies agree, I think. And I know many of them do not agree. And I also know that governments are facing an emergency, right, with regard to disinformation online, which is again a noble aim, but in the emergency we never take the time to fully reflect on what we’re asking of people.

Tom:

So you mentioned India, are there other countries that particularly stand out in terms of being problems?

Julie:

Unfortunately, the list is getting longer when it comes to content online these days. I’m thinking of course of another country, which is Turkey. Turkey took a step further by even requiring the companies – usually US companies, but not only – to have in the country a representative who would be the point of contact for the authorities for any request related to content takedowns or content that’s not okay according to the government. And that for us is dangerous, first of all, for the person who will play that role and who will have to sometimes say no to the government. We know that companies, again, do not agree to become agents of censorship for some governments. So sometimes they will say no, and saying no to a government can be very dangerous and risky. I’m thinking, of course, about another country, which is Russia. Russia has banned virtually all social media platforms. But thankfully, for now, people are circumventing that censorship by using VPNs (Virtual Private Networks) or a tool like Tor, which for a long time was mostly used by activists who were facing jail – I remember discovering Tor when I started working in this field – and which thankfully is now used by people around the world. In this case, lots of Russians are using those tools to circumvent the current censorship imposed by the government. We also remember that the same government has charged the company Facebook as a terrorist organization, which is ridiculous, honestly. So yes, these are just some examples of the current threats to freedom of expression that come from regulations that companies are obliged to apply to their users.

Scott:

So VPNs and Tor are ways for citizens to kind of get around some of these restrictions. But do you have – I know it’s got to be case by case – some kind of best practices for how companies should respond to this? It’s hard for a company to simply not obey the law. And at some point, what do they do when there’s a law that they disagree with? How do they decide the point at which they say, we just can’t do it, we’re leaving?

Julie:

Mm-hmm. So, to your first point, how to disagree with the law as a company: we have seen examples. Of course, we’re not telling anyone to violate the law. Instead, what we’re saying is, first of all, we must be as proactive as possible and dialogue as much as possible, not only between companies and governments behind closed doors, but also inviting to the table civil society organizations, local civil society organizations, which have very thorough knowledge of the threats posed by the said law. We’ve done this at Internet Without Borders – sorry, I was about to say it in French. Internet Without Borders did that in a country that I know very well because that’s also where I was born and raised: Cameroon. It was not a law there, but rather a practice: they had cut off access to the internet completely, and then targeted only social media platforms, allegedly to fight fake news and hate speech. Again, a noble aim, but terrible and counterproductive means. And what we did was, we understood that what the government was really trying to say was: you know what, we don’t understand how come Facebook and Twitter and the like do not take down what we see as wrong. Cameroon didn’t have a law on fake news at the time we were campaigning. So we told the companies: you need to explain, you need to be transparent. At the time transparency was not a big thing – that was in 2018. It wasn’t such a big thing as it is now, but we told the companies, you must be transparent, first of all, about your community standards. Companies like Facebook had just published, for the first time ever, a set of rules about what you are allowed or not allowed to do or say on the platform. So you must be transparent about those, publish them and explain them. You should do a tour around the world to explain to people what they cannot do. And if they see something that is wrong, then they can report it. There has been data about the inequalities in the use of the report button. Virtually everyone in the US knows that when I see something that’s not okay, that I think shouldn’t be there, I can report it and get access to someone who will review my report, review the content, and give me a response, while that same button is virtually not used in many places around the world. Because, for various reasons, in some countries reporting to the police won’t do anything. So why even report it to Facebook and hope that something’s going to happen? So that’s just an example. So we told the company: go and talk to the government, go and talk to civil society organizations, go talk to users, place ads in newspapers to explain the new rules and explain the moderation – I didn’t call it that at the time because I didn’t know this was moderation, but it was moderation – the moderation processes that you are putting in place to safeguard safety on your platforms. That’s the first thing. Alongside the government, we also asked diplomatic representations, including those of the US and the EU, to come to this round table. Then of course we asked private companies – well, when I say private, I shouldn’t say private, because I know private in English means something else; private entities, rather – and of course social media platforms, and also telcos. Why telcos? Because telcos are the ones being asked to perform the blocking of said platforms or the whole internet.
And the result was that, from that multi-stakeholder discussion, first of all, the government was reassured and decided not to cut off or block access to social media platforms and to the internet. There hasn’t been any incident ever since. And there has been a lot of advocacy on the part of the companies with users to explain the rules, the moderation process, and the reporting mechanisms. And of course, civil society organizations themselves were able to explain the problems they were seeing: we’re seeing a lot of fake news, we’re seeing a lot of hate speech that’s not being taken down. In a country that doesn’t have robust institutions to respond to those threats, it’s problematic. And it could be dangerous for the safety of the whole nation, not just of the platforms. And yes, that result convinced me, my organization, and others that one of the ways to limit those bad laws a little bit is to dialogue a lot. Dialogue as much as possible. Be proactive. But again, when you already have the law and you have very stubborn governments – because that exists too; there are governments with whom it’s quasi-impossible to negotiate as a company – we, and when I say we I mean our organization, have advocated for companies to become agents not of censorship, but rather of free expression. Give information to users: we have received this order to block X, Y, Z content, or to not operate anymore. We think, for instance, that companies should consider sending out messages about circumvention. What is circumvention? Educating users, the same way that we were educated about COVID. We’ve seen that it works: when we receive messages massively, regularly, on the platforms where we spend most of our time, there is positive action. So I think companies should play this role: educate users around the world who live in repressive environments about the possibilities of circumventing and continuing to express themselves, while also being transparent about the challenges that they’re facing as a company working with the government.

Scott:

When a company enters a dialogue with the government, we sort of assume that the government is benevolent and maybe made bad decisions. That’s different from a government that’s actively repressing speech and people. Do you find that in most cases it’s a government that is open to negotiation, and that countries like Turkey and Russia are extreme outliers? Or, you know, what’s the mix? I mean, how often does this negotiation approach work?

Julie:

I think there is still room, though less and less so, because of those more stubborn, extremely repressive governments. For many countries around the world, there is rather a misunderstanding of what is happening to us. Imagine you’re a country where there was one radio station, one newspaper – probably five – one TV channel. And as a government, you were the one deciding what the news should be. You go from that to a space where you have access to potentially thousands and thousands of pieces of information in a matter of seconds – it’s overwhelming. And I want to say – it’s probably naive, but I don’t think it’s naive – I think many governments are really lost. I’m thinking, for instance, of another very important country, which is Nigeria. Nigeria cut off access to Twitter, blocked Twitter, for several months in 2021. And they did so mainly because of a misunderstanding, and they restored access after they were able to meet and discuss with Twitter. Although, again, I’m not saying those discussions should happen behind closed doors. No, I’m really against that. I think civil society organizations and outside experts should have the opportunity to chime in and be part of those round tables and conversations between governments and companies about safety online. But yeah, that conversation happened between Twitter and the Nigerian government. There are lots of things that I think are not okay in that conversation, but at least it happened, and Twitter is back. But again, that government is lost. I think there are lots of governments who are lost. How do we respond to that overwhelming new environment without sacrificing rule of law, democracy, freedom of expression, and all the other very important rights that come with a democratic space? And those governments that are lost, well, as we are speaking, they’re being – I don’t want to say targeted, but they’re in conversation with more authoritarian regimes, who are selling them, excuse my expression, a seductive model in which, when there’s something wrong, you just cut off or just censor and put a firewall on your internet infrastructure. But I think it’s dangerous. It’s counterproductive. Not everyone has an internal market of 1 billion internet users – you know which country I’m thinking about. So it’s not necessarily a model that works everywhere. I think history has shown that the more free the environment is, the more it benefits society as a whole, including socially and economically, et cetera. So I think that’s where I’m going to end: I think it’s really high time for the democratic nations that we look up to most of the time to create this space for conversation on what is a democratic way to govern not only the internet, but specifically content. Because, in my opinion, as the years go by, content is going to become a very important focus with the platforms that are coming up – the Metaverse, of course, and also Web3 and the decentralized ideas around those platforms. So yes, we need a democratic agenda for content governance.

Sarah:

Speaking of Twitter, what do you think about Elon Musk hopping into the content moderation debate, and also, you know, his comment that there are a lot of bots on Twitter? How does it feel to hear a leader in tech suddenly get into the middle of your area, content moderation, with everybody who’s been in this discussion for a long time?

Julie:

I want to say, seeing the glass half full, there is a positive effect to it, because all of these conversations that we’re having in expert settings, including at the TPI Aspen forum last year, and many other venues, well, those convers-

Tom:

Thank you for the plug.

Julie:

I didn’t do that on purpose, of course. I mean, we’re having a casual chat here. Yes, suddenly those conversations are becoming mainstream. That’s amazing. Really, it’s a good opportunity, I think, to convince even more people about content moderation – or content policy, because I like to think of it not just as moderation, but really as a whole process of creating a policy and enforcing it. Moderation is the enforcement of the rules, but there is a whole process of creating the rules, which we should also look at as a society. So I’m happy that finally all of this is mainstream. But at the same time, just like many others have told Elon Musk, no one should come into this space with easy solutions, because there are no easy solutions. You cannot just say we want to protect the First Amendment. Of course we all do – I think, I hope. But what does it mean when you’re running a platform well known for being a space where women and women journalists are harassed, when your LGBTQ users find it difficult to take part in the conversation, when you have governments intentionally harnessing the network to spread manipulation, lies, disinformation? There is no easy solution. So that’s why I would say one piece of advice, if he wants to take it – we never know – not only from me, but from many others in this space, is: please be humble. I have been extremely humbled ever since I joined the Oversight Board. Before that, when I was an activist, I was just thinking, oh, why isn’t this billion-dollar company taking this down, it’s easy, you know, they have the means to do that. No, ever since I joined –

Scott:

It’s hard to imagine advice to Elon Musk that he’s less likely to take than be humble.

Tom:

Well, actually, I think some of his answers on content moderation have been somewhat humble.

Julie:

I think there’s been a little shift. First of all, he met with the European commissioner Thierry Breton – I forget his exact portfolio, but he is in charge of the digital market and is among the leading figures behind the Digital Services Act, the new legislation that will come into force in Europe. It basically says: you can say everything you want, but platforms have to be careful about certain types of content that might be harmful to democracy and to society. And Elon Musk met him. I think that meeting was probably one of the most humbling things that has happened to him in this space. Because, yes, there are laws around the world; there are laws that are bad – we talked about those – and there are also laws that try to strike the right balance. Although there are still lots of things to be said about the DSA – we don’t have the time to go deeper into that – there are laws, rules you have to respect. And there are human rights. I think that’s the most important thing for me as a member of the Oversight Board. There are international human rights laws and standards that governments have signed and committed themselves to respect. But also, companies have voluntarily committed themselves – and I would say society is increasingly demanding that companies commit themselves – not to violate human rights while conducting their business. And a company like Meta has agreed to do that.

Tom:

Let me ask you a little bit more. Let me try to connect this to the Facebook Oversight Board. I mean, isn’t it the case that companies like Facebook and other companies that operate globally really have little choice but to obey the laws of the various jurisdictions in which they operate, because they don’t wanna get kicked out of the country? They have a choice to make with every country: they can either operate there or not. But if they’re gonna operate there, don’t they really have to abide by the laws and conventions of those different countries, whether they like it or not?

Julie:

Yes – for me, that situation would be extremely dangerous, because we would accept de facto that the internet is splintered, that there are virtually as many internets as there are countries. And that would diminish this space, this beautiful space, as I like to say, that has allowed me to do all the things that I’ve been able to do in my life and probably wouldn’t have been able to do if I didn’t have a platform where I could speak to the voiceless, but also to the powerful, equally. It would be a big loss, in my opinion. And I hope we don’t get to that place where we have 190-plus internets. That’s why it’s important to think proactively about this. That’s why it’s becoming even more important to have a democratic conversation about fighting the bad of the internet while protecting the 90% that is good. That’s my stat – please, I don’t have any research on that, this is a really arbitrary number – but I think the majority of the internet is good. And so we have to safeguard this. It’s a role for everyone, but mostly for governments, first, to show the way, in collaboration with citizens, and of course experts and companies.

Tom:

So, you’re quoted in this recent annual report of the Oversight Board as saying it is about time that we have a conversation about how we create technology that is, by design, respectful of human rights. Could you explain what you mean by that?

Julie:

Sure. Historically, what has happened with innovation recently, at least in the past 20 years of internet innovation, has been that there is an enthusiasm: you know, we have this incredible tool, this idea that’s gonna change the world. And of course, most of the time it does change the world. And we think about what could potentially go wrong only after the wrong has happened. We do not have enough conversations, especially with young creators, engineers, innovators, those who are going to create the new – I don’t wanna say the future Twitters, et cetera; hopefully they’re gonna create something different – but we hope they won’t do that thinking that all humans are good. Jean-Jacques Rousseau, the French philosopher of the Lumières, said that mankind is good by design but becomes bad through society – I’m paraphrasing here. That is true. And so we have to take this into account even when we’re creating the technologies of tomorrow. And that’s why, for me, it was important to create this initiative at Stanford University, and to tell students here, who are all thinking about being the next innovator: think that, but also think that you are part of society. You have a responsibility. Don’t think that your only responsibility is to create something that’s gonna make money. That’s great, but money comes with responsibilities too. And it’s important to have those conversations by design, the same way as if you want to create, I don’t know, a new cybersecurity innovation: there are compliance mechanisms. You cannot do that just because you want to do it. You have to ask for authorization, licensing, and so on. I think we should do the same with safety. If you want to create, let’s say, a social media platform, what are the safety features that are, by design, in your product? That’s what I meant by that quote.

Tom:

So maybe I should ask you a question, since we’re having these congressional hearings about the events of January 6th. Probably the most publicized case that the Oversight Board had to deal with was the case of taking President Trump off of Facebook and making a recommendation as to how the company should treat that case. But stepping back a little bit from his particular case: has the Oversight Board, or has the company, to your knowledge, done any research into – I mean, some people say social media was largely responsible, or partly responsible, for the whole episode. But obviously we know that in history we’ve had episodes like that long before social media ever existed. So is it within the domain of the Oversight Board to look into issues like that – what is the responsibility of a company like Facebook and other social networks?

Julie:

That’s a very important question, especially with the current hearings, as you mentioned. What I can say for sure, and which is within our remit: we can’t coach the company. Our responsibility, our mandate, is to make decisions on the cases that are presented to us, and to make recommendations at the request of the company. So unless we have a case, and unless we have a recommendation request from the company, we won’t spontaneously chime in on a certain issue. But during the Trump case, which you’re right to mention, in the preparatory work before we made and published our decision, we did ask the company a series of questions – 46, I think. It was one of the longest decisions we’ve ever published. And one of those questions was: can you inform the board about the role that your platform may have played? In particular, the fact that the algorithms, according to research and experts, allegedly promote certain types of content that can create a reaction. So we asked the company: are you aware of, or can you look into, these claims that this has contributed to feeding that election narrative? The company declined to answer. For us, it was not a failure that the company declined to answer. The victory was, rather, that we could have that conversation in public and have the company respond – negatively, of course, but nevertheless responding – to that claim. And the fact that the answer was, no, we can’t talk about that, is interesting, to say the least. And hopefully, as in other cases and on other issues, it has led organizations focusing on this work to ask even more pressing questions of the company about that specific point. So it wasn’t a failure, but rather, for us, a very important moment where we could have this back and forth on this subject particularly, and others, in the decision.

Tom:

Well, obviously for a company like Facebook, their business is to present people with content that they’re interested in and get clicks. Some people criticize that as a business model. I personally think it’s a fine business model, but obviously then there’s a question of differentiating different types of content, because some content is innocuous or beneficial and some content is potentially harmful. So how should a company handle that? I mean, is the problem the business model of going after clicks, or is it something different?

Julie:

I would say – and I’ll speak very personally here, not speaking for any of the organizations that I’m affiliated with – that I’m becoming increasingly convinced that it’s exhausting to always be in a reactive mode: reacting to content, reacting to scandals, reacting to individual cases. It’s really exhausting, because there are billions of pieces of content, so potentially we would react a billion times a day. So instead – and that’s to respond to your question – where we should be focusing the conversation right now is: does a company have the safeguards to make sure that, with X, Y, Z business model, it’s not going to have an adverse effect on speech, on the right to safety, on many other considerations, but specifically on safety? And we don’t have that as of now. Many of the conversations about content and how it is governed are ad hoc, case by case, and the rules change as things go wrong or as things work, which is interesting, but I don’t think it’s sustainable. We finally need to have a real conversation about where we put rule of law in all of this. How do we make sure that we have rules that are clear from the get-go and a moderation process in place that will help us not lose on the business front – still allowing us to do whatever we want to do with the business model, although of course there are other considerations to take into account, including privacy – without sacrificing the importance for our users to feel safe on the platform, to feel that they understand the rules clearly, to feel that they’re in an environment where they know there is fairness for them? I was about to say for us – yes, for us. We can be in a space where we are not scared that X, Y, Z moderator, whom we are not even sure speaks our language or understands our context, or X, Y, Z machine learning model, has decided that from now on I cannot say, I don’t know, [?], for instance. How can we move from a space where we’re making rules case by case, day by day, scandal by scandal, to a place where I know that, whatever platform I go on, these are the processes, these are the rules, they are clear; where I know that every year X, Y, Z community standard was enforced this number of times by this type of moderator, automated or human; a space where I feel comfortable as a user that I’m not going to face, at any point, any form of arbitrariness. And that, for now, does not exist, because we are still operating, as I was saying, reaction by reaction. For me, it’s not sustainable. We really need to move to a space where we are having a conversation about rule of law and democratic principles, and what it looks like to implement those in a concrete way when we’re doing content moderation on platforms.

Scott:

So this actually maybe goes back to something you said earlier, that our approach to this is being challenged by countries like Russia and China, which just say, all right, put in a firewall, cut off everything. But then the alternative is this really hard set of conversations without easy answers. Does that in some ways work in favor of countries like China and Russia, because they come in with an easy solution that might be attractive to certain governments?

Julie:

I don’t think so, because I’m still convinced that many around the world see the reality, which is, as we were discussing earlier: the more free, the more positive the effects on society and on the economy – and that’s another consideration that’s important for governments. Like I was saying, not everyone has 1 billion internal internet users. That’s a very harsh reality, but it’s true. So what do you do when you build a firewall that blocks your citizens from the rest of the world? Is it sustainable? I don’t think so. There have been extensive studies on the economic effects of internet shutdowns and social media blockings around the world – I’ve contributed to many of those reports. I think the Brookings Institution in 2016 put out a report that said that in that year internet shutdowns had cost more than 2 billion dollars. And that was just 2016, when there were only a few internet shutdowns. So I’m sure if we did the math today, when there have been many dozens more internet shutdowns and social media blockings around the world, that number has probably skyrocketed. That’s a consideration that’s important for governments. Many governments rely on the internet revolution to leapfrog their economies. So do you want to risk that just for the sake of fighting fake news – or actually fighting news that you don’t agree with? I don’t think so. And that’s why I’m saying there is room for maneuvering there, including in negotiations with governments.

Tom:

I don’t know if I’m interpreting correctly what you said a couple of minutes ago, but you seem to be suggesting – and correct me if I’m wrong – that the case-by-case approach, which I guess the Oversight Board and most companies take, is ad hoc, and that we need something that’s not ad hoc. Am I interpreting you correctly? Because it seems to me that context is very important in many of these cases.

Julie:

So I’m not saying that we shouldn’t have ad hoc conversations. Of course there should be – that’s gonna be the work of content moderators. But I’m not a content moderator; the board is not a content moderator. What I’m saying, rather, is that above the daily enforcement – because that’s always gonna be needed – those content moderators need clear guidance, which as of now doesn’t necessarily exist in all spaces. And that’s what I was talking about: the importance for companies to consider this at the highest level, and also for the Oversight Board to consider it. And that’s what we’re doing at the board. Of course, we receive individual cases, but frankly, out of the 1.5 million appeals and requests that we had received by October 2021, if I remember well, we have published 23 decisions. So it’s not a case by case –

Tom:

Speaking of content moderation, who goes through these 1.5 million and decides which ones you’re actually gonna look at seriously?

Julie:

Okay, so, to be frank, to be fair, out of the 1.5 million, there are also, you know, “I don’t like your CEO” appeals. So there’s a big filter for those, of course. But then the cases that we try to –

Tom:

Generally, is it largely a machine that goes through them and sorts them?

Julie:

That’s a great question, which I’m not in a position to answer with the most accuracy, simply because I’m not part of that filtering process. We, as the board, are supported by an administration with a lot of people – I don’t have the exact figure, but anyway, we’re being helped by them. What we told them is: here is what we want to focus on. We want to focus, of course, on cases that pose significant threats and raise significant questions for the platform, and we also want to look at cases that raise significant content moderation challenges for the company. For instance, you talked about the most famous case: what do we do when there is a head of state who constantly or seriously violates our community standards? What should we do? Or another question that we received recently that’s of interest, given the times we live in: what should we do when there are claims of human rights violations in a country that’s allegedly facing an ethnic conflict, ethnic cleansing – I’m thinking about a case we received from Ethiopia. What should we do with substances whose commerce is regulated in X, Y, Z country? Can we have a conversation about the side effects of substances that are not allowed in the physical space? So yes, we try to take cases that could have as much impact as possible beyond just the individual case. Because one thing we have to remember: the decisions made by the Oversight Board are binding on the individual side of things. So if we have a case in which a publication was taken down and we tell the company, you should put it back up, the company will put back up that specific publication. But what we’re working on a lot these days with the company is the precedent the case sets. How can we use this individual case to impact other cases that could be similar in wording? Let’s say, for instance, we had a famous quote attributed to [?] of the Nazi regime – we call this the Nazi quote case. Of course Nazis are not allowed on Facebook, but there was this quote attributed to him, and at the time it was used by the user to criticize the Trump administration. Now it’s being used to criticize, I don’t know, the COVID response of the authorities in India, or you name it. So we’re discussing with the company, in the implementation committee that the Oversight Board has set up – we set up this new committee to follow up on the recommendations we make to the company and on the decisions we’ve made. So we’re following up: how are you treating Nazi quotes that are similar to this one in other contexts? Are you enforcing the decision we made two years ago in those other contexts? So yes, there is a contextual part, which is extremely important and which is going to remain essential, because billions of pieces of content are published every day, every second even. But nevertheless, we need these overarching principles, guidance that is strongly, in my opinion, embedded in rule of law principles and of course in human rights considerations.

Tom:

So, one final question, because I think we may even have gone over our time. If Elon Musk does take over Twitter, would you advise him to set up an Oversight Board?

Julie:

That’s a great question. I mean, I think we do have some very robust decisions that have inspired other companies. I know it – I’ve talked to many other companies, and many look up to what we publish, what we say. So I would say, yes, pay attention at least to the work that we do, and if we could be an inspiration, that’s fantastic. Let’s chat.

Tom:

Has he asked you for your advice?

Julie:

Oh, not personally. Not me. No, I’m not that important. I have more important colleagues I work with who, I’m sure, will probably talk to him at some point. But no, not me.

Tom:

Well, thank you very much for taking the time to do this. It was very interesting.

Julie:

Thank you.

Tom:

We appreciate it.
