Tom Lenard:
Hello and welcome back to the Technology Policy Institute’s podcast, Two Think Minimum. It’s Tuesday, November 14th, 2023. I’m Tom Lenard, President Emeritus and Senior Fellow at TPI, and I’m joined by my colleague, TPI Senior Fellow Sarah Oh Lam. Today we have as a guest Jeff Kosseff, who has just come out with a very timely and provocative new book titled Liar in a Crowded Theater: Freedom of Speech in a World of Misinformation. Jeff is a Professor of Cybersecurity Law in the United States Naval Academy’s Cyber Science Department. He’s the author of numerous books and articles on cybersecurity issues, First Amendment issues, and Section 230 of the Communications Decency Act. Before becoming an academic, Jeff practiced cybersecurity, privacy, and First Amendment law at the Covington & Burling law firm and clerked for several federal judges. Before becoming a lawyer, he was a technology and political journalist for The Oregonian, a finalist for the Pulitzer Prize for National Reporting, and a recipient of the George Polk Award for National Reporting. Welcome, Jeff; delighted to have you on the podcast.
Jeff Kosseff:
Thanks so much for having me.
Tom Lenard:
You have timed this book pretty well because speech issues, in their various forms, are everywhere. Maybe you can start off, if you can, by just describing what you see as the major themes of the book?
Jeff Kosseff:
Yeah, sure. And I just have to give the disclaimer that everything I say is only my viewpoint, not that of the DOD, the Naval Academy, or the Department of the Navy. Now that I have that out of the way.
Tom Lenard:
You mean you don’t speak for the Navy?
Jeff Kosseff:
Yes, exactly. The overall theme of the book is that there are some really novel challenges we’re confronting due to falsehoods, misinformation, disinformation, whatever you want to call it, and how it spreads on the internet. But I wrote this book to urge the public and policymakers to resist the automatic urge to regulate our way out of the problem. So I acknowledge that there are circumstances where certain types of speech are not protected, but those are categories narrowly defined by the Supreme Court, and I want people to think long and hard about proposals that would greatly expand those categories, because they could have a lot of unintended consequences and might not really help us address the underlying issues.
Tom Lenard:
Obviously, the subtitle of your book is “Freedom of Speech in a World of Misinformation.” What would you say is the status of freedom of speech at the present time, perhaps relative to earlier periods?
Jeff Kosseff:
Well, in the United States, it’s pretty good, definitely relative to our first century. The past few Supreme Courts, going by Chief Justices, have really had expansive readings of the First Amendment. And I think that, especially when you compare the United States to other countries, including other liberal democracies, we really are an outlier in terms of the extent to which our courts and our constitution protect freedom of speech.
I do think that we might be at a turning point, unfortunately. When you look at the public opinion polling, and when you hear politicians from both sides of the aisle, they might say that they are big supporters of the First Amendment, but then they will support measures and bills that are really contrary to our current First Amendment doctrine. And when you combine that with the pressures we’re seeing globally, with governments already starting to crack down on speech that would’ve been protected a decade ago, I think that we’re at a very dangerous point, where we might be heading into a free speech recession. Free speech ebbs and flows over time, and I worry that we might be contracting it in the future.
Tom Lenard:
Yeah, I mean, I’m not a lawyer, I’m an economist, so I have a little different take, but I do see that claims of misinformation and disinformation are pretty rampant, and there are lots of calls of the “something should be done about it, there ought to be a law” kind. And I’m certainly sympathetic to your point of view. Is your point of view that nothing should be done? I mean, what are the boundaries of the First Amendment there? What could be done and what can’t be done?
Jeff Kosseff:
Well, there are certain types of false speech that have never been protected. Perjury, for example: if you go into a court and you lie under oath, there can and should be legal consequences. If you lie to a federal agent, nobody really seriously says that people have a First Amendment right to do that. If you’re a company and you lie about your products, let’s say you have a vitamin C pill and you claim that it cures and prevents COVID, the government can regulate that because the courts have set different standards for commercial speech. If you meet the very high bar of defamation, if you cause harm to someone, and, if that person’s a public figure, if you meet the standard called actual malice, which is knowledge of falsity or reckless disregard for the truth, you can be forced to pay a whole lot of money, as Fox News has learned recently with Dominion.
So there are ways that the court has said that people and companies can be held accountable for their speech, including false speech. But the point is that those are very carefully drawn categories and what the Supreme Court has said within the past 15 years, is, “We don’t have an ad hoc balancing test for harmful speech.” So we don’t say, “Oh, this new type of speech, I don’t think it’s very good, so we’re just not going to protect it.” That’s the type of movement that I’m trying to push against, but it’s also what so many politicians and frankly, even journalists, are willing to accept.
Tom Lenard:
So do you make a distinction between misinformation and disinformation? By misinformation, how I interpret it, is just basically something that somebody thinks is wrong; it may or may not be wrong, but somebody thinks it’s incorrect. But disinformation strikes me as meaning there’s an intention to perpetrate something that is incorrect. And in particular, it could be from a foreign government. So for example, if a foreign government has a disinformation campaign, is that also protected speech?
Jeff Kosseff:
Well, there are a few questions there. In terms of the distinction between misinformation and disinformation, I think those are the standard definitions, so I would agree that that’s how they’re defined. In terms of foreign governments, a foreign government does not have a First Amendment right to speak, but the Supreme Court has held, going back more than 50 years now, that we as Americans not only have a First Amendment right to speak, we have a First Amendment right to receive information. So that raises some tricky issues. So yeah, if the US government was blocking Russian social media interference, Russia’s not going to be able to sue the United States for interfering with its First Amendment rights. But it does raise the issue of what happens when the US government has the ability to unilaterally define what is misinformation and start putting pressure on platforms to take it down or to block it.
Tom Lenard:
Right. Well, we’ll get to that soon. But in the case of the Russian disinformation, so the Russians wouldn’t have a First Amendment complaint, but some US citizen might have a complaint that they weren’t able to consume that information or hear that information?
Jeff Kosseff:
Yes. And we haven’t seen the full extent of that being tested in the internet age. What we have is a lot of First Amendment precedent that was set from the 1950s through the 1980s, which obviously did not involve the internet, and what you’re having courts do is apply these things. I mean, the right to receive information, that came from a case involving mailings from the Communist Party. We have other cases that involve a state indecency commission writing to book distributors saying, “You can’t distribute these indecent books.” And it’s a bit of a square peg in a round hole, trying to fit this precedent into the current debates. And I think what we’re starting to see is the Supreme Court realizing that, while it can stick to the principles from those early cases, it really needs to draw clearer lines in terms of what the boundaries are for the First Amendment with the internet.
Tom Lenard:
So, you mentioned social platforms, and you do have a section in your book, and obviously there are several court cases now, on what you refer to as jawboning. I’m old enough that I remember when the government was jawboning companies not to raise their prices, in previous periods of inflation. But now we’re talking about jawboning in terms of, just to take an example out of thin air, what type of information about COVID they should purvey. Obviously, these are all private companies, so presumably they can say anything they want, but there does seem to be evidence that they have had some pressure from the government to not say certain things and, I guess, to say certain things about COVID. And so there really was a period, as far as I can tell, when a whole debate about appropriate COVID policy was in a serious way stifled, and that is in the courts now. Maybe you can bring our audience up to date on what the issues are in the courts?
Jeff Kosseff:
Yeah, so there are two different sets of cases before the Supreme Court. Jawboning, which you mentioned, is this issue of: at what point does the government’s counter-speech cross the line into coercing or pressuring an intermediary, in this case usually social media platforms or Google or something like that, to block constitutionally protected speech? And this is where we have precedent from the 1960s involving book distributors. And it’s not a terribly satisfying application to say, “Well, because a state agency was putting pressure on book distributors, this is the line that you draw for social media platforms.” So the Supreme Court is going to need to draw a clearer line there. And I can see both sides of the argument in terms of how to draw that line. On one hand, and my book talks a lot about the marketplace of ideas, there’s this idea that rather than immediately resorting to regulation, you instead let truth and falsity grapple in the open market and the truth will rise to the top.
And there are some flaws with that model, which you as an economist probably know much better than me as a lawyer, in terms of market access, for example, for this theoretical market. But I think that to the extent we rely on the marketplace of ideas, we need to let the government be a participant in the marketplace. The government does play an important role in having a voice, and so if there’s a lot of misinformation out there, we want the government to be able to say, “No, this is actually what we think. We don’t think this is correct.” And people may or may not believe it, but they should at least be able to hear it. Now, the problem comes when the government is not only exercising that counter-speech, but putting pressure on the social media platforms to take down the speech that it disagrees with.
And I think that there have been some things in this court case that’s going up to the Supreme Court that make me very uncomfortable about what the government’s doing, like having the FBI routinely contact platforms, or having politicians who criticize platforms for health misinformation, right at the same time, threaten to revoke Section 230. Even if they don’t say outright, “If you don’t remove this misinformation, we’ll revoke Section 230,” it’s an implicit threat by members of Congress or the White House, who have some power to influence that.
So I think that the Supreme Court has a tough job ahead of it. It’s going to have to decide by next June: what guidance do we give so the government knows, “this is what you can and can’t do”? And I don’t think we have that guidance yet. On the other side, we have another set of cases going to the Supreme Court, where the states of Florida and Texas passed laws that restrict the ability of platforms to moderate, laws that say certain types of moderation are unlawful viewpoint discrimination. These were passed out of concern that the platforms were unfairly moderating conservative speech and blocking it.
Tom Lenard:
I mean, even aside from the kind of overt threats that you’re talking about, the companies we’re talking about, the large digital platforms, are under pretty intensive scrutiny by regulators and enforcement agencies, antitrust and other things. So it may not take a lot; even subtle pressure might make them say, “Well, what the hell? We won’t put this stuff up. We don’t care that much.” That seems to me to be a real concern.
Jeff Kosseff:
Yeah, it is. I mean, I think that while the government should be able to participate in counter-speech and provide counter-speech, I do think that the government should not be using the threat of regulation, even if it’s an informal and indirect threat. Because if you’re a platform, you’re probably going to go along with whatever content takedown the government wants because you don’t want to increase your liability and spend more money.
Tom Lenard:
Right, right. And that leads to other things. There’s the other argument that some people make, that these platforms are so big, and I’m not saying that I believe this, but it’s certainly a plausible argument, that they face insufficient competition, not just in economic terms but in terms of other outlets for speech, and that is more of a rationale to do something.
Jeff Kosseff:
So there’s an argument that social media platforms are like common carriers, like a telephone company, and I don’t find that terribly convincing because there’s the physical infrastructure of the telephone company that a social media platform doesn’t have. Now, there’s obviously not perfect competition in the social media market. I think a lot of that just has to do with network effects: if you dislike a certain social media platform’s moderation policy, there are a lot of considerations that would prevent you from just switching to a competitor, because your followers and friends are on the platform. So the competition isn’t perfect, but it’s very different from a phone company. And you’ve had people like Justice Thomas, who has very enthusiastically made the comparison between social media and common carriers, and of any of the justices, he seems the most willing to impose more regulations on social media.
Tom Lenard:
Yeah. And that also, I guess maybe connects this whole thing to the net neutrality debate, which is also about whether, well in that case, internet service providers should be considered common carriers, which presumably would also have speech implications. Right?
Jeff Kosseff:
Yeah, I mean, it’s interesting because there are a lot of people who are against net neutrality but in favor of imposing content moderation restrictions on social media platforms. And then there are a lot of people who are opposed to imposing content moderation restrictions on social media platforms but are in favor of net neutrality. I’ve not been very involved in net neutrality, but it’s been interesting to watch the sides kind of shift and accuse each other of being hypocritical. And I do think that there are definitely some analogies there.
Tom Lenard:
Sarah, did you have some questions?
Sarah Oh Lam:
Yeah. So from your research and scholarship, do you think technology, the scale and scope of the amount of speech, changes how the Supreme Court should think about First Amendment line drawing, like the algorithms and the AI tools used to filter speech? Is this something the Supreme Court can rely on law for, or do they need to also understand the technology?
Jeff Kosseff:
I think they do need to understand the technology, but I also think that, not just in the First Amendment context but also in the Fourth Amendment context and in the Computer Fraud and Abuse Act case they heard a few years ago, they’ve kind of shattered the myth that they just don’t understand it. That’s always a concern, but I think the Supreme Court and, frankly, judges at all levels take it seriously. Congress might be a different story, but judges, at least, I see taking it really seriously and trying to get it right. I think the biggest danger is making any one ruling too specific to the technology of the moment, because the Supreme Court might not hear another similar case for 30 years.
I mean, their last big case about the internet and the First Amendment, they’ve had a few, but the one that set the general principle, that was in 1997. We’re still operating off of that framework. And I think it’s been a successful framework because the court articulated a very broad principle, that the internet is not going to be treated like broadcasters; we’re going to give the internet the full scope of First Amendment protections. And I think that’s worked very well. I’d like to see the Supreme Court, in these cases this year, figure out how to re-articulate those principles for the current day and reaffirm that they continue to believe that’s [inaudible 00:22:24] speech.
Sarah Oh Lam:
Right. So if these decisions have a 20-year shelf life or whatnot, what happens when, I mean, I guess speech online is growing, but it’s of the same kind, unless you think AI-generated language is different in kind? Do you think they have the tools? I guess the best would be to draw a bright line in protection of the First Amendment to say, “Government, don’t overstep.”
Tom Lenard:
On which side of that line would AI-generated speech be?
Jeff Kosseff:
I think it depends on the specific content. I mean, I think that people use AI in a variety of ways, and there are some interesting issues about who is the speaker, where’s the content coming from? I think AI, frankly, raises much more interesting Section 230 issues, because Section 230 protects interactive computer services from liability arising from content someone else generated. So let’s say a platform’s using AI and it takes content from somewhere else and reformulates it. To what extent, if any, does Section 230 protect that platform from liability for that content it’s gotten from somewhere else, perhaps rearranged in a certain way? We don’t have much guidance on that, but I think we’re going to get there.
For the First Amendment also, I think the biggest issues are things like, who’s the speaker? What is the harm done? For defamation-related First Amendment issues, how credible is AI? Do people really believe what they see from AI? What damage can it do? But in general, I don’t think the Supreme Court should come out and say, “Anything that AI generates shouldn’t be protected by the First Amendment.” I think that would be more dangerous.
Tom Lenard:
So I’m curious, as you go around and talk to groups and do various podcasts, how you gauge, for want of a better phrase, the popularity of the First Amendment these days. Because you presumably talk to lots of people. I personally kind of agree with it, but some people might consider it an absolutist take on the First Amendment, I hope I’m not mischaracterizing it. But I’m just wondering whether you get pushback on that? How controversial do you find it?
Jeff Kosseff:
Yeah, so I mean, the book came out a few weeks ago, so for more than a month I’ve been doing a lot of talks and forums and interviews, and I consider this book and really my position, to be very old school First Amendment. I’m saying, “The First Amendment is there and it’s strong and it protects people and companies from the government.” And I didn’t think that was a very radical idea, but I think unfortunately, it’s not incredibly popular these days. And I think that a more popular argument, unfortunately, would’ve been for me to say, “Let’s rethink the First Amendment.” I don’t want to rethink it. I like it. I think that it sets the United States apart from some pretty awful countries, and it does a lot of great things, it’s not perfect. In terms of absolutist, I’ve been called an absolutist quite a bit over the past month, and that’s just not true.
I mean, I went through all of the types of speech that I don’t think are protected or should be protected. The only serious person I know of who’s ever been a First Amendment absolutist was Hugo Black, and he’s been dead for a while, and nobody’s ever really agreed with him on that. Other than that, you occasionally hear people say, “The First Amendment says Congress shall make no law, so that means no law,” but that’s not really how it works. What I support is continuing the strong reading of the First Amendment and being incredibly cautious before carving out new exceptions. One thing that has come up in interviews and talks that I’ve given is, I won’t talk about specific politicians, but “are you concerned about so-and-so getting power and the authoritarianism and our democracy will [inaudible 00:28:32]?”
And I say, “Okay, well, I’m not going to weigh in on that, but let’s just say that you are concerned. Why on Earth would you address your concerns about authoritarianism by weakening the First Amendment? An authoritarian would like you to weaken the First Amendment. You’re not going to combat authoritarianism by taking away speech rights.” And this is something that comes from journalists, people who I think have a really vested interest in maintaining the First Amendment because it’s kind of existential for them. And I tell journalists quite a bit, I say, “You don’t need to both-sides the First Amendment. We know your job relies on it. It’s okay for you to say that you support the First Amendment; you’re not going to lose your job for an ethics violation.”
But, I mean, I’m not going to single out any specific person, but there’ve been a lot of takes on the First Amendment, about let’s rethink it, let’s overhaul it for the current challenges. And I just think they’re so shortsighted. I think they’re well-intentioned, but the problem with weakening free speech rights is that it doesn’t matter what the intentions were of the person who did it; it matters how the person who gets those powers years down the road will use them. And I don’t think people really are thinking that through.
Tom Lenard:
This is maybe not technically a First Amendment issue, but as an ex-journalist, and just because you brought up journalists, my impression is that for lots of journalists these days, even in the newsrooms of the most prominent publications in the country, certain types of speech are kind of, maybe I’m wrong, out of bounds. So I guess that’s more an internal matter for the press to look at, not really a First Amendment issue. But I’m curious about your view of that, having come from the world of journalism.
Jeff Kosseff:
Well, I think that journalists don’t like to inject themselves into the story and they don’t like to take positions, which is understandable. And I think that there are very good reasons for journalists not to support or oppose political candidates, for example. Obviously there’s a role for that, for columnists and op-ed writers, but from someone who’s reporting the news, you don’t need to hear about that. I think, unfortunately, there are too many times, and this is just me as a consumer, because I haven’t written for a newspaper in 15 years, but I worry about the increase in trying to get a certain person’s slant into what you would think is a traditional news outlet.
But I do think there are certain issues, like the First Amendment, where, I mean, for me, when I was a journalist, I relied heavily on the First Amendment. I had a congressman who wanted us not to publish an investigation, and he was threatening us with a defamation lawsuit, with one of the most powerful lawyers in the state, who’s now a federal judge. I did an investigative story in Texas where I was being threatened with jail time in Texas State Prison, which isn’t anything you ever want to think of, because I was reporting the story and they said, “We don’t want you reporting the story.”
And because of the First Amendment, neither of those were really serious threats, but if I were even in the UK, that would actually be a real issue. So yeah, I think for those sorts of things, I think journalists can maybe say, “Yeah, I think the First Amendment’s good.”
Tom Lenard:
So let me finish, though maybe Sarah has some things she wants to talk about. The title of your book is Liar in a Crowded Theater, which presumably is a play on the phrase “fire in a crowded theater.” The way I as a non-lawyer interpret that phrase is that you can’t say fire in a crowded theater if there’s some, I guess, imminent threat that that type of speech is going to be harmful, lead to a stampede or something like that; then it’s not protected. What sort of imminent threat does there need to be to limit speech, or how do you interpret that phrase?
Jeff Kosseff:
Well, so the title, actually, that wasn’t the initial title of the book. I actually came to it when I was halfway through writing. And the reason I gave it that title is that a lot of the first half of my book looks at cases where the government or a plaintiff tried to regulate speech or impose liability for speech, and the court later said, “No, you can’t do that because of the First Amendment.” But in almost all of the cases where someone was trying to impose an unconstitutional restriction on speech, that person or that government agency would say, “Well, you can’t yell fire in a crowded theater, so you also can’t say this.” So, it turns out you can say it, and the phrase became this sort of wild card for saying, “There’s this bad speech, I think it’s bad, so therefore the First Amendment doesn’t protect it.”
And that’s not where fire in a crowded theater came from. Fire in a crowded theater was a hypothetical the Supreme Court used in 1919 to justify the imprisonment of someone who distributed a pamphlet that criticized the military draft. And it was not about imminent danger at the time. Instead, the Supreme Court set a very low burden, clear and present danger. So it was basically a metaphor used to justify saying, “If we think it’s a clear and present danger, we can impose liability despite the First Amendment.” Now, in 1969, the Supreme Court substantially narrowed the clear and present danger test and changed it to imminent lawless action, which is a much higher burden.
So while the Supreme Court never actually adjudicated a dispute involving a fire in a crowded theater, what the phrase stands for is justifying an outdated conception of free speech. To this day, if you do a Google News search right now for fire in a crowded theater, I will bet that you will find some politician who’s using it freely to justify whatever restriction on speech they want. So I think it’s a very dangerous phrase, because it’s used in all sorts of ways to really be a shortcut around the First Amendment.
Sarah Oh Lam:
Yeah. Well, I guess we haven’t really discussed current events as much, but from reading your book, I can’t help but think about former President Trump and how he’s really good at saying things without quite saying them. So I guess I don’t know if they’re falsehoods or lies, or do you have a category for that, in your view of speech?
Jeff Kosseff:
Well, it really depends on the specific context. There’s obviously a lot of debate about the former president, depending on how you categorize what he says in any particular speech. In the book, I actually focus on one instance, and this involves Senator Hawley and his criticism of then-Judge Ketanji Brown Jackson at her Supreme Court nomination hearing. He put out accurate information about her sentencing of child pornography defendants when she was a district judge. It was entirely accurate, saying, “In all seven of these cases, she sentenced them below the sentencing guidelines, and I’m very concerned about this.” And it was very, very carefully crafted. So it’s not misinformation in its classic sense, but I use it as an example of a certain kind of slanted political speech that might lead people to a different conclusion, because what it omitted was that the majority of those types of sentences, in non-production child pornography cases, were below the sentencing guidelines for judges across the political spectrum, because the sentencing guidelines are just abnormally high.
And so without that context, you would think, “Oh, she’s this radical judge who lets the worst type of people get light sentences,” when she still sentenced them to very harsh sentences and followed what was really the consensus among most federal judges. He didn’t say that. So how do you categorize this? It isn’t misinformation. This is just what politicians do. I use the example of Adam Schiff, who was the lead Democrat on the House Intelligence Committee during President Trump’s presidency, and he made a lot of statements that implied knowledge of collusion between the Trump campaign and Russia in 2016 that were not borne out by, at least, his committee’s reports. Again, it was obviously very opinionated, and a lot of people accused him of being misleading, but again, politicians do that. And I think that while both of those types of things got criticism, and they can get criticism, a lot of that is just kind of what you expect politicians to do.
Tom Lenard:
That’s true.
Sarah Oh Lam:
Yeah. I mean, a selective presentation of facts, but, like you said, accurate, and assuming that people have a certain baseline knowledge when they don’t know the other side of the story. Which is why we need a free press to provide the other point of view. And so, yeah, the whole structure of the Fourth Estate becomes that much more important: it’s a given that politicians are going to do this, so you do need a free press to bring some sunlight to that.
Jeff Kosseff:
Yeah, absolutely.
Tom Lenard:
I don’t recall the press correcting Adam Schiff, but maybe I missed it.
Jeff Kosseff:
It was in the Wall Street Journal, so some of the press did correct it. But obviously, if you’re just watching MSNBC, you might not get all of that counter-speech.
Tom Lenard:
I’m always thankful for the Wall Street Journal. Well, this was great. People should buy your book. I hope it’s a great success. And thank you very much for participating in the podcast.
Jeff Kosseff:
Thanks so much for having me.
Sarah Oh Lam:
Great.