Large Libel Models: Liability for AI Output with UCLA School of Law Professor Eugene Volokh on Two Think Minimum

Sarah Oh Lam (00:00):

Hi, and welcome back to Two Think Minimum. Today is Thursday, July 27, 2023. I’m Sarah Oh Lam, TPI Senior Fellow, and I’m joined by Scott Wallsten, President and Senior Fellow of TPI. Today we’re delighted to have as our guest Eugene Volokh. Eugene Volokh is the Gary T. Schwartz Distinguished Professor of Law at UCLA School of Law. He teaches First Amendment law and a First Amendment amicus brief clinic, and has often taught copyright law, criminal law, tort law, and a seminar on firearms regulation policy. He clerked for Justice Sandra Day O’Connor on the U.S. Supreme Court. He is also the founder of The Volokh Conspiracy, a leading legal blog, and the author of legal textbooks on the First Amendment as well as over 90 law review articles. Prior to law, he worked as a computer programmer for 12 years, graduated with a B.S. in math and computer science at age 15, and has written many articles on computer software. Today Eugene is here to chat about his new law review article on LLMs, though not large language models: it’s titled “Large Libel Models? Liability for AI Output.” There’s so much to talk about in AI, and there’s really no better person to discuss this topic than Professor Volokh. Thanks, Eugene, for joining the program.

Eugene Volokh (01:13):

Thanks very much for having me!

Sarah Oh Lam (01:14):

First, would you mind giving us just a quick overview of your law review article? We’ll try and connect it with some points in the news, but first, we’re just curious to hear more about your article.

Eugene Volokh (01:25):

Sure. So the question my article raises is: Say somebody does a Bing search using Bing’s new AI-driven search technology, and gets some information about the person he’s searching for. And that information turns out to be false, turns out to be false and damaging to the person’s reputation. So to give one example of a lawsuit that has already been filed, say somebody searches for Jeffery Battle, spelled “e-r-y,” a somewhat unusual spelling, and he sees Bing say: oh, Jeffery Battle is an aerospace expert who’s an adjunct professor at a university. But then Bing says that Battle, however (this time not mentioning his first name), was also convicted of essentially a terrorism offense. And it turns out that’s a different Battle. That’s Jeffrey spelled “r-e-y” rather than “e-r-y.”

Eugene Volokh (02:23):

If he had just done a Google search, let’s say, without any AI assistance, he’d see maybe there’s a story about Jeffrey and a story about Jeffery, and maybe the user would realize that those are two different people. But the AI software merges the two together and presents them as if they’re about the same person. Can Jeffery Battle sue for libel? He actually is suing for libel; can he prevail in his lawsuit? That’s an interesting and important question. Historically, of course, libel lawsuits have been brought against people, or against corporations for the actions of people who are employed there. What happens if the allegedly libelous output is actually created by an AI? Obviously, you can’t sue the AI, because it’s a program, not a person or a legally recognized entity. But can you sue Microsoft? Can you sue OpenAI for letting out into the wild this material, or this software that produces what appears to be defamatory material?

Scott Wallsten (03:28):

Can I ask a quick question? I’m not a lawyer, so take these comments as that. Does context matter for this? Microsoft made the decision to put OpenAI’s technology alongside its search, as if it were part of a search engine. If they hadn’t done that, would this question be different?

Eugene Volokh (03:45):

Well, context does often matter in law, especially in libel law, because context matters to meaning. Let’s say, for example, I go to ChatGPT and I say, write me a short story about Jeffery Battle and about how he was convicted of some crime, and it outputs a short story in which the lead character is Jeffery Battle. That context makes it clear to me as the user that this is not a factual assertion, that this is a fictional story, and generally speaking, you can’t sue for libel over something that is clearly fiction. Likewise, if the context is somebody ranting in an obviously hyperbolic, exaggerated way about some politician, or about somebody involved in whatever the internet’s 15-minute hate of the day might be, the context might suggest that this is all a kind of over-the-top expression of opinion and not factual assertion. But while the output of a search engine is likely to be seen as a factual assertion, it’s not the only thing that’s likely to be seen this way. So if instead of going into Bing, I go into ChatGPT and I ask the same query, and it gives me false and defamatory results, which I have seen from ChatGPT before, again, I think there the context, absent some specific details, is that it looks like a factual assertion.

Eugene Volokh (04:48):

To be sure, a lot of these companies are, quite reasonably, saying: warning, there might be errors in what we’re outputting. That too is part of the context, but it’s not enough to keep something from being libelous. And one way of thinking about it is: imagine I were to tell you, you know, rumor has it that Joe Schmoe has been convicted of child pornography.

Eugene Volokh (05:42):

“Rumor has it” is itself kind of a disclaimer, right? All of us know that rumors are often false, but sometimes true. But I can’t get away with saying rumor has it that someone committed some crime, and then just saying, well, I said it was just a rumor. Especially if I’m the one who made up the rumor, the fact that there is some signal that it might be incorrect doesn’t defeat a libel claim. Libel claims can’t be brought over things that are obviously fiction, but they can be brought over things where the reader understands that there’s a possibility it’s true and a possibility it’s false. There could be a libel claim even given this uncertainty that people recognize about the accuracy of the assertion.

Sarah Oh Lam (06:28):

And what happens if, you know, there’s a meter next to the result that says, this could be 60% accurate, or take this with, you know, 70% accuracy. Would that change things?

Eugene Volokh (06:41):

Well, at some point we have to say, you know, there hasn’t been a court case quite like that. I don’t know of anybody in the past who said, I’m passing this along, and there’s a 70% chance that it’s accurate. People just don’t quantify things this way. But again, if I were to say, I’m not sure about this, but I believe that this person is guilty of this crime, or rather, let’s be more specific: I’m not sure about this, but I remember reading that this person was convicted of some crime, that may very well be libelous. Likewise, if I were to say, rumor has it this person has been fired for embezzlement, that is itself a signal that I’m not a hundred percent confident. If I were a hundred percent confident, I probably wouldn’t say rumor has it.

Eugene Volokh (07:31):

But that, again, is not enough to immunize me from a libel lawsuit. Now, I suppose if somebody were to say, well, this is certainly false, but I just want to pass along what I heard, that might be a defense in libel cases, although not in all cases. Some courts have actually said that passing along an assertion, even while saying you don’t believe it, may still be libelous. But you can imagine a situation where, if something is clearly being presented as a false assertion that you’re just repeating because it’s important that it was said, maybe that would get you off the hook. But simply saying, I don’t have complete confidence in this, I might be mistaken, but… that’s not enough to get you off the hook in a libel lawsuit. And in fact, if we think about it, imagine the classic libel example: a lawsuit against a newspaper because the newspaper said something false about someone.

Eugene Volokh (08:27):

Let’s assume it’s a private figure, so the private figure doesn’t have to show proof of knowing or reckless falsehood. Do people believe everything they read in the newspaper? I would hope not. Are people aware that there’s a risk of error when things are published in the newspaper? Sure. They’re particularly aware of the risk of error if the newspaper credits anonymous sources, let’s say, or says it’s passing along a rumor, or whatever else. But because these kinds of assertions, even ones that aren’t billed as clearly, completely, certainly right, can be very damaging to a person’s reputation, somebody can indeed sue for libel over them.

Sarah Oh Lam (09:12):

Not to skip too far ahead, but how about remedies and damages? So let’s say you have a successful suit. What kinds of remedies or damages would be available? And if it’s AI, wouldn’t that be just enormous?

Eugene Volokh (09:30):

Well, it’s complicated, as so much in law is, and in particular in libel law. Let’s break this down. Let’s say that it’s a public figure who is suing, and the assertion is a matter of public concern. Somebody says, or an AI outputs, that this mayor was involved in this embezzlement scandal. Under the Supreme Court’s First Amendment precedents, a public figure or a public official cannot recover in a libel case unless he shows that the statement was made with knowledge of falsehood, or with knowledge that it was likely false, that is, recklessness as to falsehood. Now, in a typical situation that would be virtually impossible to show about an AI program; you’d say, you know, nobody at OpenAI, nobody at Microsoft knew the statement was false. But imagine that this person, once he saw that this was being output, sent an email to OpenAI saying, you are outputting false stuff about me.

Eugene Volokh (10:44):

Or rather, your software is outputting false stuff about me, make it stop. And OpenAI didn’t do anything about it, didn’t add any kind of protections, didn’t add any kind of post-processing filtering that would keep this known false statement from being output. This is in fact what Jeffery Battle is accusing Microsoft of doing, or rather not doing. He says, I alerted them about this problem, but they didn’t fix it. Then at that point, maybe that would be enough to show knowledge of falsehood: that they know their software is outputting material that is false, or at least likely false. At that point, once you can show knowing or reckless falsehood, the plaintiff could recover any proven compensatory damages (“I lost this consulting contract or this job or whatever else because of this”), as well as so-called presumed damages, which the law allows a jury to award on the theory that it’s often hard to tell how damaging a statement has been to reputation, and maybe even punitive damages, on the theory that this kind of knowing repetition of a falsehood is something that should be punished.
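
To make the notice-and-filter idea concrete, here is a minimal sketch in Python of what such notice-based post-processing might look like. The class names, the keyword matching, and the example notice are hypothetical illustrations only, not anything Microsoft or OpenAI is known to use, and a real system would need far more careful matching.

```python
from dataclasses import dataclass, field

@dataclass
class FalseClaimNotice:
    """A claim that a named person has reported to the provider as false."""
    subject: str                # e.g. "Jeffery Battle"
    claim_terms: list[str]      # distinctive words from the reported false claim

@dataclass
class NoticeRegistry:
    """Notices received so far (e.g., via emails like the one described above)."""
    notices: list[FalseClaimNotice] = field(default_factory=list)

    def add(self, notice: FalseClaimNotice) -> None:
        self.notices.append(notice)

    def matches(self, text: str) -> list[FalseClaimNotice]:
        """Notices whose subject and at least one claim term both appear in the text."""
        lowered = text.lower()
        return [
            n for n in self.notices
            if n.subject.lower() in lowered
            and any(term.lower() in lowered for term in n.claim_terms)
        ]

def postprocess(model_output: str, registry: NoticeRegistry) -> str:
    """Withhold output that appears to repeat a claim already reported as false."""
    if registry.matches(model_output):
        return ("[Withheld: this response may repeat a statement that has been "
                "reported as false about a named individual.]")
    return model_output

# Example: after a notice like Battle's, outputs linking him to the conviction are withheld.
registry = NoticeRegistry()
registry.add(FalseClaimNotice("Jeffery Battle", ["convicted", "terrorism", "18 years"]))
print(postprocess("Jeffery Battle ... however, Battle was sentenced to 18 years in prison.", registry))
```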

Eugene Volokh (11:58):

So that’s for a public figure. Now, let’s say it’s a private figure, and Jeffery Battle does seem to be a private figure. Well, under the Supreme Court’s case law, a private figure can recover damages for negligence, not just for knowing or reckless falsehood but for negligent falsehood, if he can show specific harms to him, specific losses, like he lost some job or he lost a consulting opportunity, or maybe he lost some social connections or was kicked out of his club because of it, or something like that. That might be quite difficult to show, because often you don’t know about the things you’ve lost. But at least in principle, if he could show negligence, maybe negligent design, that the software wasn’t designed in a way that adequately seeks to prevent the publication of libelous statements, then he might be able to recover those damages.

Eugene Volokh (12:55):

There’s one other category: so-called statements on matters of private concern. And there the Supreme Court has said you can recover proven compensatory damages and presumed damages and even punitive damages merely on a showing of negligence. I oversimplify here, but that’s basically it. So let’s say, for example, the software outputs an accusation that this person committed adultery, generally seen as a matter of private concern, or an accusation that the person committed some relatively low-profile crime; many courts say that allegations of low-profile crimes are matters of private concern. Then at that point, the remedies might include this whole set of damages without a showing of knowing falsehood. Again, you’d probably need to show negligent design, and that’s itself a complicated question: how would a court decide whether the software was or was not negligently designed?
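
As a rough way to keep the three categories straight, here is a loose Python sketch of the framework as Volokh summarizes it. It is deliberately oversimplified, as he notes he is oversimplifying, and it elides many qualifications in the case law; the names and structure are just illustrative.

```python
from enum import Enum, auto

class Fault(Enum):
    NONE = auto()
    NEGLIGENCE = auto()
    KNOWING_OR_RECKLESS = auto()  # knowledge of falsehood or recklessness as to falsehood

def recoverable_damages(public_figure: bool, public_concern: bool, fault: Fault) -> list[str]:
    """Oversimplified restatement of the damages framework described above."""
    if fault is Fault.NONE:
        return []
    if public_figure:
        # Public figures/officials: recovery only on knowing or reckless falsehood.
        if fault is Fault.KNOWING_OR_RECKLESS:
            return ["proven compensatory", "presumed", "punitive"]
        return []
    if public_concern:
        # Private figure, matter of public concern: negligence supports proven
        # (actual) damages; presumed and punitive require knowing/reckless falsehood.
        if fault is Fault.KNOWING_OR_RECKLESS:
            return ["proven compensatory", "presumed", "punitive"]
        return ["proven compensatory"]
    # Matter of private concern: negligence can support the whole set.
    return ["proven compensatory", "presumed", "punitive"]

# Example: a private figure, matter of public concern, showing only negligence.
print(recoverable_damages(public_figure=False, public_concern=True, fault=Fault.NEGLIGENCE))
```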

Scott Wallsten (13:55):

Where in the process do you put the person who’s asking the questions, who’s prompting the AI? Because the nature of generative AI is that it won’t give you exactly the same answer more than once, and you can make it say anything by prompting it enough times. And also, when it does give you an answer, no matter how libelous it is or not, it’s just on your screen; somebody has to amplify that message, right? How does that fit in?

Eugene Volokh (14:26):

Not quite. So here’s why. Let’s look at three scenarios. Here’s one scenario: I go to ChatGPT and I write the following prompt: come up with an accusation against Scott Wallsten that you’ve made up. And it outputs something; it accuses you of stealing from petty cash. That’s not libel, because I know perfectly well that this is false. I asked it to make up something false. It’s really the creation of fiction, so there’s no libel there. Here’s another situation. Let’s say somebody were to ask: Has Eugene Volokh committed any crimes? That’s not a signal to make up crimes; it’s a query asking for factual information. And say it does output: yes, Eugene Volokh was convicted of kicking his dog.

Eugene Volokh (15:32):

My wife would’ve been extremely upset with me if that were so. And then you pass it along. Well, then an interesting question is, could you be guilty of libel for passing it along? And there we’d deal with some of the same questions if I’m just a private figure. One question is, was it negligent of you to rely on ChatGPT? Was it careless of you to do that? Perhaps. Probably, although it’s an interesting question what a reasonable person, who often doesn’t know a lot about AI, would do in that situation. We’ve seen at least three examples so far of lawyers who apparently filed in court AI-generated motions that hallucinated, that made up court cases, precedents that don’t exist. You know, I think the lawyers thought that the motions were legit.

Eugene Volokh (16:27):

And I do think they were being careless, they were being negligent, but at least some people are duped by some of what they hear about AI. But here’s the third category. Let’s say that you are thinking about whether to hire me as a consultant, and you just enter Eugene Volokh, and it says Eugene Volokh was disbarred by the state of Vermont. I wasn’t; I’ve never even been a member of the Vermont bar. But let’s say it says that. And then you don’t pass this along. You just say, wow, I certainly don’t want an expert or a consultant who’s been disbarred. So you just don’t hire me. You are not liable for that, right? Just as if you heard a rumor about me, believed it, wrongly but believed it, and decided not to hire me, you wouldn’t be liable for that.

Eugene Volokh (17:18):

It’s not a tort or any other kind of legally recognizable wrong to be duped into not hiring someone. So I couldn’t sue you for not hiring me; you had no obligation to hire me. But maybe in that situation I might be able to sue the AI company for outputting false statements about me that led you not to hire me. And by the way, there’s nothing terribly new about that; it’s a classic libel example. It’s been understood as a feature of libel law for hundreds of years: somebody writes a letter saying, I know you’re considering employing so-and-so, but I will tell you, he stole from me when he was my employee. The recipient of the letter, if he refuses to hire so-and-so, is not committing any tort; there’s no basis for suing that person for believing the letter. But there is a basis for suing the letter writer if the accusations in the letter are false.

Sarah Oh Lam (18:21):

And how does this analysis work for search engines, or in Section 230 cases? How is AI different, now that it’s coming straight out of the mouth of the engine?

Eugene Volokh (18:36):

Well, it is exactly that: it’s that it’s straight out of the mouth of the AI. Here’s how. Section 230, a federal statute, provides immunity for online companies when they’re sued for somebody else’s speech. Here’s the relevant provision, and I oversimplify here because there are other provisions, but basically I’m reading right now: no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider. By another information content provider. So let’s say I Google Joe Schmo, and out comes Google’s link to a page which accuses Joe Schmo of awful things, and maybe it includes a snippet from that page accusing Joe Schmo of awful things. If Joe Schmo sues Google, he will lose, because he would be suing over information provided by another information content provider, the operator of that page.

Eugene Volokh (19:46):

But in the case I mentioned, Jeffery Battle’s, he’s not suing over information provided by some other entity. He’s suing over what Bing itself, based on the GPT model, is outputting. It is information that is provided by the defendant itself. The problem is that Bing takes two accurate sources, one about Jeffery Battle the aerospace professor and the other about Jeffrey Battle the terrorist, and links them together. It first says Jeffery Battle, also known as the aerospace professor, is such-and-such, and explains things about him, and then says: However, Battle was sentenced to 18 years in prison. He’s saying it’s what is being added, the “however,” together with the link between the two, saying just “Battle” rather than noting the difference between the Battles, that’s what he’s suing over. Likewise, in the other lawsuit, Walters v. OpenAI, what happened is a journalist asked ChatGPT to summarize a complaint in a court case.

Eugene Volokh (21:08):

And ChatGPT said, oh, this is a lawsuit against Mark Walters for embezzlement. Turns out that complaint had nothing to do with Mark Walters and had nothing to do with embezzlement. Now Mark Walters is suing OpenAI, and his theory is that OpenAI should be held responsible not for information provided by another information content provider, but for information that it itself provided, or that its own software hallucinated, to use the term that’s being used here. So that is the key difference. And by the way, this arises in other situations as well. If I quote something on my blog from another webpage, I am generally immune from liability based on Section 230. But if instead of just quoting it, I say it myself, well, of course I’m not immune, because I’m being sued for what I myself said. Likewise, that’s the basis for at least the lawsuits that we’ve seen so far over AI libels: the claim that the AI company’s software itself created the false information, rather than just accurately quoting it from some other source.

Scott Wallsten (22:21):

And how do you view these arguments so far? I mean, do some seem to carry more weight with you than others?

Eugene Volokh (22:26):

Right. I mean, I think a lot depends on the facts. So, for example, in the Walters case it is pretty likely that Mark Walters is a public figure. He is himself a noted online speaker; I think he’s a radio talk show host as well; he’s a gun rights activist. So maybe he’s a public figure, and if that’s so, then he’d have to show knowing or reckless falsehood on the part of OpenAI. And at least so far there hasn’t been any allegation that he had informed OpenAI of the problem, of the false things that were being said about him. So it sounds like at most they were negligent, and that’s not enough in a lawsuit by a public figure. Even if he’s a private figure, the allegation was on a matter of public concern: the claim was that this lawsuit that had been filed in federal court, which was kind of in the news already, was about Mark Walters.

Eugene Volokh (23:26):

Turns out it’s false, but it’s, again, a statement on a matter of public concern. Then even if he can therefore make a negligence claim, he’d have to show actual damages, actual lost business opportunities. But apparently, as best we can tell, the only person who saw this information was somebody who, if you look at the chat, quite clearly didn’t believe it. You know, a lot of people are properly skeptical in these kinds of situations: really, this is about Mark Walters? And I think the journalist knew Mark Walters and said, really, Mark Walters is being accused of that? And then he probably looked up the complaint and saw that it wasn’t about him. So there aren’t any damages. So I think it may be very difficult for Walters to prevail. As to Battle, it’s harder to tell, in part because Battle says he had alerted Bing to the problem and Bing didn’t promptly fix it.

Eugene Volokh (24:24):

So now maybe they do know that their software is outputting false information about Battle. And by the way, I ran the same Bing query that he says was run about him, and at least as of when I ran it, it kept outputting this false information about him, linking him to the other, differently spelled Jeffrey Battle, the criminal. So he might have a plausible claim. We’ll have to see; it’s still early days for both lawsuits.

Sarah Oh Lam (24:54):

Do you think that the makers of these AI chatbots will try to change their format to fit under Section 230? I mean, will there be some movement in that direction, or will Section 230 reform kind of creep up on AI?

Eugene Volokh (25:16):

Well, I don’t know about Section 230 reform in terms of proposed bills in Congress. There are things being talked about, both with regard to AI and other things; I just don’t know what’s likely to pass and what’s not, so let me just focus on existing law. Some people have talked about what it would take for AI companies to be protected by Section 230, and that would be to make sure that they only provide links to existing sources and accurate quotes from those sources. So you can imagine some search engine that uses AI technology to try to identify relevant material, but then makes sure that it finds an actual source it can link to, and an actual quote from that source that it will include. If that’s what it does, then it’s like a very smart search engine and is indeed protected.

Eugene Volokh (26:06):

But one reason it’s sometimes called “generative AI” is that one value of programs like ChatGPT, for example, is that they generate output that’s a synthesis of a bunch of things. That’s what readers often find very useful. And in fact, when it doesn’t come up with false statements, especially about particular people whose reputations could be damaged, it can be quite useful. I mean, I’ve run searches on ChatGPT asking for just some general perspective on things, and it gave me at least a good first cut. That’s pretty useful technology. The problem is that it’s useful in part because it does kind of come up with its own text, albeit in a sense inspired, and I use the term very loosely, by the training data that it has. So that model, which seems to be pretty central to the way that at least ChatGPT is operating now, is potentially very dangerous from a libel law perspective to the companies, unless the companies figure out some way of minimizing the risk of libel.

Eugene Volokh (27:13):

So one possibility, for example, might be that they come up with some sort of filter, some sort of code for their products, which says: never output anything between quotation marks unless it actually exists in an identifiable source. And I think that’s kind of the minimum. I use “never” a little loosely here; again, what if they’re asked to come up with a story, to come up with fiction? That’s a different matter, and presumably they’d be able to figure out whether the query calls for fact or fiction. But if they’re being asked for fact, I think the minimum they should do is say, look, we’re gonna set up our software so it doesn’t output quotation marks, which are for human beings a very powerful signal that what’s between them is actually from some other source, unless it can verify the quote. That wouldn’t solve the problem altogether, because of course some of the libels might be paraphrases of supposed real sources, or, actually, I’m being imprecise here: some of the libels may in fact be completely generated without quotation marks but might still look to the reader like serious assertions, as in the Jeffery Battle case and the Walters case. But at least there are some things, like verifying every quote they output and just not outputting it with quotation marks if they can’t verify it, that would help diminish the risk of liability here.
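　
To illustrate that suggestion, here is a minimal sketch, again in Python, of a quote-verification post-processor of the kind described: it keeps quotation marks only around text it can match verbatim to a known source. The function names and the in-memory source list are hypothetical; a real system would query an index of identifiable sources and handle near-verbatim matches.

```python
import re

def quote_is_verifiable(quote: str, verified_sources: list[str]) -> bool:
    """True only if the quoted text appears verbatim in some identifiable source."""
    return any(quote in source for source in verified_sources)

def strip_unverified_quotes(output: str, verified_sources: list[str]) -> str:
    """Keep quotation marks only around spans found in a known source; otherwise
    drop the marks, so the text no longer signals a real, attributable quote."""
    def replace(match: re.Match) -> str:
        quoted = match.group(1)
        if quote_is_verifiable(quoted, verified_sources):
            return f'"{quoted}"'   # verified: keep the quotation marks
        return quoted              # unverified: remove the false signal
    return re.sub(r'"([^"]+)"', replace, output)

# Example: the fabricated quote loses its quotation marks; the real one keeps them.
sources = ["The court sentenced the defendant to three years of probation."]
draft = ('The filing states "the defendant embezzled millions" and the court '
         '"sentenced the defendant to three years of probation."')
print(strip_unverified_quotes(draft, sources))
```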

Scott Wallsten (28:42):

So you said you wanted to stick to talking about existing law, but you’re actually kind of moving in a little bit of a different direction. So do you think that we’re heading towards a world where courts are, you know, going through the cases now that will help generate the precedents to answer some of these questions? Or do you think there should be new legislation doing the sorts of things that you were just not advocating for, but suggesting?

Eugene Volokh (29:09):

So what I’ve said so far is my attempt to describe, as best I can, the current legal landscape. And, you know, old laws are applied to new technologies all the time without new statutes. Back when the fax machine was invented, nobody came up with the Fax Libel Act, right? Because the basic principles having to do with letters and the like apply just as much to faxes. On the other hand, of course, when broadcasting came up, courts did actually have to make some decisions that were technology-focused. Here’s one: there are different rules, and I won’t go into all the details because they’re very technical, but there are different rules in many states for written defamation, which is called libel, and oral defamation, which is called slander. The rules are not completely different, but they’re somewhat different.

Eugene Volokh (30:00):

So the question that arose is, what about radio and television? Is that libel or is it slander? Well, it’s obviously oral, so it sounds like slander. But on the other hand, one of the premises behind this medium distinction in the past had been that written material is more damaging because it reaches more people, whereas oral material, slander, is ephemeral and therefore should be less subject to liability. And radio and television broadcasts can reach a lot of people indeed. So it turns out some state courts said it’s libel, some said it’s slander, and in Georgia, the courts said, we’re gonna call it a separate thing; we’re gonna call it defamacast, not a word that has caught on. But, you know, they were trying, and there were in fact some statutes that were set up specifically with regard to broadcast technology.

Eugene Volokh (30:51):

You can imagine Congress stepping in and coming up with special rules, but even if it doesn’t, courts will just try to adapt the existing legal rules to the new technology, as they have done on many occasions in the past. Now, whether Congress should step in is the hard question, in part because I don’t know what the solution to these problems is. Maybe if you come up with a really great idea, I’d say, yes, absolutely, let’s have Congress do something about it. But until we know what Congress should do, I wouldn’t say it should do anything.

Scott Wallsten (31:21):

So sorry I interrupted you before, but given the differences in these laws across states, do you have any idea how, or whether, broadcasting developed differently in those states, in the different kinds of things that would be put on air?

Eugene Volokh (31:35):

No, I’m unaware of any actual effects on radio or television broadcasting as a result of the difference in treatment. I mean, maybe that’s so; maybe somebody can do some sort of analysis that determines whether radio and television broadcasters were bolder in some states than in other states. But I’m unaware of any such analysis.

Sarah Oh Lam (32:07):

To go back to something we talked about earlier, about the users of the output: depending on what they do with it, do you think that if everybody knows that the output is sausage making, the AI companies kind of have a pass?

Eugene Volokh (32:24):

So again, if everybody knew that this was just one big joke or one big fiction-generating machine, if everybody knew that this is just a way of coming up with fun fiction, then it seems to me that, yeah, this wouldn’t be libelous, because by hypothesis everybody knows it’s just made up. Just to give an example: my understanding is that various image generation technologies, like DALL-E, for example, which is a famous one, have some ways of constraining the output so that people can’t come up with images of real people. But let’s assume you have some technology that doesn’t have such constraints, and I come up with some picture there of some famous person doing something bad.

Eugene Volokh (33:31):

At that point, when I’m the one seeing it, no, it wouldn’t be libelous, because I understand that this is just a mechanism for coming up with a picture; it’s not a mechanism for figuring out things about the real world. If I then redistribute it to others, maybe I’m liable, if it looks like I’m asserting that this really happened. But in the absence of that, because we know that DALL-E is just supposed to make things up, I don’t think that there would be liability for that. On the other hand, ChatGPT and Bing, especially search engine output, I think people use because they think it’s going to be pretty reliable much of the time. So then the question is, is it enough that people are aware that there’s a risk of error? And the answer there is, I don’t think so. Even if everybody realizes that AI output may be wrong, it’s like, again, with rumors: everybody realizes that rumors may be wrong, but if I just make up a rumor and then pass it along saying, rumor has it Joe Schmo did such and such, I’ll still be potentially liable for defamation, even though the person who receives this rumor from me realizes it might be mistaken. Still, it could very much damage Joe Schmo’s reputation if there’s even a rumor out there that he committed some crime.

Sarah Oh Lam (34:51):

Also, to go down that road, it seems like the users of the AI would face regulations too. Medical doctors, lawyers; and ChatGPT has even been banned at some companies. Employers are saying, don’t use this technology, probably because they don’t want employees feeding proprietary data into the engines, but maybe also because they don’t want their employees relying on false information. Do you foresee that there’d be tiers of that kind of burden on users of AI?

Eugene Volokh (35:25):

Well, it depends. Generally speaking, it’s not a basis for a lawsuit that I used ChatGPT to look somebody up or to look something up. By itself, that can’t be a legal wrong on my part. If ChatGPT outputs something false and defamatory about someone to me, that could be a legal wrong on the company’s part; that could be libel. But my simply getting this information, even if it’s inaccurate, is not a legal wrong. To be sure, let’s say as a lawyer I get this information and then I give a client legal advice based on it, and it turns out the information is wrong and I should have realized it was wrong; that could be malpractice. Just as it would be malpractice for me to ask somebody for advice, let’s say, and not check that advice. Or to just Google it, find some webpage that turns out to be mistaken, and rely on it; that’s potentially malpractice on my part.

Eugene Volokh (36:24):

So it’s not just running the AI query, it’s not just using the program; it’s acting in a professionally incompetent way based on that program’s output. But as I said, a lot of things that people do, even somewhat incompetently, based on the program’s output may not be legally actionable. So, for example, refusing to hire someone because you got some information that turns out to be completely false from an AI company generally is not legally actionable. It’s generally not like race discrimination; it’s not a violation of any contract or anything else. So it could be foolish on the part of the prospective employer, and it could be very bad for the prospective employee. But just as relying on a rumor in declining to hire someone isn’t legally actionable, likewise relying on AI output isn’t legally actionable.

Eugene Volokh (37:25):

So again, it depends on a lot of things. I think for a lot of people, if they rely on AI and then do something wrong in their work as a result, they could just get fired. The employer doesn’t have to sue them for incompetence; it just fires them for incompetence, saying: really? You thought it was okay to just come up with some financial forecasts for our business by running ChatGPT and not checking the results? Well, you’re not a very good accountant, or you’re not a very good financial forecaster. So there are all sorts of reasons why people should be quite hesitant about relying on the output of AI programs. And there are some reasons why people could use them, let’s say as a first approximation, as a kind of first stab at the research, as a first draft of something they write. But relatively few of the things that users do are going to be legally actionable.

Sarah Oh Lam (38:23):

Following on from your article, what’s next in your scholarship? Where do you think the most interesting questions are gonna come from as you watch AI developments? Even at the White House there are voluntary commitments by the big tech companies. What direction is the law going, and what about the public announcements by the tech companies?

Eugene Volokh (38:42):

You know, it’s very hard to figure this out, very hard to predict such things. I think a year ago very few people predicted that AI would be such a big deal that some lawyers would, however foolishly, just file motions written by ChatGPT. I think a lot of people were surprised; I know I was surprised that ChatGPT output comes across as grammatical and as plausible and convincing as it does. That’s part of the problem, right? Precisely because it seems plausible, it could be especially damaging when it’s false. So look, say two or three years from now, let’s assume there are huge surges in the technology, to the point that most business disputes are just submitted for arbitration by some AI arbitrator.

Eugene Volokh (39:33):

I mean, imagine that somebody says, we’ve come up with this AI arbitrator, and we’ve run it through some blind tests where we compare its output and the output of human arbitrators. Our panel of evaluators, who didn’t know which output was from the AI arbitrator and which was from the human arbitrators, found the AI arbitrator at least as good as the human ones. So, business people, you can either go and litigate the case, with some risk of error because nobody trusts judges and juries completely, and it’ll take you three years and $300,000; or you can submit it to this arbitrator, also with some risk of error, but no greater than with judges or jurors, and you can get the output in 30 minutes for $3,000. You know, maybe a business person would say, hmm, sounds like a good deal to me.

Eugene Volokh (40:20):

And, you know, maybe it’s not perfect justice, it’s not human-provided justice. I don’t care; I just wanna get this litigation behind me, right? And once that starts happening, then I think a lot of taxpayers will say, well, wait a minute. If it’s good enough for business people with their own money at stake, maybe it should be good enough for us. Maybe we should have civil and even criminal trials be done through AIs. Maybe, that is, if you imagine that you have this kind of AI arbitrator that’s good enough in two or three years. But if it turns out that the technology’s development has plateaued and these problems of it just making stuff up haven’t been fixed, then what I think will happen is that AI software will still be used, because apparently even existing versions can be used to write a first draft of a motion or to evaluate some documents with some degree of reliability, even if not perfect.

Eugene Volokh (41:13):

But it won’t really fundamentally change the way, say, law is practiced. Likewise, you know, some people are worried about AIs becoming self-aware, whatever that means exactly, coming up with their own preferences and beliefs and ideas and desires, again, whatever that means exactly. We don’t know what it means for humans, really, deep down inside, so it’s hard to know what it means for AIs. Then is it possible that AI will start attacking humans, trying to exterminate humans, trying to threaten or blackmail or bribe humans? Also possible. You know, the Terminator concern is not a trivial concern. I’m far from sure it’s gonna happen; if I were sure, then of course we’d need to try to figure out some way of turning off AI development. I don’t think that’s a good idea at this point.

Eugene Volokh (42:03):

But I do think we need to be constantly concerned about that. And if, two or three years from now, there are more and more examples of AIs doing real damage for a variety of reasons, including that maybe they are developing their own desires and preferences, then we might have a very different regulatory system. Finally, one thing to keep in mind, which again we don’t know about, is that AI indubitably has tremendous military applications. So that means that, assuming it’s useful for a military, there’s really no way to just stop the development. Because if we say, okay, we won’t do this, well, the Chinese aren’t gonna say the same, the Russians aren’t gonna say the same. And I just name two countries which are potentially hostile to us but also have a lot of technological savvy.

Eugene Volokh (42:49):

I mean, the North Koreans wouldn’t either; well, maybe the North Koreans aren’t technologically up there yet, although, you know, it appears it’s not like it requires a Manhattan Project type of investment in order to develop AI. So it may very well be that there will be an arms race that requires the development of better and better AI for military purposes. And then the question is what we can do to diminish the risks posed by AI, given that we can’t just stop AI, because of the risks to our national security of not working on AI.

Sarah Oh Lam (43:25):

Right. Well, I mean, that’s a great ending to the beginning of AI. So thank you so much, Professor Volokh, for staying in touch as AI law and policy continues to develop.

Eugene Volokh (43:37):

Very much my pleasure. And thanks for having me.
