Artificial Intelligence and the Future of Competition Policy with Catherine Tucker

Scott Wallsten:
Hi and welcome back to Two Think Minimum, the podcast of the Technology Policy Institute. Today is Thursday, April 25th, 2024. I’m Scott Wallsten, President of TPI, and I’m here with my co-host, TPI Senior Fellow, Sarah Oh Lam.

Scott Wallsten:
It’s hard to have a conversation about technology or tech policy these days without discussing artificial intelligence. And far be it from us to buck that trend.

Scott Wallsten:
A key question is what the economic effects of AI are likely to be, and how we should even think about them. To discuss that topic, we’re thrilled to welcome Professor Catherine Tucker to the show.

Scott Wallsten:
Catherine is the MIT Sloan Distinguished Professor of Management and a Professor of Marketing at Sloan.

Scott Wallsten:
She is a renowned expert in the field of digital economics and the impact of technology on business and society, and a leading thinker on the economics of AI.

Scott Wallsten:
Her research explores how firms can use digital data and machine learning to improve performance as well as the regulatory challenges that arise from these practices.

Scott Wallsten:
Catherine’s groundbreaking work has earned her numerous accolades, including an NSF CAREER Award for her research in digital privacy, the Erin Anderson Award for an emerging female marketing scholar and mentor, and many others. She brings a wealth of knowledge and experience to our discussion today, and we’re excited to dive into her insights on the economics of AI and its implications for competition policy.

Scott Wallsten:
Welcome, Catherine! Thanks for joining us.

Catherine Tucker:
Well, thank you for that lovely introduction.

Scott Wallsten:
Let’s start off at a broad level and talk about the economics of AI. Tell us about the evolution of how economists have thought about AI over the years and how our understandings and frameworks have shifted.

Catherine Tucker:
Yes, of course. I think it’s fair to say that for most of economics’ existence we pretty much ignored AI.

Catherine Tucker:
And I think the reason we ignored AI was because we thought it was about robots.

Catherine Tucker:
We thought it was about Terminator.

Catherine Tucker:
We thought, what does economics really have to say about robots?

Catherine Tucker:
I mean, we can study them and study whether they have productivity gains. But we can’t really say much about robots.

Catherine Tucker:
This changed around 7 years ago, when, as a profession, we suddenly realized that actually, AI wasn’t about robots.

Catherine Tucker:
It wasn’t about Judgment Day.

Catherine Tucker:
Instead, AI was about prediction.

Catherine Tucker:
And that the real economic questions come from thinking through what happens when the cost of prediction falls.

Catherine Tucker:
And if you think about that, that’s something that economics is very well equipped to do in that, as economists, we’re constantly studying how shifting costs affect economic outcomes.

Catherine Tucker:
And here is another example of that.

Scott Wallsten:
The economic study of AI derives, at least in part, from the broader field of digital economics. But you point out in a forthcoming paper that some things about AI are different: the cost structure of AI is very different from the way we traditionally think about digital economics. Can you tell us about that a little bit?

Catherine Tucker:
Yes. Let’s just talk about this, because I think this is what is confusing a lot of the discussion, especially in competition policy.

Catherine Tucker:
Digital economics, which is a field I’ve led for a long time as part of my role in the NBER, is a field where we look at what happens when costs drop.

Catherine Tucker:
And we say, gosh, well, the world is very different when you’ve got data storage costs and data transmission costs basically going to zero.

Catherine Tucker:
Let’s just see what happens in that world.

Catherine Tucker:
And what’s different about AI in 2024 is that every single story is really about marginal costs.

Catherine Tucker:
It’s about how expensive it is to build these world models, how expensive it is to train these models.

Catherine Tucker:
And we’re suddenly in this world which is quite unlike the one we’ve studied in digital economics, where we seem to have both high fixed costs and potentially high marginal costs.

Catherine Tucker:
Now, I tend to take a very optimistic view of this, and this is probably a source of debate, which is that ultimately I believe that this period of high fixed costs and high marginal costs is temporary, and that we will end up going back to the more natural equilibrium in the digital world of a very low cost environment.

Catherine Tucker:
But I think in some sense this shift in cost structures is what is provoking a lot of this debate.

Scott Wallsten:
I think one of the ways we can see this possibly high marginal cost, just as typical users of the chatbots, is if you use Anthropic’s Claude, they’ll say you have 7 uses left of this model until 1 PM. And ChatGPT will sometimes restrict you at some point. And I think that’s an example of what you’re talking about, right? Or an implication of it.

Catherine Tucker:
Yes, I mean, that’s exactly the kind of marginal cost I’m talking about in that, if you think about digital economics, we’re usually in a world of abundance.

Catherine Tucker:
Where this data is non-rival, basically costless. You can just get it from there, and then we’re struggling with what to do with non-rival, costless goods.

Catherine Tucker:
Instead, we’re now in a world with marginal cost. Not just marginal cost, we also have pricing models which reflect those marginal costs.

Catherine Tucker:
We’re in a world of metering, and that just feels very different for a digital technology.

Scott Wallsten:
Right.

Sarah Oh Lam:
It’s possible that advertisements might be placed on the chatbots. Perplexity, or some of the more high volume, high traffic bots, would they start putting ads?

Catherine Tucker:
Well, you know, I don’t want to distract us from our antitrust conversation, but I teach monetization models at MIT.

Catherine Tucker:
And what is just clear to me is that none of these firms have really cracked what the monetization model should be yet.

Catherine Tucker:
And again, if you sort of think about the fallout from 1998 to 2002, a lot of that was trying to understand what the monetization model of the Internet should be.

Catherine Tucker:
And we’re sort of at that point again: we have no idea.

Catherine Tucker:
Instead, what we seem to have are these metering models, which are exactly what you don’t want for a new technology, and I can explain why.

Catherine Tucker:
And then a prospect of doing advertising. But again, the advertising model is not clear to me either.

Catherine Tucker:
One of the hardest things right now is that we’ve got these monetization models which don’t seem to make much sense.

Catherine Tucker:
And then, if we try to analyze this from a policy point of view, what do you do with these pricing models, which don’t seem to make much sense intuitively?

Scott Wallsten:
I do want to hear why metering is not the right, why we do not want to see metering. But in that context, or before you get to that, tell us what the driving factors of the high cost are. What are the things that cost a lot, have high fixed cost and have high marginal cost currently? And why you think that those will not be true over the long run.

Catherine Tucker:
Of course.

Catherine Tucker:
What is the marginal cost? It’s simply processing costs. We’ve had computing technologies before, but this is the first technology with processing costs so intense that they finally start to be binding. The data sets we’ve been dealing with before have never been large enough for processing costs to loom so large, but with this technology, we’re there. In terms of fixed costs, I think anyone recognizes that it’s difficult to get talented labor. You need a lot of talented labor, and that’s going to represent a high fixed cost in this period. But you can see now why I’m optimistic. What we know is that processing costs go down. They go down a lot, and they go down swiftly. And we also know that labor markets adjust, and that current temporary labor shortages will be resolved as people start to upskill in this area. That’s why I think it’s temporary. But it is different. I don’t think the labor shortages are particularly different; we’ve continually suffered from labor shortages in IT in new areas. But what is really distinctively different is having these processing costs loom so large.

Scott Wallsten:
One thing also, though, is energy use. We’ve started to hear more about that. Do you see that as either being a limiting factor, or else something that you’re also optimistic about?

Catherine Tucker:
I’m optimistic about that. But when I say processing costs, I’m sort of absorbing energy costs into that.

Scott Wallsten:
Yeah, okay.

Catherine Tucker:
I think, let’s be clear, anytime we’re worried about energy consumption, we’re obviously worried about spillovers.

Catherine Tucker:
But I’m also optimistic that what we’ve seen again and again in computing is that we can do more per gigawatt in terms of processing power.

Catherine Tucker:
And I see no reason to think that won’t happen here.

Scott Wallsten:
Before we move on from talking about the inputs, another issue that’s arisen with data lately is that maybe we’re reaching the limit of data that’s available to use for training. First, I guess, do you believe that? And second, what would the implications of that be for how AI develops?

Catherine Tucker:
You know, I’ve heard this commonly mentioned, and I will be honest and say I haven’t really investigated the extent to which it holds.

Catherine Tucker:
But I’ll just say that, generally, what we know is that this is a field where there are many substitutes.

Catherine Tucker:
And I think, the way I sort of think about it is that maybe what we’re saying is that we’ve reached some kind of limit of how much you can use text from Reddit.

Catherine Tucker:
And, fortunately or unfortunately, people on Reddit have only produced so much text in the last 15 years of Reddit’s existence.

Catherine Tucker:
Now, what does that mean? It just means that there are probably other sources that we have not thought about.

Catherine Tucker:
There’s a lot of the world which has not been computerized, a lot of writing that has not been appropriately digitized.

Catherine Tucker:
There’s a lot of other sources of data which perhaps we haven’t thought about yet.

Catherine Tucker:
I don’t believe we’re at the limit of data, but I believe, perhaps what we’re saying is that there’s a limit to easily accessible data which you can obtain unproblematically.

Catherine Tucker:
And we’re just going to be in this transition point where we find substitutes for that.

Scott Wallsten:
Going back for a second to the marginal cost question. Predictive AI has been used for a while, I mean, Google predictive search and others. But marginal cost was not an issue then, or at least hasn’t been for many years. Is that because it was so narrowly focused, or why was it not an issue for that use of predictive AI?

Catherine Tucker:
I’m so glad you said that, Scott, because I think what everyone forgets about predictive AI is that if you’ve ever been typing in your phone over the last few years and it’s helpfully reminded you that you probably wanted to capitalize this.

Catherine Tucker:
Say, thank you. Say, no, I can’t make the meeting.

Catherine Tucker:
And it sort of worked out what you wanted to say.

Catherine Tucker:
You’ve been using predictive AI. You haven’t realized it.

Catherine Tucker:
But I think you’re right. What’s really changed, at least with the most recent vintage of predictive AI, is almost the ambition.

Catherine Tucker:
That we’ve gone from a world where you have to predict a narrow scope of textual responses to a world where suddenly we’re saying that one engine can make a prediction when I ask it to write a song, a sonnet to worship my husband.

Catherine Tucker:
And also to somehow help my 16 year old daughter work out what to say in a history essay about the Weimar Republic. I mean, these are completely different things you want it to do. We’re expecting the same engine to do all of it.

Catherine Tucker:
And I think that’s been the transition, and you’re right to identify it.

Catherine Tucker:
But again, I think it’s good to remind people. This is not that new. We’ve been using this type of technology for years. But, as you say, maybe just in narrow, tightly focused use cases.

Scott Wallsten:
Okay, let’s take this discussion and start moving into the competition aspect of it, which is the bulk of your paper. And there are two parts of it, and one of these next two parts is the bulk of your paper, but the first is how these factors affect the development of AI itself as an industry. We talked a little earlier before we started recording about how fraught the word “market” is. And we don’t really want to say the “AI market.” But how do these costs and demand affect what we see in terms of concentration of, let’s say, AI providers? And then after that, we’ll move into the implications of AI for competition in other aspects of the economy and for competition enforcement. Let’s start with the AI “not market.”

Catherine Tucker:
And the AI “not market,” yes, it’s perfect.

Catherine Tucker:
And this is really interesting in that, if you think about why I think there’s attention from competition authorities in sort of a preemptive, I’m always going to call it preemptive way.

Catherine Tucker:
I think what we’ve seen is we’ve seen that a lot of the players who at least are talking about this technology are firms that have also attracted the attention of antitrust authorities just because they are very successful platforms in their own right.

Catherine Tucker:
And it’s almost because of the firms that are visibly involved. I think there’s almost a presumption that there must be something wrong with this market, or there’s going to be. I think the biggest fear is some kind of winner-takes-all market.

Catherine Tucker:
That, therefore, we want to act preemptively to stop.

Catherine Tucker:
Now, I would just say that when I sort of think about that, we’re all making predictions at the moment about whether it’s winner-take-all, whether there are actions we should take preemptively.

Catherine Tucker:
But if you believe my story about costs, then we’re not going to end up in the world of AT&T of the 1960s.

Catherine Tucker:
It doesn’t seem likely to me that these marginal costs are going to persist in a way that leads to some kind of winner-takes-all market.

Catherine Tucker:
Or, alternatively, that we expect some kind of feedback loop or economies of scale or scope to act in the same way.

Catherine Tucker:
I tend to be a bit more optimistic.

Catherine Tucker:
And I think some of the worry is more coming just from casual optics of this industry rather than the way I view it, which is, you’ve got a lot of large firms that have been successful and are very worried that they’re going to be competed out of existence very soon, unless they invest heavily in AI.

Catherine Tucker:
I read what’s happening like that. But I think the reason we see this attention is just because they’re like, “Oh, more large firms building these products. That must be a problem.”

Scott Wallsten:
You would expect, I mean, they’re getting this attention because of who they are. And in the short run, you might even expect a fairly concentrated market because of these high costs, fixed and marginal, but over time the cost structures will change, and there’s no reason to think it would remain that way. Is that correct?

Catherine Tucker:
That is right. And I would even, I mean, let’s be clear, it is very, very hard to predict. And this is what makes it tricky.

Catherine Tucker:
We know throughout the history of digital economics, it is very, very hard to predict which firm will ever succeed.

Catherine Tucker:
If we were to go back to 2005, and we were going to predict which firm was going to succeed and popularize video content, I don’t think we’d have ever predicted it would be YouTube.

Catherine Tucker:
And we certainly wouldn’t have predicted it would be Facebook.

Catherine Tucker:
I mean, remember, at that time you had the behemoth of MySpace.

Catherine Tucker:
And what’s more, even in the sort of field of college students, Facebook, apologies to Facebook, was like, really, not that great a product relative to some of the other products that were out there.

Catherine Tucker:
They won because they had a concentrated targeting strategy. You just don’t know.

Catherine Tucker:
And I think there’s a good chance, even now, that we could not predict which firm is going to end up winning or end up succeeding most in this industry.

Catherine Tucker:
And again, there are many reasons to think that the large firms that are currently investing a lot in this technology are going to be set up to fail because it’s just going to be so hard for them to do it well. I honestly have a lot of optimism about where this is going, just because of the history of technology, the history of disruption, the fact we know it’s just very hard for a large firm to ever succeed doing something completely different, which is really what these models require.

Scott Wallsten:
Just before we started, Sarah was pointing out to me the rise of these mini models that are meant to run on device.

Catherine Tucker:
Models! Yes.

Scott Wallsten:
Yeah, right? Or phone models. And if rumor is to be believed, that’s what Apple is focusing on so that it can do all the processing on the device. And Microsoft has a new one. Is that evidence of either the changing costs or firms trying to change the cost structure?

Catherine Tucker:
I would say it’s evidence of two things we’ve already talked about. Number one, if firms face an unattractive cost structure, which they are right now, guess what they’re going to do? They’re going to invent around it.

Catherine Tucker:
And that’s what we see happening. And that’s what’s beautiful, right? That’s why it’s exciting to be a technology economist.

Catherine Tucker:
The second thing I take from it is just this point I was making about the unpredictability about what’s going to happen, and why it’s just so hard to regulate preemptively.

Catherine Tucker:
We have no idea whether a laptop-based, phone-based, or smartwatch-based model, or a massive behemoth of processing power, is going to be the right way to do these models, right? We just don’t know.

Catherine Tucker:
And we face an amazing amount of uncertainty about what the structure is going to look like in this industry, which, again, I think makes it very hard to even think about what you would do preemptively if you’re a regulator.

Scott Wallsten:
Let’s now get into the meat of your paper, which is what AI means for competition authorities, for enforcement. Which is an issue that is complicated and has not really been thought about much at all, except by you and a select few others, all of whom you work with. Bring us into that discussion.

Catherine Tucker:
Yes, I’ll just point this out as an anecdote. I wrote this paper. It’s going to be appearing in the Oxford Review of Economic Policy.

Catherine Tucker:
And I was very proud of this paper. But I was a bit told off because I realized how American my focus was in that I was very much focused on how we do antitrust enforcement in the US.

Catherine Tucker:
Which is that we regulate via the courts and via court cases.

Catherine Tucker:
And I was told very certainly by my European colleagues that I didn’t have enough about preemptive regulation and what regulators should do. I’ll just apologize for that. I’m going to be very US-centric in this conversation.

Catherine Tucker:
And what I was wanting to say is that we haven’t thought about it enough, because I think we’re very distracted about how antitrust is going to apply to AI.

Catherine Tucker:
But we haven’t thought enough about how AI is going to affect how we do antitrust enforcement, in particular, in the US context.

Catherine Tucker:
Where do we see enforcement happening? It’s often the DOJ or the FTC leading a case.

Catherine Tucker:
And I just wanted to emphasize that if you just think through your typical antitrust case, everything is going to be upended. All the usual techniques we use to try and do enforcement are going to go away.

Catherine Tucker:
And I’ve just heard very little conversation about this yet.

Scott Wallsten:
Why? Tell us why it’s going to go away.

Catherine Tucker:
Let me just give you some examples.

Catherine Tucker:
The typical antitrust case relies on a senior executive writing an email saying something.

Catherine Tucker:
At the DOJ, you’re looking for the email that says, “We are going to kill this competitor. They’re a threat to our business model.” Then you’ve got that email, and that becomes the central piece of your case.

Catherine Tucker:
Now we’re moving into a world where a lot more text is generated by AI, and those kinds of “hot documents” serving as major proof of intent are just not going to exist in the way that they used to.

Catherine Tucker:
I mean, first of all, would generative AI ever write that? It might, it might not. If it was the generative AI that wrote that, well then, how can you use that really as intent? What does it really tell you about the firm?

Catherine Tucker:
And the point I was trying to make in this paper is that we rely so much on documentary evidence to try to establish intent in the US. But in a world where this is all written by machines, is it really going to be as informative, or can we use it as evidence in the same way we have in the past?

Scott Wallsten:
But it’s always questionable as to whether that’s actually evidence, because, you know, you want the CEO to be saying, “Let’s kill the competition,” right?

Catherine Tucker:
Again, I’m just saying that, obviously, I’m an economist. I believe that all cases about competition should be just about substitution, and we write about substitution all the time.

Catherine Tucker:
And that’s how we should be thinking about it. But unfortunately, Scott, the world doesn’t think just like economists.

Scott Wallsten:
You know, it’s a terrible situation.

Catherine Tucker:
It’s a terrible situation, but not everyone thinks that all competition policy should be thought about in terms of available substitutes and the competitive constraints they create, which is how I tend to think about it.

Catherine Tucker:
Instead, it is the case that at, say, the DOJ, there are a lot of lawyers relative to economists, and there tends to be a lot of focus on discovery, a lot of focus on documents, and on trying to glean intent from documents.

Catherine Tucker:
And I’m just saying that that is going to become more and more difficult, less straightforward than it has been in the past.

Catherine Tucker:
Now, you could be cynical, saying, “What does that leave us with?” Or at least, it leaves us with economists.

Catherine Tucker:
But I do think it’s something we haven’t really thought about and so on.

Catherine Tucker:
And then, we could also talk about, well, what does it also mean if you’re a firm, what does it mean if you’ve got a firm and you’ve got all this predicted text writing your emails, writing your documents, summarizing your Zoom meetings?

Catherine Tucker:
Again, how do we think about that in a world of, say, vigilant antitrust enforcement?

Scott Wallsten:
But you also discuss the effects on how we think about, or how antitrust officials should think about efficiency. Because if pricing is set by AI, how does that affect how you think about that?

Catherine Tucker:
Yes, this is one of my passions. I think I’ve now mentioned at least three times this whole concept of algorithmic pricing.

Catherine Tucker:
I do teach pricing. And one of the things which amazes me is that the economics discourse on algorithmic collusion, algorithmic pricing, is so far away from industry practice in that a lot of the economics tends to be, “Oh, we built a model.

Catherine Tucker:
On my computer, in my lab.

Catherine Tucker:
And I ran it a million times with a certain specification, and it learned to collude.”

Catherine Tucker:
But there’s not enough looking at what algorithms actually do and what firms are trying to achieve with any kind of pricing algorithm.

Catherine Tucker:
And again, I think it’s just very, very difficult to apply the usual cartel style of analysis to a world where all pricing is set by algorithm.

Catherine Tucker:
And again, that just doesn’t seem something we’ve really grappled with, at least in the US, about how we think about that world.
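The kind of lab simulation Tucker describes, two pricing algorithms trained against each other until they learn to sustain prices, can be sketched in a few lines. Everything below, the price grid, the demand function, the learning parameters, is purely illustrative and not drawn from any actual study; real experiments in this literature use richer demand systems and millions of iterations.

```python
import random

# Two independent Q-learning "firms" repeatedly pick prices from a small grid.
# Each firm's state is simply the rival's last price. Hypothetical numbers.
PRICES = [1.0, 1.5, 2.0, 2.5]  # illustrative price grid
COST = 0.5                     # constant marginal cost

def profits(p1, p2):
    """Toy differentiated-products demand: quantity falls in own price,
    rises in the rival's price."""
    def qty(own, rival):
        return max(0.0, 2.0 - own + 0.5 * rival)
    return (p1 - COST) * qty(p1, p2), (p2 - COST) * qty(p2, p1)

def simulate(episodes=20_000, alpha=0.1, gamma=0.9, seed=0):
    rng = random.Random(seed)
    n = len(PRICES)
    # One Q-table per firm: rows = index of rival's last price, cols = own action.
    q = [[[0.0] * n for _ in range(n)] for _ in range(2)]
    state = [0, 0]
    acts = [0, 0]
    for t in range(episodes):
        eps = max(0.02, 1.0 - t / (0.8 * episodes))  # decaying exploration
        for i in (0, 1):
            if rng.random() < eps:
                acts[i] = rng.randrange(n)          # explore
            else:
                row = q[i][state[i]]
                acts[i] = row.index(max(row))        # exploit
        rewards = profits(PRICES[acts[0]], PRICES[acts[1]])
        new_state = [acts[1], acts[0]]               # each observes rival's price
        for i in (0, 1):
            best_next = max(q[i][new_state[i]])
            q[i][state[i]][acts[i]] += alpha * (
                rewards[i] + gamma * best_next - q[i][state[i]][acts[i]]
            )
        state = new_state
    # Prices played in the final period (may still contain exploration noise).
    return PRICES[acts[0]], PRICES[acts[1]]
```

Whether such learners settle above the competitive price is sensitive to exactly these design choices, which is Tucker’s point: a result that emerges from one specification run many times in a lab says little by itself about what deployed pricing algorithms are actually doing.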

Scott Wallsten:
What is the way forward on this? I mean, it sounds almost as if we want to start thinking about antitrust tools, “from scratch” is the wrong way to put it, but, you know, fundamentally differently somehow.

Catherine Tucker:
Well, I mean, it’s going to have to. Let’s just sort of think about what’s going to have to change.

Catherine Tucker:
You know, let’s see what’s going to change. Well, you know, so much, everything’s going to change in the legal profession, because so much discovery is going to be done by these kind of models.

Catherine Tucker:
You know, as well as, are we going to have generative AI models writing the documents and then generative AI models parsing them? We’ve got to change a lot of this.

Catherine Tucker:
I think what’s really going to have to change in terms of antitrust enforcement and, you know, consumer protection, is that there’s going to have to be a deep willingness to try and understand how these algorithms work.

Catherine Tucker:
And I’m thinking here about sort of pricing cases where there’s worries about collusion or some kind of cartel.

Catherine Tucker:
But, you know, just to use that as an example. It’s going to require so much more technical expertise than we’ve ever had before in antitrust authorities or competition authorities.

Catherine Tucker:
You know, it’s really going to lead to quite a transformation in terms of what it would take to be a policy enforcer.

Catherine Tucker:
And I haven’t really got the impression yet that we’ve thought enough in the US about making sure we have these skills in our competition policy agencies.

Catherine Tucker:
I came across this example in the UK, where they did try to build up a data science team precisely to do this, led by a very able gentleman named Stefan Hunt. But now he’s left, and I don’t know how it’s going.

Catherine Tucker:
I know it’s really, really tough to do.

Catherine Tucker:
But, make no mistake, almost the very meaning of being a competition policy enforcer is going to change in terms of the skills you need.

Catherine Tucker:
And we’re going to need to have that transition very quickly in a way which I don’t know we’re prepared for.

Scott Wallsten:
And it seems like, yeah, well, I guess, except in the UK, we’re not really even having that conversation yet. To the extent that we talk about anything related to that side of AI, people just say we need guardrails, and then don’t define what in the world that means. But we’re not even at the point of being able to…

Scott Wallsten:
We can’t even think about what that means yet, because we don’t understand it well enough. Is that right? I mean, we need a deeper level of thinking.

Catherine Tucker:
It’s a different, yeah, it’s a different level of thinking. And I think, you know, I don’t know how the transition is going to happen. I mean, at the moment, we’re having a few initial cases.

Catherine Tucker:
Whether it’s going to be the case that competition policy folks or enforcers start to notice the need to have more technical expertise in their teams, or whether, instead, we’re going to have sort of business as usual until we start making big mistakes.

Catherine Tucker:
And then we have to, you know, have sort of catch up time.

Catherine Tucker:
I wish I could predict how it would go, but I’m just saying as of yet, I haven’t quite seen the large change in, I guess, skills identification or needs, the real building up of skills in our big agencies to allow this to happen.

Scott Wallsten:
If you were to start this from scratch, and it sounds like somebody needs to start from scratch and think about how it would be done. And let’s assume that interdisciplinary groups could work together well, and we know that that’s really hard. What kind of people would you want to see working together on this problem?

Catherine Tucker:
I mean, let’s think about what we would ideally want and what I’ve seen, what would work.

Catherine Tucker:
We’re, as you say, we’re in this imaginary world where lawyers and economists and computer scientists can all work together.

Catherine Tucker:
And what would our different roles be in this new world?

Catherine Tucker:
Well, in some sense, we’re going to have to cede a lot of authority to the computer scientists who are going to explain to us not just theoretically how the algorithms are working.

Catherine Tucker:
And by algorithms I mean something very general. I mean, a predictive algorithm that’s predicting prices that the firm should charge.

Catherine Tucker:
It can mean a predictive algorithm that’s been developed for that firm which is going to be, say, coming up with their sales strategy, their sales pitches, or so on. So anything which could be potentially problematic.

Catherine Tucker:
And we’re going to need that computer scientist to explain not just how it works in theory, but also, and this is the hard thing, to actually start to audit it, work out how it works in practice, and generate data to really help us understand the patterns.

Catherine Tucker:
And then you’re going to have the more usual pattern of economists trying to discern, well, what do these patterns mean for fundamental questions of substitution and impact.

Catherine Tucker:
And then you’re going to have the lawyer making the, I would say, even harder determination of whether this is problematic, given the current state of the law.

Catherine Tucker:
And these are all very different skills from how we see antitrust being done right now.

Catherine Tucker:
And you notice, I’m talking about data generation, I’m talking about auditing. These are just different words, ones we haven’t really used before.

Scott Wallsten:
Does this, in a sense, put us back somewhere around 1890, when we were trying to think of new approaches and still didn’t have the information to test those approaches?

Catherine Tucker:
I don’t know if they quite thought about it that way. But if you think about the innovations we’ve had in how we do antitrust in the last 40 years, we’ve had the SSNIP test, the “small but significant non-transitory increase in price” test. That’s been our innovation, I would sort of say, at least on the economic side.

Catherine Tucker:
You know, and it’s still got its flaws. And we’re still working out those flaws.

Catherine Tucker:
And if you think about it, that’s just a small tweak. It’s an improvement, not really a change in process. I think what scares me most about this sort of change is that it’s a really big shift in process.

Catherine Tucker:
It’s not a shift in tools. It’s a shift in process, and that tends to be the hardest thing to ever happen within the way we do something, and within a system.

Scott Wallsten:
Hmm.

Sarah Oh Lam:
Do you have a view of whether horizontal or vertical mergers might play a bigger role in this space, or, you know, trying to fit into the current paradigm, or is it going to be…?

Catherine Tucker:
It blows your mind, right? Because, you know, we’ve just gone through this uneasy period where we’re trying to understand platforms through our traditional models of vertical competition.

Catherine Tucker:
Which, in my opinion, doesn’t quite make sense either.

Catherine Tucker:
And now we’re going to not just take a platform, but we’re going to take a platform with a whole lot of generative AI on top of it and move it into our traditional horizontal and vertical models.

Catherine Tucker:
You know, I think in the end this is what I’d say.

Catherine Tucker:
Guess what, you know what I’m going to say, I’m going to say for both.

Catherine Tucker:
For both vertical and horizontal questions, we’re often thinking about substitution. And as long as we can keep that clearly in mind and still think that one of our jobs as economists is to think about substitution and how it leads to competitive constraints, we’re going to be fine. I think what worries me about this world is that we’re going to get so distracted by the technology that we forget that the frameworks we used to use, which were appropriate for the 60s and 70s, were built basically as a way of making substitution understandable, tractable, something you could use in a court, and forget that it’s all about substitution.

Catherine Tucker:
Does that make sense, Sarah? That’s what I worry about when we try and use these frameworks which were designed for a different world, that we can lose the underlying economics of them.

Sarah Oh Lam:
And I think also, with our understanding of data as an asset, we need empirical evidence of whether there are competitive…

Sarah Oh Lam:
Yeah, I mean, whether it’s substitutable, whether synthetic data can substitute for real data, you know. Those are questions that are worthy of empirical answers, and the same goes for our current platform questions, and going forward too.

Catherine Tucker:
Yeah, I mean, it’s amazing that after 20 years, we’re still seeking and building evidence on very fundamental questions about data, right, which is, what are the substitutes for it.

Catherine Tucker:
You’ll notice again why I’m warning against these frameworks. It’s now very old-fashioned, but for a 15-year period I felt I was fighting this battle: there was a tendency to treat data as an essential facility, just automatically, rather than thinking about it in terms of substitution. And I think that’s going to become even worse in our new world, where we’ve got data as an input. We’re going to conflate inputs and outputs, which I hear happening again and again with generative AI, and we’re not going to think properly about substitution. I’m worried.

Scott Wallsten:
Will that confusion just delay our thinking about it properly, or will it actually be harmful? Although delay can be harmful too.

Catherine Tucker:
You know, I think we’re just going to end up with a lot of confusing…

Catherine Tucker:
My prediction is that we’re going to have a lot of confusing decisions coming through.

Scott Wallsten:
Usually true.

Catherine Tucker:
That’s my prediction, and it’s probably a safe position. And we’re going to be very confused for a long time. And if you’re asking, you know, where the confusion is going to come from.

Catherine Tucker:
I’m going to predict it’s going to come first of all, because there’s a misunderstanding of data.

Catherine Tucker:
We know that there’s always been a misunderstanding of data in the digital economy and how it affects competition, and that will persist and get worse.

Catherine Tucker:
Added to that, I see a continual sort of conflation between data as an input and the output of generative AI, and they tend to be talked about in the same way.

Catherine Tucker:
And that again adds another layer of confusion to the conversation.

Catherine Tucker:
Scott, you were right to mock me for saying that I’m making a bold prediction that there will be confusing decisions. But I’m going further. I’m making the bold prediction that there will be confusion because of a misunderstanding of data and a conflation of inputs and outputs. That’s my prediction.

Scott Wallsten:
I want to go back to the point your European friends made that this is very US focused. And I think I’m probably going to be making the mistake that you said people make by focusing too much on the technology.

Scott Wallsten:
And perhaps this question is too clever by half, but the European approach to antitrust is, as you said, more predictive.

Scott Wallsten:
And AI is about prediction.

Scott Wallsten:
Is it conceivable that it’s more, as a tool, it’s more applicable to a European style of antitrust enforcement?

Catherine Tucker:
That’s really interesting. In the article, it’s forthcoming. I, of course, did the cheesy thing which you have to do as an academic, which is to get the AI to tell me what it would do.

Scott Wallsten:
Of course, it’s required.

Catherine Tucker:
Oh, it’s a requirement. It’s in the appendix of the paper.

Catherine Tucker:
You know, I’m swelling inside, because AI is an objective, wonderful tool.

Catherine Tucker:
But the idea of at least the European policy makers I’ve met trusting AI enough to do that seems unlikely at the moment.

Scott Wallsten:
As it does.

Catherine Tucker:
I’ll say unlikely at the moment. I think in general, as we know, Europe tends to be a little bit more distrusting of technology and what it can achieve.

Catherine Tucker:
And I think we’re not going to see it being adopted as a widespread tool of antitrust enforcement in the near future. That’s my prediction.

Scott Wallsten:
Well, we’re running out of time. But before we stop, are there other things that keep you up at night about AI and competition, or the economics of AI?

Catherine Tucker:
Yes, I’ll tell you what I’m worried about quite honestly.

Catherine Tucker:
I’ve headed up the NBER Economics and AI group for 10 years.

Catherine Tucker:
Every year we have more and more questions that we have to answer.

Catherine Tucker:
I haven’t seen the same explosion of economists working on these questions.

Catherine Tucker:
I worry that we don’t have enough substantial economic research in this very, very fast-moving field to really tell us what to do.

Catherine Tucker:
And I think, you know, I felt this about privacy, I felt this about many things when we’ve made regulations preemptively.

Catherine Tucker:
But I’m very worried right now that we’re going to make so many decisions that will shape the nature of industry structure, when we just don’t have any evidence about them. And it strikes me as unusually bad this time.

Catherine Tucker:
I mean, just in this talk, think about how many times I’ve said I don’t know, I’ve got no idea.

Catherine Tucker:
You know how many papers Sarah’s told me I should have written which I haven’t.

Catherine Tucker:
We just don’t have really as many economists as I would like working on these areas. That’s probably what keeps me up at night, but somehow…

Catherine Tucker: Somehow we haven’t grown enough as a group of economists working here.

Scott Wallsten:
What do you think explains that? I mean, there’s certainly demand for that kind of work. Why are we not supplying it?

Catherine Tucker:
I have absolutely no idea, Scott. I always think I’ve got the best job in the world being able to write papers where there’s demand for them when no one else is writing them, where I could say something new and solid.

Catherine Tucker:
And I just think we have not done a good enough job of trying to attract young talent to the area, honestly.

Catherine Tucker:
Which is completely on me, obviously.

Catherine Tucker:
That’s not me.

Scott Wallsten:
I wouldn’t say completely.

Catherine Tucker:
Yeah, it’s just strange to me if you think that we’ve got the same number of health economists, the same number of labor economists, the same number of macro economists, and the same number of digital economists, when the digital economy has exploded in the last 10 years. I think that’s what I was trying to say.

Scott Wallsten:
Yeah, hmm, that’s fascinating. And hopefully, something that will change over time. Catherine, thank you so much for talking with us today. It was fascinating, and I look forward to your paper being in print, and everybody should read it as soon as it is. And I hope we can talk to you again soon.

Catherine Tucker:
Well, it’s lovely to talk to you as ever, Scott. Bye bye then.
