Monetizing AI: Subscriptions, Ads, or Something New with Catherine Tucker


Scott Wallsten: Welcome back to Two Think Minimum, the podcast of the Technology Policy Institute. Today is Tuesday, October 28, 2025. I’m Scott Wallsten, President of TPI. I’m here with my co-host, Tom Lenard, Senior Fellow and President Emeritus. Our guest today is Catherine Tucker, Professor at MIT Sloan and one of the sharpest thinkers on the digital economy. She’s back on Two Think Minimum for the third time. We talk about an important part of the AI debates that needs more attention: monetization. Is the future subscriptions, ads, or business-to-business tools? And how would each model shape prices, switching costs, competition, and ultimately consumer welfare? We also revisit lessons from the early web (if you’re old enough, you might remember channels, for example) and ask what policymakers might consider as firms experiment with pricing and product design. Here’s our conversation.

Scott Wallsten: Welcome back, Catherine.

Catherine Tucker: It’s lovely to be back.

Scott Wallsten: So let’s start with this huge question about AI monetization, which seems like a pretty important topic right now, given the enormous valuations AI companies have and the massive deals they’re entering into, committing to spend money they don’t have. And I know you’ve been thinking a lot about that, including a piece you wrote recently on monetization and consumer welfare. Let’s talk about that a little bit.

Catherine Tucker: Yes, of course. Well, maybe I’ll just explain at a high level why I am worrying about this. I wrote this piece as part of a transformative AI conference where a lot of economists got to worry about the existential threat posed by AI to mankind. And it turned out that 98% of our conversations were about labour markets. And then I got to have the fun role of standing up and saying, hang on, as well as supplying labour, humans also consume stuff. And if we’re going to worry about AI and how it’s going to shape humankind, we need to think about what it’s going to be like as a human to consume AI. And that’s when I started to think about questions familiar to this audience about market structure, about competition in AI markets, about where we’re going. And then I had a big point to make about monetization.

Scott Wallsten: Before we continue, why do you think the economists were so focused on labor markets? Is it a data issue? Because of the searching-under-the-streetlight problem?

Catherine Tucker: That’s interesting. It could be a data issue. I think, in some sense, this transformative AI conference was all about what happens if AI develops so quickly that it replaces humans in the labor force very quickly. And so maybe we were, in some ways, scoped to think about labor markets. But I do think if you look at most of the work on AI by economists, there’s just been a big, big focus on labor markets: trying to count tasks that people have done which could be replaced by AI, and then another economist coming in and saying maybe we don’t really want to think about tasks, we want to think about how tasks will substitute for other tasks. So, there’s been a lot to discuss there. On the other hand, the actual business models, monetization, how AI firms are going to be competing in this new world, that just hasn’t been talked about. And you’re right, we don’t have any data, but we do have models, and we do have a long history of technology, so you’d think it should be an important part of the conversation.

Scott Wallsten: So, I do want to come back to the labor issues, and the measurement issues in particular, because you’ve talked about those, but let’s turn to the monetization problem. So, tell us about the piece you wrote and why you wrote it.

Catherine Tucker: Right, so in the piece I wrote, I was really trying to think about a certain perspective. Let’s imagine we’ve got a very simplistic trade model. Then what economists are worrying about is that markets could have a double whammy. And the double whammy could be that workers no longer have jobs, a monopolist comes in and charges a lot for AI, and as a result, takes away any welfare gains they should have enjoyed from the productivity increase from AI. That we could have no jobs, and that there could be one monopolist controlling all AI, who makes a lot of money for their shareholders, but everyone else doesn’t get any of the gains. That’s the big one. And I’m saying, look, in the end, doesn’t that depend a lot on the monetization model? The concern seems to depend on the idea that we’re all going to be paying a monthly subscription for AI, and that that monthly subscription can just go up to infinity. That’s sort of the idea of the model. And I was saying, actually, if you look at the history of technologies, the monetization model actually matters a lot for welfare implications. And I can give some examples of that to explain what I’m worrying about.

Scott Wallsten: Yeah, please do.

Catherine Tucker: Yeah, so the kind of example I’m worrying about is that, you know, let’s go back to the history of the internet. And if we go back to what is probably a favourite of everyone listening to this podcast, which is the Microsoft case, if you go back through the documents, what’s really interesting is how they got monetization completely wrong for how they thought the internet might be monetized. There was this big worry that the way browsers were going to be important is that they would act as huge funnels into the internet. And as a consequence, a firm like Microsoft could extract a lot of money from Disney for highlighting it in what was called a “channel.” It’s hard to remember this, but back in 1997-8, there were such things called channels, and if you had a browser open, there would be a drop-down menu. You could go to your AOL, you could go to your Disney. And you could navigate the internet that way. And there was a worry that that was going to be the key way the internet was monetized, that Microsoft could extract huge amounts from Disney for being placed first, and as a result, you would have fixed costs in the creative market, leading to more concentration there, less quality, and so on and so on. So that was the big worry, and that was the big welfare story people were worrying about. Now, as we know, that’s not what happened on the internet. I don’t know when channels disappeared, but I think that drop-down menu disappeared from browsers pretty quickly, because it really wasn’t a very good way of navigating the internet. And instead, we ended up with a largely advertising-supported internet, where actually, if anything, we have a lot more competition in terms of creative content, barriers to entry are lower, and all of these good things.

Scott Wallsten: Well, let me push back a little bit on that. Because, I mean, with the channels, no question, most people don’t remember channels at all. But people still complain, whether justified or not, that, maybe it’s been replaced by Google Search, and they worry about what is at the top of the search list. And it sounds like a similar concern.

Catherine Tucker: Right, so that’s, I mean, that’s a big one. It’s really interesting, because in some sense at this point we can look at a lot of the interest in the EU. And I think a lot of that interest in the EU tended to be about what happened when Google entered a market, such as Google Mortgages. If Google enters the mortgage market and puts its own results first, doesn’t that, sort of, unfairly shape competition? Right. Now, I’ll point you to a paper by my co-author, Leslie Chiou, where she actually found very little when she tried to study it. It was one of those unpublishable papers, because she looked at the introduction of lots of these products, and, you know, most of these Google products went away because they just weren’t very successful, and she found out why: no one was clicking on them. You know, I’m not saying it’s not a concern we should worry about in some circumstances. But empirically, at least, what we’ve seen, we haven’t had to worry about so much. But, you know, it’s interesting, because again, it becomes this key question, right? How a technology provider monetizes itself is actually shaping how we think about welfare effects. Because let’s suppose the EU had been right, and what had happened was that Google captured all mortgages through the cunning strategy of listing them above other search results. And, you know, assuming that people buying something which cost them tens of thousands of dollars would never go to the second result, I imagine they capture that market, and as a result, that becomes an important part of revenues. Again, you know, that’s going to have welfare consequences. They will be different from the channel drop-down menu world I was talking about, in terms of where they’re going to fall.
And so, I would like to say what the EU was worrying about, in terms of steering, is exactly an example of what I’m talking about, which is that you have to really project what the monetization model is going to be to understand welfare effects. And I think at the moment, that’s just not what we’re focusing on. And that’s what I’m trying to say we should worry about.

Scott Wallsten: Okay, sorry, I interrupted you just as you were heading into how advertising became…

Catherine Tucker: Oh, yes, the big point, which by now, hopefully, has been previewed, is just this idea that, you know, we had no idea. What we saw with the internet is that there ended up being a lot of monetization by advertising. In the end, the welfare consequences depend on the extent to which you hate advertising, I suspect. And of course, there are some people who really hate it, but there are many people who tolerate it, and some people who like it. And so, as a result, the welfare consequences are very different from what we predicted. And what I’m trying to say about AI is that as economists, we need to be scrutinizing the data for clues about what monetization strategy is going to prevail in our new AI world, but also thoughtful, when we write down models, about asking, well, how is this going to be monetized? Because essentially, that is going to shape any consumer welfare implications.

Catherine Tucker: So, you know, it’s really interesting to me, because, as you know, I’ve been studying digital advertising at exactly the right time for the last 15 years, when it’s been a big engine of growth of our digital economy. Now, it is less clear to me how advertising is going to support the new wave of generative AI. Right now, we have this strange conundrum of firms that made a lot of money through paid search deliberately not making money from paid search by putting the AI-generated results first. Now, I’m not privy to why it is that they are doing that, why it is that they’re deciding to lose money on advertising in order to prioritize AI snippets. But it does suggest to me that it’s going to take some time to work out how to support AI through advertising in the way we’ve seen with the sort of user-generated content that characterized Web 2.0.

Scott Wallsten: So how will consumer welfare differ based on the different monetization models? Though, of course, it’s all endogenous.

Catherine Tucker: Oh, yeah. You know, in some ways, it’s wonderful to be in familiar territory, right? If it’s online, if it’s digital advertising, then most economists are very optimistic. And why is that? Well, if you think about it, we have a model of advertising where it’s, like, purely informative. And as a result, having more advertising and more information just increases everyone’s welfare. So that’s the best possible world. And even if you don’t believe in that world, then you say, well, maybe people have different preferences over advertising. Lots of people seem to like it and find it useful. Lots of people are just very good at ignoring it. And then there’s a few people that just don’t like it, and they, as a result, sometimes choose, like, the ad-free option. We know from our experience, if you think about, say, Facebook in Europe, that it’s a minuscule fraction of a percent of people who actually ever choose to pay for a product rather than see ads, so that gives us some idea of the welfare implications. In other words, what’s nice about online advertising is that it doesn’t have the clear-cut negative welfare implications of, say, one firm controlling a monthly subscription price and raising it above competitive levels, right? Which I think is how a lot of economists are worrying about it.

Scott Wallsten: And so, how would the distributional effects differ under different monetization models?

Catherine Tucker: Okay. So, let’s imagine, let’s imagine a few worlds. We can imagine the monthly subscription model world. And that would be quite a straightforward world if you’re an IO economist. In that we’ve got a clear price, and then the question will be, are there sufficient barriers to entry? Are there sufficient switching costs? Which would mean we think that price could, in the long run, be supra-competitive. Now, that’s reasonably hard to predict. I think, from what we’ve seen now, there’s little evidence right now of the huge amounts of, sort of, network effects or scale economies we might worry about. When it comes to barriers to entry, what is just remarkable in some ways is how all the sort of predictions we might have had even two years ago about how difficult it was to enter this market have been completely wrong yet again. But again, in that sort of world of monthly subscriptions, firms competing on monthly subscriptions, I think we know as economists exactly what to look for, right? We’ve got to look for switching costs. And then we’re going to look for some kind of network effects or economies of scale argument, which might preempt entry. That’s going to be our big worry. I could go on to other worlds, too.

Scott Wallsten: Yeah, please do.

Catherine Tucker: Yes, so the second world, what we might worry about is, as I say, one where there’s another sector of the economy that pays. And I was suggesting a world of online advertising. And there, I think, it’s going to depend a lot on distributions of preferences about online advertising and how useful people find it. In terms of the generative AI-enabled market, I tend to be positive about that world. You know, other people, who have more of a viscerally negative reaction to online advertising, have a less positive view. But I think we could all get together and agree it will certainly have far more uneven consequences for welfare than just a straightforward price. And then I think you’ve got the third model, which is saying that, really, in the end, this is not about end consumers. In the end, most generative AI or AI services are going to be purchased by firms. And then it becomes more of a vertical relations story, more of a, say, pass-through-style story, and a question of how competitive both the upstream and downstream markets are. And so, what I want to say is that the monetization model, the way that this technology ends up getting monetized, is going to have huge consequences for how we think about welfare in those markets, and what we as IO economists should be worrying about in terms of these markets.

Scott Wallsten: Also trying to think about what we should worry about, let’s turn to the supply side a little bit. One of the things that’s true about AI, and that you talked about last time you were on the podcast, is that typically in digital markets, we think of goods or services that have very high fixed costs and low or no marginal costs, right? And in AI, that’s not exactly true, because you’ve got both: there’s a true marginal cost, and high fixed costs. So, does advertising only work in a low marginal cost environment? Or is that irrelevant?

Catherine Tucker: You know, I think my heart, as someone who teaches marketing, says no, it’s pretty irrelevant. And the reason I say it’s pretty irrelevant is that what I haven’t seen yet is any evidence that anyone has worked out how you do online advertising well in a way which can monetize, say, for example, generative AI. And, you know, if we sort of think about our stories, right? About, say, early versions of the search engine: AltaVista’s way of doing advertising was, in some sense, far too obtrusive, was not really thought out in terms of what would complement the search results. If you think about, for example, Meta’s early types of advertising, again, not particularly well thought out. They were trying to look like a search engine, which didn’t make any sense if you’re doing user-generated content. So, I’m just giving you examples, as at the moment, I think we’re at the stumbling phase of online advertising, where no one’s really cracked how you might do it in a way which can actually monetize generative AI. And I think it’s going to be that cracking of the code which matters far more than, sort of, fixed costs or not as a story, right?

Scott Wallsten: So, cracking the code probably requires lots of experimentation on the part of the various companies, but there’s a kind of urgency for money. They’ve promised investments that it’s hard to understand how they can afford. Maybe outside investors will still believe in the future enough that they don’t have to be able to monetize right away. But do you see this experimentation happening, and can they do it with the sort of speed we might expect them to need?

Catherine Tucker: So, I mean, again, I want to be clear, I have not talked to any executives who are worrying about this, or the ones I have talked to, I think, have told me to never, ever repeat what they’ve said on a podcast, so I have to be a little bit careful. But, you know, what I’ve been told is that, in general, at the highest level, experiments are being run which, in the short run, make no sense from a monetization perspective. And I think there’s such a fear of being perceived as not being AI-forward. But as a result…

Scott Wallsten: What do you mean by AI-forward?

Catherine Tucker: Well, what I mean is that if you, for example, say, oh gosh, it seems that this experiment is suggesting that AI is getting in the way of my traditional monetization model, you might usually, if that was an A/B test, say, no, I’m not going to proceed. That’s a really silly idea. I’ve just discovered this doesn’t make any sense for making money. But I think a lot of firms are so worried about being the proverbial Kodak, or Polaroid, or some kind of firm stuck on the old S-curve that doesn’t survive disruptive technology. They’re like, it’s actually a bonus if we’re losing money, because that’s what we’ve learned from previous technology cycles. So I think, because firms have really learned this lesson, and maybe it’s a good lesson, it’s what we teach in business school, that often firms fail to jump between technology cycles because they’re so worried about protecting their existing monetization, we haven’t seen that kind of experimentation in new ways that you might try to monetize this kind of technology.

Scott Wallsten: Does that make you worry about the amounts of money being invested, and the investments being promised?

Catherine Tucker: Well, I teach pricing, so I always worry about this. Students will always tell me that they’re going to be, you know, much like WhatsApp: be in an industry which is really hard to monetize, not really have a good plan for monetization, and be acquired for billions of dollars. That’s often the plan. So I always worry, because that doesn’t sound like a plan to me as a pricing professor. But at the moment, I’m honestly just not even seeing some of the experimentation I would hope to see for us to really be able to predict what successful monetization might look like. Instead, what I’m seeing is a lot of, sort of, software as a service, a lot of what I would call cost-based pricing architectures, where firms are simply pricing queries. Why? Not for any intellectual reason, but because that’s how their cost structure works, rather than out of any actual thoughtfulness about monetization.

Scott Wallsten: That’s kind of worrisome for the future.

Catherine Tucker: It is and it isn’t, Scott. I mean, what is worrisome is if we have spent a lot of money and wasted a lot of potential investor dollars on things which could be put to more productive uses, exploring vintages of the technology which will never be self-sustaining. So I think the only reason I would worry is if we’ve just wasted a lot of money on a type of technology which could never be self-sustaining. My more optimistic view is that maybe this is just a long road to getting the technology to a place where someone eventually has to think about monetization.

Scott Wallsten: You’re generally an optimist on these things. Is that still true?

Catherine Tucker: Oh, I am such an optimist, Scott. You know, it’s very easy in technology policy circles to be constantly fretting about the bad things about technology. But teaching at MIT, for the first time ever, in my executive MBA class, I would say over half of their job responsibilities are now related to AI. They are all trying to pioneer amazing innovations and ventures in the world of AI. For example, one of the lovely things about teaching in the executive program at MIT is that we have a lot of doctors, really impressive doctors from some very impressive hospitals in the Boston area. And the things that they are thinking of doing with AI to try and improve healthcare outcomes, I mean, they inspire me every day. Does that make sense? So, I am in this world where I see all the inspiration and good things that AI can do, and I know in DC there’s a lot of fretting. But I wish in some ways I could get some people from DC to come to my class at MIT, to just hear about all the good things that are being done.

Tom Lenard: So, are there any things that you are worried about?

Scott Wallsten: In terms of, you want to force the pessimist out?

Catherine Tucker: You want to force the pessimist out of her.

Scott Wallsten: Well, wait, before you, while you’re thinking about that.

Catherine Tucker: Okay, okay, I have an answer. It’s not very well formed, but okay, so this is what I’m worrying about. Tom, you’re gonna force the pessimist out of me.

Tom Lenard: I’m an optimist.

Catherine Tucker: It’s that, you know, I was in Silicon Valley in 2000.

Scott Wallsten: This is what I was going to ask about, this comparison.

Catherine Tucker: Yes. You know, I remember, as I say, I was at Stanford, and you would get these sort of flyers. I don’t know why we had flyers in the internet age, but we had flyers saying, quit your job as an undergraduate at Stanford, here’s $250,000, we’re going to make a lot of money selling potatoes on the internet. Or, like, pet food on the internet. There were all these things that should not be sold over the internet being sold over the internet. And if I was to worry about anything, it’s repeating some of those mistakes, because, if you remember, it was a very painful shakeout from that first we’re-selling-pet-food-on-the-internet boom. I’ve seen fewer signs of it, at least at MIT, I hope, but I’m not saying that elsewhere there hasn’t perhaps been a lot of investment made in AI strategies which end up being somewhat redundant. And so, if I was going to worry about something, it would be the sort of feeling that we have to do something with AI leading a lot of firms to invest money in perhaps not the most productive uses of AI. That would be my worry. So if I had to be pessimistic, Tom.

Scott Wallsten: But that really, I mean, that does, I think, also support your optimistic point of view, because I was at Stanford at the same time, and I remember sometimes I just felt like an idiot, because there I was, living on $15,000 a year, while everybody was becoming a multi-millionaire. But on the other hand, some of the projects were so stupid.

Catherine Tucker: Really, really idiotic, and there were these flyers up, and it was like, we are going to change, I remember one, we are going to change car buying. Quit your job, and there were a bunch of undergraduates, and the money, as you say, being offered when we were on our student stipends was unbelievable. But maybe the fact they were flyering grad school econ people’s desks shows you they didn’t have much of a strategy, I’m not sure.

Scott Wallsten: But it’s amazing. I mean, it’s nice to hear that the ideas you’re hearing are more, I mean, I’m sure many of them, most of them, won’t work out, but they seem thoughtful and innovative, with potential for large consumer surplus gains.

Catherine Tucker: I think so, and I think the ones that get me most excited are always the projects really about process improvement, right? If you think about the ones that failed in our first internet boom, they were just, we’re going to sell stuff on the internet, and there was no real change in the way we were doing stuff. Whereas what at least the executive MBAs I’m teaching right now are really thoughtful about is, like, we live in these jobs where there are some really silly and cumbersome processes which get in the way, and we can use AI to improve that process. And as an economist, just hearing about process improvement is sort of boring and nerdy and not, you know, not as sexy as selling pet food on the internet. But it sounds to me like something real. Now, they might not succeed, process improvement is hard, it involves people, all of these things can get in the way, but still, it seems worthwhile.

Scott Wallsten: So, let’s tie this in a little bit to the book you edited on political economy and AI. This isn’t exactly what the book is about, but how do you think the politics and the political economy will differ based on how different business models turn out? I mean, the, you know, software-as-a-service model is very different from advertising.

Catherine Tucker: It certainly is. So just to explain, the book, I didn’t write it, I’m very much an economist, but I got to edit a book of political economists, and that was fascinating in that we basically gave them the job of saying, well, how will AI change political economy? How will it change how countries, rather than firms, relate to each other? And, you know, during that, there were a lot of things that people wanted to say that we wouldn’t allow them to say, because it was an NBER volume, which means we can’t take a policy stance. Because the underlying question, of course, is sort of China versus the US, and questions of protection, and what do you protect or not, and how that will change relations. Now, it’s interesting to think about how the monetization model might change that. If we were to make some sort of off-the-cuff predictions, because I haven’t actually been too thoughtful about it: in the world of software as a service, then, in some sense, it’s going to be, again, a question of barriers to entry, fixed costs, switching costs, all of these boring things that IO economists worry about. You know, and then whether or not we see any sort of elimination effects, which could lead one country potentially to have a few firms which end up dominating the world, right? Which is a lot of the concern of people who are more negative about these questions than me. Now, with the advertising-based model, if we go there, what has been very, very interesting to me is that the advertising model has allowed two very different ecosystems to evolve in, for example, China and the US. And so maybe, if history repeats itself, then the advertising model will allow some degree of global differentiation. And then the business-to-business model, that would be interesting, right?
Because there, I think, we would be thinking of the downstream firms as being local. And the question about the upstream firms will again be a question of barriers to entry. So, you can see that these are sort of new thoughts I’m having while talking on this podcast. But I think we can say that these thoughts, though not yet completely sketched out, suggest that the monetization model will be important for whether or not AI becomes something more geopolitical than it is even right now, or a source of international conflict.

Scott Wallsten: So, one of your very high-level, sorry, I don’t want to put words in your mouth, but I think one of your very high-level takeaways from the book was that economists generally need to do a better job of working with other disciplines. First, I mean, I guess, is that right? And then, where do you see a need for that in AI, and in economics research on AI in particular?

Catherine Tucker: Well, it should be said that economists have been incredibly bad at working with other disciplines, so let’s just take that as a given, a truth universally acknowledged. I run Digital Economics and AI at the NBER. And one of the things we have done is try really hard to bring in trade economists and macroeconomists to talk to us. My worry is that, as of yet, we’ve attracted a few of them, and they’re just wonderful when they come in. But it’s obviously something they do, I think, more as a kindness to us sometimes, rather than something which is central to their identity. And so, what I’m hoping, if I was to say something, is that in the study of AI, though it’s natural for people like me to worry about market structure, to worry about switching costs, to worry about monetization, all the things that IO economists should worry about, hopefully I’m humble enough to realize that there are big macroeconomic consequences. We need their models. We especially need models of trade, which are really hard to bring into the AI conversation. And, you know, it’s probably up to me to do a better job of encouraging that, but that’s sort of our failure point right now.

Scott Wallsten: What fields other than economics? If you could pick somebody from a field other than economics, all else equal, what kind of scholar would you want to work with on your AI projects? What perspective is missing?

Catherine Tucker: What perspective is missing? So there are sort of two questions there, right? Which is, what would be my favorite type of academic to work with myself, versus what might be better for society. In terms of me, I’ve been doing a lot of research recently on the embedding of AI in medical devices. We’ve got a paper right now about how privacy regulation inhibits the cloud-based nature of a lot of AI-enabled radiology devices. I would love to be working with radiologists. Honestly, selfishly, I need that institutional knowledge about how this is actually working in practice. But that’s my selfish desire. If we were to have more global conversations, I think it would definitely be bringing in people in national security. Now, why is that? It’s because we were hoping at some point to have a conference about national security and AI. And that was just a difficult thing to even envisage in terms of how to bring that community together. So given that I thought it was a good idea and failed to think through exactly how to do it well, if I was answering the global-good question, I would definitely choose to bring in national security experts.

Scott Wallsten: I think that’s an issue in a lot of economics work. Whatever policy people want, they either use national security as a reason for it or against it, but then they can’t provide any details, and we don’t know how to value it.

Catherine Tucker: Yeah, exactly, we don’t know how to have that conversation in a way that makes it valuable for them or for us. So that would be my wish. You asked me my ideal, and that would be my ideal.

Scott Wallsten: Well, that seems like a good place to leave it. Catherine, thank you so much. It’s always great talking to you. And we’ll have you back and see what’s changed next time.

Catherine Tucker: Exactly, let’s see how wrong I actually was this time.

Catherine Tucker is the Mark Hyman Jr. Career Development Professor and Associate Professor of Marketing at MIT Sloan School of Management. Her research interests lie in how technology allows firms to use digital data to improve their operations and marketing, and in the challenges this poses for regulations designed to promote innovation. She has particular expertise in online advertising, digital health, social media, and electronic privacy. Generally, most of her research lies in the interface between marketing, economics, and law. She has received an NSF CAREER award for her work on digital privacy and a Garfield Award for her work on electronic medical records. Tucker is associate editor at Management Science and a research associate at the National Bureau of Economic Research. She teaches MIT Sloan's course on Pricing and the EMBA course Marketing Management for the Senior Executive. She has received the Jamieson Prize for Excellence in Teaching as well as being voted "Teacher of the Year" at MIT Sloan. She holds a PhD in economics from Stanford University, and a B.A. from Oxford University.

Thomas Lenard is Senior Fellow and President Emeritus at the Technology Policy Institute. Lenard is the author or coauthor of numerous books and articles on telecommunications, electricity, antitrust, privacy, e-commerce and other regulatory issues. His publications include Net Neutrality or Net Neutering: Should Broadband Internet Services Be Regulated?; The Digital Economy Fact Book; Privacy and the Commercial Use of Personal Information; Competition, Innovation and the Microsoft Monopoly: Antitrust in the Digital Marketplace; and Deregulating Electricity: The Federal Role.

Before joining the Technology Policy Institute, Lenard was acting president, senior vice president for research and senior fellow at The Progress & Freedom Foundation. He has served in senior economics positions at the Office of Management and Budget, the Federal Trade Commission and the Council on Wage and Price Stability, and was a member of the economics faculty at the University of California, Davis. He is a past president and chairman of the board of the National Economists Club.

Lenard is a graduate of the University of Wisconsin and holds a PhD in economics from Brown University. He can be reached at [email protected]

Scott Wallsten is President and Senior Fellow at the Technology Policy Institute and also a senior fellow at the Georgetown Center for Business and Public Policy. He is an economist with expertise in industrial organization and public policy, and his research focuses on competition, regulation, telecommunications, the economics of digitization, and technology policy. He was the economics director for the FCC's National Broadband Plan and has been a lecturer in Stanford University’s public policy program, director of communications policy studies and senior fellow at the Progress & Freedom Foundation, a senior fellow at the AEI – Brookings Joint Center for Regulatory Studies and a resident scholar at the American Enterprise Institute, an economist at The World Bank, a scholar at the Stanford Institute for Economic Policy Research, and a staff economist at the U.S. President’s Council of Economic Advisers. He holds a PhD in economics from Stanford University.
