Dr. Guy Ben-Ishai on the Economics of AI

[00:00] Scott Wallsten:

Hi, and welcome back to Two Think Minimum. Today is Friday, June 16th, 2023. I’m Scott Wallsten, President of the Technology Policy Institute, and I’m here with my co-host, TPI Senior Fellow, Sarah Oh Lam. Today, we’re happy to be speaking with the head of economic policy research at Google, Dr. Guy Ben-Ishai. Prior to working at Google, he was a principal at the Brattle Group, where he focused on antitrust and other matters, and the chief economist for the New York State Attorney General’s Office. Now, because the only topic anyone cares about these days is artificial intelligence, that’s what we’re going to talk about. And of course, Guy is spending a lot of time on that at Google. We’re going to explore the economics of AI: what its economic impacts are likely to be, how policymakers are approaching it, what they’re doing right, and what they’re doing wrong. Guy, thanks for joining us.

[00:54] Guy Ben-Ishai:

Hey, Sarah, Scott. Terrific to be here. You know, as the podcast’s biggest fan, I’m trying to keep it together. But again, thank you for the opportunity.

[01:04] Scott Wallsten:

We appreciate that, and we’re thrilled to have you. So, let’s just start with: tell us a little bit about the economics of AI. This sort of burst onto the scene, at least for people who weren’t watching too closely, and it’s beginning to pervade the economy. How do we think about this? What’s the right framework for thinking about AI?

[01:29] Guy Ben-Ishai:

OK, that’s a useful question. Scott, let me maybe just start with a general definition and the features of AI that we think are really important. At its most basic, it’s a digital technology that can observe data, recognize patterns, and make predictions. Now, that in and of itself probably doesn’t sound too interesting, but I think there are three things to keep in mind. First, in terms of the nature of the data, it’s not just what used to be structured, single-sourced data; it’s also multi-sourced and unstructured data that AI is able to interpret. Second, the ability to observe multiple data sources instantaneously, in real time, is really important. Think about humans flipping through a textbook; that takes a very long time. And the third is the type of observations that AI can make. It’s not just observations or tasks that are prompted or coded; it also has the ability to somewhat autonomously make observations on these unstructured data sets in real time. And I think that’s really important because it means AI exceeds the speed at which humans conduct activities, and for that reason it really expands our ability to do things in a way that we’ve never been able to do before. It’s really a dramatic shift in our capabilities to collect and analyze information, and for these reasons, the economic potential of AI is quite significant.

[03:18] Scott Wallsten:

So right now, everyone’s kind of obsessed with large language models. And when I say people, I don’t mean that in a derogatory way, because I am too. But that’s not all that AI is. We’ve been working with AI long before that, and some of it has been large language models. I know Google used AI to help interpret search queries. But AI is used in other contexts. Is there something different about the type of AI that ChatGPT uses, or that Google’s Bard uses, or that any of the other big AIs use, that’s fundamentally different in an economic sense from other kinds of AI, like the kind that drives driverless cars, or that we hope will drive driverless cars, for example?

[04:10] Guy Ben-Ishai:

Yeah, so Scott, I want to respond in two ways. First, to meet my duty as an economist here, let me lay out the economic features of AI that are really different. I think there are three elements that really enable AI to deliver economic benefits in ways that we have never seen with predecessor digital technologies. The first one is self-improvement. With digital technologies, a really important feature was that we had remarkable economies of scale: once you develop a digital solution, replication tends to be costless or inexpensive. With AI, we actually take that a step further. We train generative AI and LLMs in a way that doesn’t only create economies of scale; it actually enables diminishing costs over time. I think that’s really important. Another feature that I think is categorically different from what we’ve seen before is the expanded capacity to automate. With predecessor digital technologies, we tended to automate tasks that are more manual and routine, whereas with AI, we’re increasingly finding that we can automate tasks that are non-routine and cognitive. And the third feature, which we covered already, is pattern recognition. The ability to autonomously recognize and observe patterns and make predictions using multi-sourced, unstructured data is really important. Sorry, go ahead.

[05:53] Scott Wallsten:

No, go ahead, finish up, and then I want to follow up with something.

[05:58] Guy Ben-Ishai:

You know, Scott, I was going to say, from a personal perspective: look, I’ve been at Google for three years. And as you can imagine, we’ve been an AI-first company for quite some time now. For the most part, I think those efforts, the tremendous R&D and scientific advancements that we made, are not really visible to the public. And to be honest, it’s hard to visualize them until they become a user-facing product. My tipping point, my pivotal moment, was when I first used Bard to ask: how do I know if I’m not a fish? That was remarkable. First, I didn’t think that I had these existential questions in me. And also, the responses were so thoughtful and detailed, in a matter of seconds. I think it was the first time that I really realized that AI has this ability to generate information, provide content, and access information in ways that are really thought-provoking and creative, a technology that we didn’t really have before.

[07:07] Scott Wallsten:

So are you a fish?

[07:09] Guy Ben-Ishai:

It turns out that I’m not. I know.

[07:11] Everyone: [laughing]

[07:14] Guy Ben-Ishai:

You know, it was really interesting. I can’t recall all the details, but I remember some information about: check if you have gills, if you have fins, if you live in water. And then, consult a marine biologist or a fish expert, which I thought was really rich.

[07:26] Scott Wallsten:

Well, that’s nice that it wanted confirmation.

[07:30] Scott Wallsten:

Independent confirmation.

[07:32] Guy Ben-Ishai:

Sorry, you were gonna ask a more serious question.

[07:35] Scott Wallsten:

The first thing you said, the point about self-improvement, and it’s not just that replication is costless, but that it can reduce costs over time. The implications of that sound enormous. So does that imply that, like in pharmaceuticals for example, that the cost to develop the next new drug could be decreasing over time. I mean, pharmaceutical companies spend enormous amounts of money on R&D. Are you saying that maybe it will take less and less? Or even to use your own example, you said people don’t understand the enormous resources that Google has put into AI. And I’m sure that’s also true for OpenAI and everyone else who’s using it. But would that also apply to incremental developments in AI?

[08:22] Guy Ben-Ishai:

Yes, and then some. Absolutely, I think you’re absolutely right. First, I do think that as a general-purpose technology, we’re looking at a technology that triggers other technologies, which is remarkable. With respect to the diminishing costs: these models are trained, and as computing costs decline over time, we’re going to see improved efficacy and efficiency and accuracy of these models. Now, pause for a minute and think about general-purpose technologies in a historical sense. We like to point to the compounding effects of general-purpose technologies. Think of the printing press that led to a scientific revolution. Think of the transistor that led to a digital revolution. I think where the rubber hits the road, and what you just alluded to, Scott, is exactly the point: we don’t know what those compounding effects of AI will be in the future. You’re absolutely right that in healthcare, we recognize that drug discovery costs are diminishing as a result of AI applications. But what Avi Goldfarb, who actually was on your podcast, points out, and I think correctly so, is that this is just one aspect, a single element of applying or deploying the technology. Once we get to a system-wide application, once we have enterprise-wide applications for pharmaceutical companies, for example, those economic features will generate far greater benefits. So in a sense, I think we’re really just scratching the surface.

[10:04] Scott Wallsten:

You know, one feature of general-purpose technologies is often that you don’t know it’s a general-purpose technology until much later, until it’s been around for a while. But in the case of AI, there seems to be almost unanimous agreement that it’s a general-purpose technology. Is there a chance that we’re wrong, that it’s hype? I mean, it sure doesn’t seem like it, but it also seems really fast to conclude that it’s a GPT.

[10:31] Guy Ben-Ishai:

Yeah, no, we’re economists. There always is a chance that we’re wrong.

[10:36] Scott Wallsten:

Usually more than a chance.

[10:38] Guy Ben-Ishai:

I don’t know. So let me think about this constructively. What we have learned so far from the empirical literature, the very recent literature, is that every time we enable humans to be assisted by AI, we see remarkable productivity gains. And productivity can lead to a lot of different things. In the context of AI, because we tend to automate tasks that are more mundane and routine, it really enables us to release those human traits that we really enjoy and that we really benefit from: judgment, empathy, intuition. And I think that for that reason, we’re not just looking at an increase in productivity in the narrow sense; there’s good reason to suspect that this indeed will turn into a general-purpose technology, and that we will have these compounded effects.

[11:37] Sarah Lam:

I had a question going back to you saying Google’s been working on this for a long time. And a lot of companies are at the same point, you know, releasing their GPT-4 or 5 or Bard. What brought it to the tipping point? Were all these groups working on these models for 10 years, and then suddenly now it got to a point of advancement where it’s useful to the world? What happened in the last year that didn’t happen five years ago?

[12:07] Guy Ben-Ishai:

Yeah. No, Sarah, it’s a great question. So we have been, we Google, sorry, let me be clear. We’ve been an AI-first company since 2017, so it’s been quite a long time for us. We’ve invested billions of dollars in the technology. And it’s not just us, it’s the entire industry. If you look at data on startup investments and venture funding, you’re looking at annual increases of 75% on average over the last five years. But a lot of it wasn’t really visible, or maybe a lot of it wasn’t really comprehended by the public. And maybe it’s not until people get to experiment with those existential questions that we talked about before that it gains such visibility. It’s kind of interesting. A lot of times, you know, we dismiss the importance of entertainment, I would say, in driving technology forward. And I think that to some extent is what we’re seeing here.

[13:10] Scott Wallsten:

So, related to the question about what caused this to happen now: how has its sudden bursting into view influenced policy? Do you think it’s caused policymakers to kind of jump on a bandwagon, to think that AI just means chatbots and Terminator? And, setting aside what is best for society and productivity in the world, which we can come back to, might it have been better in a policy sense if it had been slower, and policymakers had had a chance to learn about it as it developed rather than suddenly being confronted with it?

[14:05] Scott Wallsten:

I mean, it’s a counterfactual, we’ll never know, but.

[14:06] Guy Ben-Ishai:

Yeah, no, that’s right. It is counterfactual and we’ll never know. Policy needs to offer solutions to real problems and real tensions. And I think you’re right, Scott, we are at the very early stages of implementation of this technology. I don’t know that we have a full comprehension of what those tensions are. Having…

[14:26] Scott Wallsten:

Actually, let me just interrupt. I really like that you said policy has to answer real problems. But not politicians; they can try to answer problems that don’t even exist or may never exist. So…

[14:39] Guy Ben-Ishai:

That’s such an interesting point.

So yes, AI needs to be regulated in some way. It’s an important and powerful technology. It certainly creates tensions and can lead to misuse. So when we’re talking more broadly about policy, I think we’re really trying to find a balance between the duty to protect and the need to advance the technology. What I think is really important, in a more narrow, US-focused sense, is the recognition that the fact that we’ve advanced the science does not necessarily imply that we will be the market that actually gets to capitalize on and leverage the economic potential. And I think it’s so important to reach that balance between a duty to protect and making sure that we actually capitalize on the economic potential.

[15:33] Scott Wallsten:

So just to play extreme libertarian here: why is some regulation required? I mean, any new tool can be used in bad ways. What makes you say that we need regulation, even though you’re not saying what kind?

[15:59] Guy Ben-Ishai:

So, there are kind of three aspects to that. I think that’s right, Scott: regulation triggers a lot of hyper-partisan issues and concerns.

But I think there are three common themes that we can find, not just cross-partisan, but more collectively as a country, as a nation, that are really important. And just to clarify what they are: the first one is, as I mentioned, beyond the scientific discovery, we have to make sure that we develop, scale, and commercialize AI applications, make sure that the scientific breakthrough actually translates to economic success.

The second is that we have to expand AI’s footprint beyond the tech sector. It cannot be overstated how important it is that small businesses and traditional industries, such as manufacturing, agriculture, or transportation, are employing and adopting AI technologies. And then, third, I think we need a policy agenda that promotes and supports a workforce transition. It’s not just to ensure that AI’s benefits are perhaps distributed more equitably; it’s, more importantly, so that we will be the nation that is actually able to promote, advance, and deploy the technology. We can really debate the delineation and the nature of what would be optimal in these directions, but I do think that beyond any partisan perspectives, those three elements are critical.

[17:37] Scott Wallsten:

Do you think policymakers are generally focused in that way? Because it seems to me, right now, the focus is more on how to restrain AI rather than how to use it productively. And again, it’s just my impression; what do I know? But I worry when people say we need regulations on it. All these examples are, of course, good ones. We don’t want it to be evil in some sense, right? We don’t want it to be biased in bad ways. But do you think that the policy proposals we see address those issues at all, or address them at a low enough cost?

[18:33] Guy Ben-Ishai:

So as an economist, it’s really hard for me to reach a conclusion on whether those policies are working or not. We’re at a very early stage of setting those policies, and there are many different voices in the conversation. I do think that if we pause for a second and think again about the benefits: we have an opportunity to reverse decades of productivity decline through this type of technology. We have an opportunity to reverse a diminishing share of labor in our economy. And we’re also at risk of falling behind. There’s no guarantee that in just a few years, globally, we’re going to retain our leadership in emerging technologies, or that we will be within the top three or four leading economic powers. The risks are just too great, too large, to advance policies that would undermine those needs. At the same time, there is a clear duty to protect. As with any other technology, AI is a powerful technology; it can lead to misuse, and we want to make sure that it’s being implemented responsibly. So it’s hard for me to opine on whether we’re moving in the right direction or not, but I hope and I think that our policymakers recognize how important this balance is.

[20:06] Scott Wallsten:

So when you talk about how this might be an answer to the productivity puzzle, the problem that we’ve seen such declines in productivity growth: let’s suppose that AI can radically improve our productivity. That could bring with it short-term problems, which I know you’ve thought about. Problems might be the wrong word, but there might be labor force adjustments. Even if it generates net-positive jobs in the future, and better jobs, like we’ve seen with technology in the past, it’s conceivable that there would be a transition period where people have to deal with AIs being able to do some jobs that people did. First of all, how should we think about that? Because we certainly don’t see it yet. Like I said, it’s very early days; we certainly don’t see anything like it in the unemployment numbers. But how should we think about it? Is it realistic? And how do we prepare, in a way that still allows for technological improvement without too much of a backlash?

[21:12] Guy Ben-Ishai:

Okay, so Scott, let me break that down into a few components. First, when you talk about productivity, just to be clear, this is one of the most important determinants of a country’s wages and standard of living. As economists, we cannot overstate how greater productivity leads to better job outcomes in numerous dimensions. We’re not just talking about wages; we’re also talking about career paths, stability, and so forth. And beyond that, when we’re thinking specifically about a technology that augments and improves jobs, in a way that automates the more mundane and enables us to be far more creative, thoughtful, and innovative in our jobs, there are tremendous non-pecuniary benefits that come with this increased productivity. I think that’s really important to keep in mind. In terms of whether it will ultimately be distributed equally, I think that’s a really challenging question. If I think optimistically about the potential, the technology itself has a tremendous democratizing power to enable people to access occupations and skills. And that’s really important; we’ve seen that with prior digital technologies. In that respect, I think it’s really critical that we make an effort. And, you know, Scott, whether it’s policy, or programs, or collaboration between industry, policymakers, academics, labor unions, and creatives, right, that’s a different question. But it’s critically important that we include small businesses and traditional industries in the circle of entities that benefit from this technology. That has tremendous potential to lift all boats, if you will. Now, on the third element of your question, about whether this is going to ultimately introduce greater equality in our market, or whether this is going to be shared equally: I think that, first and foremost, we’ve got to be mindful that we cannot answer this question in the abstract, given what we know today about economic barriers to mobility in our economy. I think that any wealth creation would run into tensions in that respect.

And we’re learning more about such barriers in a way that we haven’t thought about them in the past. That really pertains to policies far broader than the technology sector: we’re going to talk about taxation, about healthcare, about education, in a way that really expands the conversation. But having said that, I do think we have a once-in-a-generation opportunity to meaningfully create better jobs, and what we can do at a more micro level is make sure that those opportunities are expanded to traditional industries and small businesses. We should absolutely do the most we can to make sure that action takes place.

[24:24] Scott Wallsten:

A while ago we had Professor Kristina McElheran from the University of Toronto, and she was talking about a paper she had done on automation and workers by age group. She was speculating that one advantage of AI is that it could help older workers deal with automation, because if you don’t need to learn a new computing language, you can ask it to write the Python code for you. So you can retain that institutional knowledge that older workers have without their having to learn whatever the new thing is. I already feel that way, because now I can write code in Python and I don’t know Python. So, you know, just pseudo-coding. But then, as economists, we always say, and I guess I’m just emphasizing what you said, that when something is a net benefit to society, we should be able to compensate the losers. We generally do not do a very good job of that. I mean, if there’s one thing economists have shown, it’s that free trade is good, and yet our trade adjustment assistance program is less than a billion dollars. Not that we necessarily would know what to do with it if it were bigger, but these issues somehow often tend to fall to the side.

[25:44] Guy Ben-Ishai:

Scott, thank you. I’m so glad you’re asking this question. And, you know, I think that as economists, and maybe I should be less critical of my tribe, we tend to overlook these issues and these challenges. There’s even, somewhere, a fundamental presumption that maybe it’s just a question of redistribution and payment. But nobody wants to be paid to let go of a job that they enjoy. I think people are looking for better, more meaningful jobs. And I think that’s what’s remarkable about this technology: it does provide the opportunity for better, more meaningful, and yes, better-paid jobs in the future. If we look at the literature, it seems like in every occupation, every skill, every function that has been studied in the last year, we find significant potential for doing so on a wide basis. And I think that’s what we should really focus on. That is an opportunity that, on a bipartisan level, to be honest, we can all align with and promote. I think it’s really important.

[26:50] Scott Wallsten:

Sarah, did you want to follow up? It looked like you were going to ask something.

[26:54] Sarah Lam:

Well, I was thinking about the trade-offs between all the pros and the cons. There are so many productivity gains from people not having to do mundane tasks anymore. It’s like the spreadsheet: once there’s a spreadsheet, you don’t need someone writing down the numbers in a chart. And, I forget if it’s a study or an observation, but there were so many more finance jobs because of the spreadsheet. It didn’t cut out jobs in accounting; it actually created so many more jobs. Same thing with accounting software and all sorts of things. So that story, if you apply it to AI, suggests the fear of losing jobs to automation is not really valid. But yeah, I guess, how does AI differ from that narrative, you know, from automation, from technology?

[28:00] Scott Wallsten:

That’s a good question.

[28:01] Sarah Lam:

Is it transformative or not? I guess that’s the continual question. Like, is it really that different or is it just a lot more of what we’ve experienced before?

[28:12] Guy Ben-Ishai:

So Sarah, it is a great question, and I just want to clarify a couple of things. I think it’s also a valid concern. It would be unreasonable for us, as Scott alluded to before, to go into this conversation not realizing that there is a risk of job displacement, at the very minimum in the short term, in the transitionary period. So that is a valid concern, and one that we absolutely need to take seriously through, again, a workforce transition agenda. You know, it’s interesting that you mentioned banking. To some extent, we’ve been to this movie before. In the 1980s, when we rolled out ATMs, there were tremendous concerns that human tellers would lose their jobs. In fact, the number of branches only increased, not declined. And the number of tellers employed at those banks and branches increased, and they became more like financial advisors than just individuals counting cash. I think it’s a perfect precedent for what may take place in the future with this technology. To the other question, about whether we’re really at a tipping point, whether what we’re doing in terms of automation is dramatically different from what we’ve done in the past: I think it’s really hard to tell. I don’t know if this is binary or if this is a continuum. There’s no doubt that we are automating more non-routine and knowledge tasks than we did before. And with that, you have, of course, more concerns about displacement, but also greater benefits. Again, this is why this is so challenging.

[29:58] Scott Wallsten:

So one concern that I have, and maybe this is just really, really minor because I’m very petty, is that in some of these non-formulaic tasks, it will actually, in some ways, make us less creative. And the reason I say that is because everybody’s played around with ChatGPT and Bard. I’ve asked it to write an op-ed based on one of my papers. It knows the paper, and it’ll do a pretty good job of knowing the conclusions. But then the op-ed that it writes is basically just whatever the conventional wisdom is, when my whole paper was about the conventional wisdom being wrong. So when people use this, and it’s based on a collection of existing literature, existing work, is it just going to be people writing the same thing again and again in different ways because they’re having an AI do it for them? I mean, I think we’re about to see a whole lot of junk because people are using these AIs to write for them. Now, obviously, I don’t think this is going to permeate the whole economy, but it annoys me.

[31:01] Guy Ben-Ishai:

So look, Scott, technology is a reflection of humanity for its best and worst.

[31:06] Scott Wallsten:

And we’re inherently boring, so…

[31:08] Everyone: [laughing]

[31:12] Guy Ben-Ishai:

But, you know, it’s really interesting. And I think this is a question that we’ll see evolve. And it’s really fascinating to see what the next steps in applications and use cases will be. You are right that there is some gravitation towards the middle, if you will, or towards consensus when you are training models on data and content that is kind of like widely shared and widely available. You tend to get the consensus. Question is, Scott, next time that you are going to write an essay, if we have a chat bot that is trained on your data individually, how’s that gonna change the result? Is that really gonna reflect Scott’s view more accurately? Is that going to be more creative? Is that going to be more innovative? Is that going to be more different from the consensus? So…

[32:07] Scott Wallsten:

It might be more Scott than Scott.

[32:09] Guy Ben-Ishai:

Exactly. Exactly.

[32:12] Scott Wallsten:

Because I’m horribly inconsistent.

[32:14] Guy Ben-Ishai:

But, you know, there’s also, going back to the distributional question that we talked about before, something really interesting. In some dimension, if we are leveling the skills, right, if we are getting what you call the consensus, let me suggest that this is not just a consensus but a common skill level. And if we’re leveling the skill across different jobs and different occupations, are we actually providing more opportunities for individuals who didn’t have access to higher-paying jobs to really benefit, to really thrive with their particular human skills, such as judgment, intuition, creativity, and so forth? I think that’s fascinating.

I think, I hope, that to a great degree in future years we will see that taking place.

[33:04] Sarah Lam:

So, on the human aspect: will all this new AI technology really heighten what we know as human? You’ve mentioned judgment and creativity, and those are the wisdom, the human qualities, that maybe get washed together with technology’s benefits. Do you think in the future human versus machine will be more separated, or are these tools helping us amplify our humanity? Have you thought about that?

[33:48] Guy Ben-Ishai:

So of course, Sarah. I kind of struggle a little bit with the framing of human versus machine, because I think what the economic literature is consistently teaching us is that there’s actually a tremendous comparative advantage of humans with machines. Go back to Garry Kasparov and Deep Blue in ’97, when IBM beat our best human chess player. I think that caused a lot of concern. But here we are, 25 years later, and what we find is that human grandmasters assisted by AI are a winning combination. So the framing, I think, is really: what is the future of humans plus machines, and will we be able to do different things? You know, on a personal note, and you guys can probably relate to this: when you just came out of grad school, or just finished your bachelor’s in economics, you do a lot of what’s called data cleaning. Those are horrible, mundane tasks that take weeks.

[34:47] Scott Wallsten:

We still do that. Yep.

[34:50] Guy Ben-Ishai:

If I think of the time that we’ve spent as a profession on cleaning data, it’s inhumane. It doesn’t surprise me that economists are unhappy. With AI, I don’t want to say that we’re going to completely eliminate this unhappiness, but we can tremendously diminish it. So we as economists can be more thoughtful and constructive about the real challenges that we face as a nation.
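The sort of mundane data cleaning Guy is describing, the kind of routine work AI assistants can now draft on request, tends to look something like this in pandas (a hypothetical sketch; the data and column names are invented for illustration):

```python
import io
import pandas as pd

# A messy extract of the kind economists spend weeks cleaning:
# inconsistent casing, stray whitespace, a missing value, a duplicate row.
raw = io.StringIO(
    "state, employment ,wage\n"
    " ny,8900,  31.50\n"
    "NY,8900,31.50\n"
    "ca,  ,45.00\n"
)

df = pd.read_csv(raw, skipinitialspace=True)
df.columns = [c.strip() for c in df.columns]       # normalize header names
df["state"] = df["state"].str.strip().str.upper()  # normalize state codes
df = df.drop_duplicates()                          # drop the repeated row
df["employment"] = pd.to_numeric(df["employment"], errors="coerce")

print(df.reset_index(drop=True))
```

None of this is hard, which is exactly the point: it is tedious, routine work that a generative model can increasingly write, or perform outright, from a plain-English request.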

[35:10] Sarah Lam:

Yeah, that’s true. I saw it.

[35:12] Scott Wallsten:

That example just rings true because we spent so much time cleaning data and already do use AI for some of it.

[35:19] Sarah Lam:

Yeah, I saw an Adobe Photoshop advertisement where they could fix problems in an image, make things symmetrical, that kind of thing. It's amazing. And if AI can generate synthetic data sets, you wouldn't have to spend time making them.

[35:39] Guy Ben-Ishai:

Look, I do hope, and I may be wrong about this, that we will never become obsolete, that you will always need some type of human interaction and intuition to determine: are these images reasonable, do they reflect something human, do they reflect common experiences? But the fact that we can automate those, I'm going to call them entry-level tasks and activities, if you think about the hierarchy of the tasks that go into our occupations, I think that's really important.

[36:22] Scott Wallsten:

Yeah, I feel like we find new things to do with it every day. And it's just so much fun to play with. I mean, on the data cleaning that you're talking about, even little things: I'll download a spreadsheet from BLS, and it's got the data formatted one way and I want it formatted another way. Yeah, I know how to do that. But instead of writing a little code or whatever, I just dump the table into ChatGPT and ask it to reformat it the way I want, and it does. I feel like I do countless little things like that during the day, and it adds up.
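[Editor's note: as a concrete illustration of the kind of reformatting task Scott describes, here is a minimal pandas sketch. The table and column names are hypothetical, not actual BLS output; BLS series often download in a wide layout with one column per month, which analysts typically reshape into a tidy long format.]

```python
import pandas as pd

# Hypothetical wide-format table, one column per month (BLS-style layout).
wide = pd.DataFrame({
    "Year": [2021, 2022],
    "Jan": [6.3, 4.0],
    "Feb": [6.2, 3.8],
})

# Reshape to tidy long format: one row per (Year, Month) observation.
tidy = wide.melt(id_vars="Year", var_name="Month", value_name="Rate")
tidy = tidy.sort_values(["Year", "Month"]).reset_index(drop=True)
```

The same one-line `melt` is the sort of chore that a chat assistant can now do from a plain-English prompt, which is Scott's point about many small time savings adding up.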

[37:55] Guy Ben-Ishai:

You know, one thing I would mention about these applications and use cases is how interesting they are. We talked about the different ways economists are using generative AI applications, and I think coding is actually really important here. I want to circle back to that question of the footprint: how can we make sure this technology gets widely deployed by small businesses? We know that the incremental benefits of generative AI are far greater than what we've seen with earlier digital technologies, so you might think that alone would be enough to get small businesses to adopt it. But there are significant barriers to technology adoption for small businesses, and one of them is integrating software. It tends to be really expensive because small businesses don't necessarily have those skills, right? They don't have a software engineer on board, and a lot of times they need to retain a third party to do software and data integration, which is really expensive. If coding becomes more intuitive, we're looking at an ability to diffuse technology to smaller players, at a small scale, in a way that we haven't seen before. The…

[38:17] Scott Wallsten:

So let me ask a follow-up question to that. For so long, there's been this push to teach every kid coding in school, which I think is ridiculous, because, I mean, what do they want us to stop teaching instead? Art? Music? How many things can we take out of schools? Anyway, different rant.

[38:34] Guy Ben-Ishai:

Ha ha!

[38:36] Scott Wallsten:

But maybe the question now is whether we should be teaching them coding per se. I mean, some people need to be computer scientists; we'll need those. But maybe we want them to learn some kind of pseudo-coding. You want them to understand logic, the way code is laid out, the way it thinks, instead of teaching them Java or C++ or anything like that. I don't know. What do you think?

[39:05] Guy Ben-Ishai:

Scott, I'm so glad you asked this question. Let me preface by saying that you've hit the core of why policy in this space is going to be so challenging and so difficult. Pause for a minute to think about David Autor's contribution at MIT: today, roughly 60% of our workforce is in occupations that did not exist before World War II, and that is primarily because of technology. That is only going to increase, which means we're looking at tremendous occupational changes over the next couple of decades, right? A change to more meaningful and better jobs, yet an important transition that we need to account for. Now, how do we skill for those jobs if we don't even know what they are? If we recognize that skilling is no longer a barrier for individuals who didn't have access to education, and we know that we're now going to rely more on judgment and creativity, what does the first day on the job look like? How do we train for that? How do we select the right people for the right occupations? Those are really challenging questions. And I think it would be misguided to just assume that we can continue training for some specific skills and that will resolve these issues. That's why we really need a broad coalition of constituents, including labor, creatives, policymakers, and academics, to think about these issues carefully. This is bigger than the tech sector alone.

[40:49] Scott Wallsten:

Yeah, it's an interesting time, not just because it's an interesting time in AI, but because you get to watch everyone figure out what their positions are. There are so many angles and so many things to work out. People don't know exactly what they think, including me.

[41:01] Guy Ben-Ishai:

And me! Hahaha!

[41:05] Scott Wallsten:

So, we should probably wrap up, but thank you so much for joining us. This was a really interesting conversation. And we appreciate your being here and we hope that we will talk to you again soon.

[41:18] Guy Ben-Ishai:

No, Scott, Sarah. Thank you so much for having me. This was great.

[41:22] Sarah Lam:

Thanks, Guy.
