
Avi Goldfarb on AI and Predictive Analytics

Scott Wallsten (00:00):

Hello and welcome to Two Think Minimum, the podcast of the Technology Policy Institute. Today is Monday, October 31st. I'm Scott Wallsten, President of TPI, and I'm here with my co-host, TPI Senior Fellow Sarah Oh Lam. And today we're delighted to have with us Avi Goldfarb. Avi is the Rotman Chair of Artificial Intelligence and Healthcare and a Professor of Marketing at the Rotman School of Management at the University of Toronto. He's also Chief Data Scientist at the Creative Destruction Lab, a Faculty Affiliate at the Vector Institute and the Schwartz Reisman Institute for Technology and Society, and a Research Associate at the National Bureau of Economic Research. Avi's research focuses on the opportunities and challenges of the digital economy. He is also co-author, along with Ajay Agrawal and Joshua Gans, of a new book titled Power and Prediction: The Disruptive Economics of Artificial Intelligence, which will be coming out on November 15th. Avi, thanks for being with us today.

Avi Goldfarb (00:52):

It’s great to be here. Thank you.

Scott Wallsten (00:54):

So, this book is, I guess, kind of a continuation of a book you wrote earlier, and you start off by saying, here's what we got wrong in the previous book and here's how we see the future. So tell us about the evolution of the book and what it's about.

Avi Goldfarb (01:06):

Let's start at the beginning. In about 2012, Ajay founded this organization called the Creative Destruction Lab, which was to help science-based startups scale. And in that very first year we had a company called Atomwise that said it was using artificial intelligence for drug discovery, and if you put yourself back 10 years ago, that just seemed crazy. It was led by a former doctoral student out of Geoff Hinton's lab, a few blocks down the road from our business school. And the technology seemed really cool. And over the next few years in our program, we saw first a trickle and then a flood of these AI startups, to the point where we decided this is really interesting and we wanted to try to understand it. We'd spent our careers trying to understand the economic impact of the internet, and we decided it was time to start thinking about the next technology, which we believed was going to be artificial intelligence.

Avi Goldfarb (01:56):

And so we got our heads together, we ran conferences, and we wrote a book. That book was called Prediction Machines. And the key point in the book was, when people think about artificial intelligence, they might think about super intelligent robots like the ones in science fiction that could do just about everything humans do. That's very interesting, and it may very well be possible someday, but that's not the reason we're talking about artificial intelligence. Well, not the reason we were talking about it in 2018, and certainly not the reason we're talking about it today in 2022. It's because a very particular technology called machine learning, a branch of computational statistics, which sounds a little less exciting than AI but, we argue, is still transformative, has gotten much better. And so we should think about AI as prediction technology. The book came out in 2018 and we thought this revolution was about to start.

Avi Goldfarb (02:43):

We thought the world was going to change quickly because these technologies seemed extraordinary. They could translate between languages, they could label pictures. It seemed like industrial transformation was just around the corner. It took a couple years and we figured, ah, it's because things aren't quite digital enough, we haven't figured it out. Then the pandemic hit and everything went digital and we thought, okay, now the revolution's going to happen. And the months went by and we realized we were still not there. We've seen AI make a little difference, but we hadn't seen it transform very many industries. And that got us thinking more deeply about what technological change looks like, why a technology that we can see could be so transformative for industry might not have had an impact yet, and what the barriers to that might be. To do that, we dug into the history of technology, and we looked at what happened with electricity.

Avi Goldfarb (03:41):

Because lots of people are saying that AI is the new electricity. If you were paying attention in 2018, we found at least a dozen articles and quotes from leading thinkers, including the CEO of Google, saying AI is the new electricity. And from our Prediction Machines, 2018 point of view, that seemed fantastic: oh yeah, of course it's the new electricity, it's transformative. But then we remembered our history of electricity and realized it was a little less exciting than we thought. What do I mean by that? It was clear in the 1880s that electricity was going to be a big deal, but it wasn't until the 1920s that the median household and the median factory were electrified. It took 40 years for the average American to feel the impact at scale from electricity. Warren Devine Jr., Paul David, and some other economic historians dug into this.

Avi Goldfarb (04:31):

And the reason was because electricity wasn't just cheap power; it allowed you to do things differently. If you imagine the factories of the 1880s, they were organized around the power source. Why? Because the further away a machine is from the power source, the more energy gets dissipated. And so you organize your production so that the machines are as close to the power source as possible. The first factories that used electricity just took out the steam engine, put in an electric motor, and changed nothing else. What that did is it saved a little bit of time on power. It was a point solution to their existing systems, and it helped productivity a little bit, but it didn't transform anything. And then over the next 40 years, people started to realize that electricity wasn't just cheap power; it allowed you to decouple the power source from the machines.

Avi Goldfarb (05:27):

And once you could decouple the power source from the machines, you could organize your factory differently, so that instead of putting your machines as close to the power source as possible, you could lay out your machines horizontally over a big space, where inputs go in one end and outputs come out the other. And it was only then that we saw the massive productivity boost from bringing electricity into factories that led to widespread adoption. So, we think we're at the 1880s equivalent in artificial intelligence. We can see the long-term impact and the potential of the technology of prediction machines, but it hasn't come to pass yet. So what this book is about is, first, recognizing that while the technology has potential, we need to invent a whole bunch of processes in order for it to transform the way we live and the way we work. And so the book is about navigating what we call the Between Times, these times after we recognize the potential of a technology but before we can see that potential come to pass.

Scott Wallsten (06:33):

One of the most interesting parts of the book, I guess because it runs throughout, is the theme of how rules, and I would say broader institutions, affect this. You say rules are glue, and there's a whole host of rules, institutions, and incentives that can prevent this sort of systemic change from happening. So two questions about that: first, can you discuss those barriers? And second, you seem pretty confident that AI will overcome those barriers. Why is that?

Avi Goldfarb (07:04):

So I'm just trying to think through which part to go through first. Okay, so there are lots of people who want to slow down the progress of technology. They tend to be the people who are doing pretty well right now, given the state of technology; they're not really excited to have disruption. And so large incumbents might be worried, people who benefit from existing systems might be worried, and so they're going to push back, including through regulation. And there are all these rules in place that are designed for an old system that inhibit the adoption of technology. So just as an example: in the summer of 2020, Ajay, Joshua, and I, along with a political scientist, an epidemiologist, and an engineer who actually knew how to do things (the rest of us were academics), had this idea to help Canadian workplaces open back up. Joshua wrote a book in the spring of 2020 that pointed out that for most people, COVID wasn't a health problem, it was an information problem.

Avi Goldfarb (08:00):

What do we mean by that? In the summer of 2020, for example, about one in a thousand Americans had COVID. For them it was a health problem; for everybody else, it was an information problem. If we knew who had COVID, we'd keep them home and the rest of us could go about our business. Now, if it's an information problem, there is a tool that should work really well for that. That's a prediction machine, right? Prediction is the process of filling in missing information. And so we should be able to find a prediction machine in AI that can fill in that missing information. And so we set about trying to find the prediction tool that could help people identify whether they had COVID before they went into work. We first went through a bunch of AI tools, and it turned out none of them worked.

Avi Goldfarb (08:37):

Okay, fine. We went through a handful of others; we saw, you know, articles about dogs sniffing people at the door to figure out if they were infected, and we thought, that's not really practical. Eventually, by September 2020, we realized the rapid test, which is now familiar to everybody, was the best prediction tool for figuring out if somebody was infectious before they went into work. And so we had this idea that we were going to use rapid testing to allow workplaces to open up. And in Canada, many workplaces, even in the winter of 2021, were still closed. So we had the prediction machine. It wasn't an AI, but by September 2020 it was pretty clear it worked reasonably well and it could solve a big problem. But the tests were illegal to use in Canada at first. And then once they became legal to use, testing had to be done by a healthcare professional, and it had to be one of those brain ticklers that went way up into the nose, okay?

Avi Goldfarb (09:32):

No one was going to submit to that twice a week for work. That just wasn't practical. And no workplace would want to be hiring doctors or nurses to do that. Over time, the regulations started to loosen up, but the biggest barrier to widespread adoption of this technology was not the prediction; actually, the prediction tool in this case was the easy part. It took us about a year to work through mostly regulatory barriers. There were some other barriers too, around organizations. Like, no one wants to get tested when they're sick if they don't get sick pay. And so there were a handful of other pieces that needed to come along, and eventually most of those were adjusted or balanced as public health learned more about how rapid testing worked. So our experience there was very much: sometimes the prediction's the easy part, but that doesn't mean the problem is solved, because you need to overcome these regulatory barriers in order to use the predictions effectively.

Avi Goldfarb (10:29):

That was the first part of your question. The second part of your question is why am I so optimistic? Okay, I'm optimistic for two reasons, but they're not really optimistic reasons. So I'm optimistic about overcoming the regulatory barriers, but why? The first reason has to do with the fact that I think our current systems are pretty bad in many ways. A lot of the talk about AI, and the resistance to AI, has to do with discrimination and bias. And there are very good reasons to worry that, compared to a perfect benchmark, machine learning tools, AIs, discriminate, say in credit scoring and in hiring and other places, right? So we should worry. But why do they discriminate? They discriminate typically because they're mimicking human processes, and we humans discriminate. So the reason I'm optimistic we'll eventually move to AI on many of these processes isn't because I think AI is so great, it's because I think we humans are pretty awful. And because we're so bad at these kinds of things, and because many of our processes discriminate so much, it's going to be hard to ignore how good the machines are. So I'm pessimistic that we can fix humans on discrimination, and hence I'm optimistic that the machines can do better, at least assuming they're run by some well-intentioned humans.

Scott Wallsten (11:49):

I totally agree with you about the human part, because we have thousands of years of history showing how awful humans are and how discriminatory and biased we can be. But I'm not sure I'm as optimistic as you are about the ability of a better technology to overcome that. And I'm not sure whether the COVID example supports the pessimistic or the optimistic view. And the same with your example of the amazing algorithm in Flint, Michigan, for replacing pipes; that has elements of the positive and the negative too. You know, one of the things I think helps new entrants in anything succeed on a large scale is when the incumbents don't see it coming, but here kind of everybody sees this coming, or they think they do. So I guess I don't know what to do with that on this piece.

Avi Goldfarb (12:33):

No. Okay, so one thing that machine learning does is it decouples the prediction from the rest of the decision. You actually know what the prediction is, and then you can make the decision based on it. In the Flint, Michigan example, these two professors from the University of Michigan developed a prediction tool to identify which houses likely had lead pipes coming into them, and so which people were likely to be drinking toxic water. Now, if you remember, a few years ago there was a water crisis in Flint because there was so much lead going into the water, and so they wanted to start replacing the pipes. And these professors figured out a way to predict which houses likely had lead pipes. And their algorithm was really good; it was about 80% accurate. The city started to deploy it, and the results on the surface seemed fantastic. But a whole bunch of people in Flint, Michigan started complaining to their politicians, saying, hey, how come someone else's pipes got dug up but not mine?

Avi Goldfarb (13:34):

That doesn't seem fair. I want to know for sure if my pipes are lead or not. And so the politicians overruled the professors' algorithm and said, from now on, we're going to do something fair. And what does fair mean? It means we're going to go street by street throughout Flint, Michigan. They were a little bit ambiguous about who went first; if you look at the map, it looks like the powerful politicians had the people who vote for them go first, but there's some ambiguity there. Whatever, street by street to make sure it's a fair process. And the accuracy rate? Instead of 80% of the pipes they dug up actually having lead, it dropped to about 20%. So that's the bad story, that's the pessimistic part. But because the prediction was decoupled from the decision, it was clear what ground truth was. And it was clear that what the politicians were doing, once they got rid of that University of Michigan algorithm, was worse.

Avi Goldfarb (14:29):

And once it was clear that it was worse, people from the city effectively sued and eventually won a court case, where a judge ruled that the city had to follow these predictions, had to follow what the algorithm said, and the success rate jumped right back up to 70%. And so, you know, it's pessimistic because, hey, the incumbents are going to fight for their own power. But it's optimistic because there's only so much they can do once it's so clear that you're making the wrong decision. And here's why I'm optimistic on discrimination: once it's so clear you're making the wrong decision, it's going to be hard to push back and say, no, no, no, we really do need to ignore the algorithm and discriminate just like we used to. Just like in the lead case: we really do need to ignore the algorithm and allow lots and lots of children in the city of Flint, Michigan to drink toxic water? It's an argument that doesn't fly morally once you have, in that case, the good prediction.
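
To make that trade-off concrete, here is a minimal sketch, with made-up numbers rather than the actual Flint data or the University of Michigan model, of how a prediction-guided dig order and a street-by-street order can be scored against the same ground truth:

```python
# Illustrative sketch only: hypothetical numbers, not the Flint data or the
# University of Michigan model. It compares a prediction-guided policy with a
# "go street by street" policy on the same simulated ground truth.
import random

random.seed(0)

# Hypothetical city: each house has a true (unknown) lead status and a
# predicted probability of lead from some model.
houses = []
for address in range(10_000):
    has_lead = random.random() < 0.30  # assume ~30% of houses have lead
    noise = random.gauss(0, 0.15)
    score = min(max((0.8 if has_lead else 0.2) + noise, 0.0), 1.0)
    houses.append({"address": address, "lead": has_lead, "score": score})

budget = 1_000  # houses the city can afford to dig up

def hit_rate(selected):
    """Share of dug-up houses that actually had lead pipes."""
    return sum(h["lead"] for h in selected) / len(selected)

# Policy 1: dig where the model says lead is most likely.
by_score = sorted(houses, key=lambda h: h["score"], reverse=True)[:budget]

# Policy 2: dig street by street, i.e. in address order, ignoring the model.
by_street = houses[:budget]

print(f"prediction-guided hit rate: {hit_rate(by_score):.0%}")
print(f"street-by-street hit rate:  {hit_rate(by_street):.0%}")
```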

Scott Wallsten (15:26):

So in that case you have very clear outcomes that you can measure, but in lots of things you don't, and so somebody will have to define a threshold level. And we already see a backlash; people just generally seem to not like algorithms. And this seems like a time in history when society is uniquely unable to deal with ambiguity and uncertainty, when probability doesn't sit well with us. So first of all, you could just say that I'm wrong, that it will be obvious in almost all cases, that we will be able to deal with it in a policy sense, and that we will be able to set thresholds. But could you also separate policy from business settings, where they're more likely to have clear-cut, objective measures?

Avi Goldfarb (16:12):

Okay-

Scott Wallsten (16:13):

Sorry, that’s a lot of stuff in one question. Yeah.

Avi Goldfarb (16:15):

Yeah. <Laugh>. So I said I'm optimistic that we could overcome the regulatory barriers because I'm pessimistic about two things. The first thing I'm pessimistic about is human decision making; we've talked about that. The other thing I'm pessimistic about, in some sense, is geopolitics and competition between countries. To the extent that these technologies enhance productivity and generate wealth, never mind the national security stuff, that's not my expertise, then there are reasons for regulatory bodies not to put the brakes on too much. Unless we have a global agreement that we don't want to use AI for the productivity-enhancing aspects of work, then, I want to be a little careful with my language here, but the geopolitical pressures make me a little more optimistic about adoption of the technology, though as I said, not so optimistic about the direction of the world.

Scott Wallsten (17:09):

Let's all be a little less careful. So are you saying that we're worried that if we put the brakes on AI, then China won't, and that would put us at a disadvantage, and that will cause politicians to not want to put the brakes on? I mean, <laugh> more or less.

Avi Goldfarb (17:22):

Yes, that's one example. It's not the only example, and it's not the only other country. But yes, if we put the brakes on and others don't, then others will be ahead.

Scott Wallsten (17:32):

Do you see evidence of that going on?

Avi Goldfarb (17:33):

No, we want to be a little careful here. That's not to say the optimal is no regulation, right? I think this is actually an important caveat: clearly, even for adoption of the technology, we want some regulation, we want people to trust it, right? Take something as simple as the elevator. You probably don't think about your elevator when you go in and out of it. But when elevators first started being used for humans and not for freight, they were pretty terrifying, or at least they seemed pretty terrifying. And so, first, the engineers did all sorts of things to show the elevators worked, but there was also a regulatory system put in place. And to be honest, that regulatory system still exists today, with elevator inspections to make sure that your elevator system is safe. And that system is part of the reason why, when we walk into an elevator, we don't even think about it. With AI, it's not that we want no regulation, because we clearly want to think through what could go wrong, and we clearly want some regulation to ensure that people trust the systems, to your earlier point. So it's not about no regulation; it's about trying to figure out how we can find that sweet spot where the regulation keeps people safe while also encouraging use of the technology.

Scott Wallsten (18:51):

Do you have thoughts on where we start with that? I mean, lots of legislation is just very crude. So what's the right way to start thinking about it? Is it, you know, criminal justice issues? Is it sort of case by case? Where do we begin?

Avi Goldfarb (19:06):

So I think the starting point is to move away from the idea that machines are making decisions. When we talk about artificial intelligence, maybe because we've called it artificial intelligence instead of prediction machines, we have this idea that machines are actually deciding stuff. And to somebody who's not thinking about what's going on behind the scenes, it might seem like machines are deciding stuff, but machines don't decide things; humans decide things. At least as long as we're dealing with prediction machines, not artificial general intelligence, it's a tool where the machine provides a prediction. The human decides which predictions to make and what to do with those predictions once you have them. So what machine learning does, through this decoupling of the prediction from the rest of the decision, is it can change who makes the decision. It can change the time and place of a decision, but it doesn't change the fact that a human or team of humans is responsible.

Avi Goldfarb (20:00):

So, for example, I don't know if you remember, a few years ago there was this story that Amazon was automatically firing people from their warehouses. There were all sorts of things in that story that weren't true; even the process as described wasn't quite what was happening. But even if we take those facts as given, even if we say yes, there's this company that has a whole bunch of warehouses around the world, and it used to be that the HR decisions at the warehouse were made by a manager who worked in the warehouse. And instead, what somebody at headquarters decided was to build a prediction tool to measure the productivity of each worker in each warehouse. And then, maybe even as some of these articles describe it, at the end of every day there's a score, and if you, the worker, fall below that score, you get an email saying you're fired.

Avi Goldfarb (20:53):

Even in that extreme scenario, it's not a machine making that decision. It's somebody at headquarters who decided they didn't trust the managers in the warehouses, who built their own tool to measure performance, and then decided what the right threshold was to send the notice that somebody lost their job. So again, to be clear, that's not what happened; it's how it was reported. But even in the way it was reported, it wasn't about a machine deciding. It was about somebody at headquarters figuring out how to massively scale their power, using this decoupling of prediction from judgment to take power away from managers in warehouses and move it into their own purview at headquarters.
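
A minimal sketch of that decoupling, with hypothetical names and numbers rather than any real company's system: the machine only produces a prediction, while a human chooses the threshold and the action attached to it.

```python
# Illustrative sketch of decoupling the prediction from the decision.
# The worker data, scores, and thresholds are hypothetical; this is not a
# description of any real company's system.
from dataclasses import dataclass

@dataclass
class Worker:
    name: str
    predicted_productivity: float  # output of some prediction model, 0..1

def decide(workers, threshold, action):
    """Apply a human-chosen decision rule to machine predictions."""
    return [(w.name, action if w.predicted_productivity < threshold else "no action")
            for w in workers]

workers = [Worker("A", 0.82), Worker("B", 0.41), Worker("C", 0.67)]

# The predictions are identical in both cases; what changes the outcome is the
# threshold and the action a human at headquarters attaches to it.
print(decide(workers, threshold=0.5, action="flag for manager review"))
print(decide(workers, threshold=0.7, action="send termination notice"))
```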

Scott Wallsten (21:32):

Well, is it always the case that that's a better outcome? I mean, it's still a person's decision, but you have one person's decision overriding lots and lots of people's decisions, you know.

Avi Goldfarb (21:43):

Oh, there are lots of reasons to think it's not. Probably the reason why that article was largely fiction is that the company didn't think it would be an improvement, right? So I one hundred percent agree with you. The point of that story is just to recognize that any regulatory process should be focused on the AI we have today, not the AI we're going to have in 50 years. Because once we're thinking about artificial general intelligences that can do just about everything humans do, then it's just a different economics. And I haven't thought deeply about that, but some philosophers and other scholars have. But as long as we're thinking about the technologies we have today and in the short-term future, five or 10 years down the road or more, we need to recognize that there's no such thing as a machine decision. There is a human responsible. And so the starting point should always be understanding and allocating that responsibility. Then once we have responsibility, we can think through where we want to do things through incentives, where we want to do things through criminal law, where we want to do things through tort law and elsewhere. But the starting point should be: look, someone's making decisions, and if something goes wrong, there's responsibility for that human or team of humans.

Scott Wallsten (22:50):

So the machines are providing new, presumably more accurate inputs into a decision making process.

Avi Goldfarb (22:56):

Exactly.

Scott Wallsten (22:57):

Do you think we'll see lots of instances where an AI takes in all the available information and gives you predictions that are more accurate than the ones you can make, but it bumps up against cases where people are known to not be rational? You talk about this in the book: we know that humans aren't rational, even though we often assume they are, and behavioral economists love talking about all the ways that people are irrational. What happens when an AI gives you information that conflicts with what people just feel is right?

Avi Goldfarb (23:28):

So let's go to a specific example. Let's think about an AI giving a doctor information the doctor feels is wrong.

Scott Wallsten (23:35):

Good. Ok.

Avi Goldfarb (23:36):

Yeah. So if you talk to doctors, they tend to say that when they feel the AI is wrong, they overrule it and things are better than they otherwise would've been. But for the most part, the studies that have looked at the data find that when doctors overrule algorithms, whether AI-based algorithms or others, on average that's not good for the patients. Sometimes the doctor gets lucky, or there are some doctors who have some intuitive sense of where the machine fails. But for the most part, when we see humans in medicine overruling machines, once the machine process has actually passed some rigorous testing that it's good enough to be deployed, it's not so good for the patients. So then the question is, do we trust that human feel, or should we focus on the machine? Or do we need to invent new processes to figure out how the machines and humans can work together?

Avi Goldfarb (24:25):

And I would argue that it's very much in that third category. One of the themes of the book generally is that for electricity, it took 40 years to figure out these new system solutions that would enable us to use the technology safely and effectively and productively. And part of the new system solutions for AI is going to involve new decision-making processes, new job titles, new roles, new meanings, in medicine, for what it means to be a doctor. And we haven't figured that out yet. So we know that machines on average might be better, but that might not be enough for the humans who make decisions to trust them. On the regulatory side, there are people pushing for models of explainability to enhance trust, so that people can understand the reason behind the predictions. I think in some contexts that makes sense; in many contexts, that's a little misguided, because I worry that the explainability is largely a fiction.

Avi Goldfarb (25:26):

To the extent that a machine learning model has thousands and thousands of parameters, any explanation that the machine is going to give to a human decision maker is probably going to focus on two or three main factors. Some fraction of the time, it really will be those two or three main factors, but it might be thousands of others. And compare that to explainability in medicine, right? If we're going to test a new drug, pretty much every drug that hits stage three clinical trials has an explainable reason why it should work. And yet lots of drugs fail stage three clinical trials, because we just don't know that much about how the human body works. Ziad Obermeyer, a professor in the public health school at Berkeley, a doctor, and also an economist, has said that for machine learning tools, we should have the same threshold that we do for drugs. We don't need to explain them; we just need to show in practice that they work and they save lives. And if that's the case, who cares if we really understand why? And to me, that's a big part of the regulatory transformation, the new systems: understanding what kinds of changes generate trust that is real, as opposed to creating a false trust through, in this case, explainability.
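
As a rough illustration of the "two or three main factors" style of explanation described here, consider a minimal sketch with a simple linear scoring model. The features, weights, and applicant are hypothetical, and a real deployed model could have thousands more parameters, which is exactly why a top-k summary is only a partial account:

```python
# Illustrative sketch: a top-3 "explanation" for one prediction from a simple
# linear model. All features, weights, and values are hypothetical; a deployed
# model could have thousands of parameters that this summary would omit.

weights = {
    "missed_payments": -80.0,
    "on_time_payments": 2.0,
    "credit_utilization": -150.0,
    "years_of_history": 5.0,
    "recent_inquiries": -10.0,
    # ... a real model could have thousands more features ...
}

applicant = {
    "missed_payments": 6,
    "on_time_payments": 18,
    "credit_utilization": 0.9,
    "years_of_history": 3,
    "recent_inquiries": 4,
}

def top_factors(weights, features, k=3):
    """Rank features by the absolute size of their contribution to the score."""
    contributions = {name: weights[name] * value for name, value in features.items()}
    return sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)[:k]

score = 700 + sum(weights[name] * value for name, value in applicant.items())
print(f"score: {score:.0f}")
print("top factors:", top_factors(weights, applicant))
```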

Scott Wallsten (26:48):

So-

Sarah Oh Lam (26:48):

I wondered, on that line of thought: you mentioned the Solow quote, that, you know, automation isn't showing up in the productivity statistics, and that it's mind-boggling that the internet wouldn't show up in our productivity statistics. Are we going to be able to tell if AI is improving our quality of life? Maybe it's a non-market good, not monetary. How do you square that circle?

Avi Goldfarb (27:14):

No, that's a great question. So the Solow quote was from 1987, talking about computers from the sixties and seventies and eighties. They hadn't shown up yet. And then in the mid-nineties we started to see the impact of computers in the productivity stats. That lasted about a decade. And there's an argument that the internet started to show up toward the end of that and a little bit later. We haven't seen AI yet. Maybe we will; judging from the history of other technologies, it's not that we'll never see it, it's just that it takes an awfully long time to figure out these new processes. Some technologies, you see the boost right away: it's like a step-function boost, and then it flattens out again. But what we call general purpose technologies, after a paper by Bresnahan and Trajtenberg, have their impact through this positive feedback loop of innovations in producing and using industries.

Avi Goldfarb (28:05):

And that feedback loop just takes time to play out. So, we did see computing in the productivity stats; it just took 40 years. We did see the internet in the productivity stats; it just took something like 25 years. When will we see AI in the productivity stats? One version is, if we just wait, it's going to take another 25 to 40 years. My hope is that by understanding the history of technology and the need for particular kinds of co-invention, we can accelerate that process and make it, I don't know if it's five years or 10 years or something a little bit longer than that, but not 40 years.

Scott Wallsten (28:41):

I think remembering the 40-year issue is important, although it's frustrating that none of us will be around to run the regressions to measure it. But going back to the point Sarah raised about measuring productivity, you talk in the book about the radiologist issue, that one of the common predictions early on was that radiologists would become obsolete because AIs are so good at recognizing patterns, but that didn't happen. Is this an example that systems take a long time to change? Is it an example of whatever institutions protect doctors keeping AI from succeeding? Or is it just a function of time? What explains this, when it's this perfect thing that AI should be able to do?

Avi Goldfarb (29:23):

So I think there are three things that haven't happened with radiology. The first one may be the most important, which is that we still don't have an AI that can do all the predictions in images that radiologists do. We haven't been able to create a real "machine radiologist" who can do all the non-interventional things that radiologists do. So that's barrier number one, the technological reason. Now, in particular contexts we do have such tools, so why aren't we using those at scale? There is a regulatory reason, and I'm actually quite optimistic about the regulatory reason, which is that approving health technology, medical technologies, takes five to 10 years. It was obvious six years ago that we had some hope that things were going to change, and so we should expect some changes over the next, say, five years or so.

Avi Goldfarb (30:17):

So that's number two. Number three is the one you alluded to, which is that people tend not to fire themselves. And so where we see the huge impact of AI in radiology has almost nothing to do with radiologists. As part of a radiologist's workflow, for decades, they've talked into a microphone, and then there were teams of humans, who used to work at the hospital and more recently were working in India, whose job was to transcribe what the radiologist said. Now most hospitals no longer have those teams of humans working for them or with them; the transcription is AI-based. And so we have seen AI transform the workflow of radiology, but it hasn't affected the people who make decisions about the workflow of radiology, who are the radiologists. So there are a lot of people who used to work in radiology who've lost their jobs to machines, those whole transcription departments; they're just not the radiologists. And there is suggestive evidence, and we'll be a little careful about saying more than that, that it's very hard to convince somebody to replace their own job.

Scott Wallsten (31:30):

I wonder if it's a coincidence that, I believe, during that same time period there's been a large increase in demand for medical scribes for other reasons, as hospitals and doctors moved to EHRs. So they may not have lost their jobs, but moved; there was demand in a substitutable job. No, that's fair, that's fair. I wonder how to separate those things.

Avi Goldfarb (31:50):

I haven't looked up the stats on whether there are more medical scribes or fewer than there used to be, but I know that there are very few radiology departments that still have those transcription services. If you look at the floor plans for radiology departments over time, there are two parts that are no longer there. One is the transcription part, and the other is the room for, what's it called, developing film.

Scott Wallsten (32:17):

Right. Actually, I don't know if this is semantics or not, but where would you put that kind of change: on your point, application, or system solution scale? I mean, it's more than point, not quite system.

Avi Goldfarb (32:27):

It's somewhere between point and application, in the sense that there is a workflow, and you're identifying some human tasks within the workflow, and you're taking out the human and putting in the machine. But the ultimate workflow doesn't really change, and the service delivered to the customers, or to the patients in this case, doesn't change either. And so it's much more on the point solution side than the system solution side, even though it changes workflows. That's actually what a lot of the economists who've been worried about the impact of AI, Daron Acemoglu, David Autor, Pascual Restrepo, and others, have focused on: things like that, where we see, okay, here's a machine that's doing something a human does and it just replaces the human. The product isn't that much better, it's a little bit cheaper but not that much cheaper, and a whole bunch of people potentially lost their jobs. So that's a very pessimistic future. Yeah, the company's a little bit better off, but in the grand scheme of things, it doesn't seem like that's going to be a transformative technology, and it's not super exciting.

Scott Wallsten (33:24):

I'm sorry to jump around like this, but I wanted to come back to something. You said that explainability is a fiction, at least that's what I wrote down. And that sounds right to me, given how complicated algorithms can be, whether machine learning or otherwise. But that seems to be a key part of newly proposed regulations and laws, and on their surface they sound great, right? You'd say, well, at least everybody should understand what's happening. But like you said, that's kind of impossible. So what do we do about that?

Avi Goldfarb (33:54):

Yeah, that's an excellent point. So the most cynical version is that everyone knows this is a fiction, or not everyone, but the people writing the laws and the people who are going to have to enforce them and follow them understand it's a fiction, and it's a fiction to engender trust in the people who don't really know what's going on. Then there are several less cynical takes. Less cynical take number one is, yeah, there are thousands of parameters, but for most people, most of the time, the dominant three things are going to be what's actually affecting their decisions. So in that sense, explainability is meaningful for 30 or 50 or 70% of the population for the particular decisions they face, but not for the unusual decisions, which is maybe why you want explainability. But most of the time, when an algorithm says, oh, you can't get credit, and you try to figure out why, it's because you didn't pay your last six credit card bills.

Avi Goldfarb (34:48):

That's a case where the explainability is going to work okay. Then there's another piece, which is explainability in real time; this is another less cynical take. That's going to be very hard. Making every single prediction that a machine algorithm makes explainable, that's going to be a fiction, unless we just say, yeah, for the 30 or 70 or whatever percent, the top three things work. But that's different from saying it should be explainable ex post. There are reasons to think that, after the fact, any machine prediction could be explainable. What I mean by that is, if it's important enough to know why a machine made a particular prediction and why that led to a particular decision, you can spend resources auditing it, because whenever you have an algorithm, you can simulate stuff.

Avi Goldfarb (35:38):

So if you have a hypothesis, maybe it was gender discrimination, okay, well, you can simulate gender discrimination and figure out what happens. As long as you have a hypothesis, you can then simulate and explain. And so that's another reason to think that some form of explainability, at least ex post, as in you can't throw out your algorithm and you have to have the ability to simulate stuff given a hypothesis, is fair. Nevertheless, I did open with something along the lines of explainability is largely a fiction. And I think that whenever you're interpreting those laws or those proposals, it's important to recognize that these aren't things that we can provide explanations for at scale in real time.
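
Here is a minimal sketch of the kind of ex post audit by simulation described above: hold each record fixed, flip the hypothesized attribute, re-run the prediction, and see whether the decision changes. The scoring function, threshold, and records are hypothetical stand-ins, not any real system.

```python
# Minimal sketch of an ex post audit by simulation: flip the hypothesized
# attribute (here, a gender field), re-score, and check whether the decision
# changes. The scorer and records are made-up stand-ins, not a real system.

def credit_score(applicant):
    """Hypothetical black-box scorer; in a real audit this is the deployed model."""
    score = 600
    score += 2 * applicant["on_time_payments"]
    score -= 40 * applicant["missed_payments"]
    # A biased model might (improperly) key off a protected attribute:
    if applicant["gender"] == "F":
        score -= 25
    return score

def audit_gender(applicants, threshold=620):
    """Compare approval decisions before and after flipping the gender field."""
    results = []
    for a in applicants:
        original = credit_score(a) >= threshold
        counterfactual = dict(a, gender="M" if a["gender"] == "F" else "F")
        changed = (credit_score(counterfactual) >= threshold) != original
        results.append((a["id"], changed))
    return results

applicants = [
    {"id": 1, "gender": "F", "on_time_payments": 30, "missed_payments": 1},
    {"id": 2, "gender": "M", "on_time_payments": 30, "missed_payments": 1},
    {"id": 3, "gender": "F", "on_time_payments": 36, "missed_payments": 0},
]

# Any (id, True) pair is a case where flipping gender alone flipped the
# decision, which is evidence for the discrimination hypothesis being audited.
print(audit_gender(applicants))
```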

Scott Wallsten (36:22):

So it almost sounds like you're describing something like what a hospital would use, like a morbidity and mortality discussion after mistakes. You can't explain exactly what's going to happen in every surgery, but if something goes wrong, you can try to figure it out, and you make that an institution.

Avi Goldfarb (36:39):

Exactly. And that goes back to the starting point of the regulation, which is to recognize that there aren't machine decisions; there are machine predictions and a whole bunch of human decisions that might lead to a good or bad outcome. So, absolutely.

Sarah Oh Lam (36:53):

I'm curious, in a whole podcast on AI, we haven't said GPT-3 yet, or Stable Diffusion, or text-to-video, all the newer AI things that have come out in the last month. A lot of those applications are art-related, rendering images. It's creative; it's like, draw a scene from this movie, and it creates this beautiful scene. What do you see in these new innovations, the direction of these AI technologies that are exciting people? Is it going to be visual more than predictive or industrial? What does that tell you? And your book is coming out now too, so, right, what do you think?

Avi Goldfarb (37:37):

Okay, the way people are talking about them today is as point solutions, and that's a little underwhelming to me. So, hey, we have graphic designers and they spend a lot of time drawing stuff, and instead we could use DALL-E to do a graphic design based on a sentence, and then we can sort of bypass our graphic designer or make our graphic designers much more efficient at scale. Okay, fine. But that's a real point solution. A system solution is going to think through: well, once we can do graphic design at scale in a way that we couldn't before, what else can graphic designers do? Are there all sorts of aspects of our lives, at work and at home, that might be much better in the presence of at-scale, efficient graphic design? And can creative designers think through how to do that and ultimately deliver a different kind of value to us in our homes and at work than we currently experience?

Avi Goldfarb (38:36):

So those technologies are really cool, I fully agree. And the discussions, as you'd expect in the early days, are, oh, well, let's figure out which humans do that and let's replace them. And that, I think, is ultimately going to be underwhelming in terms of the impact on the way we work and live. But it gives us an opportunity to rethink how that could change systems, and to figure out new ways to deliver value to end customers and to workers, in order to make lives better.

Scott Wallsten (39:08):

Yeah, there are all kinds of ways you could spin that out that would be fascinating. I mean, good graphic designers have something in their brains that makes them good. Maybe this gives them another way to express it, to express a million ideas more quickly. Who knows?

Avi Goldfarb (39:21):

Oh, absolutely. So when spreadsheets first came out, we thought it was the end of the accountants, because that's what accountants used to do. There was this homework problem that someone who was an accounting professor in the 1960s told me about, where they asked people to open up the phone book, the white pages, with columns of phone numbers, and say, okay, open it up to page 962 and add up all the numbers. And people did that, because that's what accountants used to do: they used to add up columns of numbers. And so people thought, well, with the arrival of the spreadsheet, oh, that's the end of the accountant. But it turned out that people who were good at the arithmetic were good at using the arithmetic, and they figured out all sorts of new ways to create value in terms of company strategy and taxes and all sorts of other things. So, you know, I'm not worried for the graphic designers; I'm worried for the ones that aren't thinking about how to leverage the new technology. But there's going to be a whole category of people who figure out how to use these technologies in order to imbue our lives, for lack of a better way to think about it, with graphic design.

Scott Wallsten (40:26):

We're running out of time, but before we go, I wonder if you could tell us a little bit about the Creative Destruction Lab. The discussion right now seems to feed into it: you've got all these groups pitching you ideas to make those kinds of changes. What have you learned by listening to the pitches and seeing what succeeds and what doesn't?

Avi Goldfarb (40:44):

So, we've learned two things, and they're opposites of each other. The first thing we've learned is that investors and people who mentor startups often try to push them to identify somebody who's going to buy what they have, because they can see clear, measurable value. And in the AI world that often leads to point solutions: oh, think about your workflow, here are some humans doing something, let's replace those humans with machines.

Scott Wallsten (41:10):

So that’s-

Avi Goldfarb (41:11):

Yeah, so there's this push toward these little point solutions that generate some value. At the same time, the biggest successes are the ones that figure out a way to create a new kind of value for end customers, to be a system solution rather than a point solution. So, you know, Uber is a system solution. It took predictions about how to get from point A to point B, some creative ways to generate predictions about where people want to get picked up, their pricing system, and some digital tools like digital dispatch, and created a new way to think about transportation. Lots of other companies took those tools and created point solutions to sell to taxi companies. This morning, in fact, there was an NBER working paper that came out on AI in the taxi industry, showing how an AI tool allowed mediocre taxi drivers to be more like the best taxi drivers at figuring out where people would need to get picked up. So an AI tool that lowers the gap between the mediocre and the best taxi drivers by 14%, okay, that's a point solution. It made a difference, it made those people a little bit better off, but it's very different from the system solution that we see in Uber. And so that's what we see at the lab: the push toward point solutions, and yet the biggest value comes from going beyond that, figuring out how you deliver a new way to actually transform the way we live and work.

Scott Wallsten (42:51):

Well, that's probably a good point to wrap it up. I would just tell everybody to be on the lookout for your book, Power and Prediction: The Disruptive Economics of Artificial Intelligence. It's coming out soon. It's a great read, and if you want to learn something about AI and how it's likely to act as a general-purpose technology over a long period of time, this is the book to read. So, Avi, thank you so much for talking with us.

Avi Goldfarb (43:14):

Okay, thanks so much Scott and Sarah. Take care. Good job.
