
Catherine Tucker on Algorithmic Bias

Scott Wallsten:

Hi, and welcome back to Two Think Minimum, the Technology Policy Institute’s podcast. Today is Monday, December 6th, 2021. I’m Scott Wallsten, President of the Technology Policy Institute, and I’m here with my co-host, TPI Senior Fellow & President Emeritus, Tom Lenard. Today, we’re delighted to have as our guest MIT Sloan School of Management professor, Catherine Tucker. Professor Tucker is the Sloan Distinguished Professor of Management Science, Professor of Marketing, Chair of the MIT Sloan Ph.D. Program, a co-founder of the MIT Cryptoeconomics Lab, which studies the applications of blockchain, and also a co-organizer of the Economics of Artificial Intelligence initiative sponsored by the Alfred P. Sloan Foundation. Her research interests lie in how technology allows firms to use digital data and machine learning to improve performance, and in the challenges this poses for regulation. Professor Tucker has particular expertise in online advertising, digital health, social media, and electronic privacy. Her research studies the interface between marketing and the economics of technology and law. She holds a BA from the University of Oxford and a Ph.D. in economics from Stanford University. Welcome, Catherine, thank you for joining us.

Catherine Tucker:

Well, thank you. It’s lovely to be here.

Scott Wallsten:

So, let’s just jump right into it. You’ve done lots and lots of work on the role of algorithms in the online space and in the economy, even if your papers don’t always necessarily have the word algorithm in them, although most do. So, that’s a big issue in policy today. People seem to come at many of the relevant questions with the idea that somehow algorithms are inherently bad. So, tell us, before we get into your research specifically, what is an algorithm and what are the ways we should think about it?

Catherine Tucker:

[Inaudible] asked me this question, because an algorithm is so simple a thing that it makes you wonder where we’re going with the policy debate. An algorithm is basically anything I can use to aid me in decision-making. It doesn’t even have to be digital. I could have in my head the algorithm that when I get lost, I always turn right, and that’s the plan. So, it’s clear an algorithm doesn’t have to be digital, but usually, when we’re talking about algorithms in tech policy, we’re talking about algorithms that use existing data or information to offer a better guide to what might be an appropriate prediction about what to do.

Tom Lenard:

A follow-up question, because I think a lot of the debate in the policy circles is going to be about algorithmic discrimination. So, how would you define algorithmic discrimination? I mean, aren’t all algorithms supposed to discriminate in some sense?

Catherine Tucker:

Well, maybe I’ll describe this in two ways. First of all, the way I use algorithmic discrimination is simply the way it’s used in the policy debate. It’s not an academic definition; instead, it reflects any time an algorithm seems to be making judgmental predictions that reinforce existing social inequality rather than ameliorating it. So, that’s a simple working definition, which I think is helpful. There are two things I’ll give you to help you see why, though it sounds easy on its face, it’s not an easy definition.

And the first is that I think the computer science literature has rediscovered an old literature in economics on statistical discrimination. And this is the really gnarly problem of: what do you do if you use data, and the data says, if you discriminate in this certain way, you’re going to be profitable, but the way we’re discriminating, the group we’re benefiting or the group we’re disadvantaging, doesn’t make us feel comfortable as a society?

And as a result, you know, there’s an old economics question about how to think about this tradeoff between efficiency and equity in these circumstances, and I think it’s fair to say that if you come across a debate called algorithmic fairness, it’s basically rediscovering this problem: that in certain scenarios, there is going to be an inherent tradeoff between what we might think of as right and what we might think of as efficient.

Now, reflecting this, I’ve just got to tell you this story about algorithmic discrimination. I want you to imagine the first time I ever presented my research on this topic, at the University of Chicago. I was casually using the word discrimination as it’s used in the policy literature. And what happened was that every single University of Chicago professor basically stood up, one after the other, to shout at me, and that’s quite the event if you’re an economist, basically saying, this is not discrimination, this is just efficiency, and how dare you label this as discrimination.

You know, and in some sense, there was this embarrassing period, after they had shouted at me for 15 minutes, where I pointed out the question mark in my title, “Is This Algorithmic Discrimination?” And that’s what the entire paper is about. But I think it’s fair to say from all this that while we may have a working definition of algorithmic discrimination, we’re still at the stage where academics are debating each other, really continuing a debate that’s been going on for many decades.

Scott Wallsten:

One of the things that I really like about your research is that you ask these important questions and then find results that make a lot of sense, except we hadn’t thought about them before. And often you suggest things you can do to address the issue, if there is one. One of my favorite papers of yours is the one on bias in online STEM advertising. I know that paper is not your newest one, but could you talk about it a little? I like the conclusions, how it shows unexpected mechanisms leading to an outcome that people expect. And then maybe we can move on and talk a little bit about how rules on advertising targeting affect consumer search.

Catherine Tucker:

Well, of course. Actually, I’m glad you brought up that paper because remember when I told you I was presenting a paper and got shouted at by every single economist? It was that paper. 

Scott Wallsten:

Oh, it was that paper? 

Catherine Tucker:

It was that paper, and you’ll probably understand, when I describe the paper, why it made economists go, “But this is not discrimination, it’s efficiency.” So first of all, this paper was set up as a really ambitious study, and I’ll tell you what we were hoping to look at, and then what we ended up looking at. What we did was we ran a Facebook ad in 190 different countries, and the ad promoted careers in science, technology, engineering, and math, which, you know, has traditionally been an area where women are underrepresented. The reason we decided to launch this in 190 countries was we thought, well, wouldn’t it be interesting if the algorithm picked up something about the degree to which women actually had opportunities and so on, as opposed to the hypothesis you hear in the algorithmic discrimination literature, that algorithms pick up discrimination from a training dataset and then perpetuate it.

So, that was our plan. And so, we ran this ad, and what we found was disquieting: the ad was shown 20% fewer times to women than to men. So, we got the result that we were fearing we were going to get, but then we went to look at the mechanism, and we had some ideas from the literature about what it might be. One idea was that, in some sense, maybe it was self-inflicted: maybe women, if they saw the ad, didn’t click on it, and the algorithm learned that. No, that wasn’t true. If women ever saw the ad, they were more likely to click on it. And then we went, aha, well, it’s definitely going to be that the algorithm was picking up something about cultural prejudice. So we then looked to see in which countries this was worst, expecting to see all the countries you associate with women having limited economic opportunities. And we saw the country where it was worst was Canada, and at that point we thought, “No, maybe our hypothesis is wrong. Maybe something else is going on.”

And then what we worked out was that what was actually going on was that women have more expensive eyeballs than men. And that’s particularly so in developed economies, where traditionally marketers assume that women control a lot of the major discretionary household purchases. So as a result, I’m so sorry, Scott, my eyeballs are just more expensive than yours. The algorithm wasn’t told whether to target men or women; it was told to target both genders. And it simply went out there and found the most inexpensive eyeballs, which happened to be male ones. And so this was an example where you have a disquieting result, which looks like algorithmic discrimination, an ad that should be shown to women not being shown to them and being shown instead to men. But instead, it was just a result of the algorithm going out there trying to save the advertiser a bit of money and being cost-effective.
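To make that mechanism concrete, here is a minimal sketch of an optimizer that is told only to buy the cheapest impressions for a gender-neutral ad. The CPM figures, inventories, segment names, and greedy rule are invented for illustration; this is not Facebook’s actual delivery system.

```python
# A minimal sketch, with invented CPM figures, of cost-minimizing delivery.
# Nothing here is a real platform's system; all numbers are illustrative.

def cheapest_eyeballs(budget, segments):
    """Buy impressions cheapest-first until the budget runs out."""
    delivery = {}
    for name, seg in sorted(segments.items(), key=lambda kv: kv[1]["cpm"]):
        cost_per_impression = seg["cpm"] / 1000.0  # CPM = cost per 1,000
        affordable = int(budget / cost_per_impression)
        bought = min(affordable, seg["inventory"])
        delivery[name] = bought
        budget -= bought * cost_per_impression
    return delivery

# Women's impressions cost more because other advertisers bid them up;
# this STEM ad's objective never mentions gender at all.
segments = {
    "men":   {"cpm": 4.00, "inventory": 150_000},
    "women": {"cpm": 6.00, "inventory": 150_000},
}
print(cheapest_eyeballs(budget=1_000.0, segments=segments))
# -> {'men': 150000, 'women': 66666}: the skew falls out of cost minimization.
```

The objective never mentions gender, yet delivery skews male purely because women’s impressions cost more.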

Tom Lenard:

So, with a result like that... I love that paper too, because I like simple, intuitively appealing explanations that nobody’s thought of before. But <laugh> with that in mind, I know you’ve testified in Congress on this issue, and I do think this is one of the hot issues in Congress, at the FTC, and at the Department of Commerce. What would you advise them, if anything, to do, and what would you advise them not to do?

Catherine Tucker:

Well, I think in some sense, not to do is the easier one. So why don’t I talk about that, and then we can go to the harder one, which is what to do. So, I was writing this paper, and as you know, you go through an academic review process, and the referees, that’s the term for the anonymous academic reviewers, told us to do two things. The first thing they told us to do was to show it wasn’t just Facebook, and yes, guess what? We managed to replicate the result on Google, Pinterest, Instagram, just, you know, anywhere you might want, right? Given it’s coming through the fact that female eyeballs are more expensive, you just find it everywhere. The other thing that they asked us to do was to show that we could solve it, and we said, “Oh, that’s going to be easy to solve.”

And in fact, this is our punchline when we’ve been presenting it. Because what we thought is that once you knew about it, the next step was easy, right? If you’re a recruiter and you really are intent on advertising to an equal number of men and women, then what you should do is pay more for female eyeballs and run a separate campaign, which is deliberately targeted to ensure you’re reaching more women, and you’re paying more for them. And you know, that has always been a great implication when we presented it to advertising executives. They were like, “Yes, that’s going to be great practice. We’ll do that.” And so we thought we were onto a winner here, that we had a paper and a solution. And then the reviewers actually said, “Show us your solution.” So, we went to try and do it. And the idea was simply that we were going to run an ad and pay more for the women and less for the men, and we found we weren’t allowed to do that.
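The proposed fix is, in spirit, simple arithmetic: impressions scale as budget divided by price, so equal delivery requires budgets proportional to each segment’s price. A minimal sketch, again with invented CPMs rather than any platform’s real tooling:

```python
def equalizing_budgets(total_budget, cpm_men, cpm_women):
    """Split a budget across two campaigns so both deliver equal impressions."""
    # impressions = budget / CPM * 1000, so equal impressions require
    # budgets in proportion to each segment's CPM.
    budget_men = total_budget * cpm_men / (cpm_men + cpm_women)
    budget_women = total_budget - budget_men
    impressions_each = budget_men / cpm_men * 1000
    return budget_men, budget_women, impressions_each

men_b, women_b, n = equalizing_budgets(1_000.0, cpm_men=4.00, cpm_women=6.00)
print(f"men: ${men_b:.0f}, women: ${women_b:.0f}, {n:,.0f} impressions each")
# -> men: $400, women: $600, 100,000 impressions each
```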

What happened in the interim is there’d been, I think, some kind of lawsuit or pressure on Facebook, which meant that you could no longer target ads specifically to men or to women, and so as a result, we couldn’t solve it. And so, we were almost in the worst possible place, where you’d had some, I would say analog-era, well-intentioned regulation come in and say you can’t target on gender. But the moment you do that is the moment you can’t actually correct the problem. So now, anytime anyone runs an ad for a job on one of these major digital platforms, because they can’t target by gender, you’re going to end up with a situation where the platform shows it mostly to men. And so that’s an example of what not to do.

Scott Wallsten:

So, just to restate, or actually, you know, to say almost the same words: by making it impossible to target by gender, you’ve locked in discrimination against women, at least for something like the STEM ad that you were talking about.

Catherine Tucker:

Exactly. There’s no easy way now to correct it. For me, that’s striking, and it’s why we should all be very worried about piecemeal regulation that comes in, sounds like a good idea, but doesn’t really think about how algorithms interact with the broader context.

Scott Wallsten:

We actually tried to use your results in our own Twitter followers campaign, where there’s a higher-level question of whether Twitter followers mean anything, or whether Twitter means anything. But occasionally when we’ve tried to do this, we’ve made two campaigns, one for men, one for women, and spent $3 for every woman… or it was a three-for-one ratio, I think, that we used. And we found that our Twitter followers, at least according to Twitter, became more balanced. It was less tilted towards men. So, I mean, it was what you’d expect, but it did follow the results of your paper.

Catherine Tucker:

Wonderful! What’s great is you can follow the results of my paper, right? Because you’re ultimately a nonprofit looking for followers. 

Scott Wallsten:

Right. 

Catherine Tucker:

The problem is, the way the regulations are designed, the people who can’t follow my paper are the ones who are advertising in housing, insurance, or jobs, right? And I’m not saying that an equal balance of genders isn’t important to the Tech Policy Institute’s Twitter followers, but I must admit it concerns me a lot that these really important sectors can’t actually do what you need to do to get an equal balance now.

Scott Wallsten:

What do policymakers say when you bring that to their attention? Do they sort of not engage with it, or does it really go against the narrative that so many have embraced?

Catherine Tucker:

Right. I think it’s fair to say I always have papers like this, where I present my result and it can be interpreted through two very different lenses depending on your partisan leanings. When I presented these results, the Republican-leaning people were like, it shows you should never try to regulate, right? If you regulate, you’re going to get in the way of cost savings for advertisers. And then the reaction on the other side of the aisle is, isn’t more regulation needed? That’s a problem, right? You can take it both ways. I don’t know if that shows it’s a good or a bad paper, but it’s certainly one of those papers that people have used to support directly contrasting positions.

Scott Wallsten:

Does that mean we’re doomed to just reinforce biases? 

Catherine Tucker:

I think I’ll, you know, get back to being more serious, which is that what I’ve learned from this experience is that a lot of regulation right now, and I’m talking specifically about advertising, since it’s so much at the forefront of this debate, doesn’t really understand how algorithms work, how predictions work, how machine learning works, all these essential questions. And what we know as economists is that anytime you have regulation directed at a very piecemeal solution, there are going to be unintended consequences elsewhere. And that’s what we’ve seen so far in this debate. I mean, you can imagine, as a policymaker, if the question put on your desk is, do you want people to be able to target ads to women or men, or exclude women or exclude men, no policymaker is going to say, “Yes, that sounds like a good plan.” The problem is the consequences of that have been quite profound, in that we’ve got this persistent imbalance now in how we can show ads to people who are looking for careers.

Tom Lenard:

Let me follow up on that. So, you pointed out areas like jobs and housing and things where we actually already have anti-discrimination laws, and for good reason, but should we be concerned if there’s an imbalance just for ads for regular products?

Catherine Tucker:

No, I mean, look, this is what’s strange about my paper, right? Why do we have the situation where women cost more than men anyway? It’s because, for whatever reason of cultural norms, women tend to be more in charge of big purchases, spend more, are in control of household finances, and all of these things. And so it’s completely rational. I want to be clear: this is completely rational. I’ve looked at data where people are selling stuff you really don’t care about, like octopus-shaped kitchen blocks. Stuff like this, where you don’t mind whether men or women see it at different rates, and you see the imbalance persist there too. But the problem is these spillovers. The point of the study is that if you’re trying to sell octopus-shaped oven gloves, it makes more sense to target women. It’s more cost-effective, more likely to convert, all of these things, completely rational. But the fact that you’re paying more for those female eyeballs is then going to have spillovers from, I don’t want to call it a frivolous area of the economy, towards a really serious one. And I think we’ve never, ever had to confront this potential for spillovers from sectors where we don’t care, where we might even think it’s efficient for the genders to be treated differently, to sectors where we’re like, “Oh no, that’s terrible.”

Scott Wallsten:

Your paper on advertising restrictions and consumer search seems to get right at that, because you used a very important part of the economy for that one: drugs and the FDA. Tell us a little bit about that paper.

Catherine Tucker:

So, the moment you said that, Scott, I realized that you’ve noticed that all my papers end up with this “Oh, that was an unintended consequence.” So, in this paper we looked at what happened when the FDA sent out a set of demands that various pharmaceutical products stop using paid search ads. And the issue is that if you’ve got a pharmaceutical product, then, as you all know from the TV ads, about half the time can be the good stuff, and then half the time is going to be telling you how it’s all going to go dramatically wrong, right? That’s just the way we regulate these things. You can’t really do that effectively in a paid search ad. So, at the time, the FDA said, “No, you can’t do paid search advertising.” And we looked to see what happened when paid search advertising by the big pharma companies was removed.

And what we found was that when you remove that advertising, something else takes its place. And the question becomes whether that something else is good or bad for the economy, or good or bad for consumer welfare. This is what we found. Part of what displaced it was community-led forums, places where patients go to discuss their symptoms. I don’t know how we feel about that; it may or may not be a source of good information. But the other thing which replaced it was Canadian pharmacies. And that’s something where my colleagues in economics, such as Ginger, have written really good papers saying, “Actually, no. Canadian pharmacies are probably not the ones you want advertising, if you’re the FDA.” And so, we went from this strange world where we removed these ads with the intent of making sure that people seeking pharmaceutical information would see more balanced information, and we ended up in a world where all they saw was more biased information, in the form of these Canadian pharmacy ads. It’s another case where you try to restrict something, and it sounds good on the face of it, but you’re not really thinking about what you’re displacing.

Scott Wallsten:

In that case, though, part of it was that there wasn’t enough space in the ad for them to display the warnings that were required, right? The side effects or something like that? So, what would’ve been a preferable outcome in this case? To allow the pharmaceutical companies to advertise without having to note any side effects, or, I don’t know what… Which is better for consumer welfare?

Catherine Tucker:

You’ve sort of got two options here, right? You either say no one is allowed to advertise against a [inaudible] someone sets for a pharmaceutical product at all. I’m not a constitutional lawyer, but that seems potentially problematic to me from a free speech perspective. It also seems problematic because you’re ignoring all of the organic search results, right, which just exist. Or you work with pharmaceutical companies to try to find a way: you’ve got these regulations, which are designed for a world of TV advertising or radio advertising, right? And then you try to think, “Well, how can we do this more effectively in a world where ads are deliberately sparse and constrained, as, say, in search engine advertising?” Those seem to be the two routes to me. I think the problematic route, though, is one where you just take away the paid search ads from a specific small subset of firms and don’t think about what’s going to appear in their place.

Tom Lenard:

I mean, it seems to me, I can’t recall exactly if this is right, but it seems to me that for print media, FDA regulations said you had to list all the side effects. And the pharma companies did that, and people were free to ignore it. But when it moved to TV, if you applied the same requirements, people were not going to sit and have a minute or two of side effects read to them. So, they adapted the regulations for TV, and I think there are fewer requirements for TV than for print media. So maybe they can adapt them for social media as well. I’m pretty sure they’re not the same for TV as they are for magazines, for example.

Catherine Tucker:

One of the issues, the challenges, here, right, is that if you think about paid search advertising in particular, it’s designed to be unobtrusive. It is designed to be nondisruptive. That’s why it works so well. All the other advertising media they’ve regulated in the past are intentionally designed to be disruptive. And so, it’s difficult, I think, for a regulator to get their minds around a very new format or way of doing advertising. And I find it strange to be saying “new format or way of doing advertising” when we’re talking about paid search, which has been around for two decades now. But I think, you know, we’re still dealing with that kind of lag.

Tom Lenard:

Scott and I were talking about this earlier. There are people who would like to do away with the advertising-supported content on the internet. And I think it’s not a trivial number of people. 

Catherine Tucker:

Well, I always found it really strange when we have these debates in Washington, and you’re in this world a lot more than me, where advertising is, I think, presumed to be obviously evil, obviously working against customers’ own interests.

Tom Lenard:

Well, that’s not new.

Catherine Tucker:

Yeah. You know, maybe that’s not new, right? But if you think about why it is we want to get away from a bit of the advertising on the internet, it’s because of that viewpoint, right? That advertising is something which definitely has negative utility. And it’s just strange as an economist, where we have all these models and all these studies about advertising as information, and a lot of the evolution of digital advertising has really been, in some sense, about improving the informativeness of advertising to consumers. And so, you know, go back to those 1950s models, those 1950s writings.

A lot of the things that people were worried about then are somewhat corrected in the digital world, right? Ads are less intrusive. Ads are less disruptive. They’re a lot more informationally rich, and all of these kinds of things. And so it’s strange that we immediately presuppose that advertising is bad for consumers. And I think that’s sort of the beginning and the end of this debate, right, about whether or not we should end the advertising-supported internet. It presumes that advertising is in some sense evil, which is not clear to me at all.

Scott Wallsten:

In the fifties, I don’t know this area well enough… I mean, advertising has always been used to support whatever the media is. People have never paid the full cost of newspapers or magazines; those are also ad-supported. Along with these beliefs that advertising was bad, were there also similar calls to not allow it in newspapers or magazines, or is that a new part of the debate? I don’t know… there’s no reason why either of you would necessarily know that.

Catherine Tucker:

Well, you know, I’m going to throw this to Tom in case he’s got comments on the history, right? But certainly, if you look outside of the USA, there have often been attempts by the state, at least, to limit advertising, right? I grew up in a country where government-sponsored media deliberately has no advertising, and the government restricted the ads the other channels could run, or at least it used to. I don’t necessarily know if this paternalism is new, but Tom should comment.

Tom Lenard:

I don’t know if people proposed… I mean, the concern, I think, has always been, in one form or another, that advertising is basically manipulative. It doesn’t provide genuine, useful information to consumers. It just manipulates them. But I don’t know whether in the fifties or earlier times people were talking about prohibiting it in one form or another. I don’t know the history that well.

Catherine Tucker:

One thing which always strikes me as a useful next step in this debate is to better identify what kinds of ads we think of as bad, right? This is my impression; I’m an academic, I’m not really that involved in the debate. We go straight for the really horrible examples, the things that we don’t like, you know, such as payday lending or something which looks predatory or deceptive. But it strikes me that actually focusing in on what we think of as the bad advertising we don’t like would help us start to have a better-quality debate, rather than starting off with the presumption that all advertising is bad.

Scott Wallsten:

I’m not sure how we would define what is bad or good outside of some very extreme cases, right? Because we can’t decide that for content issues online. I mean, I guess that’s why we have the First Amendment, right? Our default is to not block things, not block speech.

Catherine Tucker:

It’s interesting. So, take something like advertising for crypto exchanges, right? That’s an area where many digital platforms clamped down, right? They felt it was too [inaudible], potentially too manipulative, all of these things. That’s a nice example of a middle case, right, where it’s less obvious. There are some products where you’re like, “No, no, we definitely don’t want that advertised. That’s definitely manipulative,” but there are many things where there’s a slight unease, and it’s not clear whether or not you can put them in that bucket. But I understand, Scott… God, I hate, I think all economists hate, binary buckets, right? But if we at least recognized that there’s a continuum, and could identify what the axes of the continuum were, what makes some advertising something we don’t think is great, then we could at least have a more sensible debate, which is not “let’s get rid of all advertising.”

Scott Wallsten:

So, to really put you on the spot, if you were to design a research project to do that, to identify somehow, empirically, what is manipulative and what is not, to try to define that line somehow, how would you do it?

Catherine Tucker:

What a lovely, interesting, inspiring question. You know, the answer is going to show why I’m a disappointing researcher, in that basically my way of doing research, as you’ve noticed, is always very narrow, right? So, what I would try to do, for example, is take that crypto exchange advertising example I just gave, right? Do we see evidence that the ads… what I like about that setting is that there are going to be exchanges which are definitely more expensive, definitely less reliable, definitely potentially providing a lot more fake information. And do we see advertising working asymmetrically well for those less good players, right? That would be one sector, but I think that’s the kind of fact we’d have to establish, because, in some sense, the presupposition of this debate is always that advertising is the friend of the bad players in the market. And that’s not even clear to me as a true starting point. So, I’d probably start there.

Scott Wallsten:

We are almost out of time, but I wanted to go to a slightly different issue just for a minute. You have a paper on the role of delayed data reporting in COVID, which seems particularly interesting and relevant, because for those of us who have been obsessively checking the COVID statistics every couple of hours every day for the last 18 months, it’s frustrating that we can’t see them in real time, right? So, tell us a little bit about what you did there, and what the effects of this lag are.

Catherine Tucker:

Oh my gosh! So, this is something I feel passionate about. And I must at this point mention, I don’t know if you’ve recently had a podcast with Joshua Gans, who wrote really the first book from an economist on this point of view, but he basically said, “COVID is an information problem.” And that seems very smart. So, we wrote this paper because my students and I were interested in digital data. We were like, “Well, is part of that information problem to do with digital data flows?” And my gosh, one thing it led me to realize is that when you and I, I’m sure all of us, were asking, is there going to be a Thanksgiving uptick, and we’re looking at the data, we don’t realize that we’re looking at data from at least a week ago, 10 days ago. It’s very, very slow. And what’s even worse is it’s slow in very unsystematic ways. There are some states where reporting is still happening by fax machine.

I just want to repeat that: fax machine. And so as a consequence, we not only have huge delays, but we have unsystematic delays. And why is this important? Well, let’s imagine I’m running a statistical study. I’m trying to work out if mask mandates work, if other non-pharmaceutical interventions work, and I’m doing it on the wrong data. I’m doing it with data that has this huge lag, and we’re trying to measure sharp policy effects. I can’t believe it: in 2021, given the data we have on cases, I don’t see ways that we can even now start to do typical economic evaluations of whether or not a policy works, because the timing is all awry.
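Her point about lags and policy evaluation can be illustrated with a toy simulation; the numbers and the 10-day lag below are invented for illustration, not taken from the paper.

```python
# A toy simulation, with invented numbers: if cases reach the dashboard with
# a 10-day lag, a before/after comparison dated to a (hypothetical) mask
# mandate finds no effect, even when the true effect is large.
DAYS, POLICY_DAY, LAG = 60, 30, 10

# True epidemic: cases fall 40% the day the mandate starts.
true_cases = [100 if d < POLICY_DAY else 60 for d in range(DAYS)]

# Reported series: every case shows up LAG days after it really happened.
reported = [0] * DAYS
for day, n in enumerate(true_cases):
    if day + LAG < DAYS:
        reported[day + LAG] += n

def drop_at_policy(series):
    """Compare the 10 days before the mandate with the 10 days after."""
    before = sum(series[POLICY_DAY - 10:POLICY_DAY]) / 10
    after = sum(series[POLICY_DAY:POLICY_DAY + 10]) / 10
    return 1 - after / before

print(f"true drop at the mandate date:       {drop_at_policy(true_cases):.0%}")
print(f"measured drop with lagged reporting: {drop_at_policy(reported):.0%}")
# -> true: 40%, measured: 0%; the effect appears 10 days "too late", and
#    unsystematic lags across states would blur it even further.
```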

Scott Wallsten:

Is there anybody who’s doing it right? Either a state or a country, in terms of data collection and distribution.

Catherine Tucker:

Oh, I wish I could tell you that. I can tell you we’re not doing it, but…

Scott Wallsten:

That’s not much of a surprise for anyone…

Catherine Tucker:

I wish I had something inspiring, like, these countries are good at it. I should find that out, but I don’t know it right now. What is amazing to me, though, and this goes back to my old stomping ground, is one thing we noticed when we looked into [inaudible] why states would be using fax machines, right? It actually ends up having to do with legacy privacy regulations at the state level, where they’ve certified fax machines as being privacy compliant, with digital entry and a dashboard not being compliant. And so maybe we can learn a little bit from that.

Tom Lenard:

I mean, how much of, for example, the failure of tracking systems, and maybe data reporting systems, is attributable to, if not regulations, just over-concern on the part of… Google and Apple tried to set up a tracking system early. I don’t think much came of it. And maybe it was because they were overly concerned about privacy, so they came up with something that was not very useful.

Catherine Tucker:

Maybe we do see some lingering effects of regulations, and a lot of it comes from what I’d call very old-school, analog-era concerns about how we report data. So, for example, there were various states at the beginning of the pandemic that actually had more granular data reporting, and it was more useful. And then they got rid of it in trying to be more compliant with what they perceived as privacy regulations. So, we have seen those kinds of movements, but again, with regulations at the state level, it ends up being more of a case study. And in the case studies I’ve seen, it’s because you’ve got 2000-era regulation being applied to a 2021 scenario.

Scott Wallsten:

So, do you think that’s the main driver of the lack of real-time or close to real-time data? Or is it something else, that it’s just not the way we’re used to reporting data? I mean, data from the agencies comes out a year late, which is not a criticism, because for most research that’s fine, and you want to get it done right. But in this case it was different. So, is it because of legacy regulations, or is it because we’ve just never bothered to set ourselves up the right way?

Catherine Tucker:

Let’s be clear. I’ve had such a career looking at how privacy regulations which were made in the analog era may mess things up, you know? So that’s one of the reasons I was looking at that, and I saw a relationship with the fax machines. But you’re completely right, it’s not the major cause, right? We now have a pandemic, and we have all the analytical tools which could allow us, in theory, to do far better data analytics than we’ve ever done before, right? But that’s just not the way we’ve ever set up how we do reporting. And maybe, Scott, what you’re saying is the inspirational thing, right? In the past, there’s been nothing like this, no attempt to do real time. And so maybe we’re going to learn from this the value of real-time reporting.

Scott Wallsten:

We hope <laugh>

Catherine Tucker:

I don’t want to think too hard about the implications, but no, you’re right. I mean, let’s be clear, we’re expecting agencies to do something that they were never built to do.

Scott Wallsten:

Right. And actually, I mean, it seems like that should be one of the most important lessons from the pandemic, the importance of real time data. And we can try to prepare for things in advance, and we should, but nothing turns out to be exactly the way you expected. And the best you can do is try to adapt to it in real time. And, if you don’t have the information to do that, you won’t be able to. Right? So, the thing we should learn is to have the information necessary to make good decisions when it happens.

Catherine Tucker:

Let’s be clear. You know, we’ve been talking about advertising, and advertising is real time: I go to a website, and multiple bidders are going to be bidding for my eyeballs. They’re going to know a lot about me. There’s going to be real-time integration across systems, all of this taking place in nanoseconds, right? We complain about latencies of fractions of a second in the world of online advertising. And the fact is, we can be so good at doing real time in a world like online advertising, which, though it’s my passion and something I’ve studied, is ultimately less important than a public health crisis. We can see how real-time data has transformed that part of our economy. And my gosh, I hope we learn that for public health.
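For contrast, here is a toy version of the real-time auction she is describing; the bidder names, prices, and the second-price rule are illustrative, not any exchange’s actual protocol.

```python
# A toy sketch of real-time bidding: when a page loads, an exchange runs an
# auction among bidders in a few milliseconds. All values are illustrative.
import time

def run_auction(bids):
    """Second-price auction: the highest bidder wins at the runner-up's bid."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1] if len(ranked) > 1 else ranked[0][1]
    return winner, price

start = time.perf_counter()
winner, price = run_auction({"bidder_a": 6.10, "bidder_b": 5.75, "bidder_c": 4.20})
micros = (time.perf_counter() - start) * 1e6
print(f"{winner} wins at ${price:.2f} CPM, decided in {micros:.0f} microseconds")
```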

Scott Wallsten:

Right. I mean, it does seem like we haven’t talked about that enough. So, I hope more people read your paper and pay attention to that issue because it sure seems like you’ve identified something that has not been sufficiently explored in this pandemic and could have driven us to better policy. 

Catherine Tucker:

I don’t know if it’s optimistic that you’re saying “could have” and I’m saying “can” <laugh>. It probably says something about where we’re each going, but yes, we’re optimistic in our different ways.

Scott Wallsten:

That’s right. <Laugh> Yeah, hopefully, we’ll have another podcast in five years, and you’ll be talking about the whole new pandemic data collection system that’s been set up in response to your paper. We’ll be better placed next time, if there is a next time, which hopefully there won’t be.

So, I think we’re definitely out of time. Catherine, thank you so much for joining us. It’s always fun to talk to you.

