
MIT Sloan Professor Catherine Tucker on Privacy, Antitrust, and the Value of Data (Two Think Minimum)


Tom Lenard: Hi and welcome back to TPI’s podcast, Two Think Minimum. It’s Monday, October 1st, 2019 and I’m Tom Lenard, senior fellow and president emeritus at the Technology Policy Institute. And today we’re excited to talk with Catherine Tucker. Catherine Tucker is the Sloan Distinguished Professor of Management and Professor of Marketing at MIT Sloan. She’s also chair of the MIT Sloan PhD program. She’s received numerous awards. I’ll list some of them because it’s very impressive: the NSF CAREER Award for her work on digital privacy, the Erin Anderson Award for an emerging female marketing scholar and mentor, the Garfield Economic Impact Award for her work on electronic medical records, the Paul E. Green Award for contributions to the practice of marketing research, the William F. O’Dell Award for most significant long-term contribution to marketing, and the INFORMS Society for Marketing Science Long-Term Impact Award for long-term impact on marketing. She’s also the cofounder of the MIT Cryptoeconomics Lab, which studies the applications of blockchain, and a co-organizer of the Economics of Artificial Intelligence Initiative, sponsored by the Alfred P. Sloan Foundation. Last year, she was also a visiting fellow at All Souls College, Oxford. I’m joined today by Sarah Oh, who’s a senior fellow at TPI and who will join in the conversation. Catherine, you were one of the first economists, I think, to study issues involving the use of data online and the related policy issues — privacy issues, also leading into antitrust issues — and I think it’s fair to say you’ve made some of the most important contributions. How did you get interested in the subject?

Catherine Tucker: So I became interested in it during my dissertation as a PhD student in economics at Stanford. I was studying network effects, and if you go back in time to when I was doing my PhD, at that time economists were interested in network effects. Why? Because it was the time of the Microsoft case. My advisor for my dissertation, Tim Bresnahan, was one of the chief economists in charge of that antitrust inquiry. So I was excited about network effects, and at the time we were really just thinking about network effects in terms of hardware and software. And so they were very technology-based. I thought I wanted to do something new in my dissertation and research, and so I started looking at the idea of data sharing — how there’s data sharing in the economy, and whether or not that could lead to network effects. Now of course, when you’re trying to measure things as an economist, you’re always looking for what we call exogenous shocks, or things that shift the phenomenon you’re studying. And in my case I noticed that if you are looking at data sharing arrangements, one of the biggest shifters was actually privacy regulation. And so I started off being interested in privacy really as a measurement tool. That is, it shut off network effects, which meant I could measure network effects accurately. But then I realized at some point that it was a lot more interesting to actually study privacy rather than network effects. And that’s how I got going on this topic.

Tom Lenard: What are a few of the things that you’ve learned along the way and that your research has taught you that people might find interesting? And there’s a lot that’s interesting.

Catherine Tucker: Let’s maybe start in the sort of straightforward technology area and then we can delve into some other things too. One of the first papers, which I’m excited about, was one where we looked at the sharing of data in healthcare systems. It was there I really started to actually measure the effects of privacy regulation, and there were sort of two consequences there. First of all, understanding that in our new data economy, the nature of privacy regulation is going to play an important role in understanding how a technology spreads. The other thing about that research, which I always hesitate to emphasize because it always feels a little bit strong, is that one of the applications of electronic medical records we were looking at was how they affected the ability of hospitals to do neonatal care. And I think often in our technology debates, we don’t really discuss enough the benefits of technology. And so I think it was important that we were measuring the benefits of technology there. But one of the reasons I got passionate about this topic is that in my research, what we were showing was that these privacy regulations, incredibly well-intended, by inhibiting the adoption of medical records in contexts where there were high-risk babies, where the data was going to be particularly important, were actually costing babies’ lives. So that’s the first result I want to highlight. From there, I’ve gone on to study network effects and data in perhaps less emotionally fraught areas, especially in online advertising. And we can definitely talk about that more in depth.

Tom Lenard: Let’s go back to the electronic medical records. Maybe explain a little bit more what the mechanism is, why privacy regulation might result in more deaths.

Catherine Tucker: Of course. So if you think about a typical birth — let’s be clear, it’s not going to be an issue. For most of the people listening to this, theirs was a typical birth: the pregnancy was straightforward, they gave birth easily, their biggest concern was picking out the right stroller and car seat. However, a segment of the population is undergoing what we call a high-risk pregnancy. And with this comes a whole sequence of potential maternal complications. Now the reason electronic data and digital technologies become important in situations where you have maternal complications is that women go into labor randomly, and you need to make sure the right data is communicated to the doctor and medical team who are on standby then. If you do not have that ability to transmit the data about a high-risk pregnancy, it’s far more likely that the complications will not be properly understood by the medical team. A typical example would be something like placenta previa, where the placenta is potentially blocking the passage of the baby. Communicating the existence of such a complication is incredibly important. If you think about the challenges of a mom having to communicate this herself, the complications are really quite high. And so having digital records of that is very important. Now if you have privacy regulation in place, this is going to have two effects. First of all, it’s going to make it more costly and difficult for the hospital to adopt these kinds of technologies. The other thing which is going to happen is there are going to be limits on how data is shared across different medical teams, which are also going to make the technology less useful. So that’s where I saw that privacy regulation at the state level — again, very well-intended regulation — actually had this consequence of potentially costing babies’ lives, just because you couldn’t get the data to the right place at the right time.

Tom Lenard: Looking forward — and we’ll talk a little bit about your new research projects — what are the important unanswered questions in the area of privacy?

Catherine Tucker: We have so many unanswered questions. Let’s just start with some. So I think one of the things I’ve noticed in my career is that as an economist it is relatively straightforward to measure the costs of privacy regulation, because you can measure how technology gets inhibited, how economic outcomes change. What we’ve not done well at, so far, as economists interested in measurement, is trying to measure any positive benefits from privacy regulation. And I think that’s always a little bit frustrating, because of course we want to understand the tradeoffs, and when you can measure the costs very easily but the benefits are more nebulous and difficult to measure, it’s difficult to do that weigh-up. So that’s one area I think we need to be exploring. Another area, which I think sort of feeds into this, where I think perhaps we have not yet made careful enough distinctions, is questions of data longevity. So much of the privacy debate in Washington is about advertising data. Advertising data is probably the most short-lived data that you can imagine. The fact you’re looking for that particular pair of shoes today — that’s not going to come back to haunt you as a piece of data in 20 years’ time. However, some pieces of data potentially do have the ability to haunt you in 20 years’ time. And I think we have not yet really grappled with a world where data lives forever, and that potentially when people create that data, or agree for that data to be shared, it may have a long-term impact, and they may have very different preferences in decades to come. So I think that’s again maybe a reflection of the fact that DC has been so focused on advertising data, but the question of longevity of certain types of data just seems to me a more first-order thing for us to be worrying about.

Sarah Oh: Have you thought much about a taxonomy for longevity? So when you were talking about health, I was thinking about biometrics, and that’s kind of a unique single dataset for a person, but that’s a little bit different than vulnerable personal information about preferences or behaviors. Is it appropriate to be thinking — should we be bucketing different types of data into categories? Is that too hard of an exercise?

Catherine Tucker: Well, generally economists sniff at the idea of doing matrices or buckets, but I think Amalia Miller and I came close to it — dangerously close — in a paper we wrote for a National Bureau of Economic Research book. What we said is that, look, if you’re going to worry about data privacy, there are three things in particular you should worry about. Number one is data persistence. That is, as you say, your biometric data — you can’t change that. Your shoe preferences, they can change, but biometrics, I can’t change. The second thing you need to think about is, well, what is the potential danger of exposing this data? What potential downside is there? So again, biometric data: if it gets revealed that you have potentially a vulnerability to Hodgkin’s disease or something potentially very expensive, that would be data you’d be very worried about. The last category, which is a bit more unusual and out there, but which we think is going to be quite important going forward, is whether there is potential for spillovers from the data. That is, if you think about how we often talk about data, there’s an idea that it’s just tied to one individual. But often now, our data is actually informative about other individuals. So your genetic data — I don’t know if you’ve got siblings, but it would tell me a bit about them too. And that’s another category we should be worrying about when saying what data is worrying and what data is less worrying. As you can sort of see, we’re getting towards bucketing, but not quite there.

Tom Lenard: So you just mentioned advertising data, and that’s part of the current debate surrounding the tech industry in general. There seem to be a lot of people who are attacking — maybe that’s too strong a word — but who have trouble with the whole advertising model on which some of the major tech platforms are built. What’s your view of that advertising model, and is it something that we should be worried about?

Catherine Tucker: I don’t worry much about the advertising model, and I’ll give you some reasons why — I think both why I’m not worried and why it’s become a huge debate. The first thing to say is that the advertising-supported internet, if you think about it, is just amazing. I have a graduate student at MIT who’s calibrated the value which is created by, say, the Google platform or the Facebook platform, in terms of unmeasured contribution to the economy each month, and it is just tremendous. I would love to tell you about Avinash’s research in a little bit more detail. I should highlight that it was actually done with my colleague Erik Brynjolfsson at MIT too. He used a variety of survey techniques to calibrate that the value of Facebook for the average user was around $50 a month, which obviously, if you aggregate up over the whole year, is quite a large amount. He also found values such as $3,600 per year for the value created by digital maps. And so I found it just fascinating — and this was with a very large series of online experiments that he did to come up with these numbers — but I do encourage you to read the PNAS article, that’s the Proceedings of the National Academy of Sciences. It was just published and it’s really fascinating for actually calibrating the unmeasured contribution of digital platforms to the economy.

Tom Lenard: But that also brings up another interesting question. I mean some of the people who criticize the advertising model say, well, these platforms should switch from an advertising to a payment model. Now given these large numbers of the value, would Facebook make more money charging rather than using an advertising model?

Catherine Tucker: This is really interesting. I’m going to sort of say two things. The first is I’m always haunted by the research I once did into paywalls. This was in the early days of paywalls, but it was looking at the effect of a newspaper just putting a blunt paywall on its website. We measured, from the day before to the day after, a 90.87% drop in traffic. So I think we have these large sorts of figures for the value created by these services, but it’s easy to forget that in a world where people are anchored on the idea that a service should be free, it’s going to be really hard to make them pay for it. And so I’m always haunted, too, by the example of WhatsApp. The reason I’m haunted by it is I teach pricing at MIT, and WhatsApp has always been a terrible thing for me in teaching, because it means that my students sort of seem to think that they don’t have to have any plan for how to make money and their startup’s going to get acquired by Facebook for billions of dollars. But if you think about what WhatsApp actually managed to do, it managed to anchor a lot of people on the idea that messaging should be free. And since then, any attempts to actually raise prices beyond zero have been really quite problematic. I always remember I had this executive education student, earning a really quite large amount of money, explain to the rest of the class how, in order to not pay WhatsApp $1, he uninstalled and reinstalled the app, somehow contacted all his contacts to tell them he did that, and he was so proud of the hours he spent saving this $1. So you know, we have these estimates of value created, but I think it’d be naive to say that we can easily somehow switch to a pricing model, just because when we’re in a world where everyone’s been educated that the right price for this service is zero, any attempt to raise prices is just going to lead to massive switching away to other platforms, and it’s just not commercially viable.

Sarah Oh: What do you think of the argument in Apple v. Pepper? One of the arguments was that Apple’s App Store prices these apps so low, at zero or 99 cents, that Apple must have monopoly power because it can price at zero. How does that square with consumer preferences? Is it even coherent to argue that a firm has pricing power to price at zero when consumers want that price?

Catherine Tucker: Well, it is strange, and perhaps this sort of reflects some of the troubles of trying to think through these issues in a two-sided platform. I think what is certainly correct is to say that consumers, for whatever reason, have been anchored on the price of zero. That does not seem to reflect any monopoly power; that just seems to reflect the fact that, for whatever reason, consumers have got used to the idea of paying zero for an app. Just go through, mentally, the process: say you were looking for a tape measure app. You’re looking at tape measure apps and there’s one that’s $1 and one that’s free, and the $1 one looks better, has got better reviews. You’re still going to be tempted to download the one at zero, because you’d feel stupid paying a dollar if there’s one that’s zero. I think that’s really the mechanism: you think, well, I can make do with something less good because it’s zero, and I’ll feel stupid if I pay a dollar. It doesn’t sound to me like monopoly power; it just seems that a lot of people who could pay more have got anchored to the idea that apps should be zero in price, at least initially.

Tom Lenard: Are you surprised with the prominence of tech issues in the political debate? Not an economics question.

Catherine Tucker: Not an economics question! It’s just so strange for me, quite honestly, because I have been studying these things for nearly 20 years now and, to be honest, people weren’t very interested for a long, long time. And what’s more, especially in the area of advertising, the idea that advertising, or the nature of advertising, or advertising competition could even be part of the political debate I think is just amazing if you think where we were seven, ten years ago.

Tom Lenard: There’s always been, in certain circles, a distrust of advertising, even before these debates. 

Catherine Tucker: I think there certainly has been. There’s always been — maybe this is one of the things which gives it so much color. When I start teaching marketing, I always have to do a bit of marketing of marketing, because most educated people have this sort of natural sense of distaste toward advertising: “I’ve never clicked on an online ad, it’s somewhat distasteful. It must be about persuading people to buy stuff they don’t need or can’t afford” — that kind of narrative. And so I think certainly there’s always been a slight sense of distaste around marketing. But what’s new is these questions of competition between different marketers. It’s never been an industry where questions of competitive structure or market structure have ever really been an issue, and suddenly you have all these lawyers, who don’t seem to be aware of this entire marketing literature where we’ve thought about this for decades, pronouncing on the topic of advertising — and that’s quite strange to me, honestly.

Tom Lenard: Do you think there’s a market failure? I mean, is there a significant market failure in the market for privacy?

Catherine Tucker: I think there is a paradox which is almost the opposite of a market failure in the provision of privacy. And it’s this paradox which is befuddling. So it’s what we call the privacy paradox. Let me just give you some background. 

Tom Lenard: I have it down here as a question. 

Catherine Tucker: Let me tell you why I think this is the thing that we need to be grappling with rather than calling privacy a market failure. So we have this situation where people say that they are concerned about privacy, and then they appear to behave in ways which suggest that they do not care about privacy. And I have a paper about this, called the privacy paradox, where we asked MIT undergraduates for some really quite personal data. We asked them to share with us the contact details of their friends. Initially we just asked them, and the students who cared about privacy really displayed how they felt about this question, in that not only did they refuse to give us data, but sometimes they’d give us false data with expletives in it — something like FYouResearcher@MIT.edu, “that’s my friend.” Anyway, it was quite clear that they were angry. But then we gave the other half of people a slice of pizza, and the moment we gave them the slice of pizza, people started behaving in a way which was inconsistent with their privacy preferences. In particular, the people who said they cared a lot about privacy stopped swearing at us and just started handing over the data. This is why thinking about market failure in privacy is so confusing and difficult, because first of all we have to wrestle with this essential privacy paradox — I can’t think of an area where people’s stated preferences are so far away from their actual behavior — and then, what do you gear policy around? Do you say it’s the stated preferences that matter, and we need to protect people from themselves? I’m a little bit worried about that kind of argument. Or do we say we should actually look at how people actually behave? But then we’ll probably be in a regime where there’s no privacy protection given, even for the most sensitive data, and that doesn’t seem quite right either.
And so I think until we wrestle with this difference between behavior and stated preferences, it’s far too early to be talking about whether there’s a market failure in the provision of privacy. Market failure rests — if you think about the economic theory — on there actually being preferences.

Tom Lenard: Of course economists typically, or at least historically, have preferred revealed preferences, behavior.

Catherine Tucker: We have. And I of course have that instinct, being an economist — that’s how I tend to think. But what was amazing about this paper I wrote about, as I said, the pizza and the MIT undergraduates, is that it has actually been embraced by both sides. On one side there’s a sense that you’re showing that, no, people really don’t care about their privacy — even MIT undergraduates give it away for pizza. And then the other argument is, of course, if even MIT undergraduates give away their data for pizza, it shows that we really, really need lots of privacy regulation. As economists we tend to go towards revealed preferences, but—

Tom Lenard: And another argument that people make is that, well, they give away their data, but that’s because they really don’t understand what’s being done with it — which probably applies less to MIT undergraduates than others.

Catherine Tucker: So yes, that is probably not true in this setting, and that’s one of the interesting things about it. The people who expressed the strongest privacy preferences, as I say, evidently understood what was going on in the no-pizza condition, in that they got quite angry with us, but pizza managed to distract them. So you can actually see it’s not a question of information here; it’s just that people do seem to behave differently when pizza is on the line.

Tom Lenard: So let’s move a little bit to the related subject of antitrust and competition and the role of data in that, which you’ve also written a lot about. There are a lot of people now saying that there’s a basic problem — first of all, that there’s a competition problem with tech platforms, among other things, but particularly in tech platforms — and that it is rooted in the large amounts of data that these companies have, which basically serves as a barrier to entry for any competitors. You’ve written about that. So what’s your take?

Catherine Tucker: As I say, I’ve written quite a few articles on this topic. If I were to summarize one key insight, it’s that when one is thinking about data and whether or not it could pose a barrier to entry, or potentially be some kind of essential facility, the first-order question needs to be: how unique is that particular data? Because we talk about data as a whole as though it were all a homogeneous mass of competitive advantage. Whereas in reality, most data is not very useful for advertising and most data is not unique. The lens I encourage people to adopt in these articles is thinking about a person’s digital footprint. Given a digital footprint, which is your behavior of browsing online, how likely is it that only one firm will have insight into a particular occasion where it might be good to advertise to you? Think about a situation where you’ve got a leaking pipe, and you suddenly need a plumber. Then you’re probably, with all that water gushing down, not going to spend long browsing on the internet. Maybe only one website will ever get to know that you needed a plumber and have that opportunity of advertising to you. On the other hand, if you’re buying something like a car, if you’re anything like most Americans, you will spend some time online. You’ll research it, you’ll try and figure it out, you’ll watch videos. You’ll go to a lot of different websites, so many different websites will have the insight that you’re an “auto-intender,” as we call it — that you’re likely to buy a car — and therefore the data won’t be unique and won’t necessarily be a source of sustainable competitive advantage. So I think, again, having that sort of taxonomy in mind helps you think about when data is going to be unique, and therefore when it might be a source of competitive advantage. And I’ve found that so much more helpful than the sort of general discussion of big data somehow being a source of barriers to entry, because I just think it’s wrong to lump everything together in that way.

Tom Lenard: Even if you happen to have some unique data that has value, you went through the expense of getting it, of investing in it. Shouldn’t you get the advantages that come from that?

Catherine Tucker: This is interesting, isn’t it? Let’s just imagine a uniquely valuable dataset. If I could pick any market, what I’d like to find is unique data on people booking last-minute charter jets. Why is that? Well, every single lead there is worth thousands of dollars. But the moment I set up a successful website which is where people go for their last-minute charter jet bookings is the moment there’s going to be huge competition from other firms trying to get access to that unique data. Now, I take your point that maybe you should enjoy the benefits of having built a platform which accesses unique data. I also think one has to remember that the moment you’re in that kind of position of competitive advantage is the moment you’ve put a bullseye on your back for other people to try and work out how to compete with you.

Tom Lenard: Well, but the people who are most worried about competition problems in the tech sector — who think we should do a lot more to address them — would say that these large tech firms are just firmly entrenched, that basically the emergence of some competition, at least under current conditions, is very, very unlikely, and that therefore we need to do more to facilitate new entry, et cetera. So I guess my starting question is: how firmly entrenched are these platforms, and is that a problem? Aside from the fact that you read unfavorable articles virtually every day, the people who actually use these platforms seem to be relatively satisfied. I think — but maybe not.

Catherine Tucker: I’m going to make a point that is maybe controversial, but shouldn’t be, which is that what really frustrates me in this debate is the idea of talking about “the tech platforms.” If you analyze Google and Facebook, there’s a whole different set of economic phenomena you have to understand. Amazon is a completely different set of phenomena too, as is Uber. So one of the first things I’m going to fight against is this idea that somehow we can make a statement such as, “tech platforms are entrenched,” because I think that misses the point. If we go through them one by one and consider them individually — and I could go on forever on this podcast, so stop me at some point — if you think about Facebook, they are rather vulnerable. I worry about statements made about entrenchment, and one of the reasons I worry is that I’ve written a variety of papers about how weak network effects are in the social media space. The world is littered with examples of social media platforms coming in and going out. I think there are two main pressures behind that vulnerability. First of all, network effects, at least on the user side of social media platforms, tend to be very local: I know my friends, I communicate with them on Facebook, but the moment they go to TikTok, I’m going to notice they’re gone and I’m going to shift immediately. The second reason social media platforms are vulnerable is that network effects aren’t just local. They’re permeated with sociology, and it’s always weird for economists to talk about sociology, but there’s a phenomenon which I find interesting, a bit removed from economics, which is the idea of the dissociative group. The idea is that we’re always resistant to being on a platform where there are dissociative groups, and I’m sorry to tell you this, Tom, but you and I are probably dissociative for a social media platform — you don’t want us on it.
A platform would do a lot better with Sarah — she’s a lot younger, certainly not as dissociative as us. And that’s a constant challenge. In a world where you have a phenomenon such as TikTok, where people like me and Tom are not present and therefore it’s a lot cooler, that also makes these network effects quite fragile. And so that’s sort of why I worry about Facebook. I’m sorry, Tom!

Tom Lenard: I don’t know that I like being called uncool, but I’m sure it’s true. 

Catherine Tucker: How about I just insult myself? I am so dissociative. I’m an older female who drives a minivan. No one wants me on any social network. Anyway, network effects are therefore quite fragile there. Now if we move on to other platforms such as Uber or Lyft, then again we have fragility of network effects, but from a different source, and there the source has to do with something we call multihoming — the fact that any time you’re using the Lyft or the Uber platform and the price is too high, you’re tempted to check out the other platform. And so in this world where we’re constantly hopping between platforms, again, entrenched incumbency is difficult. Problematic. I should probably stop before I go on for too long or insult any more people in the room.

Sarah Oh: Just to ask about crypto, since you’re also the head of a crypto group at MIT: there has been some concern that Facebook’s Libra project is taking advantage of the network effects they have on their platform. What do you think about that project? I personally think it seems fragile — they’ll have to prove whether they can launch it — but is that fear warranted? There are so many competitors in that space.

Catherine Tucker: I should just be clear here that my coauthor Christian Catalini is actually the chief economist on that project. He’s doing a wonderful job there, so I probably shouldn’t say anything — I’m not going to say anything critical, because I know how hard he’s been working. I think what you see with that project is this: people often make claims that somehow being a large tech platform gives you the ability to leverage your data, or leverage your platform, to help you go into other markets, and the challenges that project is facing right now, I think, dispel that. It reminds me a little bit of — do you remember when Google tried to launch a social media network to compete with Facebook, Google+, and how it never went anywhere? I of course hope that Christian will succeed, but I think it’s certainly a telling example that you can’t just assume that having a large user base is necessarily going to help you move into other markets.

Tom Lenard: Last question, but it’s a kind of a big one in policy world. Do you think the United States needs a privacy law and if so, what should it contain and what should it not contain?

Catherine Tucker: Personally, I think one of the merits of the US is that we do regulate privacy sector by sector, and this gives you flexibility to treat different types of data with different degrees of protection. I would personally much prefer a world where my genetic data or biometric data is given a huge amount of privacy protection — where I very much feel in control of how that data is used or reused, and that data is treated with a greater degree of protection — than some temporary shoe-browsing data or something like that. A strength of the current system is that we do have a way of saying that some types of data need higher or more stringent privacy protections than others. That makes me somewhat reluctant to even think about a universal rule, which by its nature would start to treat all data as equivalent, which is a little bit more like what you see with GDPR.

Tom Lenard: Well, great. Thank you very much for doing this. For more on what TPI is doing, go to our website at techpolicyinstitute.org. Thank you.

Catherine Tucker bio and links to relevant papers: https://mitsloan.mit.edu/faculty/directory/catherine-tucker

Avinash and Erik’s research: http://news.mit.edu/2019/online-tools-facebook-count-toward-gdp-0326

Disclosure Statement: https://mitmgmtfaculty.mit.edu/cetucker/disclosure/. Professor Tucker’s disclosure statement lists companies she has consulted for, grants she has received, relationships with academics working at a variety of firms, and entities in which she has a significant financial interest. The statement follows the guidelines set out by MIT, American Economic Review, and NBER.